Published 2023-07-29 · [arXiv:2307.15854v1](http://arxiv.org/abs/2307.15854v1)
# Some Quantitative Properties of Solutions to the Buckling Type Equation

Long Tian, School of Mathematics and Statistics, Nanjing University of Science & Technology, Jiangsu, Nanjing 210094, China. Email: [email protected]

Xiaoping Yang, Department of Mathematics, Nanjing University, Jiangsu, Nanjing 210008, China. Email: [email protected]

**Abstract:** In this paper, we investigate the quantitative unique continuation, propagation of smallness and measure bounds of nodal sets of solutions to the Buckling type equation \(\triangle^{2}u+\lambda\triangle u-k^{2}u=0\) in a bounded analytic domain \(\Omega\subseteq\mathbb{R}^{n}\) with the homogeneous boundary conditions \(u=0\) and \(\frac{\partial u}{\partial\nu}=0\) on \(\partial\Omega\), where \(\lambda,\ k\) are nonnegative real constants, and \(\nu\) is the outer unit normal vector on \(\partial\Omega\). We obtain that the upper bounds for the maximal vanishing order of \(u\) and the \(n-1\) dimensional Hausdorff measure of the nodal set of \(u\) are both \(C(\sqrt{\lambda}+\sqrt{k}+1)\), where \(C\) is a positive constant depending only on \(n\) and \(\Omega\). Moreover, we also give a quantitative result on the propagation of smallness of \(u\).

**Key Words:** Frequency, Doubling index, Nodal sets, Quantitative unique continuation, Buckling type equation, Measure estimates, Propagation of smallness.

_MSC_(2020): 58E10, 35J30.

## 1 Introduction

In this paper, we consider the quantitative unique continuation property and upper measure bounds of the nodal sets of solutions to the Buckling type equation with homogeneous boundary conditions in a bounded analytic domain \(\Omega\subseteq\mathbb{R}^{n}\). Here, a bounded domain \(\Omega\) is said to be analytic if there exists a positive constant \(\delta\) such that for any \(x_{0}\in\partial\Omega\), \(B_{\delta}(x_{0})\cap\partial\Omega\) is an \((n-1)\)-dimensional analytic hypersurface of \(\mathbb{R}^{n}\).
The Buckling type equation with homogeneous boundary conditions is as follows:
\[\begin{cases}\triangle^{2}u+\lambda\triangle u-k^{2}u=0&in\quad\Omega,\\ u=0,\quad\frac{\partial u}{\partial\nu}=0&on\quad\partial\Omega.\end{cases}\tag{1.1}\]

The study of measure bounds of nodal sets goes back to Yau's conjecture, which asserts that, for eigenfunctions of the Laplacian with eigenvalue \(\lambda\) on a compact smooth Riemannian manifold, the measure of the nodal set is bounded from below and above by \(c\sqrt{\lambda}\) and \(C\sqrt{\lambda}\), respectively. H. Donnelly and C. Fefferman proved the conjecture for analytic manifolds; their argument, which works for solutions of second order linear uniformly elliptic equations with analytic coefficients, also derived the upper measure bound of the conjecture for any dimensional analytic manifolds. In 1990, H. Donnelly and C. Fefferman in [8] obtained that, for any two dimensional \(C^{\infty}\) manifold without boundary, the upper measure bound is \(C\lambda^{\frac{3}{4}}\). It was improved by A. Logunov and E. Malinnikova in [26] to \(C\lambda^{\frac{3}{4}-\epsilon}\) for some positive constant \(\epsilon\). In 1989, R. Hardt and L. Simon studied the high dimensional \(C^{\infty}\) case and showed that the upper measure bound is \(\lambda^{C\sqrt{\lambda}}\). In 2018, A. Logunov in [24] improved the result to \(C\lambda^{\alpha}\) for some positive constant \(\alpha>\frac{1}{2}\). In [18], I. Kukavica considered the linear and uniformly elliptic operator \(\mathcal{A}\) of \(2m\)-order with analytic coefficients and proved that, if the boundary \(\partial\Omega\) is analytic, the upper measure bounds of nodal sets of solutions to the equation \(\mathcal{A}u=\lambda u\) with analytic homogeneous boundary conditions are less than or equal to \(C\lambda^{\frac{1}{2m}}\). In 2000, Q. Han in [13] described the structures of the nodal sets of solutions to the linear and uniformly elliptic equations of higher order. In [31], the authors showed the upper measure bounds of nodal sets of eigenfunctions to the bi-Laplacian operator with non-analytic boundary data. In [22], F. H. Lin and J. Zhu obtained upper bounds of nodal sets for eigenfunctions of eigenvalue problems including bi-harmonic Steklov eigenvalue problems, buckling eigenvalue problems and clamped-plate eigenvalue problems by using analytic estimates of Morrey-Nirenberg and Carleman estimates. There are also various papers discussing the lower measure bounds of nodal sets of eigenfunctions, see for example [5, 25, 29] and references therein.

The unique continuation property has been a very active research topic in recent decades. N. Garofalo and F. H.
Lin in [10] and [11] proved the monotonicity formula for the frequency functions and the doubling conditions of solutions to linear and uniformly elliptic equations of second order, and obtained the strong unique continuation property. In 1998, I. Kukavica in [19] gave an upper bound for the vanishing order of solutions of some second-order linear and uniformly elliptic equations. J. Zhu in [33] obtained the doubling inequality and the vanishing order of the solutions to the bi-Laplacian equation. In [34], he further gave a bound of the maximal vanishing order of solutions to higher-order elliptic equations with singular lower order terms. G. Alessandrini, L. Rondi, E. Rosset, and S. Vessella in [1] established the three-spheres inequality and the stability for the Cauchy problem for elliptic equations. A. Logunov and E. Malinnikova in [27] showed the quantitative propagation of smallness for solutions of elliptic equations. For various related results, see [4, 6, 7, 17, 35].

The vanishing order of \(u\in C^{\infty}(\Omega)\) at \(x_{0}\in\Omega\) is the nonnegative integer \(m\) such that \[\begin{cases}D^{\alpha}u(x_{0})=0,\quad\forall\ \ |\alpha|<m,\\ D^{\alpha}u(x_{0})\neq 0,\quad for\ some\ |\alpha|=m,\end{cases}\tag{1.2}\] where \(\alpha=(\alpha_{1},\cdots,\alpha_{n})\) is a multi-index, each \(\alpha_{i}\) is a nonnegative integer for any \(i=1,2,\cdots,n\), and \(D^{\alpha}u=D_{x_{1}}^{\alpha_{1}}D_{x_{2}}^{\alpha_{2}}\cdots D_{x_{n}}^{\alpha_{n}}u\). Moreover, if for any positive integer \(m\), it holds that \[D^{\alpha}u(x_{0})=0,\quad\forall\ \ |\alpha|<m,\tag{1.3}\] then we say that \(u\) vanishes to infinite order at \(x_{0}\). The strong unique continuation property means that, if \(u\) vanishes to infinite order at some point \(x_{0}\), then \(u\equiv 0\) in the connected component containing \(x_{0}\). The main results of this paper are the following three theorems.

**Theorem 1.1**.: _Assume that \(\Omega\) is a bounded, connected and analytic domain of \(\mathbb{R}^{n}\), and that \(k,\ \lambda\geq 0\) with at least one of them large enough. Then, for a solution \(u\) to (1.1), there exists a positive constant \(C\) depending only on \(n\) and \(\Omega\), such that the maximal vanishing order of \(u\) at any point \(x\in\Omega\) is less than or equal to \(C(\sqrt{\lambda}+\sqrt{k})\). In other words, if the vanishing order of \(u\) at some point \(x\in\Omega\) is larger than \(C(\sqrt{\lambda}+\sqrt{k})\), then \(u\) must be identically zero in \(\Omega\)._

**Theorem 1.2**.: _Let \(u\) be a solution of (1.1), and \(\Omega\) be a bounded and analytic domain. Then for \(k,\lambda\geq 0\) with at least one of them large enough,_ \[\mathcal{H}^{n-1}\left(\left\{x\in\Omega\ \big{|}\ u(x)=0\right\}\right)\leq C(\sqrt{\lambda}+\sqrt{k}),\tag{1.4}\] _where \(C\) is a positive constant depending only on \(n\) and \(\Omega\), and \(\mathcal{H}^{s}\) denotes the \(s\)-dimensional Hausdorff measure._

**Theorem 1.3**.: _Let \(u\) be a solution of (1.1) in a bounded and connected domain \(\Omega\). Assume that \(G\subset\subset\Omega\) is a connected and open set, and that \(E\) is a convex subset of \(\Omega\) with \(\mathcal{H}^{n}(E)\geq\epsilon\) for some positive constant \(\epsilon\).
If_ \[\|u\|_{L^{\infty}(E)}\leq\eta,\quad\|u\|_{L^{\infty}(\Omega)}\leq 1,\] _then for \(\lambda>0\) and \(k>0\) with at least one of them large enough, it holds that_ \[\|u\|_{L^{\infty}(G)}\leq e^{C(\sqrt{\lambda}+\sqrt{k})}\eta^{\delta},\tag{1.5}\] _where \(C\) and \(\delta\) are positive constants depending only on \(n\), \(diam(\Omega)\), \(dist(G,\partial\Omega)\) and \(\epsilon\)._

In order to show the above results, we first explicitly establish a series of elliptic estimates involving \(\lambda\) and \(k\). After introducing the frequency and the doubling index associated with solutions to the Buckling type equation, and deriving the monotonicity of the frequency, the doubling estimates, and the mutual control between the frequency and the doubling index, we bound the vanishing order and the upper measure bounds of nodal sets of solutions by the frequency. We further show the measure upper bounds by the standard complexification. Finally, we establish the three sphere inequality and prove the quantitative propagation of smallness by iteration arguments. We point out that the analyticity of the solutions and of \(\partial\Omega\) allows us to extend the solutions analytically to some neighborhood of \(\Omega\), which is important for our arguments.

The rest of this paper is organized as follows. In the second section, we give the \(L^{2}\) and \(L^{\infty}\) estimates for every order derivative of \(u\) in \(\Omega\), explicitly involving \(\lambda\) and \(k\), and analytically extend \(u\) across the boundary \(\partial\Omega\). In the third section, we introduce the frequency and doubling index, and show an upper bound for the vanishing order and the quantitative unique continuation of \(u\), i.e., we prove Theorem 1.1. In the fourth section, we prove Theorem 1.2 to give an upper measure bound for the nodal set of \(u\) in \(\Omega\). Finally, in the fifth section, we prove Theorem 1.3, and show the propagation of smallness of \(u\). In the rest of this paper, \(C\) and \(C^{\prime}\) in different lines may be different positive constants depending only on \(n\) and \(\Omega\).

## 2 A priori estimates for any order derivatives of \(u\)

This section gives the estimates of any order derivatives of a solution \(u\) to (1.1). We first recall the following lemma, which comes from [23].

**Lemma 2.1**.: _Let \(u\in\mathcal{D}(B_{r}^{+}(0)):=\cap_{m=0}^{\infty}W^{m,2}(B_{r}^{+}(0))\) and \(D_{n}^{l}u=\frac{\partial^{l}u}{\partial x_{n}^{l}}\), where \(W^{m,2}\) is the standard Sobolev space, and \(B_{r}^{+}(0)=\{x\mid|x|<r,\ x_{n}>0\}\) is the upper half ball with radius \(r\) centered at the origin. Then for any \(0<\rho\leq r\) and any \(\epsilon>0\), there exists a positive constant \(C\) depending on \(\epsilon\), \(n\) and \(r\), such that_ \[\sum_{t=1}^{3}\sum_{|\alpha|=t,\alpha_{n}=0}\|D_{n}^{4-t}D^{\alpha}u\|_{L^{2}(B_{\rho}^{+}(0))}\leq\epsilon\|D_{n}^{4}u\|_{L^{2}(B_{\rho}^{+}(0))}+C\sum_{|\alpha|=4,\alpha_{n}=0}\|D^{\alpha}u\|_{L^{2}(B_{\rho}^{+}(0))}.\tag{2.1}\]

Next, we define \[\bar{u}(x,x_{n+1})=u(x)e^{\sqrt{\frac{\lambda}{2}}x_{n+1}}.\] Then \(\bar{u}\) satisfies the following equation: \[\triangle^{2}\bar{u}=\Lambda\bar{u}\quad in\quad\Omega\times\mathbb{R},\tag{2.2}\] with the boundary conditions below: \[\bar{u}=0,\quad\bar{u}_{\nu}=0\quad on\quad\partial\Omega\times\mathbb{R}.\tag{2.3}\] Here \(\Lambda=\frac{\lambda^{2}}{4}+k^{2}\). In the following, we always assume that \(\Lambda>0\) is large enough.
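For the reader's convenience, a direct computation verifies (2.2); it uses only the definition of \(\bar{u}\) and the equation (1.1). Since \(\partial_{x_{n+1}}^{2}\bar{u}=\frac{\lambda}{2}\bar{u}\), the Laplacian in the \(n+1\) variables satisfies \[\triangle\bar{u}=\left(\triangle u+\frac{\lambda}{2}u\right)e^{\sqrt{\frac{\lambda}{2}}x_{n+1}},\qquad\triangle^{2}\bar{u}=\left(\triangle^{2}u+\lambda\triangle u+\frac{\lambda^{2}}{4}u\right)e^{\sqrt{\frac{\lambda}{2}}x_{n+1}}=\left(k^{2}+\frac{\lambda^{2}}{4}\right)\bar{u}=\Lambda\bar{u},\] where in the second equality we used \(\triangle^{2}u+\lambda\triangle u=k^{2}u\) from (1.1). The boundary conditions (2.3) are inherited directly from those of \(u\).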
**Remark 2.2**.: _From the standard elliptic theory ([23], Chapter 8), the solutions to the problems (1.1) and (2.2) belong to \(W^{m,2}\) for any positive integer \(m\), and are analytic in \(\Omega\) and in \(\Omega\times\mathbb{R}\), respectively._

**Lemma 2.3**.: _Let \(\bar{u}\) satisfy the equation (2.2). Then for any \(z_{0}=(x_{0},0)\) with \(x_{0}\in\Omega\) and \(B_{r}(x_{0})\subseteq\Omega\), and any multi-index \(\alpha\),_ \[\|D^{\alpha}\bar{u}\|_{W^{4,2}(B_{\eta r}(z_{0}))}\leq C\left(\Lambda+\frac{1}{(1-\eta)^{4}r^{4}}\right)\|D^{\alpha}\bar{u}\|_{L^{2}(B_{r}(z_{0}))},\tag{2.4}\] _for any \(\eta\in(0,1)\). Here \(B_{r}(z_{0})\subseteq\Omega\times\mathbb{R}\) is the ball in \(\mathbb{R}^{n+1}\) centered at \(z_{0}\) with radius \(r\), and \(C\) is a positive constant depending only on \(n\)._

Proof.: Since \(\bar{u}\) is real analytic by Remark 2.2, \(\bar{u}_{ijml}:=D_{x_{i}}D_{x_{j}}D_{x_{m}}D_{x_{l}}\bar{u}\) makes sense for any \(i,j,m,l\in\{1,2,\cdots,n+1\}\). We multiply both sides of the equation (2.2) by \(\bar{u}_{mmll}\psi\) and integrate over \(\Omega\times\mathbb{R}\), where \(\psi=\phi^{4}\), \(\phi\in C^{\infty}(B_{r}(z_{0}))\), and \[\begin{cases}\phi(x)=1&in\quad B_{\eta r}(z_{0}),\\ \phi(x)=0&outside\quad B_{\frac{1+\eta}{2}r}(z_{0}),\\ |D\phi(x)|\leq\frac{C}{(1-\eta)r},\end{cases}\tag{2.5}\] for some positive constant \(C\) depending only on \(n\). Then by integrating by parts and summing over \(m,l\) from \(1\) to \(n+1\), we have for any \(\epsilon>0\), \[\Lambda\sum_{m,l=1}^{n+1}\int_{B_{\frac{1+\eta}{2}r}(z_{0})}\bar{u}\bar{u}_{mmll}\psi dz=\sum_{i,j,m,l=1}^{n+1}\int_{B_{\frac{1+\eta}{2}r}(z_{0})}\bar{u}_{iijj}\bar{u}_{mmll}\psi dz\]
\[=\sum_{i,j,m,l=1}^{n+1}\int_{B_{\frac{1+\eta}{2}r}(z_{0})}\bar{u}_{ijml}^{2}\psi dz-\sum_{i,j,m,l=1}^{n+1}\int_{B_{\frac{1+\eta}{2}r}(z_{0})}\bar{u}_{ijj}\bar{u}_{mmll}\psi_{i}dz+\sum_{i,j,m,l=1}^{n+1}\int_{B_{\frac{1+\eta}{2}r}(z_{0})}\bar{u}_{ijj}\bar{u}_{mll}\psi_{m}dz\]
\[-\sum_{i,j,m,l=1}^{n+1}\int_{B_{\frac{1+\eta}{2}r}(z_{0})}\bar{u}_{imj}\bar{u}_{mlli}\psi_{j}dz+\sum_{i,j,m,l=1}^{n+1}\int_{B_{\frac{1+\eta}{2}r}(z_{0})}\bar{u}_{imj}\bar{u}_{mili}\psi_{l}dz\]
\[\geq(1-\epsilon)\sum_{i,j,m,l=1}^{n+1}\int_{B_{\frac{1+\eta}{2}r}(z_{0})}\bar{u}_{ijml}^{2}\psi dz-\frac{C}{\epsilon}\sum_{i,j,m=1}^{n+1}\int_{B_{\frac{1+\eta}{2}r}(z_{0})}\bar{u}_{ijm}^{2}\frac{|D\psi|^{2}}{\psi}dz.\]
In the last inequality, we have used Hölder's inequality and Young's inequality. On the other hand, \[\Lambda\sum_{m,l=1}^{n+1}\int_{B_{\frac{1+\eta}{2}r}(z_{0})}\bar{u}\bar{u}_{mmll}\psi dz\leq\frac{\Lambda^{2}}{\epsilon}\int_{B_{\frac{1+\eta}{2}r}(z_{0})}\bar{u}^{2}\psi dz+\epsilon\sum_{m,l=1}^{n+1}\int_{B_{\frac{1+\eta}{2}r}(z_{0})}\bar{u}_{mmll}^{2}\psi dz.\] So \[\sum_{i,j,m,l=1}^{n+1}\int_{B_{\frac{1+\eta}{2}r}(z_{0})}|\bar{u}_{ijml}|^{2}\phi^{4}dz\leq\frac{C}{(1-\eta)^{2}r^{2}\epsilon}\sum_{i,j,m=1}^{n+1}\int_{B_{\frac{1+\eta}{2}r}(z_{0})}|\bar{u}_{ijm}|^{2}\phi^{2}dz+\frac{C\Lambda^{2}}{\epsilon}\int_{B_{\frac{1+\eta}{2}r}(z_{0})}|\bar{u}|^{2}\phi^{4}dz,\tag{2.6}\] so by choosing \(\epsilon=\frac{1}{2}\), we have \[\sum_{i,j,m,l=1}^{n+1}\int_{B_{\frac{1+\eta}{2}r}(z_{0})}|\bar{u}_{ijml}|^{2}\phi^{4}dz\leq\frac{C}{(1-\eta)^{2}r^{2}}\sum_{i,j,m=1}^{n+1}\int_{B_{\frac{1+\eta}{2}r}(z_{0})}|\bar{u}_{ijm}|^{2}\phi^{2}dz+C\Lambda^{2}\int_{B_{\frac{1+\eta}{2}r}(z_{0})}|\bar{u}|^{2}\phi^{4}dz.\tag{2.7}\] Now we consider the first term on the right hand side of (2.7).
In fact, by direct calculation, integrating by parts, and using the equation (2.2), we have \[I_{1}:=\sum_{i,j,m=1}^{n+1}\int_{B_{\frac{1+\eta}{2}r}(z_{0})}|\bar{u}_{ijm}|^{2}\phi^{2}dz=-\sum_{i,j,m=1}^{n+1}\int_{B_{\frac{1+\eta}{2}r}(z_{0})}\bar{u}_{ij}\bar{u}_{ijmm}\phi^{2}dz-2\sum_{i,j,m=1}^{n+1}\int_{B_{\frac{1+\eta}{2}r}(z_{0})}\bar{u}_{ij}\bar{u}_{ijm}\phi\phi_{m}dz\]
\[=\sum_{i,j,m=1}^{n+1}\int_{B_{\frac{1+\eta}{2}r}(z_{0})}\bar{u}_{ijj}\bar{u}_{imm}\phi^{2}dz+2\sum_{i,j,m=1}^{n+1}\int_{B_{\frac{1+\eta}{2}r}(z_{0})}\bar{u}_{ij}\bar{u}_{imm}\phi\phi_{j}dz-2\sum_{i,j,m=1}^{n+1}\int_{B_{\frac{1+\eta}{2}r}(z_{0})}\bar{u}_{ij}\bar{u}_{ijm}\phi\phi_{m}dz\]
\[=-\sum_{i,j,m=1}^{n+1}\int_{B_{\frac{1+\eta}{2}r}(z_{0})}\bar{u}_{jj}\bar{u}_{iimm}\phi^{2}dz-2\sum_{i,j,m=1}^{n+1}\int_{B_{\frac{1+\eta}{2}r}(z_{0})}\bar{u}_{jj}\bar{u}_{imm}\phi\phi_{i}dz\]
\[+2\sum_{i,j,m=1}^{n+1}\int_{B_{\frac{1+\eta}{2}r}(z_{0})}\bar{u}_{ij}\bar{u}_{imm}\phi\phi_{j}dz-2\sum_{i,j,m=1}^{n+1}\int_{B_{\frac{1+\eta}{2}r}(z_{0})}\bar{u}_{ij}\bar{u}_{ijm}\phi\phi_{m}dz\]
\[=-\Lambda\int_{B_{\frac{1+\eta}{2}r}(z_{0})}\triangle\bar{u}\,\bar{u}\phi^{2}dz-2\sum_{i,j,m=1}^{n+1}\int_{B_{\frac{1+\eta}{2}r}(z_{0})}\bar{u}_{jj}\bar{u}_{imm}\phi\phi_{i}dz\]
\[+2\sum_{i,j,m=1}^{n+1}\int_{B_{\frac{1+\eta}{2}r}(z_{0})}\bar{u}_{ij}\bar{u}_{imm}\phi\phi_{j}dz-2\sum_{i,j,m=1}^{n+1}\int_{B_{\frac{1+\eta}{2}r}(z_{0})}\bar{u}_{ij}\bar{u}_{ijm}\phi\phi_{m}dz\]
\[\leq\frac{1}{2}\int_{B_{\frac{1+\eta}{2}r}(z_{0})}(\triangle\bar{u})^{2}\phi^{2}dz+\frac{\Lambda^{2}}{2}\int_{B_{\frac{1+\eta}{2}r}(z_{0})}\bar{u}^{2}\phi^{2}dz\]
\[+\frac{3}{\epsilon_{1}}\int_{B_{\frac{1+\eta}{2}r}(z_{0})}(\triangle\bar{u})^{2}|D\phi|^{2}dz+\frac{\epsilon_{1}}{3}\int_{B_{\frac{1+\eta}{2}r}(z_{0})}|D\triangle\bar{u}|^{2}\phi^{2}dz\]
\[+\frac{3}{\epsilon_{1}}\sum_{i,j=1}^{n+1}\int_{B_{\frac{1+\eta}{2}r}(z_{0})}(\bar{u}_{ij})^{2}|D\phi|^{2}dz+\frac{\epsilon_{1}}{3}\int_{B_{\frac{1+\eta}{2}r}(z_{0})}|D\triangle\bar{u}|^{2}\phi^{2}dz\]
\[+\frac{3}{\epsilon_{1}}\sum_{i,j=1}^{n+1}\int_{B_{\frac{1+\eta}{2}r}(z_{0})}(\bar{u}_{ij})^{2}|D\phi|^{2}dz+\frac{\epsilon_{1}}{3}\sum_{i,j,m=1}^{n+1}\int_{B_{\frac{1+\eta}{2}r}(z_{0})}|\bar{u}_{ijm}|^{2}\phi^{2}dz\]
\[\leq\epsilon_{1}\sum_{i,j,m=1}^{n+1}\int_{B_{\frac{1+\eta}{2}r}(z_{0})}|\bar{u}_{ijm}|^{2}\phi^{2}dz+\frac{C}{(1-\eta)^{2}r^{2}\epsilon_{1}}\sum_{i,j=1}^{n+1}\int_{B_{\frac{1+\eta}{2}r}(z_{0})}|\bar{u}_{ij}|^{2}dz+C\Lambda^{2}\int_{B_{\frac{1+\eta}{2}r}(z_{0})}\bar{u}^{2}\phi^{2}dz,\]
for any \(\epsilon_{1}\in(0,1)\). Here \(C\) is a positive constant depending only on \(n\). So by choosing \(\epsilon_{1}=\frac{1}{2}\), we obtain \[I_{1}\leq\frac{C}{(1-\eta)^{2}r^{2}}\int_{B_{\frac{1+\eta}{2}r}(z_{0})}|D^{2}\bar{u}|^{2}dz+C\Lambda^{2}\int_{B_{\frac{1+\eta}{2}r}(z_{0})}\bar{u}^{2}dz.\tag{2.8}\] Next, we estimate the first term of (2.8). In fact, let \(\bar{\phi}\) be a \(C_{0}^{\infty}\) cut-off function such that \(\bar{\phi}(z)=1\) when \(|z-z_{0}|<\frac{1+\eta}{2}r\), \(\bar{\phi}(z)=0\) when \(|z-z_{0}|>r\), \(0\leq\bar{\phi}\leq 1\), \(|D\bar{\phi}|<\frac{C}{(1-\eta)r}\), and \(|D^{2}\bar{\phi}|\leq\frac{C}{(1-\eta)^{2}r^{2}}\).
Then define \[\psi=\begin{cases}e^{1-\bar{\phi}^{-1}},\quad 0<\bar{\phi}\leq 1,\\ 0,\quad\bar{\phi}=0.\end{cases}\tag{2.9}\] Thus \(\psi\) satisfies, for any \(l>0\), \[\lim_{\bar{\phi}\to 0}\frac{\psi}{\bar{\phi}^{l}}=0.\] Moreover, through some direct calculations, \[\begin{cases}D\psi=\psi\frac{D\bar{\phi}}{\bar{\phi}^{2}},\\ \triangle\psi=\psi\left(\frac{|D\bar{\phi}|^{2}}{\bar{\phi}^{4}}-2\frac{|D\bar{\phi}|^{2}}{\bar{\phi}^{3}}+\frac{\triangle\bar{\phi}}{\bar{\phi}^{2}}\right).\end{cases}\tag{2.10}\] By multiplying both sides of (2.2) by \(\bar{u}\psi\) and using integration by parts, we have \[\Lambda\int_{B_{r}(z_{0})}\bar{u}^{2}\psi dz=\int_{B_{r}(z_{0})}\triangle^{2}\bar{u}\,\bar{u}\psi dz=\int_{B_{r}(z_{0})}|\triangle\bar{u}|^{2}\psi dz+2\int_{B_{r}(z_{0})}\triangle\bar{u}D\bar{u}D\psi dz+\int_{B_{r}(z_{0})}\triangle\bar{u}\,\bar{u}\triangle\psi dz\]
\[=\int_{B_{r}(z_{0})}|\triangle\bar{u}|^{2}\psi dz+2\int_{B_{r}(z_{0})}\triangle\bar{u}D\bar{u}\psi\frac{D\bar{\phi}}{\bar{\phi}^{2}}dz+\int_{B_{r}(z_{0})}\triangle\bar{u}\,\bar{u}\psi\left(\frac{|D\bar{\phi}|^{2}}{\bar{\phi}^{4}}-2\frac{|D\bar{\phi}|^{2}}{\bar{\phi}^{3}}+\frac{\triangle\bar{\phi}}{\bar{\phi}^{2}}\right)dz.\]
So for any \(\epsilon_{2}\in(0,1)\), \[\int_{B_{r}(z_{0})}|\triangle\bar{u}|^{2}\psi dz\leq\Lambda\int_{B_{r}(z_{0})}|\bar{u}|^{2}\psi dz+\epsilon_{2}\int_{B_{r}(z_{0})}|\triangle\bar{u}|^{2}\psi dz+\frac{C}{\epsilon_{2}}\int_{B_{r}(z_{0})}|D\bar{u}|^{2}\psi\frac{|D\bar{\phi}|^{2}}{\bar{\phi}^{4}}dz+\frac{C}{\epsilon_{2}}\int_{B_{r}(z_{0})}\bar{u}^{2}\psi\frac{|D\bar{\phi}|^{4}+|\triangle\bar{\phi}|^{2}}{\bar{\phi}^{8}}dz.\] Choosing \(\epsilon_{2}=\frac{1}{2}\), we have \[\int_{B_{r}(z_{0})}|\triangle\bar{u}|^{2}\psi dz\leq\frac{C}{(1-\eta)^{2}r^{2}}\int_{B_{r}(z_{0})}|D\bar{u}|^{2}\frac{\psi}{\bar{\phi}^{4}}dz+C\left(\Lambda+(1-\eta)^{-4}r^{-4}\right)\int_{B_{r}(z_{0})}\bar{u}^{2}\frac{\psi}{\bar{\phi}^{8}}dz.\tag{2.11}\] Since \[\int_{B_{r}(z_{0})}|D^{2}\bar{u}|^{2}\psi dz\leq\int_{B_{r}(z_{0})}|\triangle\bar{u}|^{2}\psi dz+\frac{C}{(1-\eta)^{2}r^{2}}\int_{B_{r}(z_{0})}|D\bar{u}|^{2}\frac{\psi}{\bar{\phi}^{4}}dz,\tag{2.12}\] which comes from integration by parts, we have \[I_{2}:=\int_{B_{r}(z_{0})}|D^{2}\bar{u}|^{2}\psi dz\leq\frac{C}{(1-\eta)^{2}r^{2}}\int_{B_{r}(z_{0})}|D\bar{u}|^{2}\frac{\psi}{\bar{\phi}^{4}}dz+C\left(\Lambda+(1-\eta)^{-4}r^{-4}\right)\int_{B_{r}(z_{0})}\bar{u}^{2}\frac{\psi}{\bar{\phi}^{8}}dz.\tag{2.13}\] Integrating by parts again, for any \(\epsilon_{3},\ \epsilon_{4}>0\), \[I_{3}:=\int_{B_{r}(z_{0})}|D\bar{u}|^{2}\frac{\psi}{\bar{\phi}^{4}}dz=-\int_{B_{r}(z_{0})}\bar{u}\triangle\bar{u}\frac{\psi}{\bar{\phi}^{4}}dz-\int_{B_{r}(z_{0})}\bar{u}D\bar{u}D\left(\frac{\psi}{\bar{\phi}^{4}}\right)dz\]
\[\leq\epsilon_{3}\int_{B_{r}(z_{0})}|\triangle\bar{u}|^{2}\psi dz+\frac{C}{\epsilon_{3}}\int_{B_{r}(z_{0})}\bar{u}^{2}\frac{\psi}{\bar{\phi}^{8}}dz+\epsilon_{4}\int_{B_{r}(z_{0})}|D\bar{u}|^{2}\frac{\psi}{\bar{\phi}^{4}}dz+\frac{C}{(1-\eta)^{2}r^{2}\epsilon_{4}}\int_{B_{r}(z_{0})}\bar{u}^{2}\frac{\psi}{\bar{\phi}^{12}}dz.\]
Then by choosing \(\epsilon_{4}=\frac{1}{2}\) and \(\epsilon_{3}=\frac{(1-\eta)^{2}r^{2}}{8nC}\), where \(C\) is the same positive constant as in (2.13), we have \[\int_{B_{\frac{1+\eta}{2}r}(z_{0})}|D^{2}\bar{u}|^{2}dz\leq C\left(\Lambda+(1-\eta)^{-4}r^{-4}\right)\int_{B_{r}(z_{0})}\bar{u}^{2}dz.
\tag{2.14}\] From the inequalities (2.7), (2.8) and (2.14), we have \[\|\bar{u}\|^{2}_{W^{4,2}(B_{\eta r}(z_{0}))}\leq C(\Lambda^{2}+(1-\eta)^{-8}r^{-8})\int_{B_{r}(z_{0})}|\bar{u}|^{2}dz.\tag{2.15}\] Then from the fact that \(\bar{u}(z)=\bar{u}(x,x_{n+1})=u(x)e^{\sqrt{\frac{\lambda}{2}}x_{n+1}}\), the case \(|\alpha|=0\) of (2.4) is obtained. Since for any multi-index \(\alpha\), \[\triangle^{2}D^{\alpha}\bar{u}-\Lambda D^{\alpha}\bar{u}=0\quad in\quad\Omega\times\mathbb{R},\] the desired result is obtained by applying the above argument to \(D^{\alpha}\bar{u}\) and the fact that \(\bar{u}(z)=\bar{u}(x,x_{n+1})=u(x)e^{\sqrt{\frac{\lambda}{2}}x_{n+1}}\).

**Remark 2.4**.: _From the Sobolev interpolation inequality, for any \(\epsilon>0\) and any \(u\in W^{4,2}(B_{r}(z_{0}))\),_ \[\|u\|_{W^{1,2}(B_{r}(z_{0}))}\leq\epsilon\|u\|_{W^{4,2}(B_{r}(z_{0}))}+C\epsilon^{-1/3}\|u\|_{L^{2}(B_{r}(z_{0}))}.\tag{2.16}\] _Then from (2.16) with \(\epsilon=\left(\Lambda^{1/4}+\frac{1}{(1-\eta)r}\right)^{-3}\) and Lemma 2.3, for any \(\eta\in(0,1)\) and any \(r>0\) such that \(dist(x_{0},\partial\Omega)>r\),_ \[\|\bar{u}\|_{W^{1,2}(B_{\eta r}(z_{0}))}\leq C\left(\Lambda^{1/4}+\frac{1}{(1-\eta)r}\right)\|\bar{u}\|_{L^{2}(B_{r}(z_{0}))}.\tag{2.17}\] _So by the iteration argument,_ \[\|\bar{u}\|_{W^{m,2}(B_{\eta r}(z_{0}))}\leq C\left(\Lambda^{1/4}+\frac{m}{(1-\eta)r}\right)\|\bar{u}\|_{W^{m-1,2}(B_{\left(\eta+\frac{1-\eta}{m}\right)r}(z_{0}))}\leq C^{2}\left(\Lambda^{1/4}+\frac{m}{(1-\eta)r}\right)^{2}\|\bar{u}\|_{W^{m-2,2}(B_{\left(\eta+\frac{2(1-\eta)}{m}\right)r}(z_{0}))}\leq\cdots\leq C^{m}\left(\Lambda^{1/4}+\frac{m}{(1-\eta)r}\right)^{m}\|\bar{u}\|_{L^{2}(B_{r}(z_{0}))},\tag{2.18}\] _where \(z_{0}=(x_{0},0)\), and \(C\) is a positive constant depending only on \(n\)._

Now we focus on the boundary estimates. For any fixed point \(x_{0}\in\partial\Omega\), we locally flatten the boundary \(\partial\Omega\) near \(x_{0}\) by the following process. Without loss of generality, assume that \(x_{0}=0\), and that for \(r\) small enough, the set \(\Omega\cap B_{r}(0)\) can be expressed in the following form: \[\Omega\cap B_{r}(0)=\left\{x\in B_{r}(0)\mid x_{n}>\gamma(x_{1},\cdots,x_{n-1})\right\},\] where \(\gamma\) is a real analytic function. Then define \(z=(x,x_{n+1})=(x_{1},\cdots,x_{n},x_{n+1})\), and \[\begin{cases}y_{i}=\Phi_{i}(z):=x_{i},\quad i=1,2,\cdots,n-1,\\ y_{n}=\Phi_{n}(z):=x_{n}-\gamma(x_{1},\cdots,x_{n-1}),\\ y_{n+1}=x_{n+1}.\end{cases}\tag{2.19}\] It is also denoted by \(y=\Phi(z)\). Similarly, we write \(z=\Psi(y)\) with \(\Psi=\Phi^{-1}\). Then the map \(y=\Phi(z)\) straightens \(\partial\Omega\) near \(0\), and pushes \(\Omega\cap B_{r}(0)\) to \(\Phi(\Omega\cap B_{r}(0))\subseteq B_{R}^{+}(0)\) for some \(r,R>0\), and the map \(\Phi\) is real analytic. Under this transformation, \(\bar{u}(z)\) and \(\bar{v}(z)=\triangle\bar{u}(z)\) become \(\bar{\bar{u}}(y)\) and \(\bar{\bar{v}}(y)\), i.e., \(\bar{\bar{u}}(y)=\bar{u}(\Psi(y))\) and \(\bar{\bar{v}}(y)=\bar{v}(\Psi(y))\), respectively, which satisfy the following equations: \[\begin{cases}\mathcal{L}\bar{\bar{u}}=a_{ij}(y)\bar{\bar{u}}_{ij}(y)+b_{i}(y)\bar{\bar{u}}_{i}(y)=\bar{\bar{v}}(y),\\ \mathcal{L}\bar{\bar{v}}=a_{ij}(y)\bar{\bar{v}}_{ij}(y)+b_{i}(y)\bar{\bar{v}}_{i}(y)=\Lambda\bar{\bar{u}}(y),\end{cases}\tag{2.20}\] in the domain \(\Phi(\Omega\cap B_{r}(0))\times(-\infty,+\infty)\).
Here the coefficients \(a_{ij}(y)=\sum\limits_{m=1}^{n}\Phi_{x_{m}}^{i}(\Psi(y))\Phi_{x_{m}}^{j}(\Psi(y))\) and \(b_{i}(y)=\sum\limits_{j=1}^{n}\Phi_{x_{j}x_{j}}^{i}(\Psi(y))\). The coefficients \(a_{ij}\) and \(b_{i}\) are also analytic, and the operator \(\mathcal{L}\) is uniformly elliptic. The map \(\Phi\) can be chosen such that \[a_{ij}(0)=\delta_{ij},\quad|a_{ij}(y)-a_{ij}(0)|\leq C_{0}|y|,\tag{2.21}\] for any \(i,j=1,\cdots,n+1\). Here \(C_{0}\) is a positive constant depending only on \(n\) and \(\Omega\). Then for \(r\) small enough and some positive constant \(\delta_{0}\), the matrix \(\{a_{ij}\}\) satisfies \(a_{ij}(y)\xi_{i}\xi_{j}\geq\delta_{0}|\xi|^{2}\) for any \(y\in B_{r}^{+}(0)\subseteq\mathbb{R}^{n+1}\) and any \(\xi\in\mathbb{R}^{n+1}\). Moreover, the boundary condition becomes \[\bar{\bar{u}}=0,\quad\bar{\bar{u}}_{n}=0,\tag{2.22}\] on \(\Gamma^{+}\), the flat boundary of \(B_{r}^{+}(0)\subseteq\mathbb{R}^{n+1}\). Now we use this transformation to establish the boundary estimate below.

**Lemma 2.5**.: _There exist positive constants \(\rho_{0}\) and \(L_{0}\) depending only on \(n\), \(\Omega\), and \(\mathcal{L}\), such that for any positive integer \(m\) and some \(\rho<\rho_{0}\),_ \[\sup_{x\in B^{+}_{\rho}(0)}|D^{\alpha}\bar{\bar{u}}|\leq m!L_{0}^{m}e^{\Lambda^{1/4}}\|\bar{\bar{u}}\|_{L^{2}(B^{+}_{\rho_{0}}(0))},\quad\forall\ |\alpha|=m.\tag{2.23}\]

Proof.: By the above transformation, there exists a positive constant \(\rho_{0}\) depending only on \(n\) and \(\Omega\), such that the function \(\bar{\bar{u}}\) satisfies the equation \(\mathcal{L}^{2}\bar{\bar{u}}=\Lambda\bar{\bar{u}}\) in \(B^{+}_{\rho_{0}}(0)\), and \(\bar{\bar{u}}=\bar{\bar{u}}_{n}=0\) on \(\Gamma^{+}\). Then \[\|(\mathcal{L}^{2})^{l}\bar{\bar{u}}\|_{L^{2}(B^{+}_{\rho_{0}}(0))}=\Lambda^{l}\|\bar{\bar{u}}\|_{L^{2}(B^{+}_{\rho_{0}}(0))}\leq(4l)!e^{\Lambda^{1/4}}\|\bar{\bar{u}}\|_{L^{2}(B^{+}_{\rho_{0}}(0))}.\tag{2.24}\] For the operator \(\mathcal{L}^{2}\) and the function \(\frac{\bar{\bar{u}}}{e^{\Lambda^{1/4}}\|\bar{\bar{u}}\|_{L^{2}(B^{+}_{\rho_{0}}(0))}}\), we use Theorem 1.3 in Chapter 8 of [23] to obtain that there exist positive constants \(\rho\) and \(L_{0}\) depending only on \(n\), \(\Omega\) and \(\mathcal{L}\), such that for any positive integer \(m\), \[\sup_{x\in B^{+}_{\rho}(0)}|D^{\alpha}\bar{\bar{u}}|\leq m!L_{0}^{m}e^{\Lambda^{1/4}}\|\bar{\bar{u}}\|_{L^{2}(B^{+}_{\rho_{0}}(0))},\quad\forall\ |\alpha|=m.\tag{2.25}\]

**Lemma 2.6**.: _Let \(u\) be a solution to the problem (1.1). Then there exists a positive constant \(R\) depending only on \(n\) and \(\partial\Omega\), such that \(u\) can be extended to the neighborhood \(\Omega_{R}:=\{x\in\mathbb{R}^{n}\ |\ dist(x,\Omega)<R\}\) with_ \[\|u\|_{L^{\infty}(\Omega_{R})}\leq e^{C\Lambda^{\frac{1}{4}}}\|u\|_{L^{2}(\Omega)},\tag{2.26}\] _where \(C\) and \(R\) are positive constants depending only on \(n\) and \(\Omega\)._

Proof.: By Lemma 2.5 and a finite covering argument, there exist constants \(r_{0}>0\), \(\tau>1\), and \(L_{0}>1\) depending only on \(n\) and \(\Omega\), such that for any positive integer \(m\), \[\sup_{T_{r_{0}}(\partial\Omega)\times(-r_{0},r_{0})}|D^{\alpha}\bar{u}|\leq m!L_{0}^{m}e^{\Lambda^{1/4}}\|\bar{u}\|_{L^{2}(\Omega\times(-\tau r_{0},\tau r_{0}))},\quad\forall\ |\alpha|=m,\tag{2.27}\] where \(T_{r_{0}}(\partial\Omega)=\{x\in\Omega\ |\ dist(x,\partial\Omega)<r_{0}\}\).
From (2.27) and the fact that \(\bar{u}(x,x_{n+1})=u(x)e^{\sqrt{\frac{\lambda}{2}}x_{n+1}}\), for any \(x\in T_{r_{0}}(\partial\Omega)\), \[|D^{\alpha}u(x)|\leq m!L_{0}^{m}e^{C\Lambda^{1/4}}\|u\|_{L^{2}(\Omega)},\quad\forall\ |\alpha|=m.\tag{2.28}\] Since \(\partial\Omega\) is compact, one can extend \(u(x)\) analytically to a neighborhood of \(\Omega\), denoted by \(\Omega_{R}=\{x\in\mathbb{R}^{n}\mid dist(x,\Omega)<R\}\). Here \(R\) is a positive constant depending only on \(\partial\Omega\) and \(n\). In fact, for any \(x_{0}\in T_{r_{0}}(\partial\Omega)\) and any \(x\in B_{R}(x_{0})\), there holds \[u(x)=\sum_{|\alpha|=0}^{\infty}\frac{1}{\alpha!}D^{\alpha}u(x_{0})(x-x_{0})^{\alpha},\] which is the Taylor series of \(u(x)\). So by choosing \(R\leq cL_{0}^{-1}\), where \(c\) is a positive constant depending only on \(n\), \[|u(x)|\leq\sum_{|\alpha|=0}^{\infty}\frac{1}{\alpha!}|D^{\alpha}u(x_{0})||x-x_{0}|^{|\alpha|}\leq e^{C\Lambda^{1/4}}\|u\|_{L^{2}(\Omega)}\sum_{|\alpha|=0}^{\infty}\frac{|\alpha|!(RL_{0})^{|\alpha|}}{\alpha!}\leq e^{C\Lambda^{1/4}}\|u\|_{L^{2}(\Omega)}\sum_{m=0}^{\infty}(RL_{0})^{m}\sum_{|\alpha|=m}\frac{|\alpha|!}{\alpha!}\leq e^{C\Lambda^{1/4}}\|u\|_{L^{2}(\Omega)}\sum_{m=0}^{\infty}(nRL_{0})^{m}\leq C^{\prime}e^{C\Lambda^{1/4}}\|u\|_{L^{2}(\Omega)}.\tag{2.29}\] Here we used the fact that \(n^{m}=(1+1+\cdots+1)^{m}=\sum\limits_{|\alpha|=m}\frac{|\alpha|!}{\alpha!}\). On the other hand, from the Sobolev Embedding Theorem and Remark 2.4 with \(\eta=1/2\) and \(r=(C\Lambda^{1/4})^{-1}\leq r_{0}\), for any \(z=(x,0)\) with \(x\in\Omega\setminus T_{r_{0}}(\partial\Omega)\), \[|\bar{u}(z)|\leq\|\bar{u}\|_{L^{\infty}(B_{r/2}(z))}\leq Cr^{-(n+1)/2}\|\bar{u}\|_{W^{l,2}(B_{r/2}(z))}\leq C^{l}l^{l}\Lambda^{\frac{l}{4}+\frac{n+1}{8}}\|\bar{u}\|_{L^{2}(B_{r}(z))}\leq e^{C\log\Lambda}\|\bar{u}\|_{L^{2}(\Omega\times(-2r_{0},2r_{0}))},\tag{2.30}\] where \(l=\left[\frac{n+1}{2}\right]+1\). From (2.30) and the fact that \(\bar{u}(x,x_{n+1})=u(x)e^{\sqrt{\frac{\lambda}{2}}x_{n+1}}\) again, for any \(x\in\Omega\setminus T_{r_{0}}(\partial\Omega)\), \[|u(x)|=|\bar{u}(x,0)|\leq e^{C\log\Lambda}\|\bar{u}\|_{L^{2}(\Omega\times(-2r_{0},2r_{0}))}\leq e^{C(\Lambda^{1/4}+\log\Lambda)}\|u\|_{L^{2}(\Omega)}\leq e^{C\Lambda^{1/4}}\|u\|_{L^{2}(\Omega)}.\tag{2.31}\] The inequalities (2.29) and (2.31) complete the proof.

## 3 Quantitative Unique Continuation Property

In this section, we will give the upper bound for the vanishing order of \(u\) in \(\Omega\). From Section 2, \(u\) is analytic in \(\Omega_{R}\), and so are \(\triangle u\) and \(\triangle^{2}u\). Therefore, by the uniqueness of the analytic continuation, the equation in (1.1) holds in \(\Omega_{R}\). Then we rewrite it in \(\Omega_{R}\) as follows. Let \(\widetilde{u}(x,x_{n+1})=u(x)e^{\sqrt{\frac{\lambda+\mu}{2}}x_{n+1}}\) and \(\widetilde{v}(x,x_{n+1})=\left(\triangle u(x)+\frac{\lambda+\mu}{2}u(x)\right)e^{\sqrt{\frac{\lambda+\mu}{2}}x_{n+1}}\) with \(\mu=\sqrt{\lambda^{2}+4k^{2}}\). Then \(\widetilde{u}\) satisfies \[\begin{cases}\triangle\widetilde{u}=\widetilde{v},\\ \triangle\widetilde{v}=\mu\widetilde{v},\end{cases}\tag{3.1}\] in \(\Omega_{R}\times\mathbb{R}\). So we define the frequency function and doubling index as follows.

**Definition 3.1**.: _Let \(\widetilde{u}(z)=\widetilde{u}(x,x_{n+1})=u(x)e^{\sqrt{\frac{\lambda+\mu}{2}}x_{n+1}}\) as above. Then \(\widetilde{u}\) satisfies the equation (3.1).
For \(z_{0}=(x_{0},0)\), we call the quantity_ \[N(z_{0},r)=r\frac{\int_{B_{r}(z_{0})}\left(|D\widetilde{u}|^{2}+|D\widetilde{v}|^{2}+\widetilde{u}\widetilde{v}+\mu|\widetilde{v}|^{2}\right)dz}{\int_{\partial B_{r}(z_{0})}\left(\widetilde{u}^{2}+\widetilde{v}^{2}\right)d\sigma}=r\frac{\int_{\partial B_{r}(z_{0})}\left(\widetilde{u}\widetilde{u}_{\nu}+\widetilde{v}\widetilde{v}_{\nu}\right)d\sigma}{\int_{\partial B_{r}(z_{0})}\left(\widetilde{u}^{2}+\widetilde{v}^{2}\right)d\sigma}\tag{3.2}\] _the frequency function with radius \(r\) centered at \(z_{0}\), and_ \[M(z_{0},r)=\frac{1}{2}\log_{2}\left(\frac{\|\widetilde{u}\|^{2}_{L^{\infty}(B_{r}(z_{0}))}+\|\widetilde{v}\|^{2}_{L^{\infty}(B_{r}(z_{0}))}}{\|\widetilde{u}\|^{2}_{L^{\infty}(B_{r/2}(z_{0}))}+\|\widetilde{v}\|^{2}_{L^{\infty}(B_{r/2}(z_{0}))}}\right)\tag{3.3}\] _the doubling index with radius \(r\) centered at \(z_{0}\), respectively._

We will show the "almost monotonicity formula" for \(N(z_{0},r)\).

**Lemma 3.1**.: _For \(z_{0}=(x_{0},0)\) with \(x_{0}\in\Omega\), there exist positive constants \(C_{0}\), \(C\), and \(r_{0}<R\), such that if \(N(z_{0},r)\geq C_{0}\) and \(r<r_{0}\), it holds that_ \[\frac{N^{\prime}(z_{0},r)}{N(z_{0},r)}\geq-Cr.\tag{3.4}\]

Proof.: Denote \[\begin{cases}D_{1}(z_{0},r)=\int_{B_{r}(z_{0})}|D\widetilde{u}|^{2}dz;\quad D_{2}(z_{0},r)=\int_{B_{r}(z_{0})}|D\widetilde{v}|^{2}dz;\\ D_{3}(z_{0},r)=\int_{B_{r}(z_{0})}\widetilde{u}\widetilde{v}dz;\quad D_{4}(z_{0},r)=\mu\int_{B_{r}(z_{0})}\widetilde{v}^{2}dz;\\ H_{1}(z_{0},r)=\int_{\partial B_{r}(z_{0})}\widetilde{u}^{2}d\sigma;\quad H_{2}(z_{0},r)=\int_{\partial B_{r}(z_{0})}\widetilde{v}^{2}d\sigma;\\ D(z_{0},r)=D_{1}(z_{0},r)+D_{2}(z_{0},r)+D_{3}(z_{0},r)+D_{4}(z_{0},r);\\ H(z_{0},r)=H_{1}(z_{0},r)+H_{2}(z_{0},r).\end{cases}\tag{3.5}\] Then \[N(z_{0},r)=r\frac{D(z_{0},r)}{H(z_{0},r)}.\] For \(H(z_{0},r)\), a direct calculation gives \[\begin{cases}H_{1}^{\prime}(z_{0},r)=\frac{n-1}{r}H_{1}(z_{0},r)+2\int_{\partial B_{r}(z_{0})}\widetilde{u}\widetilde{u}_{\nu}d\sigma;\\ H_{2}^{\prime}(z_{0},r)=\frac{n-1}{r}H_{2}(z_{0},r)+2\int_{\partial B_{r}(z_{0})}\widetilde{v}\widetilde{v}_{\nu}d\sigma.\end{cases}\tag{3.6}\] For \(D^{\prime}(z_{0},r)\), we have \[D_{1}^{\prime}(z_{0},r)=\int_{\partial B_{r}(z_{0})}|D\widetilde{u}|^{2}d\sigma=\frac{1}{r}\int_{B_{r}(z_{0})}div(|D\widetilde{u}|^{2}\cdot z)dz=\frac{n}{r}D_{1}(z_{0},r)+\frac{2}{r}\int_{B_{r}(z_{0})}\widetilde{u}_{i}\widetilde{u}_{ij}z_{j}dz=\frac{n-2}{r}D_{1}(z_{0},r)+\frac{2}{r}\int_{\partial B_{r}(z_{0})}\widetilde{u}_{\nu}^{2}d\sigma-I_{1},\tag{3.7}\] with \(I_{1}=\frac{2}{r}\int_{B_{r}(z_{0})}\widetilde{v}D\widetilde{u}\cdot zdz\), \[D_{2}^{\prime}(z_{0},r)=\frac{n-2}{r}D_{2}(z_{0},r)+\frac{2}{r}\int_{\partial B_{r}(z_{0})}\widetilde{v}_{\nu}^{2}d\sigma-I_{2},\tag{3.8}\] with \(I_{2}=\frac{2\mu}{r}\int_{B_{r}(z_{0})}\widetilde{v}D\widetilde{v}\cdot zdz\), \[|D_{3}^{\prime}(z_{0},r)|=\left|\int_{\partial B_{r}(z_{0})}\widetilde{u}\widetilde{v}d\sigma\right|\leq\frac{1}{2}\left(\int_{\partial B_{r}(z_{0})}\widetilde{u}^{2}d\sigma+\int_{\partial B_{r}(z_{0})}\widetilde{v}^{2}d\sigma\right)\leq\frac{1}{2}H(z_{0},r),\tag{3.9}\] and \[D^{\prime}_{4}(z_{0},r)=\mu\int_{\partial B_{r}(z_{0})}\widetilde{v}^{2}d\sigma=\frac{\mu}{r}\int_{B_{r}(z_{0})}div(\widetilde{v}^{2}\cdot z)dz\tag{3.10}\]
\[=
\frac{n}{r}D_{4}(z_{0},r)+\frac{2\mu}{r}\int_{B_{r}(z_{0})}\widetilde{v}D\widetilde{v}\cdot zdz=\frac{n}{r}D_{4}(z_{0},r)+I_{2}.\]

Now we estimate \(\int_{B_{r}(z_{0})}\widetilde{u}^{2}dz\) and \(\int_{B_{r}(z_{0})}\widetilde{v}^{2}dz\). Let \(\widetilde{u}=\widetilde{u}_{1}+\widetilde{u}_{2}\), where \(\widetilde{u}_{1}\) is the harmonic function with \(\widetilde{u}_{1}=\widetilde{u}\) on \(\partial B_{r}(z_{0})\). Then, by Corollary 2.2.7 in [15], we have \[\int_{B_{r}(z_{0})}\widetilde{u}_{1}^{2}dz\leq\frac{r}{n}\int_{\partial B_{r}(z_{0})}\widetilde{u}_{1}^{2}d\sigma=\frac{r}{n}\int_{\partial B_{r}(z_{0})}\widetilde{u}^{2}d\sigma.\tag{3.11}\] Since \(\widetilde{u}_{2}=\widetilde{u}-\widetilde{u}_{1}\in W^{1,2}_{0}(B_{r}(z_{0}))\), from Poincaré's inequality, \[\int_{B_{r}(z_{0})}\widetilde{u}_{2}^{2}dz\leq Cr^{2}\int_{B_{r}(z_{0})}|D\widetilde{u}_{2}|^{2}dz\leq Cr^{2}\int_{B_{r}(z_{0})}|D\widetilde{u}|^{2}dz.\] So \[\int_{B_{r}(z_{0})}\widetilde{u}^{2}dz\leq 2\int_{B_{r}(z_{0})}(\widetilde{u}_{1}^{2}+\widetilde{u}_{2}^{2})dz\leq Cr\int_{\partial B_{r}(z_{0})}\widetilde{u}^{2}d\sigma+Cr^{2}\int_{B_{r}(z_{0})}|D\widetilde{u}|^{2}dz.\tag{3.12}\] By the similar argument applied to \(\widetilde{v}\), we also have \[\int_{B_{r}(z_{0})}\widetilde{v}^{2}dz\leq 2\int_{B_{r}(z_{0})}(\widetilde{v}_{1}^{2}+\widetilde{v}_{2}^{2})dz\leq Cr\int_{\partial B_{r}(z_{0})}\widetilde{v}^{2}d\sigma+Cr^{2}\int_{B_{r}(z_{0})}|D\widetilde{v}|^{2}dz.\tag{3.13}\] Thus \[\int_{B_{r}(z_{0})}(\widetilde{u}^{2}+\widetilde{v}^{2})dz\leq C\left(r\int_{\partial B_{r}(z_{0})}(\widetilde{u}^{2}+\widetilde{v}^{2})d\sigma+r^{2}\int_{B_{r}(z_{0})}(|D\widetilde{u}|^{2}+|D\widetilde{v}|^{2})dz\right)=Cr^{2}(D_{1}(z_{0},r)+D_{2}(z_{0},r))+CrH(z_{0},r).\tag{3.14}\] From (3.7) and (3.14), we have \[|I_{1}|=\frac{2}{r}\left|\int_{B_{r}(z_{0})}\widetilde{v}D\widetilde{u}\cdot zdz\right|\leq 2\int_{B_{r}(z_{0})}|\widetilde{v}||D\widetilde{u}|dz\leq\frac{1}{r}\int_{B_{r}(z_{0})}\widetilde{v}^{2}dz+r\int_{B_{r}(z_{0})}|D\widetilde{u}|^{2}dz\leq Cr\left(D_{1}(z_{0},r)+D_{2}(z_{0},r)\right)+CH(z_{0},r).\tag{3.15}\] So from (3.7)-(3.15), there holds \[D^{\prime}(z_{0},r)=\frac{n-2}{r}\left(D_{1}(z_{0},r)+D_{2}(z_{0},r)+D_{4}(z_{0},r)\right)+D_{3}^{\prime}(z_{0},r)+\frac{2}{r}D_{4}(z_{0},r)+\frac{2}{r}\int_{\partial B_{r}(z_{0})}\left(\widetilde{u}_{\nu}^{2}+\widetilde{v}_{\nu}^{2}\right)d\sigma-I_{1}-I_{2}+I_{2}\]
\[\geq\frac{n-2}{r}D(z_{0},r)+\frac{2}{r}\int_{\partial B_{r}(z_{0})}\left(\widetilde{u}_{\nu}^{2}+\widetilde{v}_{\nu}^{2}\right)d\sigma-Cr(D_{1}(z_{0},r)+D_{2}(z_{0},r))-CH(z_{0},r)-\frac{n-2}{r}|D_{3}(z_{0},r)|.\tag{3.16}\]
Next, we estimate the upper bound of the term \(|D_{3}(z_{0},r)|\). In fact, from (3.12) and (3.13), \[|D_{3}(z_{0},r)|=\left|\int_{B_{r}(z_{0})}\widetilde{u}\widetilde{v}dz\right|\leq\frac{1}{2}\left(\int_{B_{r}(z_{0})}\widetilde{u}^{2}dz+\int_{B_{r}(z_{0})}\widetilde{v}^{2}dz\right)\leq Cr^{2}(D_{1}(z_{0},r)+D_{2}(z_{0},r))+CrH(z_{0},r).\tag{3.17}\] For \(N(z_{0},r)\geq C_{0}\), \[H(z_{0},r)\leq\frac{r}{C_{0}}D(z_{0},r).\tag{3.18}\] Thus from (3.17) and (3.18), \[|D_{3}(z_{0},r)|\leq Cr^{2}(D_{1}(z_{0},r)+D_{2}(z_{0},r))+\frac{C}{C_{0}}r^{2}D(z_{0},r),\] provided that \(N(z_{0},r)\geq C_{0}\).
Then for any \(r\leq r_{0}\) with \(r_{0}\) small enough such that \(\frac{C}{C_{0}}r^{2}<\frac{1}{2}\), \[|D_{3}(z_{0},r)|\leq Cr^{2}(D(z_{0},r)+|D_{3}(z_{0},r)|)\leq Cr^{2}D(z_{0},r)+\frac{1}{2}|D_{3}(z_{0},r)|,\] which implies that \[|D_{3}(z_{0},r)|\leq Cr^{2}D(z_{0},r).\tag{3.19}\] By putting (3.19) into (3.16), we have \[\frac{D^{\prime}(z_{0},r)}{D(z_{0},r)}\geq\frac{n-2}{r}+\frac{2}{r}\frac{\int_{\partial B_{r}(z_{0})}(\widetilde{u}_{\nu}^{2}+\widetilde{v}_{\nu}^{2})d\sigma}{\int_{\partial B_{r}(z_{0})}(\widetilde{u}\widetilde{u}_{\nu}+\widetilde{v}\widetilde{v}_{\nu})d\sigma}-Cr.\tag{3.20}\] From the Cauchy inequality, there holds \[\int_{\partial B_{r}(z_{0})}(\widetilde{u}\widetilde{u}_{\nu}+\widetilde{v}\widetilde{v}_{\nu})d\sigma\leq\left(\int_{\partial B_{r}(z_{0})}(\widetilde{u}^{2}+\widetilde{v}^{2})d\sigma\right)^{\frac{1}{2}}\left(\int_{\partial B_{r}(z_{0})}(\widetilde{u}_{\nu}^{2}+\widetilde{v}_{\nu}^{2})d\sigma\right)^{\frac{1}{2}}.\tag{3.21}\] So \[\frac{\int_{\partial B_{r}(z_{0})}(\widetilde{u}_{\nu}^{2}+\widetilde{v}_{\nu}^{2})d\sigma}{\int_{\partial B_{r}(z_{0})}(\widetilde{u}\widetilde{u}_{\nu}+\widetilde{v}\widetilde{v}_{\nu})d\sigma}-\frac{\int_{\partial B_{r}(z_{0})}(\widetilde{u}\widetilde{u}_{\nu}+\widetilde{v}\widetilde{v}_{\nu})d\sigma}{\int_{\partial B_{r}(z_{0})}(\widetilde{u}^{2}+\widetilde{v}^{2})d\sigma}\geq 0.\tag{3.22}\] Then from the derivative of \(H(z_{0},r)\) and the direct calculation of \(N^{\prime}(z_{0},r)\), \[\frac{N^{\prime}(z_{0},r)}{N(z_{0},r)}=\frac{1}{r}+\frac{D^{\prime}(z_{0},r)}{D(z_{0},r)}-\frac{H^{\prime}(z_{0},r)}{H(z_{0},r)}\geq\frac{2}{r}\left(\frac{\int_{\partial B_{r}(z_{0})}(\widetilde{u}_{\nu}^{2}+\widetilde{v}_{\nu}^{2})d\sigma}{\int_{\partial B_{r}(z_{0})}(\widetilde{u}\widetilde{u}_{\nu}+\widetilde{v}\widetilde{v}_{\nu})d\sigma}-\frac{\int_{\partial B_{r}(z_{0})}(\widetilde{u}\widetilde{u}_{\nu}+\widetilde{v}\widetilde{v}_{\nu})d\sigma}{\int_{\partial B_{r}(z_{0})}(\widetilde{u}^{2}+\widetilde{v}^{2})d\sigma}\right)-Cr\geq-Cr,\tag{3.23}\] which is the desired result.

Such frequency functions also have a lower bound, as follows.

**Lemma 3.2**.: _There exists a positive constant \(r_{0}^{\prime}\) depending only on \(n\) and \(\Omega\), such that if \(r\leq r_{0}^{\prime}\), then_ \[N(z_{0},r)\geq-Cr^{2}.\tag{3.24}\] _Here \(C\) is a positive constant depending only on \(n\) and \(\Omega\)._

Proof.: From (3.5), we only need to estimate \(D_{3}(z_{0},r)\), since the other terms are all positive. From Hölder's inequality and the inequalities (3.12) and (3.13), \[|D_{3}(z_{0},r)|\leq\left(\int_{B_{r}(z_{0})}\widetilde{u}^{2}dz\right)^{1/2}\left(\int_{B_{r}(z_{0})}\widetilde{v}^{2}dz\right)^{1/2}\leq\frac{1}{2}\left(\int_{B_{r}(z_{0})}\widetilde{u}^{2}dz+\int_{B_{r}(z_{0})}\widetilde{v}^{2}dz\right)\leq Cr\left(H_{1}(z_{0},r)+H_{2}(z_{0},r)\right)+Cr^{2}\left(D_{1}(z_{0},r)+D_{2}(z_{0},r)\right).\tag{3.25}\] Here \(C\) is a positive constant depending only on \(n\).
Thus \[N(z_{0},r)\geq r\frac{D_{1}(z_{0},r)+D_{2}(z_{0},r)-|D_{3}(z_{0},r)|+D_{4}(z_{0},r)}{H_{1}(z_{0},r)+H_{2}(z_{0},r)}\geq r\frac{(1-Cr^{2})(D_{1}(z_{0},r)+D_{2}(z_{0},r))-Cr(H_{1}(z_{0},r)+H_{2}(z_{0},r))}{H_{1}(z_{0},r)+H_{2}(z_{0},r)}\geq-Cr^{2},\tag{3.26}\] provided that \(r>0\) is small enough such that \(1-Cr^{2}\geq 0\). This completes the proof.

From Lemma 3.1, i.e., the "almost monotonicity formula", we can get the following doubling conditions.

**Lemma 3.3**.: _Let \(r_{0}\) be the same positive constant as in Lemma 3.1. For \(z_{0}=(x_{0},0)\) and \(r<r_{0}\), it holds that_ \[\begin{cases}\fint_{\partial B_{r}(z_{0})}(\widetilde{u}^{2}+\widetilde{v}^{2})d\sigma\leq 2^{C(N(z_{0},r)+1)}\fint_{\partial B_{r/2}(z_{0})}(\widetilde{u}^{2}+\widetilde{v}^{2})d\sigma,\\ \fint_{B_{r}(z_{0})}(\widetilde{u}^{2}+\widetilde{v}^{2})dz\leq 2^{C(N(z_{0},r)+1)}\fint_{B_{r/2}(z_{0})}(\widetilde{u}^{2}+\widetilde{v}^{2})dz,\\ \fint_{\partial B_{r}(z_{0})}(\widetilde{u}^{2}+\widetilde{v}^{2})d\sigma\geq 2^{CN(z_{0},r/2)-C^{\prime}}\fint_{\partial B_{r/2}(z_{0})}(\widetilde{u}^{2}+\widetilde{v}^{2})d\sigma,\\ \fint_{B_{r}(z_{0})}(\widetilde{u}^{2}+\widetilde{v}^{2})dz\geq 2^{CN(z_{0},r/2)-C^{\prime}}\fint_{B_{r/2}(z_{0})}(\widetilde{u}^{2}+\widetilde{v}^{2})dz,\end{cases}\tag{3.27}\] _where \(C\) and \(C^{\prime}\) in different lines are different positive constants depending only on \(n\)._

Proof.: This is a direct result of integrating the quantity \(\frac{N^{\prime}(z_{0},r)}{N(z_{0},r)}\). From the calculation of \(H^{\prime}(z_{0},r)\) in the proof of Lemma 3.1, \[\frac{d}{dr}\left(\log H(z_{0},r)\right)=\frac{H^{\prime}(z_{0},r)}{H(z_{0},r)}=\frac{n-1}{r}+2\frac{\int_{\partial B_{r}(z_{0})}(\widetilde{u}\widetilde{u}_{\nu}+\widetilde{v}\widetilde{v}_{\nu})d\sigma}{\int_{\partial B_{r}(z_{0})}(\widetilde{u}^{2}+\widetilde{v}^{2})d\sigma}=\frac{n-1}{r}+2\frac{N(z_{0},r)}{r}.\tag{3.28}\] Thus \[\ln\frac{H(z_{0},r)}{H\left(z_{0},\frac{r}{2}\right)}=\int_{\frac{r}{2}}^{r}\frac{H^{\prime}(z_{0},\rho)}{H(z_{0},\rho)}d\rho=\int_{\frac{r}{2}}^{r}\frac{n-1+2N(z_{0},\rho)}{\rho}d\rho.\tag{3.29}\] From the monotonicity formula, we know that for any \(\rho<r\), \[N(z_{0},\rho)\leq C(N(z_{0},r)+1),\tag{3.30}\] for some \(C>0\) depending only on the dimension \(n\). Then \[\ln\frac{H(z_{0},r)}{H\left(z_{0},\frac{r}{2}\right)}\leq C(N(z_{0},r)+1),\] and then \[H(z_{0},r)\leq 2^{C(N(z_{0},r)+1)}H\left(z_{0},\frac{r}{2}\right),\tag{3.31}\] where \(C\) is a positive constant depending only on \(n\). This is the first inequality of this lemma. The second inequality of (3.27) can be obtained from the first one. Now we prove the third and the fourth inequalities. In fact, from the monotonicity formula, for any \(\rho\in(r/2,r)\), we have \[N(z_{0},\rho)\geq CN(z_{0},r/2)-C^{\prime}.\] Thus \[\ln\frac{H(z_{0},r)}{H\left(z_{0},\frac{r}{2}\right)}=\int_{\frac{r}{2}}^{r}\frac{n-1+2N(z_{0},\rho)}{\rho}d\rho\geq\int_{\frac{r}{2}}^{r}\frac{n-1+CN(z_{0},r/2)-C^{\prime}}{\rho}d\rho\geq CN(z_{0},r/2)-C^{\prime}.\] Then \[H(z_{0},r)\geq 2^{CN(z_{0},r/2)-C^{\prime}}H\left(z_{0},\frac{r}{2}\right),\tag{3.32}\] where \(C\) and \(C^{\prime}\) are positive constants depending only on \(n\). This is the third inequality. The fourth one can be derived by integrating the third one.
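To see what (3.29) encodes, it may help to consider the model situation where the frequency is constant, say \(N(z_{0},\rho)\equiv N_{0}\) for \(\rho\in(r/2,r)\). Then (3.29) integrates exactly to \[\ln\frac{H(z_{0},r)}{H\left(z_{0},\frac{r}{2}\right)}=\int_{\frac{r}{2}}^{r}\frac{n-1+2N_{0}}{\rho}d\rho=(n-1+2N_{0})\ln 2,\quad\text{i.e.}\quad H(z_{0},r)=2^{n-1+2N_{0}}H\left(z_{0},\frac{r}{2}\right),\] so \(H(z_{0},\cdot)\) grows like \(r^{n-1+2N_{0}}\), and the doubling exponent is, up to the dimensional constant \(n-1\), exactly twice the frequency. This is the heuristic behind the mutual control of the frequency \(N\) and the doubling index \(M\) in Lemma 3.6 below.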
**Remark 3.4**.: _By arguments similar to those in the proof of Lemma 3.3, we have, for \(0<r_{1}<r_{2}\leq r_{0}\),_ \[\begin{cases}\fint_{B_{r_{2}}(z_{0})}(\widetilde{u}^{2}+\widetilde{v}^{2})dz\leq\left(\frac{r_{2}}{r_{1}}\right)^{C(N(z_{0},r_{2})+1)}\fint_{B_{r_{1}}(z_{0})}(\widetilde{u}^{2}+\widetilde{v}^{2})dz,\\ \fint_{B_{r_{2}}(z_{0})}(\widetilde{u}^{2}+\widetilde{v}^{2})dz\geq\left(\frac{r_{2}}{r_{1}}\right)^{CN(z_{0},r_{1})-C^{\prime}}\fint_{B_{r_{1}}(z_{0})}(\widetilde{u}^{2}+\widetilde{v}^{2})dz.\end{cases}\tag{3.33}\]

Now we can establish the "changing center property".

**Lemma 3.5**.: _Let \(z_{1}\in B_{r/4}(z_{0})\) with \(z_{1}=(x_{1},0)\) and \(x_{1}\in\Omega\). Then for \(\rho\leq r/4\), we have_ \[N(z_{1},\rho)\leq C(N(z_{0},r)+1),\tag{3.34}\] _where \(C\) is a positive constant depending only on \(n\)._

Proof.: From (3.28), for \(\rho=\frac{r}{4}\) and any \(t\in\left(\frac{3\rho}{2},2\rho\right)\), we have \[\ln\frac{\fint_{\partial B_{t}(z_{1})}(\widetilde{u}^{2}+\widetilde{v}^{2})d\sigma}{\fint_{\partial B_{3\rho/2}(z_{1})}(\widetilde{u}^{2}+\widetilde{v}^{2})d\sigma}=\int_{3\rho/2}^{t}\frac{2N(z_{1},l)}{l}dl\geq-C\ln\frac{2t}{3\rho}\geq-C,\tag{3.35}\] which implies that \[\fint_{\partial B_{3\rho/2}(z_{1})}(\widetilde{u}^{2}+\widetilde{v}^{2})d\sigma\leq C\fint_{\partial B_{t}(z_{1})}(\widetilde{u}^{2}+\widetilde{v}^{2})d\sigma,\] for any \(t\in\left(\frac{3\rho}{2},2\rho\right)\). Then \[\fint_{\partial B_{3\rho/2}(z_{1})}(\widetilde{u}^{2}+\widetilde{v}^{2})d\sigma\leq C\fint_{B_{2\rho}(z_{1})\setminus B_{3\rho/2}(z_{1})}(\widetilde{u}^{2}+\widetilde{v}^{2})dz\leq C\fint_{B_{r}(z_{0})}(\widetilde{u}^{2}+\widetilde{v}^{2})dz.\tag{3.36}\] Here we have used the fact that \(B_{2\rho}(z_{1})\subseteq B_{r}(z_{0})\). By the similar argument as in the proof of (3.35), we have \[\ln\frac{\fint_{\partial B_{5\rho/4}(z_{1})}\left(\widetilde{u}^{2}+\widetilde{v}^{2}\right)d\sigma}{\fint_{\partial B_{t}(z_{1})}\left(\widetilde{u}^{2}+\widetilde{v}^{2}\right)d\sigma}\geq-C,\] for any \(t\in\left(0,\,\frac{5\rho}{4}\right)\). Then, because \(B_{\rho/4}(z_{0})\subseteq B_{5\rho/4}(z_{1})\), \[\fint_{\partial B_{5\rho/4}(z_{1})}\left(\widetilde{u}^{2}+\widetilde{v}^{2}\right)d\sigma\geq\frac{1}{C}\fint_{B_{5\rho/4}(z_{1})}\left(\widetilde{u}^{2}+\widetilde{v}^{2}\right)dz\geq\frac{1}{C}\fint_{B_{\rho/4}(z_{0})}\left(\widetilde{u}^{2}+\widetilde{v}^{2}\right)dz.\tag{3.37}\] From Lemma 3.3, we also have \[\fint_{B_{r}(z_{0})}\left(\widetilde{u}^{2}+\widetilde{v}^{2}\right)dz\leq 2^{C(N(z_{0},r)+1)}\fint_{B_{\rho/4}(z_{0})}\left(\widetilde{u}^{2}+\widetilde{v}^{2}\right)dz.\tag{3.38}\] So from (3.36), (3.37), and (3.38), \[N(z_{1},\rho)\leq C\ln\frac{\fint_{\partial B_{3\rho/2}(z_{1})}(\widetilde{u}^{2}+\widetilde{v}^{2})d\sigma}{\fint_{\partial B_{5\rho/4}(z_{1})}(\widetilde{u}^{2}+\widetilde{v}^{2})\,d\sigma}\leq C\ln\frac{C\fint_{B_{r}(z_{0})}(\widetilde{u}^{2}+\widetilde{v}^{2})dz}{C^{-1}\fint_{B_{\rho/4}(z_{0})}(\widetilde{u}^{2}+\widetilde{v}^{2})\,dz}\leq C(N(z_{0},r)+1).\] \(\Box\)

From the above lemmas and Sobolev's Embedding Theorem, we can derive the relationship between the frequency function and the doubling index.
**Lemma 3.6**.: _If \(\mu>0\) is large enough, there exist positive constants \(C\), \(c\), \(\widetilde{C}\) and \(\widetilde{c}\) depending only on \(n\), such that for any \(\eta\in(0,1/2)\),_ \[N(z_{0},r)\leq cM(z_{0},(\eta+1)r)+\widetilde{c}(1-\log_{2}\eta-\log_{2}r),\tag{3.39}\] _and_ \[M(z_{0},r)\leq CN(z_{0},(\eta+1)r)+\widetilde{C}(1-\log_{2}\eta-\log_{2}r),\tag{3.40}\] _with \(z_{0}=(x_{0},0)\)._

Proof.: First we give interior estimates of \(\widetilde{u}\) and \(\widetilde{v}\). Let \(B_{r}(z_{0})\subseteq\Omega_{R}\times\mathbb{R}\) be a fixed ball, and let \(\phi\) be a cut-off function of \(B_{r}(z_{0})\) such that \(\phi=1\) in \(B_{(1-\eta)r}(z_{0})\), \(\phi=0\) outside \(B_{r}(z_{0})\), and \(|D\phi|\leq\frac{C}{\eta r}\). Then by multiplying both sides of the first equation in (3.1) by \(\widetilde{u}\phi^{2}\) and integrating by parts, we have \[\int_{B_{r}(z_{0})}|D\widetilde{u}|^{2}\phi^{2}dz=-2\int_{B_{r}(z_{0})}\widetilde{u}\phi D\widetilde{u}D\phi dz-\int_{B_{r}(z_{0})}\widetilde{u}\widetilde{v}\phi^{2}dz\leq\frac{1}{2}\int_{B_{r}(z_{0})}|D\widetilde{u}|^{2}\phi^{2}dz+2\int_{B_{r}(z_{0})}\widetilde{u}^{2}|D\phi|^{2}dz+\frac{1}{2}\left(\int_{B_{r}(z_{0})}\widetilde{u}^{2}\phi^{2}dz+\int_{B_{r}(z_{0})}\widetilde{v}^{2}\phi^{2}dz\right).\] This implies that \[\|\widetilde{u}\|_{W^{1,2}(B_{(1-\eta)r}(z_{0}))}\leq C\left((\eta r)^{-1}\|\widetilde{u}\|_{L^{2}(B_{r}(z_{0}))}+\|\widetilde{v}\|_{L^{2}(B_{r}(z_{0}))}\right).\tag{3.41}\] Similarly, by multiplying both sides of the second equation in (3.1) by \(\widetilde{v}\phi^{2}\), we have \[\int_{B_{r}(z_{0})}|D\widetilde{v}|^{2}\phi^{2}dz=-2\int_{B_{r}(z_{0})}\widetilde{v}\phi D\widetilde{v}D\phi dz-\mu\int_{B_{r}(z_{0})}\widetilde{v}^{2}\phi^{2}dz\leq\frac{1}{2}\int_{B_{r}(z_{0})}|D\widetilde{v}|^{2}\phi^{2}dz+2\int_{B_{r}(z_{0})}\widetilde{v}^{2}|D\phi|^{2}dz.\tag{3.42}\] This implies that \[\|\widetilde{v}\|_{W^{1,2}(B_{(1-\eta)r}(z_{0}))}\leq\frac{C}{\eta r}\|\widetilde{v}\|_{L^{2}(B_{r}(z_{0}))},\] and then \[\|\widetilde{v}\|_{W^{k,2}(B_{(1-\eta)r}(z_{0}))}\leq\frac{C}{\eta r}\|\widetilde{v}\|_{W^{k-1,2}(B_{r}(z_{0}))}.\] So by the iteration argument and Sobolev's Embedding Theorem, for any \(B_{r}(z_{0})\subseteq\Omega_{R}\times\mathbb{R}\), \[\|\widetilde{u}\|_{L^{\infty}(B_{(1-\eta)r}(z_{0}))}+\|\widetilde{v}\|_{L^{\infty}(B_{(1-\eta)r}(z_{0}))}\leq\frac{C}{(\eta r)^{\frac{n+2}{2}}}\left(\|\widetilde{u}\|_{L^{2}(B_{r}(z_{0}))}+\|\widetilde{v}\|_{L^{2}(B_{r}(z_{0}))}\right).\tag{3.43}\] Thus from Lemma 3.3 and Remark 3.4, we have \[M(z_{0},r)=\frac{1}{2}\log_{2}\frac{\|\widetilde{u}\|^{2}_{L^{\infty}(B_{r}(z_{0}))}+\|\widetilde{v}\|^{2}_{L^{\infty}(B_{r}(z_{0}))}}{\|\widetilde{u}\|^{2}_{L^{\infty}(B_{r/2}(z_{0}))}+\|\widetilde{v}\|^{2}_{L^{\infty}(B_{r/2}(z_{0}))}}\leq C\left(-\log_{2}\eta-\log_{2}r\right)+\frac{1}{2}\log_{2}\frac{\|\widetilde{u}\|^{2}_{L^{2}(B_{(\eta+1)r}(z_{0}))}+\|\widetilde{v}\|^{2}_{L^{2}(B_{(\eta+1)r}(z_{0}))}}{\|\widetilde{u}\|^{2}_{L^{2}(B_{r/2}(z_{0}))}+\|\widetilde{v}\|^{2}_{L^{2}(B_{r/2}(z_{0}))}}\leq C\left(-\log_{2}\eta-\log_{2}r\right)+C(N(z_{0},(\eta+1)r)+1),\tag{3.44}\] which is the inequality (3.40). Inequality (3.39) can be obtained by similar arguments.
In fact, from Lemma 3.3 again, we have \[M(z_{0},(1+\eta)r) = \frac{1}{2}\log_{2}\frac{\|\widetilde{u}\|_{L^{\infty}(B_{(1+\eta)r}(z_{0}))}^{2}+\|\widetilde{v}\|_{L^{\infty}(B_{(1+\eta)r}(z_{0}))}^{2}}{\|\widetilde{u}\|_{L^{\infty}(B_{(1+\eta)r/2}(z_{0}))}^{2}+\|\widetilde{v}\|_{L^{\infty}(B_{(1+\eta)r/2}(z_{0}))}^{2}} \tag{3.45}\] \[\geq -C\left(-\log_{2}\eta-\log_{2}r\right)+\frac{1}{2}\log_{2}\frac{\|\widetilde{u}\|_{L^{2}(B_{(1+\eta)r}(z_{0}))}^{2}+\|\widetilde{v}\|_{L^{2}(B_{(1+\eta)r}(z_{0}))}^{2}}{\|\widetilde{u}\|_{L^{2}(B_{(1+\eta)r/2}(z_{0}))}^{2}+\|\widetilde{v}\|_{L^{2}(B_{(1+\eta)r/2}(z_{0}))}^{2}}\] \[\geq -C\left(-\log_{2}\eta-\log_{2}r\right)+CN(z_{0},r/2)-C^{\prime}.\] Then the first inequality of this Lemma is obtained. Now we are ready to give an upper bound for the frequency function and the doubling index. **Lemma 3.7**.: _There exist positive constants \(C\) and \(R_{0}\) depending only on \(n\) and \(\Omega\), such that for any \(z_{0}=(x_{0},0)\) with \(x_{0}\in\Omega\) and \(r\leq R_{0}/2\), it holds that_ \[N(z_{0},r)\leq C\sqrt{\mu}, \tag{3.46}\] _provided that \(B_{r}(x_{0})\subseteq\Omega_{R}\) and \(\mu>0\) is large enough._ Proof.: Without loss of generality, assume that \(\|u\|_{L^{2}(\Omega)}=1\). Then from Lemma 2.6 and the relationship between \(u\) and \(\widetilde{u}\), we have \[\|\widetilde{u}\|_{L^{\infty}(\Omega\times(-R,R))}\leq e^{C\sqrt{\mu}R}\|u\|_ {L^{\infty}(\Omega)}\leq e^{C(\sqrt{\mu}+\Lambda^{1/4})R}\|u\|_{L^{2}(\Omega)}, \tag{3.47}\] where \(R\) is the same positive constant as in Lemma 2.6. From the proof of Lemma 2.6 and the relationship between \(u\) and \(\widetilde{u}\) again, \[\|\widetilde{v}\|_{L^{\infty}(\Omega\times(-R,R))}\leq C\|\widetilde{u}\|_{W^ {2,\infty}(\Omega\times(-R,R))}\leq e^{C\sqrt{\mu}R}\|u\|_{W^{2,\infty}(\Omega )}\leq e^{C(\sqrt{\mu}+\Lambda^{1/4})R}\|u\|_{L^{2}(\Omega)}, \tag{3.48}\] where the constants \(C\) in different terms are different positive constants depending only on \(n\) and \(\Omega\). So \[\|\widetilde{u}\|_{L^{\infty}(\Omega\times(-R,R))}^{2}+\|\widetilde{v}\|_{L^ {\infty}(\Omega\times(-R,R))}^{2}\leq e^{C(\sqrt{\mu}+\Lambda^{1/4})R}. \tag{3.49}\] Let \(\bar{x}\) be the maximum point of \(u\) in \(\overline{\Omega}\) and \(\bar{z}=(\bar{x},0)\). Since \(\|u\|_{L^{2}(\Omega)}=1\), there holds \[|u(\bar{x})|=\|u\|_{L^{\infty}(\Omega)}\geq\frac{\|u\|_{L^{2}(\Omega)}}{\sqrt {|\Omega|}}=|\Omega|^{-\frac{1}{2}}. \tag{3.50}\] Here \(|\Omega|\) means the \(n\) dimensional Hausdorff measure of \(\Omega\). Then for any \(r<R\), from (3.49), \[M(\bar{z},r)=\frac{1}{2}\log_{2}\frac{\|\widetilde{u}\|_{L^{\infty}(B_{r}( \bar{z}))}^{2}+\|\widetilde{v}\|_{L^{\infty}(B_{r}(\bar{z}))}^{2}}{\|\widetilde {u}\|_{L^{\infty}(B_{r/2}(\bar{z}))}^{2}+\|\widetilde{v}\|_{L^{\infty}(B_{r/2}( \bar{z}))}^{2}}\leq\frac{1}{2}\log_{2}\frac{e^{C(\sqrt{\mu}+\Lambda^{1/4})R}} {u^{2}(\bar{x})}\leq C(\sqrt{\mu}+\Lambda^{1/4}), \tag{3.51}\] where \(C\) is a positive constant depending on \(n\), \(\Omega\) and \(R\). In the first inequality above we have also used the assumption that \(\mu>0\) is large enough. Then by Lemma 3.6 with \(\eta=\frac{1}{4}\), and noting that \(\widetilde{u}(x,x_{n+1})=u(x)e^{\sqrt{\frac{\lambda+\mu}{2}}x_{n+1}}\) and \(\widetilde{v}(x,x_{n+1})=\left(\triangle u(x)+\frac{\lambda+\mu}{2}u(x) \right)e^{\sqrt{\frac{\lambda+\mu}{2}}x_{n+1}}\), we obtain, with \(R_{0}=\min\left\{r_{0},R/4\right\}\), that \(N(\bar{z},R_{0})\leq C\sqrt{\mu}\) with \(\bar{z}=(\bar{x},0)\), provided that \(\mu>0\) is large enough. 
Then from Lemma 3.5 and Lemma 3.6, \(N\left(z,\frac{R_{0}}{4}\right),M\left(z,\frac{R_{0}}{4}\right)\leq C(\sqrt{ \mu}+\Lambda^{1/4})\), where \(z\in B_{\frac{R_{0}}{2}}(\bar{z})\) with \(z=(x,0)\) and \(x\in\Omega\). So \(\|\widetilde{u}\|_{L^{\infty}(B_{\frac{R_{0}}{2}}(z))}\geq e^{-C(\sqrt{\mu}+ \Lambda^{1/4})}\). This implies that \(M(z,R_{0})\leq C\sqrt{\mu}\) for the above \(z\). By repeating this argument for finitely many steps, where the number of steps depends only on \(\Omega\), \(R\) and \(R_{0}\), we have that for any \(z=(x,0)\) with \(x\in\Omega\), \(M(z,R_{0})\leq C(\sqrt{\mu}+\Lambda^{1/4})\). Then by the fact that \(\Lambda=\left(\frac{\mu}{2}\right)^{2}\) and Lemma 3.6 again, it holds that \(N(z,2R_{0}/3)\leq C(\sqrt{\mu}+\Lambda^{1/4})\leq C\sqrt{\mu}\). By the inequality (3.30), \[N(z,r)\leq C\sqrt{\mu},\quad\forall\ r\leq\frac{R_{0}}{2}. \tag{3.52}\] This completes the proof. \(\Box\) Now we arrive at proving Theorem 1.1. **Proof of Theorem 1.1:** Without loss of generality, assume that \(z_{0}=(0,0)\). Let \(m\) and \(l\) be the vanishing orders of \(\widetilde{u}\) and \(\widetilde{v}=\triangle\widetilde{u}\) at the origin \((0,0)\), respectively. Recalling the definition of the vanishing order, we have \[\begin{cases}D^{\alpha}\widetilde{u}(0)=0,\ \ for\ any\ \ |\alpha|<m,\quad D^{ \alpha}\widetilde{u}(0)\neq 0\ \ for\ some\ \ |\alpha|=m;\\ D^{\alpha}\widetilde{v}(0)=0,\ \ for\ any\ \ |\alpha|<l,\quad D^{\alpha} \widetilde{v}(0)\neq 0\ \ for\ some\ \ |\alpha|=l.\end{cases} \tag{3.53}\] Thus for \(r>0\) small enough, we can rewrite \(\widetilde{u}\) and \(\widetilde{v}\) as follows: \[\begin{cases}\widetilde{u}(z)=r^{m}\phi(\theta)+o(r^{m}),\\ \widetilde{v}(z)=r^{l}\psi(\theta)+o(r^{l}).\end{cases} \tag{3.54}\] Here \(r=|z|\), \((r,\theta)\) are the spherical coordinates of \(z\), and \(\phi\) and \(\psi\) are analytic functions of \(\theta\). Now we claim that \[\lim_{r\to 0+}N(0,r)=\min\left\{m,l\right\}. \tag{3.55}\] In fact, \[\lim_{r\to 0+}N(0,r) = \lim_{r\to 0+}r\frac{\int_{\partial B_{r}(0)}(\widetilde{u} \widetilde{u}_{\nu}+\widetilde{v}\widetilde{v}_{\nu})d\sigma}{\int_{\partial B_{r}(0)}( \widetilde{u}^{2}+\widetilde{v}^{2})d\sigma} \tag{3.56}\] \[= \lim_{r\to 0+}\frac{\int_{\partial B_{r}(0)}(mr^{2m}\phi^{2}( \theta)+lr^{2l}\psi^{2}(\theta)+o(r^{2m})+o(r^{2l}))d\sigma}{\int_{\partial B_ {r}(0)}(r^{2m}\phi^{2}(\theta)+r^{2l}\psi^{2}(\theta)+o(r^{2m})+o(r^{2l}))d\sigma}\] \[= \min\left\{m,l\right\}.\] From Lemma 3.7, we have \(\min\left\{m,l\right\}\leq C\sqrt{\mu}\). This means that the vanishing order of \(\widetilde{u}\) is less than or equal to \(C\sqrt{\mu}\), since it is observed that \(m\leq l+2\). Then from the relationship between \(\widetilde{u}\) and \(u\), i.e., \(\widetilde{u}(x,x_{n+1})=u(x)e^{\sqrt{\frac{\lambda+\mu}{2}}x_{n+1}}\) with \(\mu=\sqrt{\lambda^{2}+4k^{2}}\), the conclusion of Theorem 1.1 is obtained. ## 4 Measure estimate for the nodal set The doubling estimates in the above section are established for \(\|\widetilde{u}\|_{L^{2}}^{2}+\|\widetilde{v}\|_{L^{2}}^{2}\). We will give below a new doubling estimate for \(\|\widetilde{u}\|_{L^{2}}^{2}\) alone. 
**Lemma 4.1**.: _There exist positive constants \(\bar{r}\), \(C_{1}\), and \(C_{2}\) depending only on \(n\), such that for any \(r\leq\bar{r}/2\), \(\eta\in\left(0,\frac{1}{3}\right)\), and \(x_{0}\in\Omega\) with \(B_{r}(x_{0})\subseteq\Omega_{R}\),_ \[\int_{B_{(1+\eta)r}(z_{0})}\widetilde{u}^{2}dz\leq C_{2}(1+3\eta)^{ C_{1}\sqrt{\mu}}\left(\mu^{2}+\eta^{-4}r^{-4}\right)\int_{B_{r}(z_{0})} \widetilde{u}^{2}dz, \tag{4.1}\] _where \(z_{0}=(x_{0},0)\)._ Proof.: From Lemma 3.3 and Lemma 3.7, \[\int_{B_{(1+\eta)r}(z_{0})}\widetilde{u}^{2}dz\leq\int_{B_{(1+\eta)r}(z_{0})}\left(\widetilde{u}^{2}+\widetilde{v}^{2}\right)dz\leq\left(\frac{1+ \eta}{1-\eta}\right)^{C_{1}\sqrt{\mu}}\int_{B_{(1-\eta)r}(z_{0})}\left( \widetilde{u}^{2}+\widetilde{v}^{2}\right)dz. \tag{4.2}\] By the same argument as in the proof of Lemma 2.3, it holds that \[\|\widetilde{v}\|_{L^{2}(B_{(1-\eta)r}(z_{0}))}\leq C(\mu+r^{-2}\eta^{-2})\| \widetilde{u}\|_{L^{2}(B_{r}(z_{0}))}. \tag{4.3}\] Then we have \[\int_{B_{(1+\eta)r}(z_{0})}\widetilde{u}^{2}dz \leq \left(\frac{1+\eta}{1-\eta}\right)^{C_{1}(\sqrt{\mu}+1)}\int_{B_ {(1-\eta)r}(z_{0})}(\widetilde{u}^{2}+\widetilde{v}^{2})dz \tag{4.4}\] \[\leq \left(\frac{1+\eta}{1-\eta}\right)^{C_{1}(\sqrt{\mu}+1)}C_{2}( \mu^{2}+\eta^{-4}r^{-4})\int_{B_{r}(z_{0})}\widetilde{u}^{2}dz\] \[\leq (1+3\eta)^{C_{1}\sqrt{\mu}}C_{2}\left(\mu^{2}+\eta^{-4}r^{-4 }\right)\int_{B_{r}(z_{0})}\widetilde{u}^{2}dz,\] which is the desired result. **Remark 4.2**.: _From the relationship between \(\widetilde{u}\) and \(u\), one can obtain that for any \(\eta\in\left(0,\frac{1}{3}\right)\),_ \[\int_{B_{(1+\eta)r}(x_{0})}u^{2}dx\leq(1+3\eta)^{C\sqrt{\mu}}C\left(\mu^{ 2}+\eta^{-4}r^{-4}\right)\int_{B_{r}(x_{0})}u^{2}dx, \tag{4.5}\] _where \(B_{(1+\eta)r}(x_{0})\subseteq\Omega\), and \(C\) is a positive constant depending only on \(n\)._ To get the measure estimate of the nodal set of \(u\), we also need the following lemma, which can be found in [9]. **Lemma 4.3**.: _Let \(f:\ B_{1}\subseteq\mathbb{C}\rightarrow\mathbb{C}\) be an analytic function with \(|f(0)|=1\) and \(\sup_{B_{1}}|f|\leq 2^{K}\) for some positive constant \(K\). Then for any \(r\in(0,1)\), the number of zero points of \(f\) in \(B_{r}(0)\) is less than or equal to \(CK\), where \(C\) is a positive constant depending only on \(r\)._ **Remark 4.4**.: _In this lemma, it is obvious that the domain \(B_{1}\) is not essential. If one changes \(B_{1}\) into \(B_{t}\) for any fixed positive constant \(t\), then the conclusion still holds._ From the new doubling condition in this section, Lemma 4.3, and the integral geometric formula, which can be found in [21], we can estimate the measure upper bound for the nodal set of \(u\) in \(\Omega\). **Proof of Theorem 1.2:** Let \(x_{0}\) be a point in \(\Omega\) and \(z_{0}=(x_{0},0)\). Then from Lemma 3.7, \(N(z_{0},R_{0})\leq C\sqrt{\mu}\), and \(N(z,R_{0}/2)\leq C\sqrt{\mu}\) for any \(z=(x,0)\) with \(x\in B_{\frac{R_{0}}{4}}(x_{0})\). Here \(R_{0}\) is a positive constant depending only on \(n\) and \(\Omega\). Without loss of generality, let \(\|\widetilde{u}\|_{L^{2}(B_{R_{0}/4}(z_{0}))}=1\). 
Then from Lemma 4.1, for any \(z\in B_{\frac{R_{0}}{4}}(z_{0})\), \[\int_{B_{\frac{R_{0}}{16}}(z)}\widetilde{u}^{2}dz \geq 2^{-C(\sqrt{\mu}+1)}\int_{B_{\frac{R_{0}}{2}}(z)}\widetilde{u}^{ 2}dz \geq 2^{-C(\sqrt{\mu}+1)}\int_{B_{\frac{R_{0}}{4}}(z_{0})}\widetilde{ u}^{2}dz = 2^{-C(\sqrt{\mu}+1)}. \tag{4.6}\] So there exists some point \(p_{z}\in B_{\frac{R_{0}}{16}}(z)\) such that \(|\widetilde{u}(p_{z})|\geq 2^{-C\sqrt{\mu}}\), since otherwise \[\int_{B_{\frac{R_{0}}{16}}(z)}\widetilde{u}^{2}dz\leq|B_{R_{0}/16}(z)|2^{-2C \sqrt{\mu}}=CR_{0}^{n+1}2^{-2C\sqrt{\mu}}. \tag{4.7}\] This is a contradiction to (4.6), provided that \(R_{0}\) is small enough. Now choose \(z_{j}\in\partial B_{\frac{R_{0}}{4}}(z_{0})\) on the \(x_{j}\) axis, \(j=1,2,\cdots,n+1\). Then for any \(j\in\left\{1,2,\cdots,n+1\right\},\) there exists \(p_{z_{j}}\in B_{\frac{R_{0}}{16}}(z_{j})\) such that \(|\widetilde{u}(p_{z_{j}})|\geq 2^{-C\sqrt{\mu}}\). On the other hand, from the interior estimates, we also have that \(\|\widetilde{u}\|_{L^{\infty}(B_{\frac{R_{0}}{4}}(z_{0}))}\leq 2^{C(\sqrt{\mu}+1)}\). Define \(f_{j}(w;t)=\widetilde{u}(p_{z_{j}}+tR_{0}w)\) for \(t\in\left(-\frac{5}{16},\frac{5}{16}\right)\), where \(w\) belongs to the \(n\) dimensional unit sphere. Because each \(f_{j}\) is analytic in \(t\), we can extend it to an analytic function \(f_{j}(w;t+i\tau)\) for \(|t|<\frac{5}{16}\) and \(|\tau|\leq c\), where \(c\) is a positive constant depending only on \(n\) and \(\Omega\). Then from Lemma 4.3, \[\mathcal{H}^{0}\left\{|t|<\frac{5}{16}\bigm{|}\widetilde{u}(p_{z_{j}}+tR_{0}w) =0\right\}\leq C\sqrt{\mu}.\] Here \(\mathcal{H}^{0}\) is the counting measure. Thus from the integral geometric formula in [15] and [21], \[\mathcal{H}^{n}\left(\left\{z\in B_{\frac{R_{0}}{32}}(z_{0})\bigm{|}\widetilde {u}(z)=0\right\}\right)\leq C\sqrt{\mu}R_{0}^{n}.\] Because \(\widetilde{u}(z)=\widetilde{u}(x,x_{n+1})=u(x)e^{\sqrt{\frac{\lambda+\mu}{2}}x_{n+1}}\), and the function \(e^{\sqrt{\frac{\lambda+\mu}{2}}x_{n+1}}\) is always positive, \[\mathcal{H}^{n-1}\left(\left\{x\in B_{\frac{R_{0}}{64}}(x_{0})\;\big{|}\;u(x)=0 \right\}\right)\leq\frac{C}{R_{0}}\mathcal{H}^{n}\left(\left\{z\in B_{\frac{R_ {0}}{32}}(z_{0})\;\big{|}\;\widetilde{u}(z)=0\right\}\right)\leq C\sqrt{\mu}R _{0}^{n-1}.\] Then by covering \(\Omega\) with finitely many balls whose radii are \(\frac{R_{0}}{64}\), we have \[\mathcal{H}^{n-1}\left(\left\{x\in\Omega\;\big{|}\;u(x)=0\right\}\right)\leq C \sqrt{\mu}R_{0}^{-1}\leq C^{\prime}\sqrt{\mu}, \tag{4.8}\] which is the desired result. ## 5 Propagation of smallness In this section, we discuss the propagation of smallness of \(u\), i.e., we prove Theorem 1.3. Here we do not assume that \(\partial\Omega\) is analytic; the frequency function and the doubling index are defined only inside \(\Omega\). We first need the three sphere inequality below. **Lemma 5.1**.: _Let \(\widetilde{u}\) and \(\widetilde{v}\) satisfy (3.1), and let \(r_{0}\) be the same positive constant as in Lemma 3.1. 
Then for any \(r_{1}<r_{2}<r_{3}<r_{0}\) and \(z_{0}=(x_{0},0)\) with \(x_{0}\in\Omega\) and \(B_{r_{0}}(x_{0})\subseteq\Omega\), we have_ \[\begin{cases}\|\widetilde{u}\|_{L^{2}(B_{r_{2}}(z_{0}))}^{2}+\| \widetilde{v}\|_{L^{2}(B_{r_{2}}(z_{0}))}^{2}\leq Q(\alpha)\left(\|\widetilde{u} \|_{L^{2}(B_{r_{1}}(z_{0}))}^{2}+\|\widetilde{v}\|_{L^{2}(B_{r_{1}}(z_{0}))}^{2} \right)^{\alpha}\left(\|\widetilde{u}\|_{L^{2}(B_{r_{3}}(z_{0}))}^{2}+\| \widetilde{v}\|_{L^{2}(B_{r_{3}}(z_{0}))}^{2}\right)^{1-\alpha},\\ \|\widetilde{u}\|_{L^{2}(B_{r_{2}}(z_{0}))}\leq P(\beta)\|\widetilde{u}\|_{L^{2}(B _{r_{1}}(z_{0}))}^{\beta}\|\widetilde{u}\|_{L^{2}(B_{r_{3}}(z_{0}))}^{1-\beta}, \end{cases} \tag{5.1}\] _where_ \[\begin{split}& Q(\alpha)=\frac{(r_{2}/r_{1})^{\alpha}}{(r_{3}/r_ {2})^{1-\alpha}}\left(\frac{r_{2}}{r_{1}}\right)^{\frac{C_{2}}{\alpha}},\\ &\alpha=\frac{\ln(r_{2}/r_{1})}{\ln(r_{2}/r_{1})+C_{1}\,\ln(r_{3}/r_ {2})}\in(0,1),\end{split}\] \[P(\beta)=C(\mu+r_{1}^{-2})^{\beta}(\mu+(r_{3}-r_{2})^{-2})^{1- \beta}\frac{(2r_{2}/r_{1})^{\beta}}{((r_{3}+r_{2})/(2r_{2}))^{1-\beta}}\left( \frac{2r_{2}}{r_{1}}\right)^{\frac{C_{2}}{\beta}},\] _and_ \[\beta=\frac{\ln(2r_{2}/r_{1})}{\ln(2r_{2}/r_{1})+C_{1}\ln((r_{3}+r_{2})/(2r_{ 2}))}\in(0,1).\] _Here \(C\), \(C_{1}\), and \(C_{2}\) are positive constants depending only on \(n\)._ Proof.: Since \(\partial\Omega\) is analytic, the conclusion of Lemma 3.1 also holds when \(z_{0}=(x_{0},0)\) with \(B_{r}(x_{0})\subseteq\Omega_{R}=\{x\mid dist(x,\Omega)<R\}\) with \(R<r_{0}\), where \(r_{0}\) is the same positive constant as in Lemma 3.1. So from Lemma 3.1 and the definition of the frequency function, we have \[\ln\frac{H(z_{0},r_{2})}{H(z_{0},r_{1})} = \int_{r_{1}}^{r_{2}}\frac{H^{\prime}(z_{0},r)}{H(z_{0},r)}dr=(n-1) \ln\frac{r_{2}}{r_{1}}+2\int_{r_{1}}^{r_{2}}\frac{N(z_{0},r)}{r}dr \tag{5.2}\] \[\leq (n-1)\ln\frac{r_{2}}{r_{1}}+C\left(N(z_{0},r_{2})+C_{0}\right)\ln \frac{r_{2}}{r_{1}},\] and \[\ln\frac{H(z_{0},r_{3})}{H(z_{0},r_{2})} = \int_{r_{2}}^{r_{3}}\frac{H^{\prime}(z_{0},r)}{H(z_{0},r)}dr=(n-1 )\ln\frac{r_{3}}{r_{2}}+2\int_{r_{2}}^{r_{3}}\frac{N(z_{0},r)}{r}dr \tag{5.3}\] \[\geq (n-1)\ln\frac{r_{3}}{r_{2}}+C^{-1}(N(z_{0},r_{2})-C_{0})\ln\frac{ r_{3}}{r_{2}}.\] Thus we obtain the three sphere inequality of \(H(z_{0},r)\): \[H(z_{0},r_{2})\leq Q^{\prime}(\alpha)H(z_{0},r_{1})^{\alpha}H(z_{0},r_{3})^{1- \alpha}. \tag{5.4}\] Here \(\alpha=\frac{\ln(r_{2}/r_{1})}{\ln(r_{2}/r_{1})+C_{1}\ln(r_{3}/r_{2})}\), \(Q^{\prime}(\alpha)=\left(\frac{r_{2}}{r_{1}}\right)^{\frac{C_{2}}{\alpha}}\), and \(C_{1}\) and \(C_{2}\) are positive constants depending only on \(n\). By integrating \(H(z_{0},r)\), we have \[\|\widetilde{u}\|_{L^{2}(B_{r_{2}}(z_{0}))}^{2}+\|\widetilde{v}\|_{L^{2}(B_{r_{ 2}}(z_{0}))}^{2}\leq Q(\alpha)\left(\|\widetilde{u}\|_{L^{2}(B_{r_{1}}( z_{0}))}^{2}+\|\widetilde{v}\|_{L^{2}(B_{r_{1}}(z_{0}))}^{2}\right)^{\alpha} \left(\|\widetilde{u}\|_{L^{2}(B_{r_{3}}(z_{0}))}^{2}+\|\widetilde{v}\|_{L^{2}( B_{r_{3}}(z_{0}))}^{2}\right)^{1-\alpha}, \tag{5.5}\] where \(Q(\alpha)=Q^{\prime}(\alpha)\frac{(r_{2}/r_{1})^{\alpha}}{(r_{3}/r_{2})^{1- \alpha}}\). This is the first inequality of this Lemma. The second inequality comes from Lemma 2.3 and the first inequality by replacing \(r_{1}\) with \(r_{1}/2\) and \(r_{3}\) with \((r_{2}+r_{3})/2\). 
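For orientation, consider the first inequality of (5.1) in the simple case of geometrically spaced radii \(r_{1}=r\), \(r_{2}=2r\), \(r_{3}=4r<r_{0}\); this worked instance is only an illustration, with \(C_{1}\) and \(C_{2}\) the constants of Lemma 5.1. Then \(\ln(r_{2}/r_{1})=\ln(r_{3}/r_{2})=\ln 2\), so \[\alpha=\frac{\ln 2}{\ln 2+C_{1}\ln 2}=\frac{1}{1+C_{1}},\qquad Q(\alpha)=\frac{2^{\alpha}}{2^{1-\alpha}}\cdot 2^{C_{2}(1+C_{1})},\] and the three sphere inequality reads \[\|\widetilde{u}\|_{L^{2}(B_{2r}(z_{0}))}^{2}+\|\widetilde{v}\|_{L^{2}(B_{2r}(z_{0}))}^{2}\leq Q(\alpha)\left(\|\widetilde{u}\|_{L^{2}(B_{r}(z_{0}))}^{2}+\|\widetilde{v}\|_{L^{2}(B_{r}(z_{0}))}^{2}\right)^{\frac{1}{1+C_{1}}}\left(\|\widetilde{u}\|_{L^{2}(B_{4r}(z_{0}))}^{2}+\|\widetilde{v}\|_{L^{2}(B_{4r}(z_{0}))}^{2}\right)^{\frac{C_{1}}{1+C_{1}}}.\] In particular, for such radii the interpolation exponent \(\alpha\) depends only on \(n\) and stays bounded away from \(0\) and \(1\) uniformly in \(r\), which is what drives the chain-of-balls argument of Lemma 5.3 below.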
**Remark 5.2**.: _The following three sphere inequality of \(u\) can also be obtained by Lemma 5.1 and the relationship between \(u\) and \(\widetilde{u}\)._ \[\|u\|_{L^{2}(B_{r_{2}}(x_{0}))}\leq S(\theta)e^{C_{3}\sqrt{\mu}r_{0}}\|u\|_{L^ {2}(B_{r_{1}}(x_{0}))}^{\theta}\|u\|_{L^{2}(B_{r_{3}}(x_{0}))}^{1-\theta}, \tag{5.6}\] _where_ \[S(\theta)=C(\mu+r_{1}^{-2})^{\theta}(\mu+(r_{3}-2r_{2}+r_{1})^{-2})^{1-\theta }\frac{((4r_{2}-2r_{1})/r_{1})^{\theta}}{((r_{3}+2r_{2}-r_{1})/(4r_{2}-2r_{1} ))^{1-\theta}}\left(\frac{4r_{2}-2r_{1}}{r_{1}}\right)^{\frac{C_{2}}{\theta}},\] _and_ \[\theta=\frac{\ln((4r_{2}-2r_{1})/r_{1})}{\ln((4r_{2}-2r_{1})/r_{1})+C_{1}\ln((r_{3 }+2r_{2}-r_{1})/(4r_{2}-2r_{1}))}.\] _Here \(C\), \(C_{1}\) and \(C_{2}\) are positive constants depending only on \(n\)._ By the above three sphere inequality, we can prove the propagation of the smallness property of \(u\) from some ball \(B_{r}(x_{0})\) to a subset \(G\subset\subset\Omega\) as follows. **Lemma 5.3**.: _Let \(u\) solve (1.1), \(G\) be a connected open set, \(G\subset\subset\Omega\), and \(x_{0}\in\Omega\). Assume that_ \[\|u\|_{L^{\infty}(B_{r}(x_{0}))}\leq\eta,\quad\|u\|_{L^{\infty}( \Omega)}\leq 1, \tag{5.7}\] _where \(r<dist(G,\partial\Omega)\). Then we have_ \[\|u\|_{L^{\infty}(G)}\leq e^{C_{1}(\sqrt{\mu}r-\ln r)}\eta^{ \delta}, \tag{5.8}\] _with \(\delta=e^{\frac{-C_{2}diam(\Omega)}{r}}\). Here \(C_{1}\) and \(C_{2}\) are positive constants depending only on \(n\)._ Proof.: For any \(h>0\), let \(G^{h}\) be the \(h\)-neighborhood of \(G\), i.e., \(G^{h}=\left\{x\in\Omega\ \big{|}\ dist(x,G)<h\right\}\). We also fix \(r_{3}=\frac{r}{2}\), \(r_{2}=\frac{r_{3}}{2}\) and \(r_{1}=\frac{r_{2}}{3}\). Now we consider the set \(G^{r_{1}}\). For any \(y_{0}\in G^{r_{1}}\), there exists a continuous path \(\gamma\) from \([0,1]\) to \(\Omega\) such that \(\gamma(0)=x_{0}\) and \(\gamma(1)=y_{0}\). Let \(0=t_{0}<t_{1}<t_{2}<\cdots<t_{K}=1\) be such that \(x_{k}=\gamma(t_{k})\) and \(t_{k+1}=\max\{t\ |\ |\gamma(t)-x_{k}|=2r_{1}\}\) if \(|x_{k}-y_{0}|>2r_{1}\); otherwise we stop the process and set \(K=k+1\) and \(t_{K}=1\). Then \(\{B_{r_{1}}(x_{k})\}\) are mutually disjoint balls, \(|x_{k+1}-x_{k}|=2r_{1}\) for any \(k=0,1,2,\cdots,K-1\), and \(B_{r_{1}}(x_{k+1})\subseteq B_{r_{2}}(x_{k})\) for \(k=0,1,2,\cdots,K-1\), since \(r_{1}=\frac{r_{2}}{3}\). From the first inequality of Lemma 5.1, we have for any \(k=0,1,2,\cdots,K-1\), \[\|\widetilde{u}\|_{L^{2}(B_{r_{1}}(x_{k+1}))}^{2}+\|\widetilde{v} \|_{L^{2}(B_{r_{1}}(x_{k+1}))}^{2}\leq\|\widetilde{u}\|_{L^{2}(B_{r_{2}}(x_{k}) )}^{2}+\|\widetilde{v}\|_{L^{2}(B_{r_{2}}(x_{k}))}^{2}\] \[\quad\leq Q\left(\|\widetilde{u}\|_{L^{2}(B_{r_{1}}(x_{k}))}^{2}+ \|\widetilde{v}\|_{L^{2}(B_{r_{1}}(x_{k}))}^{2}\right)^{\alpha}\left(\| \widetilde{u}\|_{L^{2}(B_{r_{3}}(x_{k}))}^{2}+\|\widetilde{v}\|_{L^{2}(B_{r_{3}} (x_{k}))}^{2}\right)^{1-\alpha}\] \[\quad\leq Q\left(\|\widetilde{u}\|_{L^{2}(B_{r_{1}}(x_{k}))}^{2}+ \|\widetilde{v}\|_{L^{2}(B_{r_{1}}(x_{k}))}^{2}\right)^{\alpha}\left(\| \widetilde{u}\|_{L^{2}(\Omega_{r_{3}}\times(-r_{3},r_{3}))}^{2}+\|\widetilde{v} \|_{L^{2}(\Omega_{r_{3}}\times(-r_{3},r_{3}))}^{2}\right)^{1-\alpha}. \tag{5.9}\] 
So if we set \[m_{l}=\frac{\|\widetilde{u}\|_{L^{2}(B_{r_{1}}(x_{l}))}^{2}+\|\widetilde{v}\|_{L^{2 }(B_{r_{1}}(x_{l}))}^{2}}{\|\widetilde{u}\|_{L^{2}(\Omega_{r_{3}}\times(-r_{3},r_ {3}))}^{2}+\|\widetilde{v}\|_{L^{2}(\Omega_{r_{3}}\times(-r_{3},r_{3}))}^{2}},\] the above inequality becomes \[m_{l+1}\leq Qm_{l}^{\alpha},\quad l=0,1,\cdots,K-1.\] Thus \[m_{K}\leq\widetilde{C}m_{0}^{\delta},\] where \(\widetilde{C}=Q^{c_{1}}\) with \(c_{1}=\frac{1}{1-\alpha}\geq 1+\alpha+\alpha^{2}+\cdots+\alpha^{K-1}\), and \(\delta=\alpha^{K}\). Hence from Lemma 2.3 and the \(L^{\infty}\) estimate of \(\widetilde{u}\), we obtain that for \(z_{K}=(y_{0},0)\), \[\|\widetilde{u}\|_{L^{\infty}(B_{r_{1}}(z_{K}))}\leq e^{C(\ln\mu-\ln r)} \widetilde{C}\left(\|\widetilde{u}\|_{L^{\infty}(B_{2r_{1}}(z_{0}))}\right)^{ \delta}\left(\|\widetilde{u}\|_{L^{2}(\Omega\times(-r,r))}\right)^{1-\delta}. \tag{5.10}\] Since \(\{B_{r_{1}}(x_{k})\}\) are pairwise disjoint balls and \(r_{1}=\frac{r}{12}\), we have \(K\leq\frac{C_{1}diam(\Omega)}{r}\). Hence \(\widetilde{C}=Q(\alpha)^{\frac{1}{1-\alpha}}\) and \(\delta=\alpha^{\frac{C_{1}diam(\Omega)}{r}}\). So from the relationship between \(u\) and \(\widetilde{u}\), there holds \[\|u\|_{L^{\infty}(G)}\leq e^{C(\sqrt{\mu}r+\ln\mu-\ln r)}Q^{\frac{1}{1- \alpha}}\|u\|_{L^{\infty}(B_{r}(x_{0}))}^{\delta}\|u\|_{L^{\infty}(\Omega)}^{ 1-\delta}. \tag{5.11}\] Here \(C\) is a positive constant depending only on \(n\) and \(\Omega\). This completes the proof. From this Lemma, we prove Theorem 1.3 as follows. **Proof of Theorem \(1.3\):** Since \(E\) is a convex subset of \(\Omega\), there exists a ball \(B_{r}(x_{0})\) contained in \(E\) with \(r<\min\{C\frac{\mathcal{H}^{n}(E)}{diam(\Omega)^{n-1}},dist(G,\partial\Omega)\}\). Thus the conclusion is obtained by Lemma 5.3. \(\Box\) **Remark 5.4**.: _By the same arguments as in [27], a similar result of Theorem \(1.3\) also holds when we replace the condition "\(E\) is an open subset of \(\Omega\) with \(\mathcal{H}^{n}(E)\geq\epsilon\)" by "\(E\) is any subset of \(\Omega\) with \(\mathcal{H}^{n-1+s}(E)>\epsilon\)" for any \(s\in(0,1]\). In this case, the positive constants \(C\) and \(\delta\) in (1.5) depend on \(n\), \(diam(\Omega)\), \(dist(G,\partial\Omega)\), \(\epsilon\) and \(s\)._ ## Acknowledgement This work is supported by the National Natural Science Foundation of China (Grant No. 12071219 and No. 11971229).
2306.14877
Reducing Spatial Discretization Error with Linear Discontinuous Source Tilting in Iterative Quasi-Monte Carlo for Neutron Transport
Recently, iterative Quasi-Monte Carlo (iQMC) was introduced as a new method of neutron transport which combines deterministic iterative methods and quasi-Monte Carlo simulation for more efficient solutions to the neutron transport equation. Previous iQMC results utilized a uniform Cartesian grid with a piecewise-constant source. Similar to "teleportation error" in Implicit Monte Carlo (IMC) methods, the spatial discretization and piecewise-constant source can lead to a significant spatial error that limits convergence of the overall method. Taking concepts from IMC, we have developed a history-based discontinuous piecewise-linear source tilting scheme to reduce spatial error in iQMC. The source tilting method is described below and afterward we present results from a fixed-source 2D reactor-like problem adapted from the Takeda-1 Benchmark problem.
Samuel Pasmann, Ilham Variansyah, C. T. Kelley, Ryan G. McClarren
2023-06-26T17:47:03Z
http://arxiv.org/abs/2306.14877v1
# Reducing Spatial Discretization Error with Linear Discontinuous Source Tilting in Iterative Quasi-Monte Carlo for Neutron Transport ###### Abstract Recently, iterative Quasi-Monte Carlo (iQMC) was introduced as a new method of neutron transport which combines deterministic iterative methods and quasi-Monte Carlo simulation for more efficient solutions to the neutron transport equation [1]. Primary advantages of iQMC include a vectorized multigroup scheme, \(O(N^{-1})\) solution convergence, and the use of advanced iterative Krylov solvers, including GMRES and BiCGSTAB, which converge with far fewer iterations than the standard source iteration. iQMC treats the scattering and fission terms as internal fixed sources, and thus the Monte Carlo transport sweep is reduced to a particle ray trace, which provides a well-suited application for Quasi-Monte Carlo. Quasi-Monte Carlo is the use of low-discrepancy sequences in place of pseudo-random number generators and provides a more efficient sampling of the phase space for a given number of particles. The improved sampling technique provides a theoretical \(O(N^{-1})\) convergence compared to the \(O(N^{-1/2})\) of pseudo-random numbers. In each iQMC sweep, particles are emitted with an initial position and direction assigned from the low-discrepancy sequence (LDS) and a statistical weight calculated from a piecewise-constant source over a given mesh. \(N\) particles are created and traced out of the volume, tallying the scalar flux with a path-length tally estimator. After a complete sweep, the new scalar flux approximation is sent to the iterative solver to update the source strength. iQMC was shown to achieve \(O(N^{-1})\) across multiple test problems; however, it was also observed to plateau in convergence in some problems [1, 2]. Previous iQMC results utilized a uniform Cartesian grid with a piecewise-constant source. Similar to "teleportation error" in Implicit Monte Carlo (IMC) methods [3], the spatial discretization and piecewise-constant source can lead to a significant spatial error that limits convergence of the overall method. Taking concepts from IMC, we have developed a history-based discontinuous piecewise-linear source tilting scheme to reduce spatial error in iQMC. The source tilting method is described below and afterward we present results from a fixed-source 2D reactor-like problem adapted from the Takeda-1 Benchmark problem [4]. ## I Methodology ### Piecewise-Constant Scheme In a 2-dimensional example, previous iQMC studies utilized a piecewise-constant flux approximation \[\phi_{\text{Constant}}=a_{i,j}, \tag{1}\] where \(i\) and \(j\) denote the 2-dimensional spatial indices and \(a_{i,j}\) is the cell-averaged flux \[a_{i,j}=\frac{1}{\Delta x_{i}\,\Delta y_{j}}\int_{y_{j-1}}^{y_{j}}\int_{x_{i- 1}}^{x_{i}}\phi(x,y)\,dx\,dy. \tag{2}\] Given a path-length tally estimator with path length \(S\), total cross section \(\Sigma_{t}\), and continuous particle weight capture with initial weight \(w_{0}\), the resultant tally is \[\int_{0}^{S}\,w_{0}e^{-\Sigma_{t}S^{\prime}}\,dS^{\prime}=w_{0}\left(\frac{1- e^{-\Sigma_{t}S}}{\Sigma_{t}}\right). \tag{3}\] ### Piecewise-Linear Scheme In our discontinuous linear scheme, the scalar flux is now represented as \[\phi_{\text{Linear}}=a_{i,j}+b_{i,j}(x-x_{\text{mid},i})\\ +c_{i,j}(y-y_{\text{mid},j})+d_{i,j}(x-x_{\text{mid},i})(y-y_{ \text{mid},j}), \tag{4}\] where \(x_{\text{mid},i}\) and \(y_{\text{mid},j}\) represent the cell midpoint. The \(a_{i,j}\) term is the same as in the piecewise-constant scheme in Eq. (2), 
while the \(b_{i,j}\), \(c_{i,j}\), \(d_{i,j}\) terms respectively represent the linear flux-tilt in the \(x\), \(y\), and \(xy\) directions, where the linear terms \(b_{i,j}\) and \(c_{i,j}\) are \[b_{i,j}=\frac{12}{\Delta x_{i}^{3}\Delta y_{j}}\int_{y_{j-1}}^{y_{j}}\int_{x_{i -1}}^{x_{i}}\left(x-x_{\text{mid},i}\right)\phi(x,y)\,dx\,dy, \tag{5}\] and \[c_{i,j}=\frac{12}{\Delta x_{i}\Delta y_{j}^{3}}\int_{y_{j-1}}^{y_{j}}\int_{x_{ i-1}}^{x_{i}}\left(y-y_{\text{mid},j}\right)\phi(x,y)\,dx\,dy, \tag{6}\] and the bilinear term is \[d_{i,j}=\frac{144}{(\Delta x_{i}\Delta y_{j})^{3}}\int_{y_{j-1}}^{ y_{j}}\int_{x_{i-1}}^{x_{i}}\left(x-x_{\text{mid},i}\right)\left(y-y_{\text{mid},j} \right)\phi(x,y)\,dx\,dy. \tag{7}\] The new terms can be similarly tallied as \[\int_{0}^{S}w_{0}e^{-\Sigma_{t}S^{\prime}}\left[(x_{0}+\mu_{x}S^{\prime})-x_{ \rm mid}\right]\ dS^{\prime}, \tag{8}\] for the \(b_{i,j}\) and \(c_{i,j}\) terms (swapping x-variables for y-variables respectively) and \[\int_{0}^{S}w_{0}e^{-\Sigma_{t}S^{\prime}}\left[(x_{0}+\mu_{x }S^{\prime})-x_{\rm mid}\right]\\ \left[\left(y_{0}+\mu_{y}S^{\prime}\right)-y_{\rm mid}\right]\ dS ^{\prime}, \tag{9}\] for \(d_{i,j}\), where \(x_{0}\) and \(y_{0}\) denote the particle's initial position associated with the initial weight \(w_{0}\). ## II Test Problem ### 2D Fixed-Source Reactor Problem To evaluate the effect of using the proposed linear source, we've designed a 2-dimensional, 2-group, fixed-source reactor problem inspired by the Takeda-1 Benchmark problem [4] and using the same cross sections. The problem features a core region surrounded by reflective boundary conditions and reflector material, making the system near critical (\(k=0.96883\pm 0.00049\)). A fixed source was placed outside the fuel in the reflector region. Figure 1 depicts the problem setup. A reference scalar flux result is obtained from a high-fidelity Monte Carlo simulation generated using the Monte Carlo code MC/DC [5] with \(10^{10}\) particle histories. The iQMC simulations were run with a 25×25 uniform mesh similar to other benchmark results of the Takeda-1 problem [4]. The Halton Sequence was used to generate particle positions and angles, while GMRES was used to iterate to \(\Delta\phi/\phi\leq 1\times 10^{-9}\). Figure 2 shows the total (summed across groups) scalar flux from a resulting iQMC simulation with source tilting. Figures 3 and 4 show the source strength from a piecewise-constant simulation, while Figures 5 and 6 depict the source strength from a piecewise-linear simulation. Finally, Figure 7 displays the \(L_{\infty}\)-norm of the scalar flux relative error for iQMC simulations with and without source tilting as a function of the number of particles per iteration. Figure 1: Core configuration of the test problem inspired by the Takeda-1 Benchmark [4]. Figure 2: Total scalar flux results from iQMC simulation with source tilting and \(N=2\times 10^{6}\) particles per iteration. Figure 4: Final piecewise-constant fast group source results from \(N=2\times 10^{6}\) particles per iteration. Figure 5: Final piecewise-linear thermal group source results from \(N=2\times 10^{6}\) particles per iteration. Figure 3: Final piecewise-constant thermal group source results from \(N=2\times 10^{6}\) particles per iteration. Figure 6: Final piecewise-linear fast group source results from \(N=2\times 10^{6}\) particles per iteration. 
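Since each tally integrand above is a polynomial in the path variable multiplied by a decaying exponential, Eqs. (3), (8), and (9) admit closed forms in terms of the moments \(I_{k}=\int_{0}^{S}S^{\prime k}e^{-\Sigma_{t}S^{\prime}}\,dS^{\prime}\) for \(k=0,1,2\). The following sketch is our illustration, not code from the iQMC implementation; the function names and the near-vacuum cutoff are assumptions.

```python
import math

def exp_moments(sigma_t, S):
    """Closed-form moments I_k = integral_0^S s^k exp(-sigma_t s) ds for k = 0, 1, 2."""
    if sigma_t * S < 1e-12:  # near-vacuum: use the sigma_t -> 0 limits
        return S, S**2 / 2.0, S**3 / 3.0
    e = math.exp(-sigma_t * S)
    I0 = (1.0 - e) / sigma_t
    I1 = (1.0 - e * (1.0 + sigma_t * S)) / sigma_t**2
    I2 = (2.0 - e * (2.0 + 2.0 * sigma_t * S + (sigma_t * S) ** 2)) / sigma_t**3
    return I0, I1, I2

def segment_tallies(w0, sigma_t, S, x0, y0, mu_x, mu_y, x_mid, y_mid):
    """Per-segment contributions to the constant, linear, and bilinear tallies.

    The particle travels from (x0, y0) along direction (mu_x, mu_y) for a path
    length S within one cell, with continuous weight capture from weight w0.
    Expanding the bracketed factors of Eqs. (8) and (9) in powers of S' reduces
    each tally to a combination of the moments I0, I1, I2.
    """
    I0, I1, I2 = exp_moments(sigma_t, S)
    dx, dy = x0 - x_mid, y0 - y_mid
    a = w0 * I0                                    # Eq. (3)
    b = w0 * (dx * I0 + mu_x * I1)                 # Eq. (8)
    c = w0 * (dy * I0 + mu_y * I1)                 # Eq. (8), y in place of x
    d = w0 * (dx * dy * I0 + (dx * mu_y + dy * mu_x) * I1 + mu_x * mu_y * I2)  # Eq. (9)
    return a, b, c, d
```

Accumulating these contributions over all particle segments crossing a cell and applying the normalizations of Eqs. (2) and (5)-(7) then yields the cell-wise coefficients \(a_{i,j}\), \(b_{i,j}\), \(c_{i,j}\), and \(d_{i,j}\).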
## Conclusions We have developed a history-based linear discontinuous source tilting scheme for the iterative Quasi-Monte Carlo (iQMC) method. The source tilting technique was shown to reduce spatial error in a 2-dimensional, 2-group, reactor problem. Figure 7 shows that the relative error plateaus from piecewise-constant simulations, while the piecewise-linear results are able to converge at the expected \(O(N^{-1})\). It is unlikely, however, that piecewise-linear source tilting completely eliminates the spatial discretization error. More tests are needed to further explore the limitations of the method and to evaluate performance on more difficult 3D problems. ## Acknowledgments This work was funded by the Center for Exascale Monte-Carlo Neutron Transport (CEMeNT), a PSAAP-III project funded by the Department of Energy, grant number DE-NA003967, and the National Science Foundation, grant number DMS-1906446.
2308.02561
Large-scale Generative Simulation Artificial Intelligence: the Next Hotspot in Generative AI
The concept of GenAI has been developed for decades. Until recently, it has impressed us with substantial breakthroughs in natural language processing and computer vision, actively engaging in industrial scenarios. Noticing the practical challenges, e.g., limited learning resources and an over-dependence on empiricism in scientific discovery, we nominate large-scale generative simulation artificial intelligence (LS-GenAI) as the next hotspot for GenAI to connect.
Qi Wang, Yanghe Feng, Jincai Huang, Yiqin Lv, Zheng Xie, Xiaoshan Gao
2023-08-03T02:04:04Z
http://arxiv.org/abs/2308.02561v1
# _Large-scale Generative Simulation Artificial Intelligence_: the Next Hotspot in Generative AI ###### Abstract The concept of GenAI has been developed for decades. Until recently, it has impressed us with substantial breakthroughs in natural language processing and computer vision, actively engaging in industrial scenarios. Noticing the practical challenges, e.g., limited learning resources and an over-dependence on empiricism in scientific discovery, we nominate large-scale generative simulation artificial intelligence (LS-GenAI) as the next hotspot for GenAI to connect. 
This commentary examines existing dilemmas and nominates the next hotspot for generalizing large models to more practical scenarios. Throughout the commentary, we use \(x\in\mathcal{X}\), \(y\in\mathcal{Y}\), and \(z\in\mathcal{Z}\) to denote the explanatory variable, response variable, and latent variable, respectively. For tasks or operators on datasets \(\tau\), we represent them as distributions in the form of \(p\left(\tau\right)\). ## GenAI can do more than AIGC The now-widespread popularity of GenAI models stems from their ability of artificial intelligence generated content (AIGC). Technically, the deep generative model empowers GenAI's numerous capabilities beyond standard AIGC. Among them, we stress three practical ones in **Fig. (1.a)**: data compression, representation disentanglement, and causal inference. In the big data era, minimizing the number of bits required to store and transmit information, known as data compression, is crucial. This function is particularly essential in time-sensitive services with memory constraints, such as edge computing. Some generative models, such as the vector quantized variational autoencoder or deep variational information bottleneck models, excel in data compression by finding compressed statistics of high-dimensional signals. Representation disentanglement refers to the ability to infer statistically independent latent variables that explain different aspects of data generation, e.g., style, color, and pose. 
It closely relates to the controllable generation, e.g., obtaining samples with only one aspect varied. Causality is also a significant concern in GenAI, and generative models are advantageous in handling high-dimensional variables and discovering structures of causal graphs, allowing for understanding causal effects. Importantly, GenAI with causality enables counterfactual predictions, which renders the potential consequences of a specific intervention that we have yet to execute. For example, with \(p\left(y|x,\mathrm{do}\left(z=z_{0}\right)\right)\), policymakers can evaluate the influence of socio-economic policies, denoted by \(z_{0}\), without incurring additional costs. Despite these fascinating capabilities, there remain several tricky questions in the field. (i) Is fully representation disentanglement achievable with generative models? (ii) How can we identify causal generative models in the presence of small-scale datasets and many unobserved confounders? ## Experimental design matters in GenAI's adaptability and robustness Let us rethink the critical factors contributing to GPT-like models' success. In addition to prompt engineering, the languages' generative process must be capable of capturing the masked input-output coupling pattern in the corpus, mapping these linked entities to a knowledge base, and continually improving its performance by incorporating new input-output pairs. Hence, when users initiate queries for specific contextual terms, the knowledge base can effectively locate the precise information and provide feedback. The above process inspires the task distribution design for GenAI. Task diversity nurtures the generalization capability of models across various scenarios, as this aligns with traditional statistical learning theory. The sampled tasks should be representative, covering a broad range of scenarios, particularly in zero-shot or few-shot learning. However, increasing task diversity requires larger model sizes and comes at the cost of higher computational expenses. For instance, GPT-3 has 175 billion parameters and has been trained on over 570GB of text data from diverse tasks. Generating tasks is problem-specific, and masked learning has emerged as one of the most popular heuristics. Nevertheless, exhaustively exploring all scenarios in training large models can be computationally demanding. As an example, consider **Fig. (1.b)**, where the number of masked scenarios to complete grows exponentially with respect to the complexity of data \(|\mathcal{X}|\) in a combinatorial sense. Conversely, we raise two key issues: (i) Is there a principle to balance average performance and adaptability to worst-case scenarios, particularly when loss values exhibit heavy tails? (ii) How can we automatically design task distributions in a dataset or instance-wise sense to improve generalization? These issues require further attention in the development of desirable GenAI. ## Geometric priors can be powerful inductive biases in boosting GenAI Generally, we refer to constraints or encoded knowledge in hypotheses space as inductive bias. As stated in Max Welling's comment [1] on the Bitter Lesson [2], machine learning cannot generalize well without inductive biases. Inductive biases are especially beneficial when dealing with data insufficiency, as it guides the learning process in a more reasonable direction. Here we concentrate on the geometric inductive bias, which conserves the geometric structures of datasets. 
At a high level, these structures rest on symmetry and scale separation principles [3], which are particularly necessary in generative modeling. Take the equivariance in symmetry as an example: Human cognitive systems can naturally capture the rotation, translation, reflection, and scaling of signals, implying that a reasonable abstraction of concepts is equivariant with respect to these transformations, as shown in **Fig. (1.c)**. Another way to apply geometric priors is selective data augmentation by imposing transformations in the data space, meaning that the data itself can serve as an inductive bias in modeling. Generatively modeling transformations of the dataset can also be attractive in deep geometric learning. Geometric priors and recent advances have been verified to be effective in GenAI for scientific discoveries, e.g., molecule design and drug development, better capturing the complex interactions between atoms and predicting the properties of drug candidates. This constitutes a promising avenue for the application of geometric priors in the field of AI4Science. While geometric priors show promise in GenAI, important questions still need answers: (i) Are there universal routines to automatically generate geometric priors in GenAI applications? (ii) How can we alleviate the computation burden when incorporating them through constraints or data augmentation? Addressing these questions could help us understand the role of geometric priors in generative modeling and facilitate their use in practice. ## Multi-views are required to evaluate performance of models for GenAI GenAI depends primarily on the data generation mechanism. Given the inherent subjectivity and variability of specific applications, there exist no universally applicable criteria to evaluate generative performance. For reliability and usefulness, we propose to establish multi-view evaluation systems that consider fidelity, diversity, and safety in **Fig. (1.d)**. Fidelity in generation is critical in risk-sensitive applications like dialogue systems in medical science, and standard metrics are log-likelihoods or statistical scores such as the inception score. Diversity is a fundamental characteristic of generative modeling, with the purpose of capturing the full range of possible examples in the probability space. At least two factors influence generation diversity: the extent of observability and the complexity of the dataset semantics. Observability extent refers to the level of accessible context details, namely prior information. For example, in image inpainting, the diversity of generated images decreases as more pixels are observed. Empirically, in large language models, increasing the corpus' size and complexity brings more varied and creative text generation. When using the Bayesian framework, efficient probabilistic programming requires stochastic optimization algorithms to avoid posterior or conditional prior collapse to guarantee diversity [4]. Additionally, security is an increasingly important concern in GenAI. For example, dataset bias, such as deliberate manipulation or unintentional sampling bias, can significantly affect the performance and orientation of large language models. Notably, as the trend of GenAI is to allow interactions with open environments, incrementally access Internet information, and evolve in a continual-learning manner, securing generative models from attacks at the data level seems urgent. 
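To make the fidelity discussion concrete, the inception score mentioned above is simply an exponentiated average Kullback-Leibler divergence between per-sample class posteriors and their marginal. The following minimal sketch is our illustration only; it assumes classifier posteriors \(p(y|x)\) are already available as an array.

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """Inception score from an (n_samples, n_classes) array of classifier
    posteriors p(y|x): IS = exp( E_x KL( p(y|x) || p(y) ) )."""
    probs = np.clip(probs, eps, 1.0)
    probs = probs / probs.sum(axis=1, keepdims=True)  # renormalize after clipping
    marginal = probs.mean(axis=0)                     # p(y) over the sample set
    kl = (probs * (np.log(probs) - np.log(marginal))).sum(axis=1)
    return float(np.exp(kl.mean()))

# Sharp and diverse posteriors score high; uninformative ones score near 1.
print(inception_score(np.eye(10)))               # ~10 for 10 one-hot classes
print(inception_score(np.full((10, 10), 0.1)))   # ~1 for uniform posteriors
```

As the toy calls suggest, the score rewards samples that are individually confidently classified (low-entropy \(p(y|x)\)) while being collectively spread over many classes (high-entropy marginal), which is exactly the fidelity-diversity coupling discussed above.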
Undoubtedly, there is a solid allure for exploring a model-agnostic and domain-agnostic evaluation schema that is end-to-end and integrates multi-views at both the sample and distribution levels. ## Large-scale generative simulation artificial intelligence is on the way The concept of GenAI has been developed for decades. Until recently, it has impressed us with substantial breakthroughs in natural language processing and computer vision, actively engaging in industrial scenarios. Noticing the practical challenges, e.g., limited learning resources and an over-dependence on empiricism in scientific discovery, we nominate large-scale generative simulation artificial intelligence (LS-GenAI) as the next hotspot for GenAI to connect. The roadmap of GenAI in **Fig. (1.e)** relies on previously considered elements and can be framed as the doubly generative paradigm for simulation and sequential decision-making. Specifically, the simulation system needs to be identifiable with a few observations, and the decision-making modules must afford fast adaptation utility in time-sensitive scenarios, e.g., autonomous driving. At the intersection of simulation science and artificial intelligence, LS-GenAI also has particular use in robotics and life systems, reducing realistic sampling complexity, accelerating scientific progress, and catalyzing discoveries. One prime example of LS-GenAI's potential originates from clinical research. In this context, a high-fidelity biomedical simulation system, operating at the individual level, can create environments that allow the examination of treatment effects on patients and reduce dependencies on expert experience. In spite of numerous practical benefits, developing LS-GenAI is nontrivial. The demands of massive real-world data, the lack of high-fidelity world models, and the weak adaptability of these world models have complicated the process of constructing ubiquitous decision-making systems, e.g., in interventional clinical research. In service of the utilities of LS-GenAI, more sophisticated simulation and learning tools must be integrated. Apart from building high-fidelity simulation environments, or world models [5], it is essential to support customization for different decision-making tasks such that the task of our interest can be among them. Meanwhile, the world exhibits a hierarchical structure, such as the atomic, cellular, tissue, and organismal levels in human body systems, or spatiotemporal scales, and retaining multiple scales in generation, augmented by symbolic computation, can reveal more accurate complex dynamics. Other demands lie in handling partial observability with the inaccessible inherent system state and unpacking the black box to separate function approximation and causal effects. The primary goals of LS-GenAI are to assist in meaningful experimental design and enable fast adaptation of learned skills. Achieving these will ultimately enrich the utilities of GenAI in a broader range of real-world scenarios.
2303.17578
Online Learning and Disambiguations of Partial Concept Classes
In a recent article, Alon, Hanneke, Holzman, and Moran (FOCS '21) introduced a unifying framework to study the learnability of classes of partial concepts. One of the central questions studied in their work is whether the learnability of a partial concept class is always inherited from the learnability of some ``extension'' of it to a total concept class. They showed this is not the case for PAC learning but left the problem open for the stronger notion of online learnability. We resolve this problem by constructing a class of partial concepts that is online learnable, but no extension of it to a class of total concepts is online learnable (or even PAC learnable).
Tsun-Ming Cheung, Hamed Hatami, Pooya Hatami, Kaave Hosseini
2023-03-30T17:46:50Z
http://arxiv.org/abs/2303.17578v1
# Online Learning and Disambiguations of Partial Concept Classes ###### Abstract In a recent article, Alon, Hanneke, Holzman, and Moran (FOCS '21) introduced a unifying framework to study the learnability of classes of _partial_ concepts. One of the central questions studied in their work is whether the learnability of a partial concept class is always inherited from the learnability of some "extension" of it to a total concept class. They showed this is not the case for PAC learning but left the problem open for the stronger notion of online learnability. We resolve this problem by constructing a class of partial concepts that is online learnable, but no extension of it to a class of total concepts is online learnable (or even PAC learnable). ## 1 Introduction In many practical learning problems, the learning task is tractable because we are only required to predict the labels of the data points that satisfy specific properties. In the setting of binary classification problems, instead of learning a total concept \(h:\mathcal{X}\to\{0,1\}\), we are often content with learning a partial version of it \(\widehat{h}:\mathcal{X}\to\{0,1,\star\}\), where \(\widehat{h}(x)=\star\) means that both \(0\) and \(1\) are acceptable predictions. This relaxation of allowing unspecified predictions renders a wider range of learning tasks tractable. Consider, for example, predicting whether a person approves or disapproves of various political stances by observing their previous voting pattern. This person might not hold a strong opinion about particular political sentiments, and it might be impossible to predict their vote on those issues based on their previous history. However, the learning task might become possible if we allow both "approve" and "disapprove" as acceptable predictions in those cases where a firm conviction is lacking. A well-studied example of this phenomenon is learning half-spaces with a large margin. In this problem, the domain is the set of points in a bounded region in an arbitrary Euclidean space, and the concepts are half-spaces that map each point to \(1\) or \(0\) depending on whether they belong to the half-space or not. It is well-known that when the dimension of the underlying Euclidean space is large, one needs many samples to learn a half-space. However, in the large margin setting, we are only required to correctly predict the label of a point if its distance from the defining hyperplane is bounded from below by some margin. Standard learning algorithms for this task, such as the classical Perceptron algorithm, due to Rosenblatt [14], show that this relaxation of the learning requirement makes the problem tractable even for high-dimensional Euclidean spaces. Motivated by such examples, Alon, Hanneke, Holzman, and Moran [1] initiated a systematic study of the learnability of partial concept classes \(\mathbb{H}\subseteq\{0,1,\star\}^{\mathcal{X}}\). They focused on the two frameworks of _probably approximately correct (PAC) learning_ and _online learning_. We refer to [1] for the definition of PAC learnability of partial concept classes. We define online learnability in Definition 1.4. PAC learning is an elegant theoretical framework characterized by the combinatorial parameter of the Vapnik-Chervonenkis (VC) dimension. The fundamental theorem of PAC learning states that a total binary concept class is PAC learnable if and only if its VC dimension is finite. 
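As a concrete companion to the large-margin example above, here is a minimal sketch of the classical Perceptron update rule (our illustration, not code from [14]); the classical analysis bounds its total number of mistakes by \((R/\gamma)^{2}\) when every instance has norm at most \(R\) and is correctly classified by some half-space with margin at least \(\gamma\), independently of the ambient dimension.

```python
import numpy as np

def perceptron(stream, dim):
    """Online Perceptron for binary labels y in {-1, +1}.

    `stream` yields (x, y) pairs with x a numpy array of length `dim`.
    We predict sign(<w, x>) and update only on mistakes; for a stream that is
    linearly separable with margin gamma and ||x|| <= R, the classical bound
    guarantees at most (R / gamma)**2 mistakes in total.
    """
    w = np.zeros(dim)
    mistakes = 0
    for x, y in stream:
        y_hat = 1 if w @ x >= 0 else -1
        if y_hat != y:
            w = w + y * x  # mistake-driven additive update
            mistakes += 1
    return w, mistakes

# Toy usage: margin-separable points on the axes, stream repeated 5 times.
data = [(np.array([2.0, 0.0]), 1), (np.array([-2.0, 0.0]), -1),
        (np.array([0.0, 2.0]), 1), (np.array([0.0, -2.0]), -1)]
w, m = perceptron(data * 5, dim=2)  # mistake count stays bounded across passes
```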
Similarly, online learnability of total concept classes is characterized by a combinatorial parameter called the Littlestone dimension (LD). We formally define the VC dimension and the Littlestone dimension in Definitions 2.2 and 2.3 respectively. Alon, Hanneke, Holzman, and Moran [1] proved that these characterizations of PAC and online learnability extend to the setting of partial concept classes. **Theorem 1.1** ([1, Theorems 1 and 15]).: _Let \(\mathbb{H}\subseteq\{0,1,\star\}^{\mathcal{X}}\) be a partial concept class._ * \(\mathbb{H}\) _is PAC learnable if and only if_ \(\operatorname{VC}(\mathbb{H})<\infty\)_._ * \(\mathbb{H}\) _is online learnable if and only if_ \(\operatorname{LD}(\mathbb{H})<\infty\)_._ It follows from the definitions of VC and LD dimensions that for every partial concept class \(\mathbb{H}\subseteq\{0,1,\star\}^{\mathcal{X}}\), we have \(\operatorname{VC}(\mathbb{H})\leq\operatorname{LD}(\mathbb{H})\). In particular, online learnability always implies PAC learnability. One of the central questions studied in [1] is whether the learnability of a partial concept class is always inherited from the learnability of some total concept class. To make this question precise, we need to define the notion of disambiguation of a partial concept class. While we defer the formal definitions to Section 2.2, one may understand a _strong disambiguation_ of a partial class as simply an assignment of each \(\star\) to either \(1\) or \(0\) for each partial concept in the class. When \(\mathcal{X}\) is infinite, it is more natural to consider the weaker notion of _disambiguation_ that we shall define in Definition 2.5. When \(\mathcal{X}\) is finite, the notions of disambiguation and strong disambiguation coincide. Consider the problem of learning the partial concept class \(\mathbb{H}\subseteq\{0,1,\star\}^{\mathcal{X}}\) in PAC learning or online learning. If the partial concept class \(\mathbb{H}\) has a disambiguation \(\overline{\mathbb{H}}\subseteq\{0,1\}^{\mathcal{X}}\) that is PAC learnable, then \(\mathbb{H}\) is PAC learnable. This follows from \(\operatorname{VC}(\mathbb{H})\leq\operatorname{VC}(\overline{\mathbb{H}})\), or simply by running the PAC learning algorithm of \(\overline{\mathbb{H}}\) on \(\mathbb{H}\). Similarly, if a disambiguation \(\overline{\mathbb{H}}\) of \(\mathbb{H}\) is online learnable, then \(\mathbb{H}\) is online learnable. Is the learnability of every partial concept class inherited from the learnability of some disambiguation to a total concept class? **Question 1.2** (Informal [1]).: _Does every learnable partial class have a learnable disambiguation?_ Equipped with the VC dimension characterization of Theorem 1.1, [1] proved that for PAC learning, the answer to Question 1.2 is _negative_. **Theorem 1.3** ([1, Theorem 11]).: _For every \(n\in\mathbb{N}\), there exists a partial concept class \(\mathbb{H}_{n}\subseteq\{0,1,\star\}^{[n]}\) with \(\operatorname{VC}(\mathbb{H}_{n})=1\) such that any disambiguation \(\overline{\mathbb{H}}\) of \(\mathbb{H}_{n}\) has \(\operatorname{VC}(\overline{\mathbb{H}})\geq(\log n)^{1-o(1)}\). 
Moreover, for \(\mathcal{X}=\mathbb{N}\), there exists \(\mathbb{H}_{\infty}\subseteq\{0,1,\star\}^{\mathcal{X}}\) with \(\operatorname{VC}(\mathbb{H}_{\infty})=1\) such that \(\operatorname{VC}(\overline{\mathbb{H}})=\infty\) for every disambiguation \(\overline{\mathbb{H}}\) of \(\mathbb{H}_{\infty}\)._ While Theorem 1.3 gives a strong negative answer to Question 1.2 in the case of PAC learning, the question was left open for online learning. Roughly speaking, this question strengthens the bounded-VC assumption on \(\mathbb{H}\) to bounded _Littlestone dimension_ (LD), which pertains to _online learnability_ of \(\mathbb{H}\). The authors in [1] also proposed a second open problem that replaces the bounded-VC dimension assumption by the assumption of _polynomial growth_. This assumption is weaker than bounded LD but stronger than bounded VC dimension. As we discuss below, our main result resolves these two open problems. Online learnability. Online learning is performed in a sequence of consecutive rounds, where at round \(t\), the learner is presented with an instance \(x_{t}\in\mathcal{X}\) and is required to predict its label. After predicting the label, the correct label \(y_{t}\in\{0,1\}\) is revealed to the learner. Note that even for partial concept classes, we require that the correct label is \(0\) or \(1\). The learner's goal is to make as few prediction mistakes as possible during this process. We assume that the true labels are always _realizable_, i.e. there is a partial concept \(h\in\mathbb{H}\) with \(h(x_{i})=y_{i}\) for all \(i=1,\ldots,t\). **Definition 1.4** (Online Learnability).: _A partial concept class \(\mathbb{H}\subseteq\{0,1,\star\}^{\mathcal{X}}\) is online learnable if there is a mistake bound \(m\coloneqq m(\mathbb{H})\in\mathbb{N}\) such that for every \(T\in\mathbb{N}\), there exists a learning algorithm that on every realizable sequence \((x_{i},y_{i})_{i=1,\ldots,T}\) makes at most \(m\) mistakes._ Online learnability for total classes is equivalent to having bounded Littlestone dimension. In Theorem 1.1, Alon, Hanneke, Holzman, and Moran [1] showed that the same equivalence carries over to the setting of partial classes. They asked the following formulation of Question 1.2. _If a partial class is online learnable, is there a disambiguation of it that is online learnable?_ More precisely, they posed the following question: **Problem 1.5** ([1]).: _Let \(\mathbb{H}\) be a partial class with \(\operatorname{LD}(\mathbb{H})<\infty\). Does there exist a disambiguation \(\overline{\mathbb{H}}\) of \(\mathbb{H}\) with \(\operatorname{LD}(\overline{\mathbb{H}})<\infty\)?
Is there one with \(\operatorname{VC}(\overline{\mathbb{H}})<\infty\)?_ We give a negative answer to Problem 1.5: **Theorem 1.6** (Main Theorem).: _For every \(n\in\mathbb{N}\), there exists a partial concept class \(\mathbb{H}_{n}\subseteq\{0,1,\star\}^{[n]}\) with \(\operatorname{LD}(\mathbb{H}_{n})\leq 2\) such that every disambiguation \(\overline{\mathbb{H}}\) of \(\mathbb{H}_{n}\) satisfies \(\operatorname{LD}(\overline{\mathbb{H}})\geq\operatorname{VC}(\overline{\mathbb{H}})=\Omega(\log\log n).\) Consequently, for \(\mathcal{X}=\mathbb{N}\), there exists \(\mathbb{H}_{\infty}\subseteq\{0,1,\star\}^{\mathcal{X}}\) with \(\operatorname{LD}(\mathbb{H}_{\infty})\leq 2\) and \(\operatorname{LD}(\overline{\mathbb{H}})\geq\operatorname{VC}(\overline{\mathbb{H}})=\infty\) for every disambiguation \(\overline{\mathbb{H}}\) of \(\mathbb{H}_{\infty}\)._ Polynomial growth. A general strategy to prove a super-constant lower bound on the VC dimension of a total concept class \(\mathbb{H}\subseteq\{0,1\}^{n}\) is to show that the class is of super-polynomial size. This is the approach utilized in Theorem 1.3 and Theorem 1.6. For a total concept class \(\mathbb{H}\subseteq\{0,1\}^{n}\) with VC dimension \(d\), one has \(2^{d}\leq|\mathbb{H}|\leq O(n^{d})\): the lower bound is immediate from the definition of VC dimension, and the upper bound is the consequence of the celebrated Sauer-Shelah-Perles (SSP) lemma. **Theorem 1.7** (Sauer-Shelah-Perles lemma [14]).: _Let \(\mathbb{H}\subseteq\{0,1\}^{n}\) and \(\operatorname{VC}(\mathbb{H})=d\). Then_ \[|\mathbb{H}|\leq\binom{n}{\leq d}\coloneqq\sum_{i=0}^{d}\binom{n}{i}=O(n^{d}).\] The direct analog of the SSP lemma is not true for partial concept classes: [1] proved that there exists \(\mathbb{H}\subseteq\{0,1,\star\}^{[n]}\) with \(\operatorname{VC}(\mathbb{H})=1\) such that every disambiguation \(\overline{\mathbb{H}}\) has size \(|\overline{\mathbb{H}}|\geq n^{\Omega(\log n)}\). This result, combined with the SSP lemma for total classes, immediately implies Theorem 1.3. Interestingly, under the stronger assumption of bounded Littlestone dimension, the polynomial growth behavior of the original SSP lemma remains valid. **Theorem 1.8** ([1]).: _Every partial concept class \(\mathbb{H}\subseteq\{0,1,\star\}^{[n]}\) with \(\operatorname{LD}(\mathbb{H})\leq d\) has a disambiguation \(\overline{\mathbb{H}}\) with \(|\overline{\mathbb{H}}|\leq O(n^{d})\)._ We say that a partial concept class \(\mathbb{H}\subseteq\{0,1,\star\}^{\mathcal{X}}\) has _polynomial growth with parameter \(d\in\mathbb{N}\)_ if for every finite \(\mathcal{X}^{\prime}\subseteq\mathcal{X}\), there is a disambiguation \(\overline{\mathbb{H}}|_{\mathcal{X}^{\prime}}\) of \(\mathbb{H}|_{\mathcal{X}^{\prime}}\) of size at most \(O(|\mathcal{X}^{\prime}|^{d})\). Note that by Theorem 1.8, every partial concept class with Littlestone dimension \(d\) has polynomial growth with parameter \(d\). Alon, Hanneke, Holzman, and Moran asked the following question: **Problem 1.9** ([1]).: _Let \(\mathbb{H}\subseteq\{0,1,\star\}^{\mathcal{X}}\) be a partial concept class with polynomial growth. Does there exist a disambiguation \(\overline{\mathbb{H}}\) of \(\mathbb{H}\) such that \(\operatorname{VC}(\overline{\mathbb{H}})<\infty\)?_ Note that Problem 1.9 cannot be resolved (in the negative) by a naive application of the SSP lemma to disambiguations of \(\mathbb{H}\) or its restrictions. However, Theorem 1.6 combined with Theorem 1.8 refutes Problem 1.9 as well.
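Before stating the formal refutation, the quantity being bounded, the size of the smallest disambiguation, can be made concrete. The exponential-time sketch below (our illustration, with made-up data) finds a minimum-size disambiguation of a tiny partial class by exhaustive search; over a finite domain, disambiguation and strong disambiguation coincide.

```python
from itertools import combinations, product

def covers(f, h):
    """Total concept f (a 0/1 tuple) agrees with partial h wherever h is 0/1."""
    return all(h[x] is None or f[x] == h[x] for x in range(len(f)))

def min_disambiguation(H, n):
    """Smallest set of total concepts disambiguating H (exhaustive search)."""
    totals = list(product((0, 1), repeat=n))
    for size in range(1, len(H) + 1):  # size |H| always suffices
        for candidate in combinations(totals, size):
            if all(any(covers(f, h) for f in candidate) for h in H):
                return candidate
    return ()

# Three partial concepts on a 3-point domain; None stands for the star symbol.
H = [(0, None, 1), (None, 1, 1), (1, 0, None)]
print(min_disambiguation(H, n=3))  # two total concepts suffice here
```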
**Theorem 1.10**.: _For every \(n\in\mathbb{N}\), there is \(\mathbb{H}\subseteq\{0,1,\star\}^{[n]}\) with polynomial growth with parameter \(2\) such that every disambiguation \(\overline{\mathbb{H}}\) of \(\mathbb{H}\) has \(\operatorname{VC}(\overline{\mathbb{H}})=\Omega(\log\log n)\)._ _Consequently, for \(\mathcal{X}=\mathbb{N}\), there exists \(\mathbb{H}_{\infty}\subseteq\{0,1,\star\}^{\mathcal{X}}\) with polynomial growth with parameter \(2\) such that every disambiguation \(\overline{\mathbb{H}_{\infty}}\) of \(\mathbb{H}_{\infty}\) has \(\operatorname{VC}(\overline{\mathbb{H}_{\infty}})=\infty\)._ The Alon-Saks-Seymour Problem. The proof of Theorem 1.3 in [1] hinges on the breakthrough result of Göös [14] and its subsequent improvements [1] that led to almost optimal super-polynomial bounds on the "biclique partition number versus chromatic number" problem of Alon, Saks, and Seymour. The _biclique partition number_ of a graph \(G\), denoted by \(\operatorname{bp}(G)\), is the smallest number of complete bipartite graphs (bicliques) that partition the edge set of \(G\). Alon, Saks, and Seymour conjectured that the chromatic number of a graph with biclique partition number \(k\) is at most \(k+1\). Huang and Sudakov refuted the Alon-Saks-Seymour conjecture in [10] by establishing a superlinear gap between the two parameters. Later in a breakthrough, Göös [14] proved a superpolynomial separation. Our main result, Theorem 1.6, also builds on the aforementioned graph constructions. However, unlike previous works, our theorem demands a reasonable upper bound on the number of vertices. Since the constructions result from a complex sequence of reductions involving query complexity, communication complexity, and graph theory [1, 1, 1, 2], it is necessary to scrutinize them to ensure that the required parameters are met. We present a reorganized and partly simplified sequence of constructions in Section 3.3 that establishes the following theorem. **Theorem 1.11** (Small-size refutation of the Alon-Saks-Seymour conjecture).: _There exists a graph \(G\) on \(2^{\Theta(k^{4}\log^{3}k)}\) vertices that admits a biclique partition of size \(2^{O(k\log^{4}k)}\) but its chromatic number is at least \(2^{\Omega(k^{2})}\)._ Theorem 1.11 is essentially due to [1]. Our contribution to this theorem is obtaining an explicit and optimized bound on the size of \(G\). Standard Optimal Algorithm. Theorem 1.6 provides an example partial class with Littlestone dimension \(\leq 2\), such that the VC dimension of every disambiguation is \(\Omega(\log\log n)\). Whether one can improve the \(\Omega(\log\log n)\) lower bound is unclear. In particular, it is an interesting question whether every disambiguation of a partial class of Littlestone dimension at most \(2\) has VC dimension \(O(\log\log n)\). One natural candidate approach for obtaining such an upper bound would be to utilize the Standard Optimal Algorithm (SOA). SOA is an online learning algorithm devised by Littlestone [14] that can learn classes with bounded Littlestone dimensions. Alon, Hanneke, Holzman, and Moran, in their proof of Theorem 1.8, showed that applying SOA to a partial concept class \(\mathbb{H}\) with Littlestone dimension \(d\) yields a disambiguation of size \(|\overline{\mathbb{H}}|\leq O(n^{d})\) and consequently VC dimension \(O(d\log n)\). This shows that the lower bound of Theorem 1.6 on VC dimension of disambiguations cannot be improved beyond \(O(\log n)\).
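Since the SOA disambiguation is central to what follows, a brute-force sketch may help fix ideas. This is our illustration, not the authors' implementation: the disambiguation rule is described formally in Section 2.2, and the Littlestone dimension used as a subroutine is defined in Definition 2.3 below.

```python
from itertools import product

def soa_disambiguation(concepts, domain):
    """Brute-force sketch of the SOA disambiguation (our illustration).

    Each concept maps a point to 0, 1, or None (the star). Stars are filled
    in one point at a time: within each group of concepts agreeing on all
    earlier points, the starred concepts receive the value c maximising the
    Littlestone dimension of the consistent subclass, ties favouring c = 0.
    """
    def ld(H):  # brute-force Littlestone dimension with LD(empty) = -1
        if not H:
            return -1
        return max([0] + [1 + min(ld(tuple(h for h in H if h[x] == 0)),
                                  ld(tuple(h for h in H if h[x] == 1)))
                          for x in domain])

    H = [dict(h) for h in concepts]
    for k, x in enumerate(domain):
        for pattern in product((0, 1), repeat=k):  # patterns on earlier points
            group = [h for h in H
                     if all(h[domain[i]] == pattern[i] for i in range(k))]
            starred = [h for h in group if h[x] is None]
            if not starred:
                continue
            scores = {c: ld(tuple(h for h in group if h[x] == c)) for c in (0, 1)}
            c = 0 if scores[0] >= scores[1] else 1
            for h in starred:
                h[x] = c
    return H

H = [{0: 0, 1: None}, {0: 0, 1: 1}, {0: 1, 1: None}]
print(soa_disambiguation(H, domain=[0, 1]))
```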
It is hence natural to ask whether it is possible to obtain an improved upper bound on the VC dimension of the SOA-based disambiguation. We answer this question in the negative by constructing a family of partial concept classes \(\mathbb{H}\) of Littlestone dimension \(d\) where the disambiguation obtained by the SOA algorithm has VC dimension \(\Omega(d\log(n/d))\). **Theorem 1.12**.: _For every natural numbers \(d\leq n\), there exists a partial concept class \(\mathbb{H}_{n,d}\subseteq\{0,1,\star\}^{[n]}\) with \(d\leq\mathrm{LD}(\mathbb{H}_{n,d})\leq d+1\) such that the SOA disambiguation of \(\mathbb{H}_{n,d}\) has VC dimension \(\Omega(d\log(n/d))\)._ ## 2 Preliminaries and Background For a positive integer \(k\), we denote \([k]\coloneqq\{1,\ldots,k\}\). We adopt the convention that \(\{0,1\}^{0}\) or \(\{0,1,\star\}^{0}\) contains the empty string only, which we denote by (). We adopt the standard computer science asymptotic notations, such as Big-O, and use the asymptotic tilde notations to hide poly-logarithmic factors. ### VC Dimension and Littlestone Dimension Let \(\mathbb{H}\subseteq\{0,1,\star\}^{\mathcal{X}}\) be a partial concept class. When the domain \(\mathcal{X}\) is finite, we sometimes view \(\mathbb{H}\) as a partial matrix \(\mathbf{M}_{\mathcal{X}\times\mathbb{H}}\), where each row corresponds to a point \(x\in\mathcal{X}\) and each column corresponds to a concept \(h\in\mathbb{H}\), and the entries are defined as \(\mathbf{M}(x,h)=h(x)\). Next, we define the VC dimension and the Littlestone dimension of partial classes, which generalize the definitions of these notions for total classes. As shown in [1], the VC and Littlestone dimensions for partial classes capture PAC and online learnability, respectively. **Definition 2.1** (Shattered set).: _A finite set of points \(C=\{x_{1},\ldots,x_{n}\}\subseteq\mathcal{X}\) is shattered by a partial concept class \(\mathbb{H}\subseteq\{0,1,\star\}^{\mathcal{X}}\) if for every pattern \(y\in\{0,1\}^{n}\), there exists \(h\in\mathbb{H}\) with \(h(x_{i})=y_{i}\) for all \(i\in[n]\)._ **Definition 2.2** (VC dimension).: _The VC dimension of a partial class \(\mathbb{H}\), denoted by \(\mathrm{VC}(\mathbb{H})\), is the maximum \(d\) such that there exists a size-\(d\) subset of \(\mathcal{X}\) that is shattered by \(\mathbb{H}\). If no such largest \(d\) exists, define \(\mathrm{VC}(\mathbb{H})=\infty\)._ Viewed as a matrix, the VC dimension of \(\mathbb{H}\) is the maximum \(d\) such that the associated partial matrix \(\mathbf{M}_{\mathcal{X}\times\mathbb{H}}\) contains a zero/one submatrix of dimensions \(d\times 2^{d}\), where the columns enumerate all \(d\)-bit zero/one patterns. The Littlestone dimension is defined through the shattering of decision trees instead of sets. Consider a full binary decision tree of height \(d\) where every non-leaf \(v\) is labelled with an element \(x_{v}\in\mathcal{X}\). We identify every node of this tree by the string \(v\in\bigcup_{k=0}^{d}\{0,1\}^{k}\) that corresponds to the path from the root to the node. That is, the root is the empty string, its children are the two elements in \(\{0,1\}\), and more generally, the children of a node \(\vec{v}\in\{0,1\}^{k}\) are the two strings \(\vec{v}0\) and \(\vec{v}1\) in \(\{0,1\}^{k+1}\). 
We say that such a tree is _shattered_ by a partial concept class \(\mathbb{H}\) if for every leaf \(y\in\{0,1\}^{d}\), there exists \(h\in\mathbb{H}\) such that \(h(x_{y[<i]})=y_{i}\) for each \(i\in[d]\), where \(y[<i]\) denotes the first \(i-1\) bits of \(y\). In other words, applying the decision tree to \(h\) will result in the leaf \(y\). **Definition 2.3** (Littlestone dimension).: _The Littlestone dimension of a partial concept class \(\mathbb{H}\), denoted by \(\operatorname{LD}(\mathbb{H})\), is the maximum \(d\) such that there is an \(\mathcal{X}\)-labelled height-\(d\) full binary decision tree that is shattered by \(\mathbb{H}\). If no such largest \(d\) exists, define \(\operatorname{LD}(\mathbb{H})=\infty\)._ The _dual_ of a concept class \(\mathbb{H}\) is the concept class with the roles of points and concepts exchanged. Concretely, the dual class of \(\mathbb{H}\subseteq\{0,1,\star\}^{\mathcal{X}}\), denoted by \(\mathbb{H}^{\top}\), is the collection of functions \(f_{x}:\mathbb{H}\to\{0,1,\star\}\) for every \(x\in\mathcal{X}\), which is defined by \(f_{x}(h)=h(x)\) for each \(h\in\mathbb{H}\). When \(\mathcal{X}\) is finite, taking the dual corresponds to transposing the matrix of the concept class. The VC-dimension of the dual class is related to that of the primal class by the inequality \[\operatorname{VC}(\mathbb{H}^{\top})\leq 2^{\operatorname{VC}(\mathbb{H})+1}-1\] (see [14]), which translates to a lower bound on the VC-dimension of the primal class. ### Disambiguations We start by formally defining _strong disambiguation_ and _disambiguation_. As mentioned earlier, the two notions coincide when the domain \(\mathcal{X}\) is finite. **Definition 2.4** (Strong Disambiguation).: _A strong disambiguation of a partial concept class \(\mathbb{H}\subseteq\{0,1,\star\}^{\mathcal{X}}\) is a total concept class \(\overline{\mathbb{H}}\subseteq\{0,1\}^{\mathcal{X}}\) such that for every \(h\in\mathbb{H}\), there exists a \(\bar{h}\in\overline{\mathbb{H}}\) that is consistent with \(h\) on the points \(h^{-1}(\{0,1\})\)._ **Definition 2.5** (Disambiguation).: _A disambiguation of a partial concept class \(\mathbb{H}\subseteq\{0,1,\star\}^{\mathcal{X}}\) is a total concept class \(\overline{\mathbb{H}}\subseteq\{0,1\}^{\mathcal{X}}\) such that for every \(h\in\mathbb{H}\) and every finite \(S\subseteq h^{-1}(\{0,1\})\), there exists \(\bar{h}\in\overline{\mathbb{H}}\) that is consistent with \(h\) on \(S\)._ A learning algorithm can often provide a disambiguation of a partial concept class by assigning the prediction of the algorithm to unspecified values. Relevant to our work is the disambiguation by the Standard Optimal Algorithm of Littlestone. It was observed in [1] that this algorithm can provide "efficient" disambiguations of partial classes with bounded Littlestone dimensions. We describe this disambiguation next. Consider a partial concept class \(\mathbb{H}\subseteq\{0,1,\star\}^{\mathcal{X}}\) with a countable domain \(\mathcal{X}\) and an ordering \(x_{1},x_{2},\ldots\) of \(\mathcal{X}\). Given \(\vec{b}\in\{0,1,\star\}^{k}\), let \(\mathbb{H}|_{\vec{b}}\) be the set of concepts \(h\) where \(h(x_{i})=b_{i}\) for every \(i\in[k]\). For convenience, we identify \(\mathbb{H}|_{()}=\mathbb{H}\). For the purpose of the algorithm, we adopt the convention \(\operatorname{LD}(\emptyset)=-1\).
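For finite classes, Definition 2.3 admits an equivalent recursive form: a tree of height \(d+1\) rooted at \(x\) is shattered exactly when both restrictions \(\mathbb{H}|_{x\to 0}\) and \(\mathbb{H}|_{x\to 1}\) shatter trees of height \(d\). The brute-force sketch below (our illustration, not from [1]) computes \(\operatorname{LD}\) this way, using the convention \(\operatorname{LD}(\emptyset)=-1\) from above.

```python
def littlestone(concepts, domain):
    """Brute-force Littlestone dimension of a finite partial concept class.

    `concepts` is a sequence of dicts mapping each point to 0, 1, or None
    (standing for the star symbol). Exponential time; for tiny classes only.
    """
    def ld(H):
        if not H:
            return -1  # convention LD(empty) = -1
        best = 0       # a nonempty class shatters the height-0 tree
        for x in domain:
            zeros = tuple(h for h in H if h[x] == 0)
            ones = tuple(h for h in H if h[x] == 1)
            best = max(best, 1 + min(ld(zeros), ld(ones)))
        return best
    return ld(tuple(concepts))

H = ({0: 0, 1: 0}, {0: 0, 1: 1}, {0: 1, 1: None})
print(littlestone(H, domain=[0, 1]))  # prints 1
```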
The SOA obtains a disambiguation iteratively and assigns a \(0/1\) value to each \(\star\) in \(\mathbb{H}\): for each \(k\in\mathbb{N}\), consider \(\mathbb{H}|_{\vec{b}}\) for every \(\vec{b}\in\{0,1\}^{k-1}\). Pick \(c\in\{0,1\}\) which maximizes \(\operatorname{LD}(\mathbb{H}|_{\vec{b}c})\), breaking ties by favoring \(c=0\), and assign the value \(c\) to \(h(x_{k})\) for every \(h\in\mathbb{H}|_{\vec{b}\star}\). We use the notation \(\overline{\mathbb{H}}^{\text{SOA}}\) for the SOA disambiguation of a partial concept class \(\mathbb{H}\). As mentioned earlier, for a partial class with Littlestone dimension \(d\), Theorem 1.8 gives an upper bound of \(\binom{n}{\leq d}=O(n^{d})\) on \(\left|\overline{\mathbb{H}}^{\text{SOA}}\right|\). The theorem follows from the mistake bound of SOA for online learning, which relies on the crucial property that at least one choice of \(c\in\{0,1\}\) satisfies \(\text{LD}(\mathbb{H}|_{\vec{b}c})\leq\text{LD}(\mathbb{H}|_{\vec{b}})-1\) whenever \(\mathbb{H}|_{\vec{b}}\neq\emptyset\). ## 3 Proofs In this section, we present the proofs of Theorems 1.6, 1.10, 1.11 and 1.12. ### 3.1 Proofs of Theorems 1.6 and 1.10 As mentioned earlier, Theorem 1.10 is an immediate corollary of Theorem 1.6 and Theorem 1.8. We focus on proving Theorem 1.6. Suppose \(G=(V,E)\) is the graph supplied by Theorem 1.11 on \(|V|=n=2^{\Theta(k^{4}\log^{3}k)}\) vertices with a biclique partition of size \(m=2^{O(k\log^{4}k)}\). We will use \(G\) to build a partial concept class \(\mathbb{G}\subseteq\{0,1,\star\}^{V}\). This construction is simply the dual of the partial concept class of [1] in their proof of Theorem 1.3. Let \(\{B_{1},\ldots,B_{m}\}\) be the size-\(m\) biclique partition of the edges of \(G\). We fix an orientation \(B_{i}=L_{i}\times R_{i}\) for each biclique. Define \(\mathbb{G}\subseteq\{0,1,\star\}^{V}\) as follows. For each \(i\in[m]\), associate a concept \(h_{i}:V\to\{0,1,\star\}\) to the biclique \(B_{i}\), defined by \[h_{i}(v)=\begin{cases}0&\text{if }v\in L_{i}\\ 1&\text{if }v\in R_{i}\\ \star&\text{otherwise}\end{cases}.\] We first observe that the Littlestone dimension of this concept class is at most \(2\). **Claim 3.1**.: \(\text{LD}(\mathbb{G})\leq 2\)_._ Proof.: We show that \(\mathbb{G}\), viewed as a matrix, does not contain \(\begin{bmatrix}1&0\\ 1&0\end{bmatrix}\) as a submatrix and then show that the existence of this submatrix is necessary for having a Littlestone dimension greater than \(2\). If \(\begin{bmatrix}1&0\\ 1&0\end{bmatrix}\) appears in \(\mathbb{G}\) as a submatrix, then there exist \(i\neq j\) and \(u\neq v\in V(G)\) such that \(h_{i}(v)=h_{j}(v)=1\) and \(h_{i}(u)=h_{j}(u)=0\). However, this means that \(v\in R_{i}\cap R_{j}\) and \(u\in L_{i}\cap L_{j}\), which in turn implies that the edge \(\{u,v\}\) is covered by both \(B_{i}\) and \(B_{j}\), contradicting the assumption that each edge is covered exactly once. On the other hand, for a class \(\mathbb{H}\subseteq\{0,1,\star\}^{\mathcal{X}}\) with Littlestone dimension greater than \(2\), there exists a shattered \(\mathcal{X}\)-labelled height-\(3\) full binary tree. In particular, there exist \(h,h^{\prime}\in\mathbb{H}\) and points \(x_{()},x_{1},x_{10}\) such that \[\begin{array}{ll}h(x_{()})=1,&h(x_{1})=0,&h(x_{10})=0,\\ h^{\prime}(x_{()})=1,&h^{\prime}(x_{1})=0,&h^{\prime}(x_{10})=1.\end{array}\] This means that the submatrix restricted to the columns \(\{x_{()},x_{1}\}\) and the rows \(\{h,h^{\prime}\}\) is \(\begin{bmatrix}1&0\\ 1&0\end{bmatrix}\).
We conclude that \(\text{LD}(\mathbb{G})\leq 2\). Proof of Theorem 1.6.: Consider the partial concept class \(\mathbb{G}\subseteq\{0,1,\star\}^{V}\) above. By Claim 3.1, we have \(\operatorname{LD}(\mathbb{G})\leq 2\). We show that for every disambiguation \(\overline{\mathbb{G}}\) of \(\mathbb{G}\), we have \(\operatorname{VC}(\overline{\mathbb{G}})\geq\Omega(\log\log n)\). The argument here is similar to the proof of Theorem 1.3. Consider a disambiguation \(\overline{\mathbb{G}}\) of \(\mathbb{G}\). Note that if two columns \(u\) and \(v\) are identical in \(\overline{\mathbb{G}}\), then there is no edge between \(u\) and \(v\), as otherwise, some \(h_{i}\) would have assigned \(0\) to one of \(u\) and \(v\) and \(1\) to the other. Therefore, if two columns \(u\) and \(v\) are identical, we can color the corresponding vertices with the same color. Consequently, the number of distinct columns in \(\overline{\mathbb{G}}\) is at least the chromatic number \(\chi(G)\geq 2^{\Omega(k^{2})}\). By the SSP lemma (Theorem 1.7), if \(\operatorname{VC}(\overline{\mathbb{G}}^{\top})\leq d\), then \(\overline{\mathbb{G}}\) must have at most \(O(m^{d})\) distinct columns. Therefore, \[2^{\Omega(k^{2})}\leq O(m^{d}).\] Substituting \(m=2^{\tilde{O}(k)}\) shows that \(d=\tilde{\Omega}(k)\). Finally, \[\operatorname{VC}(\overline{\mathbb{G}})\geq\Omega(\log\operatorname{VC}(\overline{\mathbb{G}}^{\top}))\geq\Omega(\log k)\geq\Omega(\log\log n).\] This completes the proof of the first part of Theorem 1.6. For the second part, we adopt the same construction as in the proof of [1, Theorem 11]. Let \(\mathbb{H}_{\infty}\) be a union of disjoint copies of \(\mathbb{H}_{n}\) over \(n\in\mathbb{N}\), each supported on a domain \(\mathcal{X}_{n}\) mutually disjoint from the others, with the partial concepts of \(\mathbb{H}_{n}\) extended outside of its domain by \(\star\). Since any disambiguation \(\overline{\mathbb{H}}\) of \(\mathbb{H}_{\infty}\) simultaneously disambiguates all \(\mathbb{H}_{n}\), the Sauer-Shelah-Perles lemma implies that \(\operatorname{VC}(\overline{\mathbb{H}})\) must be infinite. ### Disambiguations via the SOA algorithm (Theorem 1.12) This section is dedicated to the proof of Theorem 1.12. Proof of Theorem 1.12.: We prove the statement by showing that for every \(r,d\in\mathbb{N}\), there exists a partial concept class \(\mathbb{H}_{r,d}\) on \([n]\), where \(n=d(2^{r}+r)\), such that \(d\leq\operatorname{LD}(\mathbb{H}_{r,d})\leq d+1\) and the SOA disambiguation has VC dimension \(\geq dr\) and at least \(2^{dr}\) distinct rows. The other cases of \(n\) follow by trivially extending the domain. For any \(r,d\in\mathbb{N}\), define \[\mathcal{F}_{r,d}=\{F\subseteq[d2^{r}]:\,|F|=d\}.\] Note that \(|\mathcal{F}_{r,d}|=\binom{d2^{r}}{d}\geq 2^{dr}\). We enumerate the sets in \(\mathcal{F}_{r,d}\) as \(F_{1},\ldots,F_{\binom{d2^{r}}{d}}\) in the natural order. Next, we define the partial concept class \(\mathbb{H}_{r,d}\) on domain \([d(2^{r}+r)]\). The class consists of the partial concepts \(h_{i,j}\) for \(i\in[\binom{d2^{r}}{d}]\) and \(j\in[dr]\) defined as follows: \[h_{i,j}(x)=\begin{cases}1&\text{ if }x\in F_{i}\\ 0&\text{ if }x\in[d2^{r}]\setminus F_{i}\\ \beta(i,j)&\text{ if }x=d2^{r}+j\\ \star&\text{ otherwise}\end{cases},\] where \(\beta(i,j)\) denotes the \(j\)-th bit of the \(dr\)-bit binary representation of \(i\) if \(i\in[2^{dr}]\), and \(\beta(i,j)=\star\) otherwise. We first prove that \(d\leq\mathrm{LD}(\mathbb{H}_{r,d})\leq d+1\).
Note that there is a set of \(2^{d}\) indices \(I\subseteq\left[\binom{d2^{r}}{d}\right]\) such that \[\{F_{i}\cap[d]:i\in I\}=\mathcal{P}([d]),\] and therefore \([d]\) can be shattered by \(\{h_{i,1}:\,i\in I\}\) and hence \(\mathrm{LD}(\mathbb{H}_{r,d})\geq\mathrm{VC}(\mathbb{H}_{r,d})\geq d\). On the other hand, note that \(|f^{-1}(1)|\leq d+1\) for any \(f\in\mathbb{H}_{r,d}\), which implies that \(\mathrm{LD}(\mathbb{H}_{r,d})\leq d+1\). Next, we consider the SOA disambiguation. We claim that \(\{d2^{r}+1,\ldots,d(2^{r}+r)\}\) is shattered by the SOA disambiguations \(\{\overline{h_{i,1}}:i\in[2^{dr}]\}\). There are no \(\star\) values to disambiguate for \(x\in[d2^{r}]\). For \(x>d2^{r}\), note that for any \(\vec{b}\in\{0,1\}^{x-1}\), either \(\mathbb{H}_{r,d}|_{\vec{b}}=\emptyset\) or \[\mathbb{H}_{r,d}|_{\vec{b}}=\{h_{i,j}:j\in[dr]\},\] where \(i\in\left[\binom{d2^{r}}{d}\right]\) is such that \(F_{i}=\{k\in[d2^{r}]:\,b_{k}=1\}\). We focus on the latter case and restrict to \(i\in[2^{dr}]\). There is exactly one \(c\in\{0,1\}\) such that \(\mathbb{H}_{r,d}|_{\vec{b}c}\neq\emptyset\), namely \(c=\beta(i,x-d2^{r})\), and in this case \(\mathbb{H}_{r,d}|_{\vec{b}c}=\{h_{i,x-d2^{r}}\}\). This forces the algorithm to disambiguate every function \(f\) consistent with \(\vec{b}\) by setting \(f(x)=h_{i,x-d2^{r}}(x)=\beta(i,x-d2^{r})\). In this manner, every \(h_{i,j}\) is eventually disambiguated into the same total function: \[\overline{h_{i,j}}(x)=\begin{cases}1&\text{ if }x\in F_{i}\\ 0&\text{ if }x\in[d2^{r}]\setminus F_{i}\\ \beta(i,x-d2^{r})&\text{ if }x>d2^{r}\end{cases}.\] In particular, for every \(i\in[2^{dr}]\), the bit string \((\overline{h_{i,1}}(d2^{r}+1),\ldots,\overline{h_{i,1}}(d2^{r}+dr))\) is the \(dr\)-bit binary representation of \(i\). This witnesses that \(\mathrm{VC}(\overline{\mathbb{H}_{r,d}}^{\mathrm{SOA}})\geq dr\). As an illustration, we provide the matrix representation of \(\mathbb{H}_{1,2}\) and some essential steps of the SOA disambiguation in Fig. 1. Figure 1: \(\mathbb{H}_{1,2}\) and its SOA disambiguation. ### Small-size refutation of the Alon-Saks-Seymour conjecture (Theorem 1.11) In this section, we present the construction of Theorem 1.11 in detail. The starting point is a Boolean function in query complexity due to [1]. This Boolean function then goes through several reductions to be converted into a graph, as described below. We first introduce some basic definitions related to the notion of _certificate complexity_. Let \(f:\{0,1\}^{n}\to\{0,1\}\) be a Boolean function. For \(b\in\{0,1\}\) and an input \(x\in f^{-1}(b)\), a partial input \(\rho\in\{0,1,\star\}^{n}\) is called a \(b\)-certificate if \(x\) is consistent with \(\rho\) and for every \(x^{\prime}\in\{0,1\}^{n}\) consistent with \(\rho\), we have \(f(x^{\prime})=b\). The size of \(\rho\) is the number of non-\(\star\) entries of \(\rho\). Define \(\operatorname{C}_{b}(f,x)\) as the smallest size of a \(b\)-certificate for \(x\). The \(b\)-certificate complexity of \(f\), denoted \(\operatorname{C}_{b}(f)\), is the maximum of \(\operatorname{C}_{b}(f,x)\) over all \(x\in f^{-1}(b)\). The _unambiguous_ \(b\)-certificate complexity of \(f\), denoted \(\operatorname{UC}_{b}(f)\), is the smallest \(k\) such that 1. Every input \(x\in f^{-1}(b)\) has a \(b\)-certificate \(\rho_{x}\) of size at most \(k\); 2. For every \(x\neq y\) in \(f^{-1}(b)\), we have \(\rho_{x}\neq\rho_{y}\). The main result of [1] is the following separation between \(\operatorname{UC}_{1}\) and \(\operatorname{C}_{0}\).
**Theorem 3.2** ([1, Theorem 1]).: _There is a function \(f:\{0,1\}^{12n^{4}\log^{2}n}\to\{0,1\}\) such that \(\operatorname{UC}_{1}(f)=O(n\log^{3}n)\) and \(\operatorname{C}_{0}(f)=\Omega(n^{2})\)._ The next step of the construction is to transform the function separating the certificate complexities \(\operatorname{UC}_{1}\) and \(\operatorname{C}_{0}\) into a communication problem. This is achieved by the "lifting" trick: given a function \(f:\{0,1\}^{n}\to\{0,1\}\) and a "gadget" function \(g:\{0,1\}^{k}\times\{0,1\}^{k}\to\{0,1\}\), we define \(f\circ g^{n}:\{0,1\}^{nk}\times\{0,1\}^{nk}\to\{0,1\}\) as \[f\circ g^{n}([x_{1},\ldots,x_{n}],[y_{1},\ldots,y_{n}])=f(g(x_{1},y_{1}),\ldots,g(x_{n},y_{n})).\] For a communication problem \(f:\{0,1\}^{m}\times\{0,1\}^{m}\to\{0,1\}\) and \(b\in\{0,1\}\), let \(\operatorname{Cov}_{b}(f)\) denote the minimum number of \(b\)-monochromatic rectangles required to cover all the \(b\)-entries of \(f\). We denote by \(\operatorname{UCov}_{b}(f)\) the minimum number of \(b\)-monochromatic rectangles required to _partition_ all the \(b\)-entries of \(f\). The following theorem provides a connection between the communication complexity parameters and the certificate complexity parameters. **Theorem 3.3** ([1, Theorem 33]).: _There exists a gadget \(g:\{0,1\}^{k}\times\{0,1\}^{k}\to\{0,1\}\) with \(k=\Omega(\log n)\) such that for every \(f:\{0,1\}^{n}\to\{0,1\}\), we have_ \[\log\operatorname{Cov}_{b}(f\circ g^{n})=\Omega(k\operatorname{C}_{b}(f)).\] Note that for every \(b\in\{0,1\}\), we have \(\log\operatorname{UCov}_{b}(f\circ g^{n})\leq 2k\operatorname{UC}_{b}(f)\). This combined with Theorem 3.3 allows one to "lift" the \(\operatorname{UC}_{1}\) vs \(\operatorname{C}_{0}\) separation of Theorem 3.2 into a \(\operatorname{UCov}_{1}\) vs \(\operatorname{Cov}_{0}\) separation. **Corollary 3.4**.: _There exists a function \(f:\{0,1\}^{O(n^{4}\log^{3}n)}\times\{0,1\}^{O(n^{4}\log^{3}n)}\to\{0,1\}\) such that_ \[\log\operatorname{Cov}_{0}(f)=\Omega(n^{2})\qquad\text{and}\qquad\log\operatorname{UCov}_{1}(f)=O(n\log^{4}n).\] Next, we show how to convert these communication parameters to graph parameters of the biclique partition number and chromatic number. **Lemma 3.5**.: _Let \(h:\{0,1\}^{t}\times\{0,1\}^{t}\to\{0,1\}\) be a Boolean function with \(\operatorname{Cov}_{0}(h)=c\) and \(\operatorname{UCov}_{1}(h)=m\). There exists a graph \(G=(V,E)\) on at most \(2^{2t}\) vertices with \(\operatorname{bp}(G)\leq m^{2}\) and \(\chi(G)\geq\sqrt{c}\)._ Proof.: Define the graph \(G\) with \(V\coloneqq h^{-1}(0)\) as follows. Two vertices \((x,y),(x^{\prime},y^{\prime})\in V\) are adjacent in \(G\) iff \(h(x,y^{\prime})=1\) or \(h(x^{\prime},y)=1\). By construction, if \(\{(x_{1},y_{1}),\ldots,(x_{\ell},y_{\ell})\}\subseteq V\) is an independent set, then \(\{x_{1},\ldots,x_{\ell}\}\times\{y_{1},\ldots,y_{\ell}\}\) is a \(0\)-monochromatic rectangle for \(h\). Thus every proper vertex coloring of \(G\) with \(\chi(G)\) colors corresponds to a \(0\)-cover of \(h\) with \(\chi(G)\) many \(0\)-monochromatic rectangles. Therefore, \(\chi(G)\geq c\). We next show that there exists a small set of bicliques such that every edge of \(E\) is covered at least once and at most twice by these bicliques. Let \(h^{-1}(1)=\bigcup_{i=1}^{m}(A_{i}\times B_{i})\) be a partition of \(h^{-1}(1)\) into \(m\) many \(1\)-monochromatic rectangles.
Note that every \(1\)-monochromatic rectangle \(A_{i}\times B_{i}\) corresponds to a biclique \(Q_{i}\coloneqq S_{i}^{-}\times S_{i}^{+}\) in \(G\), where \[S_{i}^{-}\coloneqq\{(x,y)\in V(G):\ x\in A_{i}\}\text{ and }S_{i}^{+}=\{(x,y)\in V(G):\ y\in B_{i}\}.\] Notice that each edge \(\{(x,y),(x^{\prime},y^{\prime})\}\) of \(G\) is covered at least once by \(Q_{1},\ldots,Q_{m}\), and it is covered at most twice, the latter happening when \(h(x,y^{\prime})=h(x^{\prime},y)=1\). We have thus constructed a graph \(G\) on at most \(2^{2t}\) vertices such that \(\chi(G)\geq c\), and there are at most \(m\) bicliques where every edge in \(G\) appears in at least one and at most two bicliques. Define \(H_{2}\) as the subgraph of \(G\) that consists of all the edges covered by exactly two bicliques among \(Q_{1},\ldots,Q_{m}\). For every \(i,j\in[m]\), define \(Q_{ij}=(S_{i}^{-}\cap S_{j}^{+})\times(S_{i}^{+}\cap S_{j}^{-})\). Note that each \(Q_{ij}\) is a biclique of \(H_{2}\), and moreover, each edge of \(H_{2}\) appears in exactly one \(Q_{ij}\). Hence, the biclique partition number of \(H_{2}\) is at most \(m^{2}\). Now, if \(\chi(H_{2})\geq\sqrt{c}\), we obtain \(H_{2}\) as the desired graph. Suppose otherwise that \(\chi(H_{2})<\sqrt{c}\), and consider a proper vertex coloring of \(H_{2}\) with \(\sqrt{c}\) colors and color classes \(V_{1},\ldots,V_{\sqrt{c}}\). Since \(\chi(G)\geq c\), there must exist \(i\) such that the induced subgraph of \(G\) on \(V_{i}\), denoted by \(G[V_{i}]\), satisfies \(\chi(G[V_{i}])\geq\sqrt{c}\). Since \(V_{i}\) is an independent set of \(H_{2}\), the restrictions of the bicliques \(Q_{1},\ldots,Q_{m}\) to \(V_{i}\) form a biclique partition of \(G[V_{i}]\) of size at most \(m\), so \(G[V_{i}]\) is the desired graph. Lemma 3.5 and Corollary 3.4 together imply Theorem 1.11. **Remark 3.6**.: _In addition to providing effective bounds on the size of the graph, Lemma 3.5 also simplifies the original chain of reductions utilized in prior work [1, 2, 1, 1] toward achieving a super-polynomial separation between the biclique partition and chromatic numbers. We will briefly describe the original proof below and highlight the differences._ 1. _Similar to our proof of Theorem_ 1.11_, the chain of reductions begins with the function_ \(f\) _provided by Corollary_ 3.4_, such that_ \[\log\operatorname{Cov}_{0}(f)=\Omega(n^{2})\qquad\text{and}\qquad\log\operatorname{UCov}_{1}(f)=O(n\log^{4}n).\] 2. _Yannakakis_ _[_1_]_ _(see also_ _[_2_, Figure 1]_) showed how to use_ \(f\) _to construct a graph_ \(F\) _on_ \(\operatorname{UCov}_{1}(f)=2^{O(n\log^{4}n)}\) _vertices such that every Clique-Stable set separator of_ \(F\) _is of size at least_ \(\operatorname{Cov}_{0}(f)=2^{\Omega(n^{2})}\)_. Here, a Clique-Stable set separator is a collection of cuts in_ \(F\) _such that for every disjoint pair_ \((C,I)\) _of a clique_ \(C\) _and a stable set_ \(I\) _in_ \(F\)_, there is a cut_ \((A,B)\) _in the collection with_ \(C\subseteq A\) _and_ \(I\subseteq B\)_._ 3. _Bousquet et al._ _[_BLT14_, Lemma 23]_ _show how to use_ \(F\) _to construct a new graph_ \(G\) _with the so-called oriented biclique packing number at most_ \(2^{n\log^{4}n}\) _and chromatic number_ \(\chi(G)\geq 2^{\Omega(n^{2})}\)_._ 4.
_The graph_ \(G\) _is then turned into a separation between the biclique partition number and chromatic number in a different graph \(H\) via a final reduction in_ _[_BLT14_]_._ _The above chain of reductions is not sufficient for our application because the graph \(G\) of Step (iii) has a vertex for each pair \((C,I)\) of a clique \(C\) and a stable set \(I\) of \(F\), and as a result, there are no effective upper-bounds on the number of vertices of \(G\). Our proof of Theorem 1.11 bypasses Step (ii) and employs a more direct approach to construct a small-size graph \(G\) that has similar properties to the graph \(G\) of Step (iii)._ ## 4 Concluding remarks A few natural questions remain unanswered. The first question is whether a similar example \(\mathbb{H}\) for Theorem 1.6 with the stronger assumption \(\operatorname{LD}(\mathbb{H})=1\) exists. **Problem 4.1**.: _Let \(\mathbb{H}\) be a partial class with \(\operatorname{LD}(\mathbb{H})=1\). Does there exist a disambiguation of \(\mathbb{H}\) by a total class \(\overline{\mathbb{H}}\) such that \(\operatorname{LD}(\overline{\mathbb{H}})<\infty\)? Is there one with \(\operatorname{VC}(\overline{\mathbb{H}})<\infty\)?_ Theorem 1.10 shows that for partial classes, having polynomial growth is not a sufficient condition for having a disambiguation that is PAC learnable. A natural candidate for reinstating the theorem is to work with the more restrictive assumption of linear growth. **Problem 4.2**.: _Let \(\mathbb{H}\subseteq\{0,1,\star\}^{\mathcal{X}}\) have polynomial growth with parameter \(1\). Does there exist a disambiguation \(\overline{\mathbb{H}}\) of \(\mathbb{H}\) with \(\operatorname{VC}(\overline{\mathbb{H}})<\infty\)?_ Another question is whether one can improve the lower bound of \(\Omega(\log\log n)\) in Theorem 1.6 to \(\Omega(\log n)\). **Problem 4.3**.: _Can the lower bound in Theorem 1.6 be improved to \(\operatorname{VC}(\overline{\mathbb{H}})\geq\Omega(\log n)\)?_ Forbidding combinatorial patterns. A natural method to prove upper bounds on the VC dimension of a concept class is establishing that it does not contain a specific combinatorial pattern. For example, the construction for Theorem 1.3 in [1] utilized the fact that the concept class (viewed as a matrix) does not contain the combinatorial patterns \(\begin{bmatrix}1&1\\ 0&0\end{bmatrix}\) and \(\begin{bmatrix}1&0\\ 0&1\end{bmatrix}\), patterns that appear in any concept class \(\mathbb{H}\) with \(\operatorname{VC}(\mathbb{H})\geq 2\). Similarly, the dual construction in Theorem 1.6 forbids the pattern \(\begin{bmatrix}1&0\\ 1&0\end{bmatrix}\), a compulsory pattern for any concept class \(\mathbb{H}\) with \(\operatorname{LD}(\mathbb{H})\geq 3\). **Problem 4.4**.: _Suppose \(\mathbb{H}\subseteq\{0,1,\star\}^{[n]}\) does not contain the pattern \(\begin{bmatrix}1&1\\ 0&1\end{bmatrix}\). Does every disambiguation \(\overline{\mathbb{H}}\) of \(\mathbb{H}\) satisfy \(\operatorname{VC}(\overline{\mathbb{H}})=O(1)\)?_ Acknowledgement. We wish to thank Mika Göös for clarifying the reductions in [1, 1, 1, 1].
2310.17440
Gibbs optimal design of experiments
Bayesian optimal design of experiments is a well-established approach to planning experiments. Briefly, a probability distribution, known as a statistical model, for the responses is assumed which is dependent on a vector of unknown parameters. A utility function is then specified which gives the gain in information for estimating the true value of the parameters using the Bayesian posterior distribution. A Bayesian optimal design is given by maximising the expectation of the utility with respect to the joint distribution given by the statistical model and prior distribution for the true parameter values. The approach takes account of the experimental aim via specification of the utility and of all assumed sources of uncertainty via the expected utility. However, it is predicated on the specification of the statistical model. Recently, a new type of statistical inference, known as Gibbs (or General Bayesian) inference, has been advanced. This is Bayesian-like, in that uncertainty on unknown quantities is represented by a posterior distribution, but does not necessarily rely on specification of a statistical model. Thus the resulting inference should be less sensitive to misspecification of the statistical model. The purpose of this paper is to propose Gibbs optimal design: a framework for optimal design of experiments for Gibbs inference. The concept behind the framework is introduced along with a computational approach to find Gibbs optimal designs in practice. The framework is demonstrated on exemplars including linear models, and experiments with count and time-to-event responses.
Antony M. Overstall, Jacinta Holloway-Brown, James M. McGree
2023-10-26T14:50:07Z
http://arxiv.org/abs/2310.17440v1
# Gibbs optimal design of experiments ###### Abstract Bayesian optimal design of experiments is a well-established approach to planning experiments. Briefly, a probability distribution, known as a statistical model, for the responses is assumed which is dependent on a vector of unknown parameters. A utility function is then specified which gives the gain in information for estimating the true value of the parameters using the Bayesian posterior distribution. A Bayesian optimal design is given by maximising the expectation of the utility with respect to the joint distribution given by the statistical model and prior distribution for the true parameter values. The approach takes account of the experimental aim via specification of the utility and of all assumed sources of uncertainty via the expected utility. However, it is predicated on the specification of the statistical model. Recently, a new type of statistical inference, known as Gibbs (or General Bayesian) inference, has been advanced. This is Bayesian-like, in that uncertainty on unknown quantities is represented by a posterior distribution, but does not necessarily rely on specification of a statistical model. Thus the resulting inference should be less sensitive to misspecification of the statistical model. The purpose of this paper is to propose Gibbs optimal design: a framework for optimal design of experiments for Gibbs inference. The concept behind the framework is introduced along with a computational approach to find Gibbs optimal designs in practice. The framework is demonstrated on exemplars including linear models, and experiments with count and time-to-event responses. _Keywords:_ Gibbs statistical inference; loss function, robust design of experiments; utility function Introduction Experiments are key to the scientific method. They are used to systematically investigate the physical relationship between a series of controllable variables and a response variable. In this paper, an experiment consists of a fixed number of runs, where each run involves the specification of all controllable variables and the subsequent observation of a response. The experimental aim is then addressed, usually by the estimation of a statistical model, or models. The quality of this analysis can be highly dependent on the design of the experiment: the specification of the controllable variables for each run (see, for example, [7]). _Decision-theoretic Bayesian optimal design of experiments_(Chaloner and Verdinelli, 1995, termed here as _Bayesian optimal design_ for brevity) provides a principled approach to planning experiments. The Bayesian optimal design approach can briefly be described as follows. First, a statistical model is assumed. This is a probability distribution for the responses, fully specified apart from dependence on a vector of unknown parameters. A prior distribution is also assumed representing prior knowledge about the true values of the parameters, i.e. those parameter values that make the statistical model coincide with the true probability distribution for the responses. A utility function is then specified, returning the gain in information, in estimating the true values of the parameters, from the responses obtained via a given design. The expected utility is given by the expectation of the utility with respect to the responses and true parameter values, under the joint probability distribution implied by the statistical model and prior distribution. 
Lastly, a Bayesian optimal design maximises the expected utility over the space of all designs. The advantages of the Bayesian optimal design approach include the following. Firstly, the approach explicitly takes account of the aim of the experiment through specification of the utility function. Indeed, the approach can easily be extended for model selection and/or prediction experimental aims. Secondly, by taking the expectation of the utility, it incorporates all known sources of uncertainty. The patent disadvantage is that the objective function given by the expected utility is rarely available in closed form. Instead, an often computationally expensive numerical routine is used to approximate the expected utility which then needs to be maximised over a potentially high-dimensional design space. However, in the last decade, new computational methodology has been proposed to find Bayesian designs more efficiently than previously, or to find designs for scenarios which were previously out of reach (see, for example, Rainforth et al., and references therein). A more fundamental disadvantage is that the statistical model for the responses is specified before the responses are observed. The resulting design can be highly tailored to a statistical model which may actually be significantly misspecified, compromising the accuracy and precision of future statistical inference. Recently, there has been progress in Bayesian-like statistical inference that does not require the specification of a probability distribution, i.e. a statistical model. Instead, a loss function is specified, identifying desirable parameter values for given responses (those that minimise the loss). The specification of the loss does not necessarily follow from specification of a probability distribution for the responses. A so-called (unnormalised) Gibbs posterior distribution for the parameters is then given by the product of the exponential of the negative loss and a prior distribution for target parameter values. The target parameter values are the parameters which minimise the expectation of the loss with respect to the true (but unknown and unspecified) probability distribution of the responses (Bissiri et al., 2016). Gibbs inference (inference using the Gibbs posterior distribution) can be seen as a Bayesian-like analogue of classical M-estimation (for example, Hayashi, 2000, Chapter 7). Bissiri et al. (2016) provide a thorough theoretical treatment of Gibbs inference. In particular, they show that Gibbs inference provides coherent inference about the target parameter values. This paper proposes a decision-theoretic Gibbs optimal design of experiments framework (referred to as Gibbs optimal design for brevity). Similar to Bayesian optimal design, a utility function is specified: a function returning gain in information, in estimating the target parameter values, using responses obtained via a given design, where dependence on the responses is through the Gibbs posterior distribution. However, the expected utility does not immediately follow. The absence of a statistical model means there is no joint probability distribution for the responses and target parameter values. Instead, we propose taking expectation of the utility function with respect to a probability distribution for the responses which we term the designer distribution. This user-specified designer distribution should be flexible enough to be close to the true probability distribution for the responses, certainly closer than the statistical model.
Specification of the designer distribution can be aided by the fact that it does not need to be "useful", i.e. it will not be fitted to the observed responses of the experiment and need not be capable of addressing the experimental aim. To demonstrate the concept of the designer distribution, consider the following simple example. Suppose the aim of the experiment is to investigate the relationship between a single controllable variable, \(x\), and a continuous response. It is assumed that the mean response is given by a polynomial of \(x\) where the coefficients are unknown parameters. A reasonable loss is sum of squares: the sum of the squared differences between the responses and their mean (given by the polynomial). A suitable designer distribution is the unique-treatment model (e.g. Gilmour and Trinca, 2012): a probability distribution where each unique \(x\) value in the experiment exhibits a unique mean. The unique-treatment model is not a useful model, i.e. it does not allow one to effectively learn the relationship between \(x\) and the response. However, it is a flexible model likely to be closer to capturing the relationship between \(x\) and the response, than a polynomial. The topic considered in this paper fits within the field of _robust design of experiments_. One of the first treatments of this topic was the work of Box and Draper (1959) who considered design of experiments for a statistical model with mean response a linear model with an intercept and first-order term, for a single controllable variable, when the true probability distribution has a mean response also containing a quadratic term. A relatively up-to-date review of robust design of experiments is given by Wiens (2015) who considers the topic under misspecification of both the mean and error structure of the statistical model, as well as from the point of view of different applications (including dose-response, clinical trials and computer experiments). The remainder of the paper is organised as follows. Section 2 provides brief mathematical descriptions of Bayesian design and Gibbs inference, before Section 3 introduces the concept of Gibbs optimal design. Section 3.2 considers Gibbs optimal design for the class of linear models where closed form expressions for the expected utility exist for common utilities. Section 3.3 describes a computational approach for approximating the expected utility in more complex models. Finally, we demonstrate Gibbs optimal design on illustrative examples in Section 4. ## 2 Background ### Design problem setup Suppose the experiment aims to investigate the relationship between \(k\) controllable variables denoted \(\mathbf{x}=\left(x_{1},\ldots,x_{k}\right)^{\mbox{\tiny T}}\in\mathbb{X}\), and a measurable response denoted by \(y\). The experiment consists of \(n\) runs, where the \(i\)th run involves specifying \(\mathbf{x}_{i}=\left(x_{i1},\ldots,x_{ik}\right)^{\mbox{\tiny T}}\in\mathbb{X}\) and observing response \(y_{i}\), for \(i=1,\ldots,n\). Let \(\mathbf{y}=\left(y_{1},\ldots,y_{n}\right)^{\mbox{\tiny T}}\) denote the \(n\times 1\) vector of responses and \(X\) the \(n\times k\) design matrix with \(i\)th row \(\mathbf{x}_{i}^{\mbox{\tiny T}}\), for \(i=1,\ldots,n\). It is assumed that \(\mathbf{y}\) are realisations from a multivariate probability distribution denoted \(\mathcal{T}(X)\), which is the true but unknown response-generating probability distribution. 
This paper addresses the specification of \(X\) to learn the most about the true response-generating probability distribution. ### Bayesian optimal design Bayesian optimal design relies on the specification of a statistical model and utility function. The statistical model, denoted \(\mathcal{S}(\mathbf{t};X)\), is a probability distribution for the responses \(\mathbf{y}\) used to represent the true response-generating probability distribution, \(\mathcal{T}(X)\). The statistical model is fully specified up to a \(p\times 1\) vector of parameters \(\mathbf{t}=(t_{1},\ldots,t_{p})^{\mbox{\tiny T}}\in\Theta\) with parameter space \(\Theta\subset\mathbb{R}^{p}\). Suppose there exist values \(\boldsymbol{\theta}_{T}\) of the parameters such that the statistical model and true response-generating probability distribution coincide, i.e. \(\mathcal{T}(X)=\mathcal{S}(\boldsymbol{\theta}_{T};X)\). If \(\boldsymbol{\theta}_{T}\) is known, then the true response-generating distribution is completely known so, in some form, estimating \(\boldsymbol{\theta}_{T}\) is the aim of the experiment. Bayesian inference for this aim is achieved by evaluating the Bayesian posterior distribution, denoted \(\mathcal{B}(\mathbf{y};X)\) and given by \[\pi_{\mathcal{B}}(\mathbf{t}|\mathbf{y};X)\propto\pi(\mathbf{y}|\mathbf{t};X)\pi_{T}(\mathbf{t}). \tag{1}\] In (1), \(\pi(\mathbf{y}|\mathbf{t};X)\) is the likelihood function: the probability density/mass function of \(\mathcal{S}(\mathbf{t};X)\), and \(\pi_{T}(\cdot)\) is the probability density function (pdf) for the prior distribution of \(\boldsymbol{\theta}_{T}\) (denoted \(\mathcal{P}_{T}\)). For simplicity, we assume that the prior distribution \(\mathcal{P}_{T}\) does not depend on the design \(X\). However, the technicalities of Bayesian design change little if \(\mathcal{P}_{T}\) depends on \(X\). The utility function is denoted \(u_{\mathcal{B}}(\mathbf{t},\mathbf{y},X)\). It gives the gain in information of estimating the parameter values \(\mathbf{t}\) using the Bayesian posterior distribution conditional on responses \(\mathbf{y}\) collected via design \(X\). The Bayesian expected utility is \[U_{\mathcal{B}}(X)=\mathrm{E}_{\mathcal{P}_{T}}\left\{\mathrm{E}_{\mathcal{S}(\boldsymbol{\theta}_{T};X)}\left[u_{\mathcal{B}}(\boldsymbol{\theta}_{T},\mathbf{y},X)\right]\right\}. \tag{2}\] In (2), the utility is evaluated at the true parameter values \(\boldsymbol{\theta}_{T}\), indicating that the experimental aim is to estimate these values. The inner expectation is with respect to the responses \(\mathbf{y}\) under \(\mathcal{S}(\boldsymbol{\theta}_{T};X)\), and the outer expectation with respect to the true parameter values \(\boldsymbol{\theta}_{T}\), under the prior distribution, \(\mathcal{P}_{T}\). A Bayesian optimal design is given by maximising \(U_{\mathcal{B}}(X)\) over the design space \(\mathbb{D}=\mathbb{X}^{n}\). A commonly-used utility function, and one used in this paper to demonstrate concepts, is negative squared error (NSE; see, for example, [7], Section 2.5.1) given by \[u_{\mathcal{B},NSE}(\mathbf{t},\mathbf{y},X)=-\left\|\mathbf{t}-\mathrm{E}_{\mathcal{B}(\mathbf{y};X)}\left(\boldsymbol{\theta}_{T}\right)\right\|_{2}^{2},\] where \(\mathrm{E}_{\mathcal{B}(\mathbf{y};X)}\left(\boldsymbol{\theta}_{T}\right)\) is the Bayesian posterior mean of \(\boldsymbol{\theta}_{T}\) and \(\left\|\mathbf{u}\right\|_{2}=\sqrt{\sum_{j=1}^{p}u_{j}^{2}}\) for a \(p\times 1\) vector \(\mathbf{u}=(u_{1},\ldots,u_{p})^{T}\).
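As an illustration of (2) under the NSE utility, the following sketch (ours; the normal linear model, the prior variance \(\tau^{2}\) and all numerical values are illustrative assumptions, not from the paper) approximates \(U_{\mathcal{B}}(X)\) by Monte Carlo in a conjugate setting where the posterior mean is available in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_nse_utility(F, sigma2=1.0, tau2=4.0, n_mc=5000):
    """Monte Carlo estimate of the Bayesian expected NSE utility U_B(X) for
    y ~ N(F theta, sigma2 I) with conjugate prior theta ~ N(0, tau2 I)."""
    n, p = F.shape
    post_cov = np.linalg.inv(F.T @ F / sigma2 + np.eye(p) / tau2)
    total = 0.0
    for _ in range(n_mc):
        theta = rng.normal(0.0, np.sqrt(tau2), size=p)       # true values from prior
        y = F @ theta + rng.normal(0.0, np.sqrt(sigma2), n)  # responses from model
        post_mean = post_cov @ (F.T @ y) / sigma2            # closed-form posterior mean
        total -= np.sum((theta - post_mean) ** 2)            # NSE utility at the truth
    return total / n_mc

# Six-run design with x in {-1, 0, 1} and regression vector f(x) = (1, x, x^2).
x = np.array([-1.0, -1.0, 0.0, 0.0, 1.0, 1.0])
F = np.column_stack([np.ones_like(x), x, x ** 2])
print(expected_nse_utility(F))  # approximately -trace(posterior variance)
```

Outside conjugate settings the inner posterior mean must itself be approximated within each Monte Carlo draw, which is the main source of the computational expense discussed next.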
The Bayesian expected NSE utility is equal to the negative expected trace of the posterior variance matrix for \(\boldsymbol{\theta}_{T}\), where expectation is with respect to the marginal distribution of \(\mathbf{y}\), under the statistical model and prior distribution \(\mathcal{P}_{T}\). Notwithstanding the conceptual advantages of Bayesian optimal design listed in Section 1, the disadvantage of finding Bayesian optimal designs in practice is clear. Firstly, the utility depends on \(\mathbf{y}\) through the Bayesian posterior distribution. For most cases, this distribution is not available in closed form, meaning the utility will not be available in closed form. Secondly, the integration required to evaluate the Bayesian expected utility given by (2) will also not be analytically tractable. Although it is straightforward to derive a numerical approximation to the Bayesian expected utility \(U_{\mathcal{B}}(X)\), this approximation then needs to be maximised over the \(nk\)-dimensional design space \(\mathbb{D}\). The dimensionality of \(\mathbb{D}\) can be high if the number of runs \(n\) and/or number of controllable variables \(k\) are large. This limits the effectiveness of commonly used numerical approximations such as, for example, Monte Carlo integration. However, in the last decade, significant progress has been made in the development of new computational methodology to find Bayesian optimal designs for a range of problems. See, for example, the review papers of Ryan et al. (2016) and Rainforth et al., and references therein. ### Gibbs inference Gibbs inference (also known as general Bayesian inference) is an approach to statistical inference which is Bayesian-like, i.e. information is summarised by a probability distribution for unknown quantities, but does not necessarily rely on specification of a statistical model. Instead, a loss function, denoted \(\ell(\mathbf{t};\mathbf{y},X)\), is specified. This function identifies desirable parameter values for observed responses \(\mathbf{y}\). Notably, the corresponding classical M-estimators (see, for example, Hayashi, 2000, Chapter 7) are defined as \[\boldsymbol{\hat{\theta}}_{\ell}(\mathbf{y};X)=\arg\min_{\mathbf{t}\in\Theta}\ell(\mathbf{t};\mathbf{y},X).\] The Gibbs posterior distribution, denoted \(\mathcal{G}_{\ell}(\mathbf{y};X)\), is given by \[\pi_{\ell}\left(\mathbf{t}|\mathbf{y},X\right)\propto\exp\left[-w\ell(\mathbf{t};\mathbf{y},X)\right]\pi_{\ell}(\mathbf{t}), \tag{3}\] where \(\pi_{\ell}(\cdot)\) is the pdf of the prior distribution (denoted \(\mathcal{P}_{\ell}\)) for the target parameter values (discussed below) and \(w>0\) is a calibration weight (also discussed below). In (3), the quantity \(\exp\left[-w\ell(\mathbf{t};\mathbf{y},X)\right]\) is known as the generalised likelihood. Similar to Bayesian inference, we assume that the prior distribution \(\mathcal{P}_{\ell}\) does not depend on the design \(X\). Bissiri et al. (2016) showed that the Gibbs posterior distribution provides coherent inference about the target parameter values defined as \[\boldsymbol{\theta}_{\ell,\mathcal{T}}(X)=\arg\min_{\mathbf{t}\in\Theta}L_{\mathcal{T}(X)}(\mathbf{t}),\] where \(L_{\mathcal{T}(X)}(\mathbf{t})=\mathrm{E}_{\mathcal{T}(X)}\left[\ell(\mathbf{t};\mathbf{y},X)\right]\) is the expected loss with respect to the responses \(\mathbf{y}\) under the true response-generating distribution, \(\mathcal{T}(X)\).
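To see (3) in action, the sketch below (our toy illustration; the absolute-error loss, flat prior and simulated data are assumptions for the example) evaluates a Gibbs posterior on a grid for a scalar location parameter. The corresponding M-estimator is the sample median, and the target parameter value is the minimiser of the expected absolute error under the true distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

def gibbs_posterior_grid(y, loss, grid, w=1.0):
    """Gibbs posterior on a grid under a flat prior, following (3):
    pi(t|y) proportional to exp(-w * loss(t; y)), normalised numerically."""
    log_post = np.array([-w * loss(t, y) for t in grid])
    log_post -= log_post.max()            # stabilise before exponentiating
    post = np.exp(log_post)
    return post / (post.sum() * (grid[1] - grid[0]))

abs_loss = lambda t, y: np.sum(np.abs(y - t))   # M-estimator: the sample median

y = rng.standard_t(df=2, size=30) + 3.0         # heavy-tailed responses, centred at 3
grid = np.linspace(0.0, 6.0, 601)
post = gibbs_posterior_grid(y, abs_loss, grid, w=1.0)
print(grid[np.argmax(post)], np.median(y))      # posterior mode tracks the median
```

Increasing \(w\) concentrates this Gibbs posterior around the M-estimator, while small \(w\) returns it towards the (here flat) prior, matching the role of the calibration weight discussed next.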
Suppose the responses and design can be partitioned as \(\mathbf{y}=\left(\mathbf{y}_{1}^{T},\mathbf{y}_{2}^{T}\right)^{T}\) and \(X=\left(X_{1}^{T},X_{2}^{T}\right)^{T}\). Coherent inference, in this case, means that the Gibbs posterior formed from the complete data, i.e. by using generalised likelihood \(\exp\left[-w\ell(\mathbf{t};\mathbf{y},X)\right]\) with prior \(\mathcal{P}_{\ell}\), is identical to the Gibbs posterior from first evaluating the Gibbs posterior \(\mathcal{G}_{\ell}(\mathbf{y}_{1};X_{1})\) and then using this as a prior distribution with generalised likelihood \(\exp\left[-w\ell(\mathbf{t};\mathbf{y}_{2},X_{2})\right]\). The calibration weight, \(w\), controls the rate of learning from prior, \(\mathcal{P}_{\ell}\), to Gibbs posterior, \(\mathcal{G}_{\ell}(\mathbf{y};X)\). To see this, consider the extreme cases. As \(w\to 0\), \(\mathcal{G}_{\ell}(\mathbf{y};X)\rightarrow\mathcal{P}_{\ell}\), and, as \(w\rightarrow\infty\), \(\mathcal{G}_{\ell}(\mathbf{y};X)\) converges to a point mass at \(\boldsymbol{\hat{\theta}}_{\ell}(\mathbf{y};X)\). Therefore, the specification of the calibration weight is crucial for Gibbs inference. Bissiri et al. (2016) described various approaches to the specification of the calibration weight. These include (a) choosing \(w\) so that the prior distribution \(\mathcal{P}_{\ell}\) contributes a fraction (e.g. \(1/n\)) of the information in the generalised likelihood; (b) choosing \(w\) to maintain operational characteristics, i.e. so that properties of the Gibbs posterior match those of the classical M-estimators; or (c) allowing \(w\) to be unknown with a hierarchical prior distribution. This paper does not advocate for any one of these approaches. The Gibbs optimal design framework, as introduced in the next section, does not rely on any particular calibration method and can be used with any of the methods considered by Bissiri et al. (2016). ### Bayesian inference as Gibbs inference The Bayesian posterior distribution can be obtained as a Gibbs posterior under the self-information loss, i.e. the negative log-likelihood \[\ell_{SI}\left(\mathbf{t};\mathbf{y},X\right)=-\log\pi\left(\mathbf{y}|\mathbf{t};X\right),\] with \(w=1\), i.e. \(\mathcal{G}_{SI}(\mathbf{y};X)=\mathcal{B}(\mathbf{y};X)\). In this case, the M-estimators, \(\hat{\boldsymbol{\theta}}_{SI}(\mathbf{y};X)\), are the maximum likelihood estimators. The corresponding expected loss is \[L_{SI,\mathcal{T}(X)}(\mathbf{t})=\mathrm{E}_{\mathcal{T}(X)}\left[-\log\pi\left(\mathbf{y}|\mathbf{t};X\right)\right],\] which, up to a constant independent of \(\mathbf{t}\), is the Kullback-Leibler (KL) divergence between the true response-generating distribution and the statistical model. Thus the target parameter values, denoted \(\boldsymbol{\theta}_{SI,T(X)}\), minimise this divergence. Under a correctly specified statistical model, where there exist \(\boldsymbol{\theta}_{T}\) such that \(\mathcal{S}(\boldsymbol{\theta}_{T};X)=\mathcal{T}(X)\), then \(\boldsymbol{\theta}_{SI,T(X)}=\boldsymbol{\theta}_{T}\) with a minimised Kullback-Leibler divergence of zero. Now consider the case of a misspecified statistical model, where there do not exist parameter values \(\boldsymbol{\theta}_{T}\) such that \(\mathcal{S}(\boldsymbol{\theta}_{T};X)=\mathcal{T}(X)\).
In this case, Bayesian inference still provides coherent inference about \(\boldsymbol{\theta}_{SI,\mathcal{T}(X)}\): the parameter values that minimise the distance (in the sense of Kullback-Leibler divergence) between the true response-generating distribution \(\mathcal{T}(X)\) and the statistical model, \(\mathcal{S}(\mathbf{t};X)\).

### Running example: linear model

In this sub-section we introduce a running example on fitting a linear model. We use this example to demonstrate the concepts discussed in this section, as well as those introduced in the next section where we propose Gibbs optimal design. Suppose \(y\) is a continuous response and the true data-generating process \(\mathcal{T}(X)\) has \[y_{i}=\mu(\mathbf{x}_{i})+e_{i}, \tag{4}\] for \(i=1,\ldots,n\), where \(e_{1},\ldots,e_{n}\) are independent and identically distributed random errors with mean zero and variance \(\sigma^{2}<\infty\). The true mean response, \(\mu(\mathbf{x})\), for controllable factors \(\mathbf{x}\) is unknown. The specification of a statistical model, \(\mathcal{S}(\mathbf{t};X)\), starts by approximating the unknown true mean response, \(\mu(\mathbf{x})\), by a linear function \(\mathbf{f}(\mathbf{x})^{T}\mathbf{t}\), where \(\mathbf{f}:\mathcal{X}\rightarrow\mathbb{R}^{p}\) is a specified regression function. For example, with \(k=1\) controllable factor, \(\mathbf{f}(x)=(1,x,x^{2})^{T}\) implements a linear model with intercept, first-order and quadratic terms (\(p=3\)). Consider the sum of squares (SS) loss function given by \[\ell_{SS}(\mathbf{t};\mathbf{y},X)=\sum_{i=1}^{n}\left[y_{i}-\mathbf{f}(\mathbf{x}_{i})^{T}\mathbf{t}\right]^{2}.\] Under the SS loss, the M-estimators are the familiar least squares estimators \(\hat{\boldsymbol{\theta}}_{SS}=\left(F^{T}F\right)^{-1}F^{T}\mathbf{y}\), where \(F\) is the \(n\times p\) model matrix with \(i\)th row given by \(\mathbf{f}(\mathbf{x}_{i})^{T}\), for \(i=1,\ldots,n\). Under an improper uniform prior distribution for the target parameters \(\boldsymbol{\theta}_{SS}\) (to be discussed below), the Gibbs posterior distribution is a multivariate normal distribution with mean \(\hat{\boldsymbol{\theta}}_{SS}\) and variance \(\left(F^{T}F\right)^{-1}/(2w)\). A reasonable specification for \(w\) is \(1/(2\sigma^{2})\), which results in the Gibbs posterior variance being equal to the frequentist sampling variance of \(\hat{\boldsymbol{\theta}}_{SS}\). This is an example of "maintaining operational characteristics" as suggested by Bissiri et al. (2016). Initially, suppose \(\sigma^{2}\) is known. The target parameters are given by minimising the expected SS loss \[L_{SS}(\mathbf{t})=n\sigma^{2}+\sum_{i=1}^{n}\left[\mu(\mathbf{x}_{i})-\mathbf{f}(\mathbf{x}_{i})^{T}\mathbf{t}\right]^{2}\] and are given by \(\boldsymbol{\theta}_{SS}=\left(F^{T}F\right)^{-1}F^{T}\boldsymbol{\mu}\), where \(\boldsymbol{\mu}\) is an \(n\times 1\) vector with \(i\)th element \(\mu(\mathbf{x}_{i})\), for \(i=1,\ldots,n\). The interpretation is that \(\boldsymbol{\theta}_{SS}\) minimise the sum of squares between \(\mu(\mathbf{x})\) and \(\mathbf{f}(\mathbf{x})^{T}\boldsymbol{\theta}_{SS}\) at the treatments in the design \(X\). The Gibbs posterior is identical to the Bayesian posterior obtained by assuming that the distribution of the errors \(e_{1},\ldots,e_{n}\) (or, equivalently, the responses \(y_{1},\ldots,y_{n}\)) is normal. However, the Gibbs posterior does not assume any distribution for the errors/responses.
This is analogous to how the least squares estimators are equal to the maximum likelihood estimators (obtained by assuming the errors/responses are normal), but with the former not assuming any distribution for the errors/responses. Now suppose the error variance, \(\sigma^{2}\), is unknown. We replace it by an estimator under the unique-treatment model (e.g. Gilmour and Trinca, 2012). Under this model, each unique treatment induces a unique mean response. Specifically, let \(\bar{\mathbf{x}}_{1},\ldots,\bar{\mathbf{x}}_{q}\) be the unique treatments, i.e. the unique values of \(\mathbf{x}_{1},\ldots,\mathbf{x}_{n}\), where \(q\leq n\). If \(q<n\), i.e. there is at least one repeated treatment, then an estimator of \(\sigma^{2}\) is given by \[\hat{\sigma}^{2}=\frac{1}{n-q}\mathbf{y}^{T}\left(I_{n}-H_{Z}\right)\mathbf{y},\] where \(H_{Z}=Z\left(Z^{T}Z\right)^{-1}Z^{T}\) and \(Z\) is the \(n\times q\) matrix with \(ij\)th element \(Z_{ij}=1\) if \(\mathbf{x}_{i}=\bar{\mathbf{x}}_{j}\) and zero otherwise, for \(i=1,\ldots,n\) and \(j=1,\ldots,q\). This estimator is independent of the choice of regression function \(\mathbf{f}(\mathbf{x})\). Note that now the loss function is actually \(\ell_{SS}(\mathbf{t};\mathbf{y},X)/(2\hat{\sigma}^{2})\), since \(\hat{\sigma}^{2}\) depends on \(\mathbf{y}\), and \(w=1\). However, the target parameters, \(\boldsymbol{\theta}_{SS}\), can be shown to be unchanged. Under the same improper uniform prior distribution for \(\boldsymbol{\theta}_{SS}\), the Gibbs posterior is now multivariate normal with mean \(\hat{\boldsymbol{\theta}}_{SS}\) and variance \(\hat{\sigma}^{2}\left(F^{T}F\right)^{-1}\). The constraint \(q<n\) (i.e. at least one repeated treatment in the design) is required for the estimator \(\hat{\sigma}^{2}\), and therefore the Gibbs posterior, to exist.

## 3 Gibbs optimal design of experiments

### Expected utility under the designer distribution

The proposal is to extend the Bayesian optimal design framework to Gibbs statistical inference. Similar to Bayesian optimal design, the Gibbs optimal design framework will involve maximising an expected utility function over the design space \(\mathbb{D}\). The challenge is that, with Bayesian optimal design, the expectation of the utility is taken with respect to the responses and true parameter values, \(\boldsymbol{\theta}_{T}\), under a joint distribution given by the statistical model and prior distribution for \(\boldsymbol{\theta}_{T}\). Neither a statistical model nor true parameter values necessarily exist under a Gibbs inference analysis. We propose that the utility function be evaluated at the target parameter values. As defined in Section 2.3, the target parameter values depend on the true response-generating distribution. However, this is unknown, so instead we replace these by the target parameter values under a distribution we term the _designer distribution_. Furthermore, the expectation of the utility function is then taken under this designer distribution to give the objective function. The purpose of the designer distribution is to represent the unknown true response-generating distribution \(\mathcal{T}(X)\), in a similar way to the statistical model \(\mathcal{S}(\mathbf{t};X)\). This prompts the question: why not use the designer distribution as the statistical model? The answer is that, post-experiment, the designer distribution will not be used as a statistical model for the observed responses. This means that it does not need to be "useful", i.e.
it does not need to be able to attain the aims of the experiment. Consider the linear model example in Section 2.5. A reasonable designer distribution is the unique-treatment model. This model should provide a flexible representation of the true response-generating distribution, but would be ineffective in learning the relationship between the controllable variables and the response. Formally, let \(\mathcal{D}(\mathbf{r};X)\) denote the designer distribution, which depends on hyper-variables \(\mathbf{r}\). The idea is that we may formulate more than one approximation to \(\mathcal{T}(X)\), indexed by different hyper-variables. A hyper-variable distribution, denoted \(\mathcal{C}\), is specified for \(\mathbf{r}\). Similar to the prior distributions \(\mathcal{P}_{T}\) and \(\mathcal{P}_{\ell}\), the hyper-variable distribution is assumed not to depend on the design \(X\). The target parameter values under the designer distribution are \[\boldsymbol{\theta}_{\ell,\mathcal{D}}(\mathbf{r};X)=\arg\min_{\mathbf{t}\in\Theta}\mathrm{E}_{\mathcal{D}(\mathbf{r};X)}\left[\ell(\mathbf{t};\mathbf{y},X)\right]\] and are a function of the hyper-variables \(\mathbf{r}\) and design \(X\). The utility function is denoted \(u_{\mathcal{G}}(\mathbf{t},\mathbf{y},X)\). This can have the same functional form as a utility used under Bayesian optimal design. The difference is that dependence on the responses \(\mathbf{y}\) and design \(X\) is via the Gibbs posterior (instead of the Bayesian posterior). The Gibbs expected utility is then given by \[U_{\mathcal{G}}(X)=\mathrm{E}_{\mathcal{C}}\left\{\mathrm{E}_{\mathcal{D}(\mathbf{r};X)}\left[u_{\mathcal{G}}(\boldsymbol{\theta}_{\ell,\mathcal{D}}(\mathbf{r};X),\mathbf{y},X)\right]\right\}. \tag{5}\] The inner expectation is with respect to the responses, \(\mathbf{y}\), under the designer distribution, \(\mathcal{D}(\mathbf{r};X)\), and the outer expectation is with respect to the hyper-variables, \(\mathbf{r}\), under \(\mathcal{C}\). Note that the utility is evaluated at the target parameter values \(\boldsymbol{\theta}_{\ell,\mathcal{D}}(\mathbf{r};X)\) rather than the true parameter values \(\boldsymbol{\theta}_{T}\). This reflects the fact that the experimental aim is to estimate the target parameter values. Bayesian optimal design is a special case of Gibbs optimal design, obtained under the self-information loss with the designer distribution being the statistical model. The target parameters and hyper-variables are then the true parameter values, and \(\mathcal{C}\) is chosen as the prior distribution for the true parameter values. Notably, this tells us that the use of Bayesian design implicitly assumes that the statistical model is true, i.e. a very strong assumption. In the literature, we are aware of two modifications of Bayesian optimal design which are also special cases of Gibbs optimal design. First, Etzioni and Kadane (1993) considered the case where the outer expectation in the Bayesian expected utility (2) is taken under a different prior distribution to that used to form the Bayesian posterior. This can represent the scenario where two separate individuals, who may have differing prior beliefs about the true parameter values, are (a) designing the experiment and (b) analysing the observed responses. In this case, the loss is self-information, and the designer distribution coincides with the statistical model.
However, \(\mathcal{P}_{\ell}\) is the prior distribution representing the prior beliefs of the individual analysing the observed responses, and \(\mathcal{C}\) is the prior distribution representing the prior beliefs of the individual designing the experiment. Second, Overstall and McGree (2022) considered a Bayesian optimal design framework whereby the inner expectation in the Bayesian expected utility (2) is taken under an alternative model for the responses. The motivation for doing so was to introduce robustness into the Bayesian optimal design process, for example, by making the alternative model more complex than the statistical model. The proposal in this paper is to extend this further to where the inference does not have to be Bayesian.

### Return to running example: linear models

We return to the linear model example from Section 2.5 and consider finding a Gibbs optimal design. The design of experiments for linear models is a significant topic in the literature. For textbook treatments, see, for example, Atkinson et al. (2007), Goos and Jones (2011) and Morris (2011). The designer distribution is the assumed true response-generating distribution \(\mathcal{T}(X)\) from Section 2.5. In this case, the hyper-variables are \(\mathbf{r}=\left(\mu(\cdot),\sigma^{2}\right)\), i.e. the mean response function and error variance, respectively. The target parameter values under the designer distribution are \[\boldsymbol{\theta}_{SS,\mathcal{D}}(\mathbf{r};X)=\left(F^{T}F\right)^{-1}F^{T}Z\bar{\boldsymbol{\mu}},\] where \(\bar{\boldsymbol{\mu}}=\left(\mu(\bar{\mathbf{x}}_{1}),\ldots,\mu(\bar{\mathbf{x}}_{q})\right)^{T}\) is the \(q\times 1\) vector of unique mean responses. Under the improper uniform prior distribution for the target parameter values, the Gibbs expected NSE utility is \[U_{\mathcal{G},NSE}(X)=-\mathrm{E}_{\mathcal{C}}\left(\sigma^{2}\right)\mathrm{tr}\left[\left(F^{T}F\right)^{-1}\right]h_{1}(n-q), \tag{6}\] where \(h_{1}(d)=1\) if \(d>0\) and \(\infty\) otherwise. Note that the term \(h_{1}(n-q)\) in (6) ensures that there is at least one repeated design point, so that \(\hat{\sigma}^{2}\) (and, hence, the Gibbs posterior) exists. We also consider the Shannon information (SH; see, for example, Chaloner and Verdinelli 1995) utility given by \[u_{SH}(\mathbf{t},\mathbf{y},X)=\log\pi_{\mathcal{G}}(\mathbf{t}|\mathbf{y},X).\] If we make the assumption that the responses in the designer distribution are normally distributed, then the Gibbs expected SH utility is \[U_{\mathcal{G},SH}(X)=-\frac{p}{2}\left[\log(2\pi)+\log 2+\mathrm{E}_{\mathcal{C}}\left(\log\sigma^{2}\right)\right]-\frac{p}{2}\left[\psi\left(\frac{n-q}{2}\right)-\log(n-q)+\frac{n-q}{n-q-2}\right]+\frac{1}{2}\log\left|F^{T}F\right|, \tag{7}\] where \(\psi(\cdot)\) is the digamma function. Justifications of expressions (6) and (7) are provided in Appendix A. Thus we can find the Gibbs optimal designs under the NSE and SH utilities by maximising \[-\mathrm{tr}\left[\left(F^{T}F\right)^{-1}\right]h_{1}(d)\qquad\mbox{and}\qquad-\frac{p}{2}h_{2}(d)+\frac{1}{2}\log\left|F^{T}F\right|, \tag{8}\] respectively, where \(h_{2}(d)=\psi(d/2)-\log(d)+d/(d-2)\) and \(d=n-q\) is the pure error degrees of freedom (see, for example, Gilmour and Trinca, 2012). Note that we do not need to specify an actual distribution \(\mathcal{C}\) for the hyper-variables \(\mathbf{r}=\left(\mu(\cdot),\sigma^{2}\right)\), since the terms in (6) and (7) that involve \(\mathcal{C}\) do not depend on the design.
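The objectives in (8) are simple to evaluate for any candidate design. The following is a minimal sketch (our own code; the function names, the single-factor setting and the rounding tolerance used to count unique treatments are assumptions) of doing so; note that the derivation of (7) requires \(d>2\) for the term \(d/(d-2)\) to arise from a finite expectation, which the sketch guards against.

```python
# Sketch: evaluate the Gibbs design objectives in (8) for a one-factor design.
import numpy as np
from scipy.special import digamma

def objectives(x, f):
    """Return the NSE and SH objectives in (8) for design points x."""
    F = np.array([f(xi) for xi in x])           # n x p model matrix
    n, p = F.shape
    q = len(np.unique(np.round(x, 8)))          # number of unique treatments
    d = n - q                                   # pure error degrees of freedom
    FtF = F.T @ F
    nse = -np.trace(np.linalg.inv(FtF)) if d > 0 else -np.inf  # h1 constraint
    if d > 2:  # h2(d) = psi(d/2) - log d + d/(d-2) needs d > 2
        h2 = digamma(d / 2) - np.log(d) + d / (d - 2)
        sh = -0.5 * p * h2 + 0.5 * np.linalg.slogdet(FtF)[1]
    else:
        sh = -np.inf
    return nse, sh

f = lambda x: np.array([1.0, x, x**2])          # p = 3 regression function
print(objectives(np.array([-1, -1, 0, 0, 1, 1]), f))  # replicated design
```

A design optimiser (such as a coordinate exchange over the design points) could then maximise either returned value.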
It can be seen from the expressions in (8) that the objective functions for Gibbs optimal designs, under the negative squared error and Shannon information utilities, are modified versions of the objective functions for Bayesian optimal designs under the same utilities (and non-informative priors for the regression parameters). Furthermore, these Bayesian optimal designs, under non-informative priors, are equivalent to A- and D-optimal designs, respectively, under frequentist inference. The modification to the objective functions is provided by the \(h_{1}(d)\) and \(h_{2}(d)\) functions. For the SH utility, the function \(h_{2}(d)\) is decreasing in the pure error degrees of freedom. Therefore, maximising the expected SH utility compromises between increasing replication of treatments and increasing the precision of the Gibbs posterior (as measured by \(\log|F^{T}F|\)). For the NSE utility, the compromise is through the constraint \(d>0\), which ensures the Gibbs posterior exists. This modification by \(h_{1}(\cdot)\) and \(h_{2}(\cdot)\) follows from the chosen specification of \(w\). Other methods may result in different modifications and different objective functions.

### Computational approach

In Section 3.2, under a linear model, we were able to find closed form expressions for the Gibbs expected utility. However, in general, this will not be possible. In this section, we outline an exemplar computational approach that can be used to approximately find Gibbs optimal designs. It essentially uses a combination of a normal approximation to the Gibbs posterior and Monte Carlo integration to approximate the Gibbs expected utility. The approximate Gibbs expected utility is then maximised using the approximate coordinate exchange (ACE; Overstall and Woods 2017) algorithm. In the literature, the combination of Monte Carlo and normal approximations has been used to find Bayesian designs (Long et al., 2013), including in partnership with ACE (Overstall et al., 2018). It is anticipated that many existing computational approaches used to find Bayesian designs can be repurposed to find Gibbs optimal designs.

#### 3.3.1 Approximating the expected utility

Let \(\tilde{\boldsymbol{\theta}}_{\ell}(\mathbf{y};X)=\arg\max_{\mathbf{t}\in\Theta}\left[-w\ell(\mathbf{t};\mathbf{y},X)+\log\pi_{\ell}(\mathbf{t})\right]\) denote the Gibbs posterior mode and let \[\tilde{\Sigma}_{\ell}(\mathbf{y};X)=-\left[-w\frac{\partial^{2}\ell(\tilde{\boldsymbol{\theta}}_{\ell}(\mathbf{y};X);\mathbf{y},X)}{\partial\mathbf{t}\partial\mathbf{t}^{T}}+\frac{\partial^{2}\log\pi_{\ell}(\tilde{\boldsymbol{\theta}}_{\ell}(\mathbf{y};X))}{\partial\mathbf{t}\partial\mathbf{t}^{T}}\right]^{-1}\] denote the negative inverse Hessian matrix of the log Gibbs posterior pdf evaluated at the Gibbs posterior mode. Then the Gibbs posterior distribution is approximated by a normal distribution with mean \(\tilde{\boldsymbol{\theta}}_{\ell}(\mathbf{y};X)\) and variance \(\tilde{\Sigma}_{\ell}(\mathbf{y};X)\). This is analogous to normal approximations used for Bayesian posterior distributions (see, for example, O'Hagan and Forster, 2004, page 237). In the examples in Section 4, we use a quasi-Newton method to numerically evaluate the Gibbs posterior mode. Now, due to the tractability of the normal distribution, approximations for many utility functions ensue. We denote such approximations by \(\tilde{u}(\mathbf{t},\mathbf{y},X)\).
For example, approximations for the NSE and SH utilities are \[\tilde{u}_{NSE}(\mathbf{t},\mathbf{y},X)=-\|\mathbf{t}-\tilde{\boldsymbol{\theta}}_{\ell}(\mathbf{y};X)\|_{2}^{2},\] \[\tilde{u}_{SH}(\mathbf{t},\mathbf{y},X)=-\frac{p}{2}\log\left(2\pi\right)-\frac{1}{2}\log|\tilde{\Sigma}_{\ell}(\mathbf{y};X)|-\frac{1}{2}\left(\mathbf{t}-\tilde{\boldsymbol{\theta}}_{\ell}(\mathbf{y};X)\right)^{T}\tilde{\Sigma}_{\ell}(\mathbf{y};X)^{-1}\left(\mathbf{t}-\tilde{\boldsymbol{\theta}}_{\ell}(\mathbf{y};X)\right),\] respectively. A Monte Carlo approximation to the expected utility is then given by \[\tilde{U}_{\mathcal{G}}(X)=\frac{1}{B}\sum_{b=1}^{B}\tilde{u}(\boldsymbol{\theta}_{\ell,\mathcal{D}}(\mathbf{r}_{b};X),\mathbf{y}_{b},X),\] where \(\left\{\mathbf{r}_{b},\mathbf{y}_{b}\right\}_{b=1}^{B}\) is a sample from the joint distribution of responses \(\mathbf{y}\) and hyper-variables \(\mathbf{r}\) implied by the designer distribution \(\mathcal{D}(\mathbf{r};X)\) and \(\mathcal{C}\). For clarity, the following algorithm can be used to form the Monte Carlo approximation to the Gibbs expected utility.

1. Inputs are design, \(X\); loss function, \(\ell(\mathbf{t};\mathbf{y},X)\); designer distribution, \(\mathcal{D}(\mathbf{r};X)\); hyper-variable distribution, \(\mathcal{C}\); and Monte Carlo sample size \(B\).
2. For \(b=1,\ldots,B\), complete the following steps.
   (a) Generate hyper-variables \(\mathbf{r}_{b}\sim\mathcal{C}\).
   (b) Determine the target parameter values under the designer distribution \(\mathcal{D}(\mathbf{r}_{b};X)\), i.e. \[\boldsymbol{\theta}_{\ell,\mathcal{D}}(\mathbf{r}_{b};X)=\arg\min_{\mathbf{t}\in\Theta}\mathrm{E}_{\mathcal{D}(\mathbf{r}_{b};X)}\left[\ell(\mathbf{t};\mathbf{y},X)\right].\]
   (c) Generate responses \(\mathbf{y}_{b}\sim\mathcal{D}(\mathbf{r}_{b};X)\).
   (d) Find the Gibbs posterior mode \[\tilde{\boldsymbol{\theta}}_{\ell}(\mathbf{y}_{b};X)=\arg\max_{\mathbf{t}\in\Theta}\left[-w\ell(\mathbf{t};\mathbf{y}_{b},X)+\log\pi_{\ell}(\mathbf{t})\right]\] and \[\tilde{\Sigma}_{\ell}(\mathbf{y}_{b};X)=-\left[-w\frac{\partial^{2}\ell(\tilde{\boldsymbol{\theta}}_{\ell}(\mathbf{y}_{b};X);\mathbf{y}_{b},X)}{\partial\mathbf{t}\partial\mathbf{t}^{T}}+\frac{\partial^{2}\log\pi_{\ell}(\tilde{\boldsymbol{\theta}}_{\ell}(\mathbf{y}_{b};X))}{\partial\mathbf{t}\partial\mathbf{t}^{T}}\right]^{-1}.\]
   (e) Using the normal distribution \(\mathrm{N}\left[\tilde{\boldsymbol{\theta}}_{\ell}(\mathbf{y}_{b};X),\tilde{\Sigma}_{\ell}(\mathbf{y}_{b};X)\right]\) as an approximation to the Gibbs posterior, form the approximation to the utility \(\tilde{u}_{b}=\tilde{u}(\boldsymbol{\theta}_{\ell,\mathcal{D}}(\mathbf{r}_{b};X),\mathbf{y}_{b},X)\).
3. The Monte Carlo approximation to the Gibbs expected utility is \[\tilde{U}_{\mathcal{G}}=\frac{1}{B}\sum_{b=1}^{B}\tilde{u}_{b}.\]

In Step 2.(b), we are required to find the target parameter values \(\boldsymbol{\theta}_{\ell,\mathcal{D}}(\mathbf{r}_{b};X)\) under the designer distribution \(\mathcal{D}(\mathbf{r}_{b};X)\). This is because the utility function in the inner expectation of the Gibbs expected utility (5) is evaluated at the target parameter values. How this is accomplished will depend on the application but will typically require a numerical optimisation. We address this issue for the examples in Section 4.

#### 3.3.2 Maximising the approximate expected utility

Unfortunately, the Monte Carlo approximation to the Gibbs expected utility is not straightforward to maximise to yield the Gibbs optimal design.
This is because it is computationally expensive to evaluate (one evaluation requires \(B\) numerical maximisations to find the Gibbs posterior mode, and possibly \(B\) further numerical maximisations to find the target parameter values) and it is stochastic. To solve this problem, we use the ACE algorithm. Very briefly, ACE uses a cyclic descent algorithm (commonly called coordinate exchange in the design of experiments literature; Meyer and Nachtsheim 1995) to maximise a Gaussian process prediction of the expected utility, sequentially over each one-dimensional element of the design space. For more details on the ACE algorithm, see Overstall and Woods (2017). While many methods could be used to maximise the approximate Gibbs expected utility, the ACE algorithm has been chosen due to it being compatible with any Monte Carlo approximation to the expected utility and an implementation being readily available in software (Overstall et al., 2020).

## 4 Examples

### Count responses and overdispersion

In this example, the aim of the experiment is to learn the relationship between controllable variables and count responses. Design of experiments for count responses has previously been considered by, for example, Myers et al. (2010, Chapter 8) and Russell (2019, Chapter 5). The typical approach is to assume the responses are realisations of a Poisson distribution. Accordingly, a Bayesian optimal design can then be found. Specifically, we may assume \[y_{i}\sim\text{Poisson}\left[\exp\left(\eta_{i}\right)\right], \tag{9}\] \[\eta_{i}=\mathbf{f}(\mathbf{x}_{i})^{T}\mathbf{t}, \tag{10}\] with \(\mathbf{t}=\boldsymbol{\theta}_{T}\), along with a prior distribution, \(\mathcal{P}_{T}\), for \(\boldsymbol{\theta}_{T}\). In this example, we suppose that \(\mathbf{f}(\mathbf{x})=\left(1,x_{1},x_{2}\right)^{T}\), i.e. an intercept and main effects for \(k=2\) controllable variables, and \(\mathcal{X}=[-1,1]^{2}\). However, assuming a Poisson distribution is a strong assumption. For example, one property of the Poisson distribution is that its mean and variance are equal, \(\text{var}(y_{i})=\exp\left(\eta_{i}\right)\). In practice, this equality of mean and variance can often fail, a phenomenon known as over-dispersion (see, for example, [2], Section 4.5). In the frequentist literature, a commonly-used approach is estimation via quasi-likelihood (see, for example, Davison, 2003, pages 512-517). This does not require the assumption of a specified probability distribution for the response and allows the variance to be equal to the mean multiplied by a dispersion parameter \(\phi\). The quasi log-likelihood is proportional to the log-likelihood under the statistical model given by (9), but multiplied by a factor of \(1/\phi\), i.e. \[l_{QL}(\mathbf{t};\mathbf{y},X)=\frac{1}{\phi}\sum_{i=1}^{n}\left[y_{i}\eta_{i}-\exp\left(\eta_{i}\right)\right],\] where \(\eta_{i}\) is given by (10). This means that the maximum quasi-likelihood estimators are identical to the maximum likelihood estimators, but the asymptotic variance is inflated by a factor of \(\phi\) to account for over-dispersion. The dispersion parameter is estimated from the responses. We can use Gibbs inference with the negative quasi log-likelihood as the loss function (but removing the factor of \(1/\phi\)), i.e. \[\ell_{QL}(\mathbf{t};\mathbf{y},X)=\sum_{i=1}^{n}\left[\exp\left(\eta_{i}\right)-y_{i}\eta_{i}\right],\] where \(\eta_{i}\) is given by (10). The calibration weight, \(w\), then corresponds to \(1/\phi\).
We specify \(w\) to be the reciprocal of the estimator of \(\phi\), i.e. \[w=\frac{n-p}{\sum_{i=1}^{n}\left[y_{i}-\exp(\hat{\eta}_{i})\right]^{2}/\exp(\hat{\eta}_{i})},\] where \(\hat{\eta}_{i}=\mathbf{f}(\mathbf{x}_{i})^{T}\hat{\boldsymbol{\theta}}_{QL}\), with \[\hat{\boldsymbol{\theta}}_{QL}=\arg\min_{\mathbf{t}\in\Theta}\ell_{QL}(\mathbf{t};\mathbf{y},X)=\arg\max_{\mathbf{t}\in\Theta}l_{QL}(\mathbf{t};\mathbf{y},X)\] being the maximum (quasi-)likelihood estimators. For the designer distribution, we modify the unique-treatment model from Section 3.2 to the count response scenario. We suppose \[y_{i}\sim\text{neg-bin}\left(\mu_{i},\alpha_{i}\right),\] for \(i=1,\ldots,n\), where \(\text{neg-bin}(\mu,\alpha)\) denotes a negative binomial distribution with mean \(\mu\) and variance \(\mu+\mu^{2}/\alpha\). The mean for the \(i\)th response is given by \[\log\mu_{i}=\mathbf{f}(\mathbf{x}_{i})^{T}\boldsymbol{\beta}+\mathbf{z}_{i}^{T}\boldsymbol{\tau}, \tag{11}\] where \(\mathbf{z}_{i}=\left(Z_{i1},\ldots,Z_{iq}\right)^{T}\) is the \(q\times 1\) vector identifying which of the \(q\) unique treatments is subjected to the \(i\)th run, and \(\boldsymbol{\tau}=\left(\tau_{1},\ldots,\tau_{q}\right)^{T}\) is the \(q\times 1\) vector of unique-treatment effects. The mean is chosen as (11) so that the elements of \(\boldsymbol{\tau}\) represent the discrepancy (on the log scale) between the true mean and the assumed mean under the quasi-likelihood specification. This allows a type of parity when we compare Bayesian and Gibbs optimal designs. The scale parameter is \(\alpha_{i}=\mu_{i}^{2}/(\kappa\mu_{i}-\mu_{i})\), which results in \(\mathrm{var}(y_{i})=\kappa\mu_{i}\). Thus \(\kappa>1\) controls the extent of over-dispersion. Under the designer model, the hyper-variables are \(\mathbf{r}=(\boldsymbol{\beta},\boldsymbol{\tau},\kappa)\). The target parameter values are \[\boldsymbol{\theta}_{\ell_{QL},\mathcal{D}}(\mathbf{r};X)=\arg\min_{\mathbf{t}\in\Theta}\sum_{i=1}^{n}\left[\exp\left(\eta_{i}\right)-\mu_{i}\eta_{i}\right],\] where \(\eta_{i}\) is given by (10) and \(\mu_{i}\) is given by (11). First, note that \(\boldsymbol{\theta}_{\ell_{QL},\mathcal{D}}(\mathbf{r};X)\) are independent of \(\kappa\). Second, the target parameter values under the designer distribution are the maximum likelihood estimators under a Poisson GLM (with the designer means \(\mu_{i}\) playing the role of the responses). Therefore, in Step 2.(b) of the algorithm to approximate the Gibbs expected utility in Section 3.3, we can readily use Fisher scoring to calculate the target parameter values. We find Bayesian and Gibbs optimal designs, for \(n=10,20,\ldots,50\), under the NSE utility function. For the Bayesian optimal design, we assume the prior distribution, \(\mathcal{P}_{T}\), is such that the elements of \(\boldsymbol{\theta}_{T}\) are independent, each having a \(\mathrm{N}(0,1)\) distribution. For the Gibbs optimal design, for the hyper-variable distribution, we assume \(\boldsymbol{\beta}\), \(\boldsymbol{\tau}\) and \(\kappa\) are independent. We assume the elements of \(\boldsymbol{\tau}\) are independent, each having a \(\mathrm{U}\left(-2,2\right)\) distribution, and suppose that the elements of \(\boldsymbol{\beta}\) are independent, each having a \(\mathrm{N}(0,1)\) distribution. We specify \(\kappa\sim\mathrm{U}(1,5)\).
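For concreteness, the following is a minimal sketch (our own illustrative code; all function names are assumptions) of sampling from this designer distribution and of the Fisher scoring step used in Step 2.(b). Note that numpy parameterises the negative binomial by \((n,p)\) rather than \((\mu,\alpha)\), so a conversion is needed, and that a damped Newton step or a better starting value may be required for stability in practice.

```python
# Sketch of the count example: designer sampling and target parameter values.
import numpy as np

def sample_designer(F, Z, beta, tau, kappa, rng):
    """Draw responses from the negative binomial designer distribution (11)."""
    mu = np.exp(F @ beta + Z @ tau)     # designer means, eq. (11)
    alpha = mu / (kappa - 1.0)          # requires kappa > 1; gives var = kappa*mu
    y = rng.negative_binomial(alpha, alpha / (alpha + mu))  # mean mu
    return y, mu

def target_params(F, mu, iters=25):
    """Fisher scoring for argmin_t sum_i [exp(eta_i) - mu_i * eta_i]."""
    t = np.zeros(F.shape[1])
    for _ in range(iters):
        eta = F @ t
        grad = F.T @ (np.exp(eta) - mu)            # score of the expected loss
        fisher = F.T @ (np.exp(eta)[:, None] * F)  # Poisson GLM information
        t = t - np.linalg.solve(fisher, grad)
    return t
```

The returned values of `target_params` correspond to \(\boldsymbol{\theta}_{\ell_{QL},\mathcal{D}}(\mathbf{r};X)\), since the objective is the Poisson negative log-likelihood, up to terms free of \(\mathbf{t}\), with the means in place of the data.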
Figure 1(a) shows plots of the Gibbs expected negative squared error utility for both the Gibbs (black circles) and Bayesian optimal designs (grey circles). It also shows the Bayesian expected negative squared error utility for both designs (triangles; Gibbs in black and Bayesian in grey). It can be seen that both designs are sub-optimal under the alternative expected utility. However, the difference on the scale of the expected Bayesian utility is larger than on the scale of the expected Gibbs utility. Informally, less is lost by forgoing strong assumptions that turn out to be true than by making strong assumptions that turn out to be false. Figures 1(b) and (c) show plots of \(x_{2}\) against \(x_{1}\) for the Gibbs and Bayesian optimal designs, respectively. The size of the plotting character is proportional to the number of repeated design points at each support point. Clearly, the Bayesian optimal design places points at the extremes of the design region, whereas the Gibbs optimal design also has points in the interior.

Figure 1: Plots for the count response experiment. Sub-panel (a) shows plots of the Gibbs and Bayesian expected negative squared error utility against number of runs, \(n\), for the Gibbs and Bayesian optimal designs. Sub-panels (b) and (c) show plots of \(x_{2}\) against \(x_{1}\) for the Gibbs and Bayesian optimal designs, respectively.

### Time-to-event responses and proportional hazards

In this example, the aim of the experiment is to learn the relationship between controllable variables and a time-to-event response (possibly subject to non-informative right censoring). Previous approaches for designing these types of experiments have been proposed in the literature. It is common for time-to-event responses to assume a proportional hazards model (see, for example, Davison, 2003, Section 10.8.2), where the pdf for the response is \[f(y_{i};\mathbf{t},\mathbf{x}_{i})=h_{0}(y_{i})\exp\left[\eta_{i}-e^{\eta_{i}}\int_{0}^{y_{i}}h_{0}(u)\mathrm{d}u\right], \tag{12}\] where \(h_{0}(\cdot)\) is a baseline hazard and \(\eta_{i}=\mathbf{f}(\mathbf{x}_{i})^{T}\mathbf{t}\). In this example, we suppose that \(\mathbf{f}(\mathbf{x})=\left(1,x_{1},x_{2},x_{3}\right)^{T}\), i.e. an intercept and main effects for \(k=3\) controllable variables. A Bayesian analysis requires that a particular form for the baseline hazard is assumed. By contrast, under the frequentist approach, the Cox proportional hazards model does not require a particular form for the baseline hazard to be assumed, with estimators given by maximising the partial likelihood.

#### 4.2.1 Gibbs optimal designs

We consider Gibbs inference using the negative partial log-likelihood as a loss function. Bissiri et al. (2016) considered such an analysis for investigating the relationship of bio-markers and survival time for colon cancer. They set \(w=1\), which can be informally justified as making the Gibbs posterior variance match the sampling variance of the maximum partial likelihood estimators. If we assume that all responses are unique (i.e.
there are no tied responses), then the loss function is \[\ell_{PL}(\mathbf{t};\mathbf{y},\mathbf{c},X)=\sum_{i=1}^{n}c_{i}\left[\log\sum_{j\in R_{i}}e^{\eta_{j}}-\eta_{i}\right],\] where \(\mathbf{c}=(c_{1},\ldots,c_{n})\), with \(c_{i}=0\) if the \(i\)th response is right censored at \(y_{i}\), and \(c_{i}=1\) otherwise, and \(R_{i}=\{j\in\{1,\ldots,n\}\,|\,y_{j}\geq y_{i}\}\) is the risk set, i.e. the subset of \(\{1,\ldots,n\}\) indexing responses which have not been observed or right censored before \(y_{i}\). For the designer distribution, we assume \(y_{i}\) and \(c_{i}\) are independent. For the censoring indicators, it is assumed that \(c_{i}\sim\text{Bernoulli}(\rho)\) for some \(\rho\in[0,1]\), so that \(1-\rho\) is the probability of right censoring, with \(\rho=1\) corresponding to there being no censoring. If the designer distribution for \(y_{i}\) is a proportional hazards distribution, i.e. has pdf given by (12) with \(\eta_{i}=\mathbf{f}(\mathbf{x}_{i})^{T}\boldsymbol{\beta}\), then the target parameter values are \(\boldsymbol{\theta}_{PL,\mathcal{D}}(\mathbf{r},\mathbf{c};X)=\boldsymbol{\beta}\). Moreover, the maximum partial likelihood estimators and Gibbs posterior are invariant to the actual choice of baseline hazard. Therefore, for simplicity, we use the exponential distribution with \(h_{0}(u)=1\). The hyper-variables are \(\mathbf{r}=(\boldsymbol{\beta},\rho)\). The hyper-variable distribution is such that \(\boldsymbol{\beta}\) and \(\rho\) are independent. The elements of \(\boldsymbol{\beta}\) are assumed independent, with each element having a \(\text{N}(0,5)\) distribution. We fix \(\rho\) and investigate sensitivity to its specification. We consider numbers of runs \(n=10,20,\ldots,50\), censoring parameter \(\rho=0.5,0.75,1.00\), and the NSE utility function. Figure 2(a) shows a plot of the Gibbs expected NSE utility against the number of runs, \(n\), for the different values of \(\rho\). It makes intuitive sense that, as \(\rho\) increases (the probability of censoring decreases), the Gibbs expected utility increases, since less information is lost. The effect of the amount of censoring on the actual design was also investigated, and it was found that the optimal design was only negligibly affected by the probability of censoring. For example, the optimal design found for \(n=10\) and \(\rho=0.5\) has a very similar value of the Gibbs expected utility evaluated with \(\rho=1\) to that of the optimal design found by maximising the Gibbs expected utility with \(\rho=1\). For comparison, we also find Bayesian optimal designs under a Weibull proportional hazards model. In this case, \(h_{0}(u)=\psi u^{\psi-1}\), where \(\psi>0\) is an unknown nuisance parameter. If \(\psi>1\) (\(\psi<1\)), then the baseline hazard is an increasing (decreasing) function of \(u\). The log-likelihood is given by \[\log\pi(\mathbf{y},\mathbf{c}|\boldsymbol{\beta},\psi;X)=\sum_{i=1}^{n}\left\{I(c_{i}=1)\left[\eta_{i}+\log\psi+(\psi-1)\log y_{i}\right]-e^{\eta_{i}}y_{i}^{\psi}\right\}.\] The elements of \(\boldsymbol{\beta}\) and \(\mathbf{c}\) are assumed to have the same distributions as under the Gibbs expected utility above. The parameter \(\psi\) is assumed to have a \(\text{U}[0.5,1.5]\) distribution. We present results for the case where the censoring parameter is \(\rho=0.75\). The results for \(\rho=0.5\) and \(\rho=1\) were qualitatively the same.
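For reference, the partial-likelihood loss displayed above translates directly into code. The following is a minimal sketch (our own code; the function name and argument layout are assumptions), valid when there are no tied responses.

```python
# Sketch of the negative partial log-likelihood loss for the Gibbs analysis.
import numpy as np

def partial_lik_loss(t, y, c, F):
    """Cox negative partial log-likelihood; c_i = 0 marks right censoring."""
    eta = F @ t
    loss = 0.0
    for i in range(len(y)):
        if c[i] == 1:
            risk = y >= y[i]  # risk set R_i (assumes no ties)
            loss += np.log(np.sum(np.exp(eta[risk]))) - eta[i]
    return loss
```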
Table 1 shows the estimated expected utility efficiencies of the Gibbs and Bayesian optimal designs under the alternative utility. Suppose that, for a given value of \(n\) and \(\rho\), \(\bar{X}_{G}\) and \(\bar{X}_{B}\) denote the Gibbs and Bayesian optimal designs, respectively. Then, for example, the efficiency of \(\bar{X}_{G}\) under the Bayesian expected utility is \(\tilde{U}_{B}(\bar{X}_{G})/\tilde{U}_{B}(\bar{X}_{B})\).

\begin{table} \begin{tabular}{l r r r r r} \hline Number of runs, \(n\) & 10 & 20 & 30 & 40 & 50 \\ \hline \(\tilde{U}_{G}(\bar{X}_{B})/\tilde{U}_{G}(\bar{X}_{G})\) & 0.037 & 0.061 & 0.131 & 0.159 & 0.175 \\ \(\tilde{U}_{B}(\bar{X}_{G})/\tilde{U}_{B}(\bar{X}_{B})\) & 0.270 & 0.510 & 0.600 & 0.650 & 0.680 \\ \hline \end{tabular} \end{table} Table 1: Estimated expected utility efficiency of the Gibbs and Bayesian optimal designs under the alternative utility.

Figure 2: Panel (a): Plot of expected negative squared error utility against number of runs, \(n\), for different levels of censoring (controlled by \(\rho\)). Panels (b)-(d): Plots of \(x_{1}\), \(x_{2}\) and \(x_{3}\) for the Gibbs and Bayesian optimal designs. The size of the plotting character is proportional to the number of repetitions of the design points.

The conclusion is similar to the count response experiment in Section 4.1, in that less is lost by forgoing strong assumptions (here, assuming a Weibull distribution) that turn out to be true than by making those assumptions when they are false. Figure 2(b)-(d) show projections of the design points. The size of the plotting symbol is proportional to the number of repeated design points. A similar phenomenon to the count response experiment in Section 4.1 occurs, in that the Gibbs optimal design has more points in the interior.

## 5 Discussion

This paper proposes a new design of experiments framework called Gibbs optimal design. This framework extends the decision-theoretic Bayesian optimal design of experiments approach to account for potentially misspecified statistical models. Closed form expressions were developed for the objective functions for experiments where a linear model would be appropriate. A computational approach was outlined for other types of experiments. The framework was demonstrated on illustrative examples involving count and time-to-event responses. From the examples, it was found that less information is lost by forgoing strong assumptions that turn out to be true than by making strong assumptions that turn out to be false. The key is the specification of the designer distribution. It needs to be flexible enough that it is plausible that the true response-generating distribution is a special case. In the illustrative examples, a unique-treatment model was employed as the designer distribution. The framework should open up avenues for further research. One interesting area stems from the fact that the target parameter values of Gibbs inference depend on the design. This can be viewed as an unattractive property, and future work will attempt to address it. One potential idea is that the utility function be evaluated at _desired target parameter values_, where these are the limit of the target parameter values as \(n\to\infty\) (under certain conditions) and have a physical interpretation independent of the design. As an example, consider the linear model in Section 3.2.
Under certain conditions, REF showed that the target parameter values under the sum of squares loss converge to the parameter values that minimise the \(L^{2}\) norm (with respect to \(\mathbf{x}\in\mathbb{X}\)) of the difference between the designer distribution mean response and \(\mathbf{f}(\mathbf{x})^{T}\mathbf{t}\). These parameter values have a pleasant physical interpretation. A further area of research is to investigate the sensitivity of the Gibbs optimal design to the method chosen to specify \(w\). In the linear model example in Section 3.2, it was found that the Gibbs expected utility was modified to encourage repeated design points. This originated from the chosen method for specifying \(w\) relying on repeated design points to estimate the response variance. Other specification methods could use another approach, e.g. non-parametric smoothing, to estimate the response variance. This would result in a different objective function and a different Gibbs optimal design.

## Appendix A Justification of expressions for Gibbs expected utilities in Section 3.2

In this section, we justify the expressions for the Gibbs expected NSE and SH utilities given by (6) and (7), respectively.

### Negative squared error

The Gibbs posterior mean is \(\hat{\boldsymbol{\theta}}_{SS}(\mathbf{y};X)\), and, under the designer distribution, \(\mathrm{E}_{\mathcal{D}(\mathbf{r};X)}\left[\hat{\boldsymbol{\theta}}_{SS}(\mathbf{y};X)\right]=\boldsymbol{\theta}_{SS,\mathcal{D}}(\mathbf{r};X)\) with \(\mathrm{var}_{\mathcal{D}(\mathbf{r};X)}\left[\hat{\boldsymbol{\theta}}_{SS}(\mathbf{y};X)\right]=\sigma^{2}\left(F^{T}F\right)^{-1}\). Therefore, the inner expectation (i.e. with respect to \(\mathbf{y}\) under \(\mathcal{D}(\mathbf{r};X)\)) in the Gibbs expected NSE utility is \[\mathrm{E}_{\mathcal{D}(\mathbf{r};X)}\left[u_{\mathcal{G},NSE}\left(\boldsymbol{\theta}_{SS,\mathcal{D}}(\mathbf{r};X),\mathbf{y},X\right)\right]=-\mathrm{E}_{\mathcal{D}(\mathbf{r};X)}\left[\|\boldsymbol{\theta}_{SS,\mathcal{D}}(\mathbf{r};X)-\hat{\boldsymbol{\theta}}_{SS}(\mathbf{y};X)\|_{2}^{2}\right]=-\sigma^{2}\mathrm{tr}\left[\left(F^{T}F\right)^{-1}\right].\] Now the Gibbs expected NSE utility is \[U_{\mathcal{G},NSE}(X)=\mathrm{E}_{\mathcal{C}}\left\{-\sigma^{2}\mathrm{tr}\left[\left(F^{T}F\right)^{-1}\right]\right\}=-\mathrm{E}_{\mathcal{C}}\left(\sigma^{2}\right)\mathrm{tr}\left[\left(F^{T}F\right)^{-1}\right].\] A constraint of \(n>q\) ensures that the Gibbs posterior distribution exists.

### Shannon information gain

The following result will prove useful.

**Lemma A.1**: _If a linear model has model matrix \(F\), and the corresponding unique-treatment model has model matrix \(Z\), with \(H_{Z}=Z\left(Z^{T}Z\right)^{-1}Z^{T}\), then \(F=H_{Z}F\)._

Proof: Without loss of generality, suppose unique treatment \(\bar{\mathbf{x}}_{j}\) is repeated \(n_{j}\geq 1\) times, for \(j=1,\ldots,q\). Then \(n=\sum_{j=1}^{q}n_{j}\). Let \(\bar{\mathbf{f}}_{j}=\mathbf{f}(\bar{\mathbf{x}}_{j})\), for \(j=1,\ldots,q\). Then the model matrix \(F\) can be written \[F=\left(\begin{array}{c}\bar{F}_{1}\\ \vdots\\ \bar{F}_{q}\end{array}\right),\] where \(\bar{F}_{j}\) is an \(n_{j}\times p\) matrix with each row given by \(\bar{\mathbf{f}}_{j}^{T}\), i.e. the rows of \(\bar{F}_{j}\) are identical. The corresponding \(H_{Z}\) matrix can be written in the following block diagonal form \[H_{Z}=\left(\begin{array}{ccc}\frac{1}{n_{1}}J_{n_{1}}&&0\\ &\ddots&\\ 0&&\frac{1}{n_{q}}J_{n_{q}}\end{array}\right),\] where \(J_{n_{j}}\) is the \(n_{j}\times n_{j}\) matrix of ones.
Then \[H_{Z}F=\left(\begin{array}{c}\frac{1}{n_{1}}J_{n_{1}}\bar{F}_{1}\\ \vdots\\ \frac{1}{n_{q}}J_{n_{q}}\bar{F}_{q}\end{array}\right),\] where \(J_{n_{j}}\bar{F}_{j}=n_{j}\bar{F}_{j}\). Therefore \(H_{Z}F=F\), as required.

The Gibbs posterior distribution is \(\mathrm{N}\left[\hat{\boldsymbol{\theta}}_{SS}(\mathbf{y};X),\hat{\sigma}^{2}\left(F^{T}F\right)^{-1}\right]\) with log pdf \[\log\pi_{\mathcal{G}}\left(\mathbf{t}|\mathbf{y};X\right)=-\frac{p}{2}\log\left(2\pi\right)-\frac{p}{2}\log\hat{\sigma}^{2}+\frac{1}{2}\log|F^{T}F|-\frac{1}{2\hat{\sigma}^{2}}\left[\mathbf{t}-\hat{\boldsymbol{\theta}}_{SS}(\mathbf{y};X)\right]^{T}F^{T}F\left[\mathbf{t}-\hat{\boldsymbol{\theta}}_{SS}(\mathbf{y};X)\right]\] \[=-\frac{p}{2}\log\left(2\pi\right)+\frac{p}{2}\log d-\frac{p}{2}\log\left[\mathbf{y}^{T}\left(I_{n}-H_{Z}\right)\mathbf{y}\right]+\frac{1}{2}\log|F^{T}F|-\frac{d\mathbf{t}^{T}F^{T}F\mathbf{t}}{2\mathbf{y}^{T}\left(I_{n}-H_{Z}\right)\mathbf{y}}+\frac{d\mathbf{t}^{T}F^{T}\mathbf{y}}{\mathbf{y}^{T}\left(I_{n}-H_{Z}\right)\mathbf{y}}-\frac{d\mathbf{y}^{T}H_{F}\mathbf{y}}{2\mathbf{y}^{T}\left(I_{n}-H_{Z}\right)\mathbf{y}}, \tag{13}\] where \(H_{F}=F\left(F^{T}F\right)^{-1}F^{T}\). Line (13) follows from substituting \(\hat{\sigma}^{2}=\mathbf{y}^{T}\left(I_{n}-H_{Z}\right)\mathbf{y}/(n-q)\), where \(d=n-q\) is the pure error degrees of freedom. The designer distribution, \(\mathcal{D}(\mathbf{r};X)\), is such that \(\mathbf{y}\sim\mathrm{N}\left(Z\bar{\boldsymbol{\mu}},\sigma^{2}I_{n}\right)\). From Lemma A.1, \(\left(I_{n}-H_{Z}\right)H_{F}=H_{F}-H_{Z}F\left(F^{T}F\right)^{-1}F^{T}=H_{F}-F\left(F^{T}F\right)^{-1}F^{T}=0\). This means, by Craig's theorem (see, for example, [7], Theorem 10.2), that the quadratic forms \(\mathbf{y}^{T}H_{F}\mathbf{y}\) and \(\mathbf{y}^{T}\left(I_{n}-H_{Z}\right)\mathbf{y}\) are independent. Similarly, from Lemma A.1, \(F^{T}\left(I_{n}-H_{Z}\right)=\left[\left(I_{n}-H_{Z}\right)F\right]^{T}=0\), meaning that \(F^{T}\mathbf{y}\) and \(\mathbf{y}^{T}\left(I_{n}-H_{Z}\right)\mathbf{y}\) are independent (see, for example, [7], Theorem 10.3). Lastly, since \(I_{n}-H_{Z}\) is idempotent, and \(\bar{\boldsymbol{\mu}}^{T}Z^{T}(I_{n}-H_{Z})Z\bar{\boldsymbol{\mu}}=0\), then \(\mathbf{y}^{T}\left(I_{n}-H_{Z}\right)\mathbf{y}/\sigma^{2}\sim\chi_{d}^{2}\) (see, for example, [7], Theorem 10.1). Using properties of the chi-squared distribution, \[\mathrm{E}_{\mathcal{D}(\mathbf{r};X)}\left[\frac{1}{\mathbf{y}^{T}\left(I_{n}-H_{Z}\right)\mathbf{y}}\right]=\frac{1}{\sigma^{2}\left(d-2\right)},\qquad\mathrm{E}_{\mathcal{D}(\mathbf{r};X)}\left\{\log\left[\mathbf{y}^{T}\left(I_{n}-H_{Z}\right)\mathbf{y}\right]\right\}=\log\sigma^{2}+\log 2+\psi\left(\frac{d}{2}\right).\] It follows that the inner expectation (i.e. with respect to \(\mathbf{y}\) under \(\mathcal{D}(\mathbf{r};X)\)) in the Gibbs expected SH utility is \[\mathrm{E}_{\mathcal{D}(\mathbf{r};X)}\left[u_{\mathcal{G},SH}\left(\boldsymbol{\theta}_{SS,\mathcal{D}}(\mathbf{r};X),\mathbf{y},X\right)\right]=\mathrm{E}_{\mathcal{D}(\mathbf{r};X)}\left[\log\pi_{\mathcal{G}}\left(\boldsymbol{\theta}_{SS,\mathcal{D}}(\mathbf{r};X)|\mathbf{y};X\right)\right]=-\frac{p}{2}\left[\log(2\pi)+\log\sigma^{2}+\log 2\right]-\frac{p}{2}h_{2}(d)+\frac{1}{2}\log|F^{T}F|, \tag{14}\] since \(\boldsymbol{\theta}_{SS,\mathcal{D}}(\mathbf{r};X)=\left(F^{T}F\right)^{-1}F^{T}Z\bar{\boldsymbol{\mu}}\). Equation (7) follows from taking the expectation of (14) with respect to the hyper-variable distribution \(\mathcal{C}\).
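Lemma A.1 is easy to verify numerically. The following is a quick check (our own code; the example design is an assumption) confirming that \(H_{Z}F=F\) whenever the rows of \(F\) are constant within each unique treatment.

```python
# Numerical check of Lemma A.1: H_Z F = F for a design with replication.
import numpy as np

x = np.array([-1.0, -1.0, 0.0, 1.0, 1.0, 1.0])    # q = 3 unique treatments
F = np.column_stack([np.ones_like(x), x, x**2])    # linear model matrix
xu = np.unique(x)
Z = (x[:, None] == xu[None, :]).astype(float)      # unique-treatment matrix
H_Z = Z @ np.linalg.solve(Z.T @ Z, Z.T)            # projection onto col(Z)
print(np.allclose(H_Z @ F, F))                     # True
```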
2303.10217
Quantifying Space-Time Load Shifting Flexibility in Electricity Markets
The power grid is undergoing significant restructuring driven by the adoption of wind/solar power and the incorporation of new flexible technologies that can shift load in space and time (e.g., data centers, battery storage, and modular manufacturing). Load shifting is needed to mitigate space-time fluctuations associated with wind/solar power and other disruptions (e.g., extreme weather). The impact of load shifting on electricity markets is typically quantified via sensitivity analysis, which aims to assess impacts in terms of price volatility and total welfare. This sensitivity approach does not explicitly quantify operational flexibility (e.g., range or probability of feasible operation). In this work, we present a computational framework to enable this; specifically, we quantify operational flexibility by assessing how much uncertainty in net loads (which capture uncertain power injections/withdrawals) can be tolerated by the system under varying levels of load shifting capacity. The proposed framework combines optimization formulations that quantify operational flexibility with power grid models that capture load shifting in the form of virtual links (pathways that transfer load across space-time). Our case studies reveal that adding a single virtual link that shifts load in either space or time can lead to dramatic improvements in system-wide flexibility; this is because shifting relieves space-time congestion that results from transmission constraints and generator ramping constraints. Our results provide insights into how the incorporation of flexible technologies can lead to non-intuitive, system-wide gains in flexibility.
Weiqi Zhang, Victor M. Zavala
2023-03-17T19:17:51Z
http://arxiv.org/abs/2303.10217v1
# Quantifying Space-Time Load Shifting Flexibility in Electricity Markets ###### Abstract The power grid is undergoing significant restructuring driven by the adoption of wind/solar power and the incorporation of new flexible technologies that can shift load in space and time (e.g., data centers, battery storage, and modular manufacturing). Load shifting is needed to mitigate space-time fluctuations associated with wind/solar power and other disruptions (e.g., extreme weather). The impact of load shifting on electricity markets is typically quantified via sensitivity analysis, which aims to assess impacts in terms of price volatility and total welfare. This sensitivity approach does not explicitly quantify operational flexibility (e.g., range or probability of feasible operation). In this work, we present a computational framework to enable this; specifically, we quantify operational flexibility by assessing how much uncertainty in net loads (which capture uncertain power injections/withdrawals) can be tolerated by the system under varying levels of load shifting capacity. The proposed framework combines optimization formulations that quantify operational flexibility with power grid models that capture load shifting in the form of virtual links (pathways that transfer load across space-time). Our case studies reveal that adding a single virtual link that shifts load in either space or time can lead to dramatic improvements in system-wide flexibility; this is because shifting relieves space-time congestion that results from transmission constraints and generator ramping constraints. Our results provide insights into how the incorporation of flexible technologies can lead to non-intuitive, system-wide gains in flexibility. **Keywords**: electricity markets; renewable power; flexibility analysis; optimization ## 1 Introduction The power grid is undergoing major structural changes as incorporation of renewable energy introduces a high level of spatio-temporal variability and uncertainty. Under this trend, leveraging flexibility from various technologies is key to achieving effective and efficient decarbonization for the power grid [4]. Specifically, load shifting has been widely studied as a mechanism that can help mitigate variability and uncertainty [2]. Examples of shiftable loads include electric vehicles [15, 17], batteries [19], and buildings [9]. Flexibility is normally defined based on the ability to shift loads temporally, but this notion can also be extended to capture geographical flexibility, which can be enabled by spatially-distributed systems such as data centers [12] and modular manufacturing facilities (e.g., ammonia) [18]. While harnessing shifting flexibility is necessary, current energy market designs are not fully capable of remunerating shifting flexibility directly. For instance, traditional electricity markets for demand response focus on remunerating peak shaving or load shedding [1, 11]. Recently, a pricing scheme for shiftable loads has been proposed to provide a direct incentive for load-shifting flexibility, which is computed without considering power network models [24]. An electricity market design has also been recently proposed to directly price and remunerate load-shifting flexibility [26, 28]. Here, shifting flexibility is captured in the form of virtual links, which are non-physical pathways that transfer power across space and time. 
The virtual link concept has also been used to remunerate flexibility provided by energy storage systems (e.g., load/power shifting is represented as a temporal virtual shift) [27]. Impacts of flexibility on power grid performance are typically studied via sensitivity analysis, using a wide variety of metrics [10]. Unfortunately, sensitivity approaches can be computationally intensive and do not directly quantify operational flexibility (i.e., the ability of the system to maintain feasible operation). Flexibility analysis has been widely studied in the field of process systems engineering, dating back to the seminal work by Grossmann and co-workers [8]. In this framework, flexibility is defined as the ability for a system to maintain feasibility (satisfy all its constraints) given a set of uncertain parameters/data. This approach has been applied to various systems such as chemical processes [13], power systems [5, 22], and autonomous vehicles [23]. Operational flexibility can be quantified using deterministic and stochastic approaches. The deterministic approach proposed in [7] measures flexibility by taking a robust optimization view of the problem (i.e., identifying the worst-case uncertainty that can be tolerated) [25]. Pulsipher and co-workers have recently shown that this approach can be extended to capture different types of uncertainty representations (e.g., ellipsoidal) [14]. The stochastic approach defines flexibility in terms of the probability that the system retains feasible operation, and this requires sampling/quadrature approaches to compute probabilities [20]. A benefit of the deterministic approach is that it bypasses the need for sampling and can capture a large number of uncertain parameters; however, this approach is restricted to convex constraints. On the other hand, the stochastic approach can accommodate nonconvex constraints but might require a large number of samples to enable accurate computations. In this work, we apply the deterministic flexibility analysis framework of Pulsipher and co-workers to an electricity market clearing formulation that captures load shifting flexibility in the form of virtual links [26]. We explore the question of measuring/quantifying the improvement in flexibility that results from adding virtual links to the system. Our results show that adding a _single pair_ of virtual links can dramatically expand the feasible space of the power grid system in non-intuitive ways, resulting in large gains in flexibility (up to 250% for a spatial shifting case and 80% for a temporal shifting case). We also explore the cost of flexibility provision by considering economic/budget constraints; the results show that modest increases in operating cost can result in substantial improvements in flexibility. Overall, we believe that our framework can provide valuable insights into how the deployment of new flexible technologies can help increase flexibility and mitigate the increasing levels of uncertainty that power grids are facing. The framework can also be used to identify what types of technologies (and how many) might be needed to achieve various flexibility levels. The manuscript is structured as follows. Section 2 reviews the market clearing framework that we study. Section 3 reviews the flexibility analysis framework applied. Section 4 presents numerical results of the case studies. Section 5 concludes the paper. ## 2 Power Grid Model with Virtual Links In this paper, we consider the economic dispatch formulation presented in [26, 28].
This formulation can be interpreted as a market clearing formulation. Similar formulations (known as coordinated management) are also applied to environmental supply chain studies [16]. Independent system operators (ISOs) solve these types of formulations to determine optimal power allocations to stakeholders. The market system setup is visualized in Figure 1; this considers a set of suppliers (owners of generators) \(\mathcal{S}\) and consumers (owners of loads) \(\mathcal{D}\) connected to a transmission network comprised of geographical nodes \(\mathcal{N}\) and transmission lines \(\mathcal{L}\) (owned by transmission service providers). Each supplier \(i\in\mathcal{S}\) is connected to the power grid at node \(n(i)\in\mathcal{N}\). The supplier offers available capacity \(\bar{p}_{i,t}\in[0,\infty)\) and ramping capacity \(\Delta\bar{p}_{i}\in[0,\bar{p}_{i}]\). The set of suppliers at node \(n\) is defined as \(\mathcal{S}_{n}:=\{i\in\mathcal{S}\,|\,n(i)=n\}\subseteq\mathcal{S}\). The market will decide the cleared allocation \(p_{i,t}\) (amount of power injected) for each supplier \(i\in\mathcal{S}\) at time \(t\in\mathcal{T}\). We use \(p\) to denote the collection of all cleared allocations. The transmission network consists of the node set \(\mathcal{N}\) and a set of transmission lines \(\mathcal{L}\). Each line \(l\in\mathcal{L}\) is associated with a sending node \(\operatorname{snd}(l)\in\mathcal{N}\) and receiving node \(\operatorname{rec}(l)\in\mathcal{N}\). The definitions of \(\operatorname{snd}(l)\) and \(\operatorname{rec}(l)\) are interchangeable because power can flow in either direction. For each node \(n\in\mathcal{N}\), we define its set of receiving lines \(\mathcal{L}_{n}^{\text{rec}}:=\{l\in\mathcal{L}\,|\,n=\operatorname{rec}(l)\}\subseteq\mathcal{L}\) and its set of sending lines \(\mathcal{L}_{n}^{\text{snd}}:=\{l\in\mathcal{L}\,|\,n=\operatorname{snd}(l)\}\subseteq\mathcal{L}\). The problem identifies flows \(f_{l,t}\) that satisfy capacity bounds \(f_{l,t}\in[-\bar{f}_{l},\bar{f}_{l}]\) and direct-current (DC) power flow equations: \[f_{l,t}=B_{l}(\theta_{\operatorname{snd}(l),t}-\theta_{\operatorname{rec}(l),t}), \tag{2.1}\] where \(B_{l}\in\mathbb{R}_{+}\) is the line susceptance and \(\theta_{n,t}\in\mathbb{R}\) is the phase angle at node \(n\in\mathcal{N}\). We note that the DC power flow model is a linear model that approximates the actual power flow physics under small phase angle difference conditions. We use this linear model to avoid quadratic constraints in formulation (3.9), which would make the flexibility analysis much more computationally expensive.

Figure 1: Illustration of a base electricity market system with three time intervals.

We model consumers as a net aggregated demand \(D_{n,t}\in[0,+\infty)\) at node \(n\) and time \(t\). Net demands account for injections of renewable power and are thus treated as uncertain parameters. In this paper, we assume that loads cannot be curtailed and thus the amount of net demand \(D_{n,t}\) must be satisfied. However, this does not mean that the loads are inflexible; specifically, we will see that the loads can be shifted to alternative space-time locations and be served at such locations. The base electricity market clearing framework is as follows: \[\min_{p,f,\theta} \sum_{t\in\mathcal{T}}\sum_{i\in\mathcal{S}}\alpha_{i,t}^{p}p_{i,t}\] (2.2a) s.t.
\[\sum_{l\in\mathcal{L}_{n}^{\text{rec}}}f_{l,t}+\sum_{i\in\mathcal{S}_{n}}p_{i,t}=\sum_{l\in\mathcal{L}_{n}^{\text{snd}}}f_{l,t}+D_{n,t},\quad n\in\mathcal{N},t\in\mathcal{T} \tag{2.2b}\] \[f_{l,t}=B_{l}(\theta_{\text{snd}(l),t}-\theta_{\text{rec}(l),t}),\quad l\in\mathcal{L},t\in\mathcal{T}\] (2.2c) \[-\Delta\bar{p}_{i}\leq p_{i,t}-p_{i,t-1}\leq\Delta\bar{p}_{i},\quad i\in\mathcal{S},t\in\{2,3,\ldots,T\}\] (2.2d) \[0\leq p\leq\bar{p},\,-\bar{f}\leq f\leq\bar{f}\] (2.2e) Here, the objective function (2.2a) is the total generation cost. Constraints (2.2b) capture the power balance at each space-time node. Constraints (2.2c) capture the DC power flow model. Constraints (2.2d) capture ramping constraints for each generator. Constraints (2.2e) capture the capacity limits for generators and transmission lines. Note that the ramping capacities of generators determine how much temporal flexibility the system has; if generators have small ramping capacities, the system will not be able to respond to time-varying demands. In addition, note that transmission capacities determine how much spatial flexibility the system has, but the effect is more complex as it also involves the network topology/connectivity. The combination of the physical model, ramping capacities, and transmission capacities defines the operational flexibility of the system (i.e., its ability to absorb demands). Formulation (2.2) does not capture the shifting flexibility that emerging technologies (e.g., data centers, energy storage, and distributed manufacturing facilities) can offer. Zhang et al. [26] extended this formulation to capture flexibility in the form of virtual links. Virtual links are a modeling abstraction that captures space-time shifting flexibility from different types of technologies. In general, virtual links represent the non-physical transfer of power injection/extraction from one space-time location \((n,t)\) to another \((n^{\prime},t^{\prime})\). Note that a virtual link may have either \(t=t^{\prime}\) (purely spatial shifting) or \(n=n^{\prime}\) (purely temporal shifting), but the two equalities cannot hold simultaneously. The set of virtual links can be seen as an additional infrastructure layer (on top of the physical system) that can be leveraged to satisfy demands, as visualized in Figure 2. The formulation considers a set of virtual links \(\mathcal{V}\); each virtual link \(v\in\mathcal{V}\) is associated with a sending space-time node \(\text{snd}(v)=(n_{\text{snd}(v)},t_{\text{snd}(v)})\) and a receiving space-time node \(\text{rec}(v)=(n_{\text{rec}(v)},t_{\text{rec}(v)})\). We define \(\mathcal{V}_{n,t}^{\text{snd}}:=\{v\in\mathcal{V}\,|\,\text{snd}(v)=(n,t)\}\subseteq\mathcal{V}\) and \(\mathcal{V}_{n,t}^{\text{rec}}:=\{v\in\mathcal{V}\,|\,\text{rec}(v)=(n,t)\}\subseteq\mathcal{V}\) to be the sets of sending and receiving virtual links at space-time node \((n,t)\). In particular, \(v\) is a spatial virtual link if it connects nodes at different locations but at the same time (\(n_{\text{snd}(v)}\neq n_{\text{rec}(v)},t_{\text{snd}(v)}=t_{\text{rec}(v)}\)), and \(v\) is a temporal virtual link if it connects nodes at different times but at the same location (\(t_{\text{snd}(v)}\neq t_{\text{rec}(v)},n_{\text{snd}(v)}=n_{\text{rec}(v)}\)). The clearing formulation needs to decide the virtual link allocation \(\delta_{v}\) (amount of power injection/extraction shifted) subject to the operation model.
Incorporating virtual links leads to the general clearing formulation shown below: \[\min_{p,f,\theta,\delta} \sum_{t\in\mathcal{T}}\sum_{i\in\mathcal{S}}\alpha_{i,t}^{p}p_{i,t}+\sum_{v\in\mathcal{V}}\alpha_{v}^{\delta}\delta_{v}\] (2.3a) s.t. \[\sum_{l\in\mathcal{L}_{n}^{\text{rec}}}f_{l,t}+\sum_{i\in\mathcal{S}_{n}}p_{i,t}+\sum_{v\in\mathcal{V}_{n,t}^{\text{snd}}}\delta_{v}=\sum_{l\in\mathcal{L}_{n}^{\text{snd}}}f_{l,t}+\sum_{v\in\mathcal{V}_{n,t}^{\text{rec}}}\delta_{v}+D_{n,t},\quad n\in\mathcal{N},t\in\mathcal{T} \tag{2.3b}\] \[f_{l,t}=B_{l}(\theta_{\text{snd}(l),t}-\theta_{\text{rec}(l),t}),\quad l\in\mathcal{L},t\in\mathcal{T}\] (2.3c) \[-\Delta\bar{p}_{i}\leq p_{i,t}-p_{i,t-1}\leq\Delta\bar{p}_{i},\quad i\in\mathcal{S},t\in\{2,3,\ldots,T\}\] (2.3d) \[0\leq p\leq\bar{p},\,-\bar{f}\leq f\leq\bar{f}\] (2.3e) \[c_{E}(\delta)=0;\,c_{I}(\delta)\geq 0 \tag{2.3f}\] The objective function (2.3a) is the total cost, consisting of generation costs and virtual link costs. Power balance constraints (2.3b) now capture the effect of virtual links. In addition to constraints (2.3c)-(2.3e), constraints (2.3f) capture a set of equality and inequality constraints that model the operations of virtual link providers. Specific forms of these constraints depend on the characteristics of the flexibility providers. For instance, the constraints for virtual links provided by a data center can be modeled as follows [26]: \[D_{n,t}+\sum_{v\in\mathcal{V}_{n,t}^{\text{rec}}}\delta_{v}-\sum_{v\in\mathcal{V}_{n,t}^{\text{snd}}}\delta_{v}\geq 0,\quad n\in\mathcal{N},t\in\mathcal{T} \tag{2.4a}\] \[0\leq\delta\leq\bar{\delta} \tag{2.4b}\] Here, constraints (2.4a) capture the bound on the realized load at each data center site. Constraints (2.4b) capture the limit on the amount of loads allowed to shift via each virtual link. We note that, in general, the virtual link allocation variables \(\delta_{v}\) are allowed to be either positive or negative. A positive value means that the virtual link is shifting power extraction (e.g., load shifting), while a negative value means that the virtual link is shifting power injection (e.g., as can be done using batteries); a sketch layering these terms onto the base model is given below.

Figure 2: Illustration of an electricity market system with a set of virtual links \(\mathcal{V}\) and five time intervals. The set of virtual links forms a space-time network that acts as an additional infrastructure layer to match electricity demand and supply, on top of the transmission network.

## 3 Flexibility Analysis Framework

To quantify the flexibility in a given market system (2.3), we need a framework to compute a scalar value that measures the flexibility of a market system given a configuration (or lack thereof) of virtual links. This will allow us to compare how flexible the system is with different levels of load-shifting flexibility captured in the form of virtual links. In this work, we apply the flexibility analysis framework proposed by Pulsipher et al. [14] to derive an optimization formulation for the system. The framework computes the so-called flexibility index (denoted as \(F\)); this index measures the size of uncertainty under which system operation is feasible (satisfies all constraints). Here, we briefly review key concepts of the framework.
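Before proceeding, the following sketch ties Section 2 together by layering the virtual-link terms of (2.3b) and the data-center bounds (2.4) onto the earlier base-model sketch; the keys vl_snd_at, vl_rec_at, and delta_max are again hypothetical placeholders.

```python
# Hedged sketch extending the base model with virtual links: modified
# balance (2.3b) and data-center constraints (2.4). Here delta >= 0
# models load shifting; negative allocations (injection shifting) are
# omitted for simplicity.
import pyomo.environ as pyo

def add_virtual_links(m, d):
    m.V = pyo.Set(initialize=d["vlinks"])
    m.delta = pyo.Var(m.V, bounds=lambda m, v: (0.0, d["delta_max"][v]))  # (2.4b)
    # NOTE: the objective (2.3a) would additionally gain
    # sum(alpha_v * delta_v) terms, omitted here for brevity.

    m.balance.deactivate()  # replace the base nodal balance (2.2b) by (2.3b)
    def balance_vl(m, n, t):
        return (sum(m.f[l, t] for l in d["rec_lines"][n])
                + sum(m.p[i, t] for i in d["suppliers_at"][n])
                + sum(m.delta[v] for v in d["vl_snd_at"][n, t])
                == sum(m.f[l, t] for l in d["snd_lines"][n])
                + sum(m.delta[v] for v in d["vl_rec_at"][n, t])
                + d["D"][n, t])
    m.balance_vl = pyo.Constraint(m.N, m.T, rule=balance_vl)

    # (2.4a): the realized load at each site must stay nonnegative
    def realized_load(m, n, t):
        return (d["D"][n, t]
                + sum(m.delta[v] for v in d["vl_rec_at"][n, t])
                - sum(m.delta[v] for v in d["vl_snd_at"][n, t])) >= 0
    m.realized_load = pyo.Constraint(m.N, m.T, rule=realized_load)
    return m
```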
Consider a general system defined by the constraint set: \[g_{j}(x;\xi,\bar{\delta})\leq 0,\quad j\in\mathcal{J}^{I} \tag{3.5a}\] \[h_{j}(x;\xi,\bar{\delta})=0,\quad j\in\mathcal{J}^{E} \tag{3.5b}\] where \(g_{j}(\cdot),j\in\mathcal{J}^{I}\) are the inequality constraint functions, \(h_{j}(\cdot),j\in\mathcal{J}^{E}\) are the equality constraint functions, \(x\) are the decision variables, \(\xi\) are the uncertain parameters, and \(\bar{\delta}\) are the system design parameters. Once the design parameters \(\bar{\delta}\) are fixed, the feasibility of system (3.5) under a given realization \(\xi\) can be computed as \[\psi(\xi):= \min_{x,t} t\] (3.6a) s.t. \[g_{j}(x;\xi,\bar{\delta})\leq t,\quad j\in\mathcal{J}^{I} \tag{3.6b}\] \[h_{j}(x;\xi,\bar{\delta})=0,\quad j\in\mathcal{J}^{E} \tag{3.6c}\] For a given realization \(\xi\), the system is feasible if \(\psi(\xi)\leq 0\) (meaning there exists a solution \(x\) that satisfies all constraints in (3.5)), and infeasible otherwise. The feasible set of the system can then be defined as \(\Xi:=\{\xi\,|\,\psi(\xi)\leq 0\}\). The flexibility index problem seeks to identify the largest uncertainty set \(T(\alpha)\), parameterized by a scalar \(\alpha\in\mathbb{R}_{+}\), such that system (3.5) remains feasible for all possible realizations of the uncertain parameters \(\xi\) in \(T(\alpha)\). The size of the uncertainty set \(T(\alpha)\) scales with \(\alpha\). The uncertainty set needs to have a pre-defined shape and a nominal point (denoted as \(\bar{\xi}\)) so that it can be parameterized by a single scalar \(\alpha\). Common uncertainty sets are shown in Table 1. A natural choice of uncertainty set representation and nominal point is highly dependent on the specific applications considered. Normally, the hyperbox set is the default option for flexibility analysis as it is intuitive, requires minimal data, and is computationally inexpensive (it can be expressed as a set of linear constraints). However, the hyperbox set is not necessarily the best option for all applications; for instance, in cases where uncertain parameters are correlated in space-time (e.g., wind power or loads), it is more appropriate to use ellipsoidal sets. The flexibility index \(F\) is defined as: \[F:= \max_{\alpha\in\mathbb{R}_{+}} \alpha\] (3.7a) s.t. \[\max_{\xi\in T(\alpha)}\psi(\xi)\leq 0 \tag{3.7b}\] Problem (3.7) seeks to find the largest uncertainty set \(T(\alpha)\) such that system (3.5) attains at least one feasible solution for all realizations of \(\xi\). Problem (3.7) is a tri-level optimization problem and is generally challenging to solve. However, [21] has shown that problem (3.7) is equivalent to finding the minimum \(\alpha\) along the boundary of the feasible set \(\Xi\) if \(T(\alpha)\) is compact and the constraint functions \(g_{j}(x;\xi,\bar{\delta})\) and \(h_{j}(x;\xi,\bar{\delta})\) are Lipschitz continuous in \(x,\xi,\bar{\delta}\). Note that the boundary of the feasible set can be re-written as \(\partial\Xi=\{\xi\,|\,\psi(\xi)=0\}\). The flexibility index problem can thus be written as: \[F= \min_{\alpha\in\mathbb{R}_{+},\xi\in T(\alpha)} \alpha\] (3.8a) s.t. \[\psi(\xi)=0 \tag{3.8b}\] Figure 3 illustrates hyperbox uncertainty sets within the feasible set of a system with linear constraints. This gives an intuitive explanation of why formulations (3.7) and (3.8) are equivalent. Essentially, constraint (3.8b) enforces that the uncertainty set must have one point at the boundary of the feasible set.
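The building block behind these definitions is the feasibility function \(\psi\). For a linear inequality system \(g(x;\xi)=Ax+C\xi-b\leq 0\) (an assumed special case; equality constraints are omitted), the feasibility measure (3.6) reduces to a small linear program, as in the following sketch:

```python
# Sketch of the feasibility function psi(xi) in (3.6) for the assumed
# linear case A x + C xi <= b, with A, C, b given as NumPy arrays.
import pyomo.environ as pyo

def psi(A, C, b, xi):
    nrows, nx = A.shape
    m = pyo.ConcreteModel()
    m.x = pyo.Var(range(nx))
    m.t = pyo.Var()
    rhs = b - C @ xi  # fold the fixed realization xi into the right-hand side
    m.ineq = pyo.Constraint(range(nrows), rule=lambda m, j:
        sum(A[j, k] * m.x[k] for k in range(nx)) - rhs[j] <= m.t)
    m.obj = pyo.Objective(expr=m.t)  # (3.6a): minimize the worst violation
    pyo.SolverFactory("glpk").solve(m)  # solver choice is illustrative
    return pyo.value(m.t)  # the system is feasible at xi iff psi(xi) <= 0
```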
By minimizing \(\alpha\), formulation (3.8) seeks the smallest uncertainty set \(T(\alpha)\) that intersects with the boundary of the feasible set. This is the largest uncertainty set that can be enclosed in the feasible set. Based on this property, a mixed-integer programming formulation has been proposed that converts the equality constraint (3.8b) to the Karush-Kuhn-Tucker (KKT) conditions of problem (3.6) [6]. Doing so enforces that \(\alpha\) must be large enough so that there exists one realization \(\xi\) that lies on the boundary of the feasible set.

\begin{table} \begin{tabular}{|c|c|} \hline Shape & Uncertainty set \\ \hline Hyperbox & \(T_{\text{box}}(\alpha)=\{\xi\,|\,\bar{\xi}-\alpha\Delta\xi\leq\xi\leq\bar{\xi}+\alpha\Delta\xi\}\) \\ Ellipsoid & \(T_{\text{ellip}}(\alpha)=\{\xi\,|\,(\xi-\bar{\xi})^{T}V^{-1}(\xi-\bar{\xi})\leq\alpha\}\) \\ \(\ell_{p}\) norm set & \(T_{p}(\alpha)=\{\xi\,|\,\|\xi-\bar{\xi}\|_{p}\leq\alpha\}\) \\ \hline \end{tabular} \end{table} Table 1: Common representations of uncertainty sets.

Figure 3: Illustration of hyperbox uncertainty sets for a system with linear constraints. Dashed boxes denote hyperbox sets of different \(\alpha\) values centered at the same nominal point. Any uncertainty set that touches the boundary of the feasible region is at least as large as the optimal uncertainty set, and any uncertainty set that fits inside the feasible region cannot be larger than the optimal one.

The resulting formulation is: \[F=\min_{\alpha,\xi,x,\lambda,s,y,\mu} \alpha\] (3.9a) s.t. \[\sum_{j\in\mathcal{J}^{I}}\lambda_{j}=1 \tag{3.9b}\] \[\sum_{j\in\mathcal{J}^{E}}\mu_{j}\nabla_{x}h_{j}(x;\xi,\bar{\delta})+\sum_{j\in\mathcal{J}^{I}}\lambda_{j}\nabla_{x}g_{j}(x;\xi,\bar{\delta})=0\] (3.9c) \[h_{j}(x;\xi,\bar{\delta})=0,\quad j\in\mathcal{J}^{E}\] (3.9d) \[g_{j}(x;\xi,\bar{\delta})+s_{j}=0,\quad j\in\mathcal{J}^{I}\] (3.9e) \[\lambda_{j}\leq y_{j},\quad j\in\mathcal{J}^{I}\] (3.9f) \[s_{j}\leq U(1-y_{j}),\quad j\in\mathcal{J}^{I}\] (3.9g) \[\lambda\geq 0,s\geq 0,\alpha\geq 0,y\in\{0,1\}^{|\mathcal{J}^{I}|}\] (3.9h) \[\xi\in T(\alpha) \tag{3.9i}\] In problem (3.9), \(\mu\) and \(\lambda\) are the Lagrange multipliers for the equality and inequality constraints, respectively. \(s\) denotes the slack variables for the inequality constraints. \(y\) denotes binary variables indicating whether inequality constraint \(j\) is active or not. The constant \(U\) is an appropriate upper bound for the slack variables \(s\). When the constraints \(g_{j}(\cdot)\), \(h_{j}(\cdot)\) and the set \(T(\alpha)\) are convex, the problem is a convex mixed-integer program (which can be solved efficiently). Any nonconvex constraint \(g_{j}(\cdot)\), \(h_{j}(\cdot)\) or a nonconvex set \(T(\alpha)\) will lead to a nonconvex mixed-integer program (which is more computationally intensive); moreover, non-convex formulations cannot guarantee feasible operation for every element of the uncertainty set (as in the convex case).

## 4 Case Study

In this section, we apply the flexibility analysis framework to demonstrate how it helps answer key questions relevant to quantifying flexibility provision in different settings. The flexibility analysis framework is implemented in Julia; code and data needed to reproduce the results are available at [https://github.com/zavalab/JuliaBox/tree/master/FlexQuantVL](https://github.com/zavalab/JuliaBox/tree/master/FlexQuantVL).
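Since problem (3.9) is the computational workhorse of the case studies, a compact reference sketch follows for the inequality-only linear case with a hyperbox set; the big-M constant, the solver, and the matrix form are assumptions, and equality constraints (with multipliers \(\mu\)) are omitted for brevity.

```python
# Hedged sketch of the flexibility index MIP (3.9): linear inequalities
# A x + C xi <= b only, hyperbox uncertainty set T(alpha).
import pyomo.environ as pyo

def flexibility_index(A, C, b, xi_nom, dxi, U=1e4):
    nJ, nx = A.shape
    nxi = C.shape[1]
    m = pyo.ConcreteModel()
    m.J = pyo.RangeSet(0, nJ - 1)
    m.X = pyo.RangeSet(0, nx - 1)
    m.K = pyo.RangeSet(0, nxi - 1)
    m.alpha = pyo.Var(within=pyo.NonNegativeReals)
    m.xi = pyo.Var(m.K)
    m.x = pyo.Var(m.X)
    m.lam = pyo.Var(m.J, within=pyo.NonNegativeReals)
    m.s = pyo.Var(m.J, within=pyo.NonNegativeReals)
    m.y = pyo.Var(m.J, within=pyo.Binary)

    m.obj = pyo.Objective(expr=m.alpha)                                   # (3.9a)
    m.sum_lam = pyo.Constraint(expr=sum(m.lam[j] for j in m.J) == 1)      # (3.9b)
    m.stat = pyo.Constraint(m.X, rule=lambda m, k:                        # (3.9c)
        sum(m.lam[j] * A[j, k] for j in m.J) == 0)
    m.prim = pyo.Constraint(m.J, rule=lambda m, j:                        # (3.9e)
        sum(A[j, k] * m.x[k] for k in m.X)
        + sum(C[j, q] * m.xi[q] for q in m.K) - b[j] + m.s[j] == 0)
    m.act1 = pyo.Constraint(m.J, rule=lambda m, j: m.lam[j] <= m.y[j])    # (3.9f)
    m.act2 = pyo.Constraint(m.J, rule=lambda m, j:                        # (3.9g)
        m.s[j] <= U * (1 - m.y[j]))
    m.box_lo = pyo.Constraint(m.K, rule=lambda m, q:                      # (3.9i)
        m.xi[q] >= xi_nom[q] - m.alpha * dxi[q])
    m.box_hi = pyo.Constraint(m.K, rule=lambda m, q:
        m.xi[q] <= xi_nom[q] + m.alpha * dxi[q])
    pyo.SolverFactory("cbc").solve(m)  # solver choice is illustrative
    return pyo.value(m.alpha)
```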
### Optimal Deployment of Spatial Flexibility

We demonstrate how to apply the flexibility analysis framework to answer the following question: What is the most efficient way to incorporate spatial flexibility in the system? In the context of virtual links, we reformulate the question as _finding the optimal set of virtual links that maximizes the improvement in flexibility_. We consider a purely spatial case based on the modified IEEE-118 case with Active Power Increase (API) [3] at a fixed time. The net demand at each node \(D_{n}\) is treated as an uncertain parameter, and the nominal point is selected as the reference value \(\bar{D}_{n}\) given in the case data. Thus, the hyperbox uncertainty set can be written as \[T(\alpha)=\{D\,|\,(1-0.5\alpha)\bar{D}_{n}\leq D_{n}\leq(1+0.5\alpha)\bar{D}_{n},\,n\in\mathcal{N}\} \tag{4.10}\] Under this definition, a value of \(\alpha=1\) means that the system remains feasible for net demand values that are within 50% deviation in both directions from the reference values. The IEEE 118-bus case is visualized in Figure 4; each bus (node) is represented by a circle, and each generator is represented by a square. The transmission network structure is shown by the lines. The color of each bus represents the amount of net demand in log scale; a positive value indicates that load is higher than generation at that location. A deep blue color means there is no load attached (pure generation is present at that node). A total of 99 out of 118 buses are connected to a load, while a few nodes are generation-only. The color of each generator represents the maximum generator capacity, also in log scale. We also note that some transmission lines are marked red, meaning that their capacity constraints are active for the base case (the lines are congested).

Figure 4: Network structure of the IEEE 118-bus case. Each circle denotes a bus, and each square denotes a generator attached to a bus. Transmission lines are shown between buses in light blue. Red lines denote transmission lines with active capacity constraints in the base case. Colors denote the load level (for buses) or generation capacity (for generators) on a log scale.

We begin by exploring the benefit of adding a pair of virtual links (in both directions) between a pair of nodes to the base system. This assesses the benefit of converting a pair of loads from being fixed to being spatially flexible. Each virtual link is added with a capacity of 30% of the sending load (only 30% of the load is shiftable). By doing this, we are effectively converting 30% of the loads from being fixed to being shiftable to some other bus. This simulates a change in market mechanisms to account for flexible loads that are already in place but were previously unable to offer flexibility to the power grid/markets. With 99 different loads, there are \(99\times 98/2=4851\) possible ways to add virtual links. We solve problem (3.9) for the flexibility index \(F\) for all these alternatives (see the sketch of the screening loop below). Figure 5 shows the percentage increase in \(F\) compared to the base case for all 4851 possible pairs of virtual links, in decreasing order. The dashed lines correspond to the 100%, 50% and 10% levels, which correspond to 3.09%, 10.76% and 41.58% of all cases. It is clear that adding even a small amount of shifting can give a great boost in flexibility over the whole system, with a maximum increase of 267.97% as measured by the flexibility index.
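The screening experiment amounts to the following loop; this is an illustrative sketch (not the released Julia code) in which eval_F stands for solving (3.9) with the candidate bidirectional link pair added, each direction capped at 30% of the sending load.

```python
# Illustrative pairwise screening of virtual links (Section 4.1).
# eval_F(pair) is an assumed callback that solves the flexibility
# index MIP with the pair of virtual links added to the system.
from itertools import combinations

def screen_virtual_links(load_buses, base_F, eval_F):
    results = []
    for pair in combinations(load_buses, 2):  # 99*98/2 = 4851 candidates
        F = eval_F(pair)
        results.append((pair, 100.0 * (F - base_F) / base_F))
    # sort by percentage increase in F, as plotted in Figure 5
    return sorted(results, key=lambda r: r[1], reverse=True)
```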
However, the effect of adding such flexibility is highly sensitive to the location to which it is added, as in more than half of the cases the improvement will not be more than 10%. Achieving greater benefits from flexibility thus requires careful selection of the placement of flexible loads. An immediate question that follows is: How to choose the best few buses to deploy flexible loads? To address this question, we inspect the 20 virtual links with the top flexibility increases in Table 2. Interestingly, all 20 virtual links are connected with bus 92, which is connected by an active transmission line and where the load level is relatively high. The topology of the relevant buses is shown in Figure 6. We observe that both short-range and long-range virtual links provide a great contribution to flexibility. Specifically, short-range virtual links alleviate local congestion (as shown by the congested lines 92-55 and 63-89) by adjusting the local load distribution in space. On the other hand, long-range virtual links shift loads away from congested areas. We also observe a clustering behavior from nodes that contribute the most flexibility to the system. The clustering behavior becomes more pronounced in the virtual link visualization shown in Figure 7, where a darker line color means a higher flexibility index. The results indicate that adding long-range virtual links between these three sets of nodes tends to contribute the most to increasing system flexibility.

Figure 5: Percentage increase in flexibility from individual virtual links (in sorted order). Dashed horizontal lines denote the levels of 10%, 50% and 100% increase in flexibility (corresponding to 41.58%, 10.76%, and 3.09% of all virtual links).

Figure 6: Nodes associated with the top 20 virtual links (shown in green). The red node denotes node 92, the common node to all top 20 virtual links. Red lines mark transmission lines with active capacity constraints in the base case.

Figure 7: Visualization of flexibility index results for all virtual links (shown in thin solid lines). Darker lines denote higher flexibility index. Only virtual links with more than a 20% increase in flexibility index relative to the base case are shown. Five buses with the highest connectivity are marked red. Transmission network topology is shown by the transparent thick solid lines.

### Economic Benefits of Flexibility Provision

So far we have explored how virtual links contribute to the expansion of the operational feasible set of the market formulation. One key aspect that the previous analysis does not cover is the economic benefit of incorporating flexibility. This can be critical as, in most cases, incorporating flexibility might be expensive. In this subsection we address this issue by modifying the market formulation, so that the economic quality of the added flexibility is guaranteed to some level. This can be done by incorporating economic constraints into the market formulation. Here, we consider the following types of economic constraints (a code sketch follows the list):

* Total cost bound: \[\sum_{t\in\mathcal{T}}\sum_{i\in\mathcal{S}}\alpha_{i,t}^{p}p_{i,t}+\sum_{v\in\mathcal{V}}\alpha_{v}^{\delta}\delta_{v}\leq(1+\epsilon)\psi_{0}\]
* Per-unit cost bound: \[\sum_{t\in\mathcal{T}}\sum_{i\in\mathcal{S}}\alpha_{i,t}^{p}p_{i,t}+\sum_{v\in\mathcal{V}}\alpha_{v}^{\delta}\delta_{v}\leq(1+\epsilon)\frac{\psi_{0}}{\sum_{n,t}D_{n,t}}\sum_{n,t}d_{n,t}\]

where \(\epsilon>0\) is the additional fraction of cost that the market solution is allowed to admit, \(\psi_{0}\) is the optimal cost of the base case, and \(d_{n,t}\) is the load served at space-time node \((n,t)\).
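A sketch of how either bound can be appended to a Pyomo clearing model; the argument names cost_expr, psi0, D_total, and d_served are assumptions standing for the model's cost expression, the base-case optimal cost, the total nominal demand, and the served-load expression.

```python
# Hedged sketch: appending one of the two economic constraints of
# Section 4.2 to an existing Pyomo clearing model m.
import pyomo.environ as pyo

def add_economic_bound(m, cost_expr, psi0, eps, kind="total",
                       D_total=None, d_served=None):
    if kind == "total":
        # total cost may exceed the base-case optimum by a fraction eps
        m.econ = pyo.Constraint(expr=cost_expr <= (1 + eps) * psi0)
    else:
        # per-unit bound: average cost per unit of load actually served
        m.econ = pyo.Constraint(
            expr=cost_expr <= (1 + eps) * (psi0 / D_total) * d_served)
    return m
```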
The total cost bound constraints enforce that, with the virtual links, the new solutions can only admit a total cost of up to \((1+\epsilon)\) times the optimal cost of the base case. On the other hand, the per-unit cost bound constraints enforce a similar bound, but on the average cost per unit of load served.

\begin{table} \begin{tabular}{|c c|c c|} \hline Bus 1 & Bus 2 & \(\alpha\) & \% Increase \\ \hline 92 & 55 & 0.120 & 267.97 \\ 92 & 71 & 0.118 & 261.72 \\ 92 & 97 & 0.113 & 245.97 \\ 92 & 40 & 0.112 & 243.40 \\ 92 & 114 & 0.112 & 241.76 \\ 92 & 22 & 0.111 & 241.46 \\ 92 & 59 & 0.111 & 239.81 \\ 92 & 14 & 0.111 & 239.10 \\ 92 & 23 & 0.110 & 238.51 \\ 92 & 77 & 0.110 & 238.02 \\ 92 & 36 & 0.110 & 237.58 \\ 92 & 15 & 0.110 & 236.94 \\ 92 & 9 & 0.110 & 235.78 \\ 92 & 72 & 0.110 & 235.78 \\ 92 & 43 & 0.110 & 235.78 \\ \hline \end{tabular} \end{table} Table 2: Virtual links with top increase in flexibility.

To explore the effect of adding economic constraints, we run the flexibility analysis for the complete network of virtual links among nodes 92, 9, 40, and 45, with the total cost bound in one case and the per-unit cost bound in the other case. For each case, we run the flexibility analysis multiple times with varying levels of \(\epsilon\). Figure 8 shows the experimental results with varying levels of \(\epsilon\). When \(\epsilon=0\), we obtain a relative increase of \(-100\%\), meaning that the flexibility index is 0. This is expected, as we are enforcing that the cost must be exactly the same as the optimal cost of the base case and, therefore, only the base load level can be feasible. For both cases, the flexibility index increases with increasing \(\epsilon\), as the economic constraints become more relaxed. Eventually, when \(\epsilon>0.15\), we recover the percent increase in flexibility index obtained with no economic constraints at all. This means that with a 15% increase/relaxation in the cost budget, the full benefit of the added flexibility can be realized. Comparing the two trajectories, we notice that the flexibility index values of the total cost bound cases are bounded above by those of the per-unit cost bound cases for all levels of \(\epsilon\). This is expected because the per-unit cost bound is supposed to be more relaxed compared to the total cost bound. The intuition behind this is that the total cost bound limits the increase in cost incurred by having to satisfy more loads, whereas the per-unit cost bound only needs to bound the increase in average cost.

Figure 8: Percent increase in flexibility with respect to \(\epsilon\), the fraction of cost increase allowed.

### Optimal Deployment of Temporal Flexibility

In this section, we demonstrate how to apply the flexibility analysis framework to analyze multi-period electricity markets with high renewable penetration. We consider a day-ahead market with a time horizon of 24 hours and a time resolution of 1 hr. The underlying power network is based on the 14-bus case study from the PGLib-OPF library [3]. Each bus is installed with renewable energy, resulting in a duck-curve demand profile. Figure 9 shows the network topology for the 14-bus case with base load level, and the normalized net load (relative to base load) profile for each node. The net load profiles are randomly generated based on CAISO net load data. By doing so, we assume that each node is connected to renewables, which give rise to the inverted duck-curve net load shape shown in Figure 9(b).
Figure 9: (a) 14-bus case network topology and (b) 24-hour demand profile. The system has a large generator installed at node 1 and a small generator installed at node 2.

We observe steep ramping of net load around hours 8 and 16, which creates a need for flexibility to alleviate constraints due to generator ramping. For this system, we consider the question of choosing at which of the 14 buses to install a storage system, and between which pair of time points to add a pair of virtual links (in both directions). For instance, by adding a virtual link to node \(n\) between times \(t_{1}\) and \(t_{2}\), we are assessing the case where a storage system is installed at node \(n\), and the storage offers power shifting services between time intervals \(t_{1}\) and \(t_{2}\). This means the storage offers to charge at \(t_{1}\) and discharge at \(t_{2}\), or charge at \(t_{2}\) and discharge at \(t_{1}\), and the market is free to clear in either direction. Figure 10 shows the increase in flexibility index for each node. We observe that adding one pair of virtual links to 11 of the 14 nodes results in the greatest increase in flexibility (80%). Selecting the geographical nodes to add flexibility is still important, as we notice that adding virtual links to the wrong node (i.e., nodes 2, 10, or 11) results in no improvement in flexibility at all. However, this effect is not as pronounced as in the 118-bus case in Section 4.1. This is possibly a result of the fact that this case is small in terms of spatial size, meaning we have a small number of nodes and transmission lines, and there is not much congestion in transmission. Figure 11 shows the increase in flexibility index for each pair of virtual links for selected nodes. Only virtual links that achieve higher than a 10% increase in flexibility are visualized. Other nodes are not included either because no virtual links achieve higher than a 10% increase, or because they exhibit results similar to those shown in the figure. We observe that virtual links connecting to hour 17 always achieve the highest increase in flexibility index for every node. This shows that the system has the largest temporal congestion around hour 17, which is expected as the net load profiles in Figure 9(b) are steepest between hours 16 and 18.

## 5 Conclusions and Future Work

In this work, we propose a framework to quantify load shifting flexibility in electricity markets. We consider a clearing formulation with virtual links that capture space-time flexibility from shifting behavior. To quantify the improvement of flexibility from virtual links, our framework applies a systematic flexibility analysis based on the notion of the flexibility index. This allows us to parameterize flexibility with a scalar quantity/index that can be computed efficiently using mixed-integer programming techniques. We perform several numerical case studies to demonstrate how this framework helps analyze the benefits of flexibility investment in the electricity market. Our results show that even adding one pair of virtual links can bring about a substantial increase in the flexibility index. This is true even when the economic viability of harnessing the added flexibility is taken into account. We also show that the benefit of flexibility is highly dependent on the choice of location and time at which the flexibility is added. This is consistent with the general observation that power system operations are often constrained by transmission line capacity and generator ramping limits.
For future work, we consider developing solution approaches that are more scalable than off-the-shelf mixed-integer solvers, which will help solve larger (and thus more realistic) cases. We also consider applying different types of uncertainty sets other than the hyperbox set considered in this paper.

Figure 10: Percentage increase in flexibility index at different buses. Bars show the minimum, median, and maximum values for each node. Only data points with a positive percentage increase in flexibility are plotted.

Figure 11: Increase in flexibility index for selected nodes. Each circle represents a time interval.

## Acknowledgments

We acknowledge support from the U.S. National Science Foundation under award 1832208.
2310.08560
MemGPT: Towards LLMs as Operating Systems
Large language models (LLMs) have revolutionized AI, but are constrained by limited context windows, hindering their utility in tasks like extended conversations and document analysis. To enable using context beyond limited context windows, we propose virtual context management, a technique drawing inspiration from hierarchical memory systems in traditional operating systems that provide the appearance of large memory resources through data movement between fast and slow memory. Using this technique, we introduce MemGPT (Memory-GPT), a system that intelligently manages different memory tiers in order to effectively provide extended context within the LLM's limited context window, and utilizes interrupts to manage control flow between itself and the user. We evaluate our OS-inspired design in two domains where the limited context windows of modern LLMs severely handicap their performance: document analysis, where MemGPT is able to analyze large documents that far exceed the underlying LLM's context window, and multi-session chat, where MemGPT can create conversational agents that remember, reflect, and evolve dynamically through long-term interactions with their users. We release MemGPT code and data for our experiments at https://memgpt.ai.
Charles Packer, Sarah Wooders, Kevin Lin, Vivian Fang, Shishir G. Patil, Ion Stoica, Joseph E. Gonzalez
2023-10-12T17:51:32Z
http://arxiv.org/abs/2310.08560v2
# MemGPT: Towards LLMs as Operating Systems ###### Abstract Large language models (LLMs) have revolutionized AI, but are constrained by limited context windows, hindering their utility in tasks like extended conversations and document analysis. To enable using context beyond limited context windows, we propose virtual context management, a technique drawing inspiration from hierarchical memory systems in traditional operating systems that provide the appearance of large memory resources through data movement between fast and slow memory. Using this technique, we introduce MemGPT (Memory-GPT), a system that intelligently manages different memory tiers in order to effectively provide extended context within the LLM's limited context window, and utilizes interrupts to manage control flow between itself and the user. We evaluate our OS-inspired design in two domains where the limited context windows of modern LLMs severely handicap their performance: document analysis, where MemGPT is able to analyze large documents that far exceed the underlying LLM's context window, and multi-session chat, where MemGPT can create conversational agents that remember, reflect, and evolve dynamically through long-term interactions with their users. We release MemGPT code and data for our experiments at [https://memgpt.ai](https://memgpt.ai). ## 1 Introduction In recent years, large language models (LLMs) and their underlying transformer architecture (Vaswani et al., 2017; Devlin et al., 2018; Brown et al., 2020; Ouyang et al., 2022) have become the cornerstone of conversational AI and have led to a wide array of consumer and enterprise applications. Despite these advances, the limited fixed-length context windows used by LLMs significantly hinder their applicability to long conversations or reasoning about long documents. For example, the most widely used open-source LLMs can only support a few dozen back-and-forth messages or reason about a short document before exceeding their maximum input length (Touvron et al., 2023). Naively extending the context length of transformers incurs a quadratic increase in computational time and memory cost due to the transformer architecture's self-attention mechanism, making the design of new long-context architectures a pressing research challenge (Dai et al., 2019; Kitaev et al., 2020; Beltagy et al., 2020). While developing longer models is an active area of research (Dong et al., 2023), even if we could overcome the computational challenges of context scaling, recent research shows that long-context models struggle to utilize additional context effectively (Liu et al., 2023a). As a consequence, given the considerable resources needed to train state-of-the-art LLMs and the apparent diminishing returns of context scaling, there is a critical need for alternative techniques to support long context. In this paper, we study how to provide the illusion of an infinite context while continuing to use fixed-context models. Our approach borrows from the idea of virtual memory paging that was developed to enable applications to work on datasets that far exceed the available memory. We leverage the recent progress in function calling abilities of LLM agents (Schick et al., 2023; Liu et al., 2023b) to design MemGPT, an OS-inspired LLM system for **virtual context management**. We draw inspiration from traditional OSes' hierarchical memory management to effectively "page" in and out information between context windows (analogous to "main memory" in operating systems) and external storage.
MemGPT manages the control flow between the memory management, the LLM processing module, and the user. This design allows for repeated context modifications during a single task, allowing the agent to more effectively utilize its limited context. In MemGPT, we treat context windows as a constrained memory resource, and design a memory hierarchy for LLMs analogous to memory tiers used in traditional OSes (Patterson et al., 1988). Applications in traditional OSes interact with _virtual memory_, which provides an illusion of there being more memory resources than are actually available in physical (i.e., main) memory by the OS paging overflow data to disk and retrieving data (via a page fault) back into memory when accessed by applications. To provide a similar illusion of longer context length (analogous to virtual memory), we allow the LLM to manage what is placed in its own context (analogous to physical memory) via an 'LLM OS', which we call MemGPT. MemGPT enables the LLM to retrieve relevant historical data missing from what is placed in-context, similar to an OS page fault. Additionally, the agent can _iteratively_ modify what is in context for a single task, in the same way a process may access virtual memory repeatedly. Figure 1 illustrates the components of MemGPT. The combined use of a memory hierarchy, OS functions, and event-based control flow allows MemGPT to handle unbounded context using LLMs that have finite context windows. To demonstrate the utility of our new OS-inspired LLM system, we evaluate MemGPT on two domains where the performance of existing LLMs is severely limited by finite context: document analysis, where the length of standard text files can quickly exceed the input capacity of modern LLMs, and conversational agents, where LLMs bound by limited conversation windows lack context awareness, persona consistency, and long-term memory during extended conversations. In both settings, MemGPT is able to overcome the limitations of finite context to outperform existing LLM-based approaches. ## 2 Memory-GPT (MemGPT) In this section, we outline the implementation of MemGPT, an OS-inspired LLM system that teaches LLMs to manage their own memory to achieve unbounded context. MemGPT's multi-level memory architecture delineates between two primary memory types: **main context** (analogous to main memory/physical memory/RAM) and **external context** (analogous to disk memory/disk storage). Main context is the standard fixed-context window in modern language models--anything in main context is considered _in-context_ and can be accessed by the LLM processor during inference. External context refers to any information that is held outside of the LLM's fixed context window. This _out-of-context_ data must always be explicitly moved into main context in order for it to be passed to the LLM processor during inference.

Figure 1: In MemGPT (components shaded), a fixed-context LLM is augmented with a hierarchical memory system and functions that let it manage its own memory. The LLM processor takes _main context_ (analogous to OS main memory/RAM) as input, and outputs text interpreted by a parser, resulting either in a yield or a function call. MemGPT uses functions to move data between main context and _external context_ (analogous to OS disk memory). When the processor generates a function call, it can request control ahead of time to chain together functions. When yielding, the processor is paused until the next external event (e.g., a user message or scheduled interrupt).
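To fix ideas before the subsections that follow, here is a schematic toy sketch of the two memory tiers and the components detailed below; the class and field names are our own, not the released implementation.

```python
# Toy sketch of MemGPT's memory tiers (illustrative names, not the
# released implementation). Out-of-context data must be paged into
# main context before it can influence inference.
from dataclasses import dataclass, field

@dataclass
class MainContext:
    system_instructions: str                               # read-only, pinned
    working_context: list = field(default_factory=list)   # agent scratchpad
    fifo_queue: list = field(default_factory=list)        # recent event history

@dataclass
class ExternalContext:
    recall_storage: list = field(default_factory=list)    # full event history
    archival_storage: list = field(default_factory=list)  # general datastore

    def search(self, query: str, page: int = 0, page_size: int = 5):
        # paginated retrieval so results always fit in the context budget
        hits = [e for e in self.archival_storage if query in e]
        start = page * page_size
        return hits[start:start + page_size]
```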
MemGPT provides function calls that the LLM processor can use to manage its own memory without any user intervention. ### Main context In MemGPT, we refer to the LLM inputs (that are bound by the maximum number of input tokens) as the system's main context. In LLM-based conversational agents, a significant portion of main context tokens is generally used to hold a 'system message' or 'preprompt' that dictates the nature of the interaction to the system, while the remainder of the tokens can be used to hold conversation data (Touvron et al., 2023; SillyTavern, 2023). This preprompt is the main way to enable the system to adopt various distinct personas without requiring finetuning of the base model; depending on the use case, the preprompt can range from basic primers (e.g., 'You are a helpful assistant.') to complex instructions comprising thousands of tokens (e.g., a fictional character card that includes the character's background and example dialogue). Beyond conversational agents, large preprompts are also common when LLMs are used to solve complex tasks that require long instructions and/or instructions with many in-context examples (Liu et al., 2023b). Because of the importance of the preprompt in dictating system behavior, it is common for the preprompt to consume more than a thousand tokens, which means the entire context window in many modern LLMs will be exhausted after only a few dozen back-and-forth messages between the user and system. For example, a 1000-token preprompt (roughly the size of the MemGPT preprompt in our experiments) leaves space for only about 60 remaining messages when using 4K context models such as llama-2 or gpt-3.5-turbo (see Table 1 for more examples). In settings where the user is expected to communicate frequently with the system (for example, virtual companions or personalized assistants), it is easy to imagine exceeding the maximum conversation length even for models with 100k context windows in a matter of days (or potentially hours). Recursive summarization (Wu et al., 2021b) is a simple way to address overflowing context windows; however, recursive summarization is inherently lossy and eventually leads to large holes in the memory of the system (as we demonstrate in Section 3). This motivates the need for a more comprehensive way to manage memory for conversational systems that are meant to be used in long-term settings. In our experiments on multi-session chat and document analysis, we further divide main context into three components: **system instructions**, which hold the base LLM instructions (e.g., information describing MemGPT functions and control flow to the LLM), **conversational context**, which holds a first-in-first-out (FIFO) queue of recent event history (e.g., messages between the agent and user), and **working context**, which serves as a working memory scratchpad for the agent. System instructions are read-only and pinned to main context (they do not change during the lifetime of the MemGPT agent), conversational context is read-only with a special eviction policy (if the queue reaches a certain size, a portion of the front is truncated or compressed via recursive summarization), and working context is writeable by the LLM processor via function calls.
Combined, the three parts of main context cannot exceed the underlying LLM processor's maximum context size, and in practice we limit the size of conversational context and working context to a fixed constant determined by the processor's context window and the length of the system instructions. \begin{table} \begin{tabular}{l c c c} **Model / API designation** & **Availability** & **Max Tokens** & **Max conversation length*** \\ \hline llama-1 model family & Open source & 2k tokens & 20 total messages \\ llama-2 model family & Open source & 4k tokens & 60 total messages \\ gpt-3.5-turbo & API & 4k tokens & 60 total messages \\ gpt-4 & API & 8k tokens & 140 total messages \\ gpt-3.5-turbo-16k & API & 16k tokens & 300 total messages \\ gpt-4-32k & Limited API & 32k tokens & \(\sim\)600 total messages \\ claude-instant-1 & Limited API & 100k tokens & \(\sim\)2000 total messages \\ claude-2 & Limited API & 100k tokens & \(\sim\)2000 total messages \\ \end{tabular} \end{table} Table 1: Comparing context lengths of commonly used models / APIs (data collected 9/2023). *Assuming a preprompt of 1k tokens, and an average message size of \(\sim\)50 tokens (\(\sim\)250 characters). ### External context External context refers to out-of-context storage that lies outside the context window of the LLM processor, analogous to disk memory (i.e., disk storage) in OSes. Information in external context is not immediately visible to the LLM processor; however, it can be brought into main context through appropriate function calls. In practice, the underlying storage in external context can take various forms which can be configured for specific tasks: for example, for conversational agents it may be desirable to store full chat logs between the user and agent (that MemGPT can access at a later date), and for document analysis large document collections can be stored inside external context that MemGPT can bring into restricted main context via paginated function calls to disk. In our experiments, using MemGPT for multi-session chat and document analysis, we use databases to store text documents and embeddings/vectors, and provide several ways for the LLM processor to query external context: timestamp-based search, text-based search, and embedding-based search. We make a distinction between two types of external context: **recall storage**, which stores the entire history of events processed by the LLM processor (in essence the full uncompressed queue from active memory), and **archival storage**, which serves as a general read-write datastore that the agent can utilize as overflow for the in-context read-write core memory. In the context of conversational agents, archival storage allows MemGPT to store facts, experiences, preferences, etc. about the agent or user beyond the strict token limit of main context, and search over recall storage allows MemGPT to find past interactions related to a particular query or within a specific time period. In the context of document analysis, archival storage can be used to search over (and add to) an expansive document database. ### Self-directed editing and retrieval MemGPT orchestrates data movement between main context and external context via function calls that are generated by the LLM processor. Memory edits and retrieval are entirely self-directed: MemGPT autonomously updates and searches through its own memory based on the current context.
For instance, it can decide when to move items between contexts (Figure 2) and modify its main context to better reflect its evolving understanding of its current objectives and responsibilities (Figure 4). We implement self-directed editing and retrieval by providing explicit instructions within the preprompt that guide the system on how to interact with its memory systems. These instructions comprise two main components: (1) a detailed description of the memory hierarchy and the respective utilities of its tiers, and (2) a function schema (complete with natural language descriptions) that the system can call to access or modify its memory.

Figure 2: An example conversation snippet where MemGPT writes details from conversation to working context without a memory warning from the system.

During each inference cycle, the LLM processor takes main context (concatenated into a single string) as input, and generates an output string. This output string is parsed by MemGPT to ensure correctness, and if the parser validates the function arguments, the function is executed. The results, including any runtime errors that occur (e.g., trying to add to main context when it is already at maximum capacity), are then fed back to the processor by MemGPT. This feedback loop enables the system to learn from its actions and adjust its behavior accordingly. Awareness of context limits is a key aspect in making the self-editing mechanism work effectively; to this end, MemGPT prompts the processor with warnings regarding token limitations to guide its memory management decisions (Figure 3). Additionally, our memory retrieval mechanisms are designed to be cognizant of these token constraints and implement pagination to prevent retrieval calls from overflowing the context window. ### Control flow and function chaining In MemGPT, _events_ trigger LLM inference: events are generalized inputs to MemGPT and can consist of user messages (in chat applications), system messages (e.g., main context capacity warnings), user interactions (e.g., an alert that a user just logged in, or an alert that they finished uploading a document), and timed events that are run on a regular schedule (allowing MemGPT to run "un-prompted" without user intervention). MemGPT processes events with a parser to convert them into plain text messages that can be appended to main context and eventually be fed as input into the LLM processor. Many practical tasks require calling multiple functions in sequence, for example, navigating through multiple pages of results from a single query or collating data from different documents in main context from separate queries. Function chaining allows MemGPT to execute multiple function calls sequentially before returning control to the user. In MemGPT, functions can be called with a special flag that requests control be immediately returned to the processor after the requested function completes execution. If this flag is present, MemGPT will add the function output to main context and continue processor execution (as opposed to pausing). If this flag is not present (a _yield_), MemGPT will not run the LLM processor until the next external event trigger (e.g., a user message or scheduled interrupt).

Figure 3: An example conversation snippet where MemGPT writes details from conversation to memory after it receives a system alert about memory pressure.

## 3 Experiments We assess MemGPT in two long-context domains: conversational agents and document analysis.
For conversational agents, we expand the existing Multi-Session Chat dataset (Xu et al., 2021) and introduce two new dialogue tasks that evaluate an agent's ability to retain knowledge across long conversations. For document analysis, we benchmark MemGPT on existing tasks from Liu et al. (2023a) for question answering and key-value retrieval over lengthy documents. We also propose a new nested key-value retrieval task, which tests the ability of an agent to collate information from multiple data sources (multi-hop retrieval). We publicly release our augmented MSC dataset, nested KV retrieval dataset, and a dataset of embeddings for 20M Wikipedia articles to facilitate future research. Our code for the full conversational and document analysis benchmarks is available at [https://memgpt.ai](https://memgpt.ai). ### MemGPT for conversational agents Conversational agents like virtual companions and personalized assistants aim to engage users in natural, long-term interactions, potentially spanning weeks, months, or even years. This creates challenges for models with fixed-length contexts, which can only reference a limited history of the conversation. An 'infinite context' agent should seamlessly handle continuous exchanges without boundary or reset. When conversing with a user, such an agent must satisfy two key criteria:

* The agent should maintain conversational coherence. New facts, preferences, and events mentioned should align with prior statements from both the user and agent.
* The agent should draw on long-term knowledge about the user to personalize responses. Referencing prior conversations makes dialogue more natural and engaging.

We therefore assess our proposed model, MemGPT, on these two criteria:

* Does MemGPT leverage its memory to improve conversation consistency? Can it remember relevant facts, preferences, and events from past interactions to maintain coherence?
* Does MemGPT produce more engaging dialogue by taking advantage of memory? Does it spontaneously incorporate long-range user information to personalize messages?

By evaluating on consistency and engagement, we can determine how well MemGPT handles the challenges of long-term conversational interaction compared to fixed-context baselines. Its ability to satisfy these criteria will demonstrate whether unbounded context provides meaningful benefits for conversational agents.

Figure 4: An example conversation snippet where MemGPT corrects information about the user by writing to main context (and replacing a section of text in working context).

#### 3.1.1 Dataset We evaluate MemGPT and our fixed-context baselines on the Multi-Session Chat (MSC) dataset introduced by Xu et al. (2021), which contains multi-session chat logs generated by human labelers, each of whom was asked to play a consistent persona for the duration of all sessions. Each multi-session chat in MSC has five total sessions, and each session consists of roughly a dozen messages. As part of our consistency experiments, we created a new session (session 6) that contains a single question-answer response pair between the same two personas. #### 3.1.2 Deep memory retrieval task (consistency) We introduce a new 'deep memory retrieval' (DMR) task based on the MSC dataset designed to test the consistency of a conversational agent.
In DMR, the conversational agent is asked a question by the user that explicitly refers back to a prior conversation and has a very narrow expected answer range (see Figure 5 for an example). We generated the DMR question-answer (QA) pairs using a separate LLM that was instructed to write a question from one user to another that could only be answered correctly using knowledge gained from the past sessions (see Appendix for further details). We evaluate the quality of the generated response against the 'gold response' using ROUGE-L scores (Lin, 2004) and an 'LLM judge', which is instructed to evaluate whether or not the generated response is consistent with the gold response (GPT-4 has been shown to have high agreement with human evaluators (Zheng et al., 2023)). In practice, we notice that the generated responses (from both MemGPT and the baselines) were generally more verbose than the gold responses; ROUGE-L (which measures the longest common subsequence between the generated and reference text) is robust to this semantic variation in correct responses since it evaluates the presence of words from the gold answer in the generated answer. We also report the precision and recall scores used in calculating the ROUGE-L (F1) score.

Figure 5: **Illustration of the deep memory retrieval task.** In the example shown, the user asks a question that can only be answered using information from a prior session (no longer in-context). Even though the answer is not immediately answerable using the in-context information, MemGPT can search through its recall storage containing prior conversations to retrieve the answer.

**MemGPT utilizes memory to maintain coherence:** Table 2 shows the performance of MemGPT vs. the fixed-memory baselines. We compare against three variations of fixed-context baselines: an agent that sees a recursive summary of the past five conversations (summary\({}_{1:5}\)), an agent that can see a recursive summary of the first four conversations (summary\({}_{1:4}\)) and the exact contents of the prior conversation (dialogue\({}_{5}\) is placed in active memory), as well as an oracle agent that can see the gold persona (for both chat participants) as well as a recursive summary. We experiment with these context variations using both GPT-3.5 and GPT-4. All of the gold persona baselines perform near-perfectly: this is because the human-labelled gold personas in the MSC dataset are detailed and intended to be a concise summary of all stated persona information in all prior chats - in other words, a well-written gold persona should contain the answer to the DMR question. Among the non-oracle fixed-context baselines, GPT-4 significantly outperforms GPT-3.5, and with both models the variations that had access to the full prior conversation in active memory perform slightly better. The drop in performance from summary\({}_{1:4}\) + dialogue\({}_{5}\) to summary\({}_{1:5}\) is expected, since the latter should contain strictly less information than the former (assuming a perfect summarizer with restricted length summarizations). MemGPT significantly outperforms both GPT-4 and GPT-3.5 in both LLM judge accuracy and ROUGE-L scores: instead of relying on recursive summaries to extend context, MemGPT is able to query past conversation history in its Recall Memory to answer the DMR questions.

\begin{table} \begin{tabular}{l l c c c c} \hline \hline & & & \multicolumn{3}{c}{**ROUGE-L**} \\ \cline{3-6} **Model** & **Available information** & **Accuracy**\(\dagger\) & **F1**\(\dagger\) & **P**\(\dagger\) & **R**\(\dagger\) \\ \hline gpt-3.5\({}^{\ddagger}\) & **persona\({}_{5}\)** + summary\({}_{1:5}\) & 70.0\% & 0.190 & 0.134 & 0.674 \\ gpt-4\({}^{\ddagger}\) & **persona\({}_{5}\)** + summary\({}_{1:5}\) & 79.8\% & 0.225 & 0.151 & 0.716 \\ **MemGPT\({}^{\ddagger}\)** & **persona\({}_{5}\)** (Core) + dialogue\({}_{1:5}\) (Recall) & 84.0\% & 0.171 & 0.105 & 0.746 \\ \hline gpt-3.5 & summary\({}_{1:5}\) & 56.2\% & 0.157 & **0.114** & 0.585 \\ gpt-3.5 & summary\({}_{1:4}\) + dialogue\({}_{5}\) & 55.6\% & 0.120 & 0.080 & 0.602 \\ gpt-4 & summary\({}_{1:5}\) & 63.0\% & 0.159 & 0.101 & 0.607 \\ **gpt-4** & summary\({}_{1:4}\) + dialogue\({}_{5}\) & 79.2\% & 0.171 & 0.107 & 0.713 \\ **MemGPT** & dialogue\({}_{1:5}\) (Recall) & **82.4\%** & **0.173** & 0.106 & **0.743** \\ \hline \hline \end{tabular} \end{table} Table 2: **Deep memory retrieval (DMR) performance.** In this task, the agent is asked a specific question about a topic discussed in a prior conversation (sessions 1–5). The agent’s response is scored against the gold answer. Methods using the gold persona (oracle) are marked with \(\ddagger\): MemGPT significantly outperforms the (non-oracle) fixed-context baselines.
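For concreteness, ROUGE-L is derived from the longest common subsequence (LCS) between the generated and gold answers; a minimal sketch, with whitespace tokenization as a simplifying assumption:

```python
# Minimal ROUGE-L (F1, precision, recall) via a longest-common-
# subsequence dynamic program; whitespace tokenization is an assumption.
def rouge_l(generated: str, gold: str):
    g, r = generated.split(), gold.split()
    dp = [[0] * (len(r) + 1) for _ in range(len(g) + 1)]
    for i, gw in enumerate(g):
        for j, rw in enumerate(r):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if gw == rw
                                else max(dp[i][j + 1], dp[i + 1][j]))
    lcs = dp[len(g)][len(r)]
    if lcs == 0:
        return 0.0, 0.0, 0.0
    p, rec = lcs / len(g), lcs / len(r)
    return 2 * p * rec / (p + rec), p, rec  # F1, precision, recall
```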
#### 3.1.3 Conversation opener task (engagement)

In the 'conversation opener' task we evaluate an agent's ability to craft engaging messages to the user that draw from knowledge accumulated in prior conversations. To evaluate the 'engagingness' of a conversation opener using the MSC dataset, we compare the generated opener to the gold personas: an engaging conversation opener should draw from one (or several) of the data points contained in the persona, which in MSC effectively summarize the knowledge accumulated throughout all prior sessions (see Figure 6 for an example). We also compare to the human-generated gold opener, i.e., the first response in the following session. Because the quality of conversation openers is not necessarily constrained by context length (a recursive summary or even a few snippets from prior conversations is enough to craft an opener that uses prior knowledge), we use this task to ablate MemGPT's different components (rather than compare it to fixed-context baselines). We report the CSIM scores of MemGPT's openers in Table 3. We test several variations of MemGPT: MemGPT with both working context (storing persona information) and recall storage (storing conversation information) enabled, MemGPT with working context only, and MemGPT with recall storage only.

**MemGPT utilizes memory to increase engagement:** As seen in Table 3 and Figure 6, MemGPT is able to craft engaging openers that perform similarly to and occasionally exceed the hand-written human openers. We observe that MemGPT tends to craft openers that are both more verbose and cover more aspects of the persona information than the human baseline. Additionally, we can see that storing information in working context is key to generating engaging openers. Without working context, MemGPT's openers significantly degrade in quality; having the dialogue stored in recall storage does not affect the opener, since MemGPT will generally not attempt to search the conversation history before generating an opener.
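Our reading of the CSIM metrics is a cosine similarity between sentence embeddings of the generated opener and the reference texts (persona facts for CSIM-1/CSIM-3, the human gold opener for CSIM-H); a hedged sketch under that assumption:

```python
# Hedged sketch of CSIM-style scoring; the exact embedding model and
# aggregation used in the paper may differ from this reconstruction.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def csim_k(opener_vec: np.ndarray, persona_vecs: list, k: int) -> float:
    sims = sorted((cosine(opener_vec, p) for p in persona_vecs), reverse=True)
    return sum(sims[:k]) / k  # e.g. CSIM-1 with k=1, CSIM-3 with k=3
```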
\begin{table} \begin{tabular}{l c c c} \hline \hline **Method** & **CSIM-1 \(\Uparrow\)** & **CSIM-3 \(\Uparrow\)** & **CSIM-H \(\Uparrow\)** \\ \hline Gold opener (human baseline) & 0.823 & 0.787 & 1.000 \\ \hline **MemGPT** (working context + recall storage) & 0.817 & 0.787 & 0.760 \\ **MemGPT** (working context only) & **0.820** & **0.790** & **0.762** \\ **MemGPT** (recall storage only) & 0.749 & 0.738 & 0.737 \\ \hline \hline \end{tabular} \end{table} Table 3: CSIM scores for the conversation opener task.

_Gold persona (user)_ I am **studying the LSAT**. I want to be an attorney. I used to live in New Mexico in a small town. I grew up with horses. I love coffee. I love **tea**.... _Gold opener (human baseline)_ What date is your LSAT test scheduled? MemGPT (working context + recall storage) Hello there! Just brewing up a fresh pot of tea on my end. How’s your coffee? Ready to burn the midnight oil with some LSAT study? MemGPT (working context only) Hello! It’s great to have you back. It’s time for a fresh cup of coffee. As a tour guide at our local museum, I meet so many people who say a good brew helps keep them focused. How’s your LSAT study going? MemGPT (recall storage only) Hey there! It’s always a pleasure to see you online. Let’s dive into the world of shared ideas, shall we?

Figure 6: An engaging conversation opener should draw from the provided persona information. Without working context, MemGPT’s openers significantly degrade in quality. Having the dialogue stored in recall memory does not affect the opener, since MemGPT will generally not attempt to search the conversation history before generating an opener.

### MemGPT for document analysis

Document analysis also faces challenges due to the limited context windows of today's transformer models. For example, OpenAI's (closed) GPT models behind their popular ChatGPT consumer chatbot application have a limit of 32k input tokens, and the state-of-the-art open source Llama 2 models have a limit of only 4k tokens (see Table 1). Anthropic have released (closed) models handling up to 100k tokens, but many documents easily surpass that length; Stephen King's bestselling novel _The Shining_ contains around 150k words, which equates to roughly 200k tokens (the words-to-tokens ratio varies based on the specific tokenizer used), and legal or financial documents such as Annual Reports (SEC Form 10-K) can easily pass the million token mark. Moreover, many real document analysis tasks require drawing connections across multiple such lengthy documents. Anticipating these scenarios, it becomes difficult to envision blindly scaling up context as a solution to the fixed-context problem. Recent research (Liu et al., 2023a) also raises doubts about the utility of simply scaling contexts, since they find uneven attention distributions in large context models (the model is more capable of recalling information at the beginning or end of its context window, vs. tokens in the middle). To enable reasoning across documents, more flexible memory architectures such as those used in MemGPT are likely needed. #### 3.2.1 Multi-document question-answering (Doc-QA) To evaluate MemGPT's ability to analyze documents, we benchmark MemGPT against fixed-context baselines on the retriever-reader document QA task from Liu et al. (2023a). In this task, a question is selected from the NaturalQuestions-Open dataset, and a retriever selects relevant Wikipedia documents for the question. A reader model (the LLM) is then fed these documents as input, and is asked to use the provided documents to answer the question. Similar to Liu et al.
Similar to Liu et al. (2023a), we evaluate reader accuracy as the number of retrieved documents \(K\) increases. In our evaluation setup, both the fixed-context baselines and MemGPT use the same retriever, which selects the top-\(K\) documents using Faiss for efficient similarity search (Johnson et al., 2019) (which corresponds to approximate nearest-neighbor search) on OpenAI's text-embedding-ada-002 embeddings. In MemGPT, the entire document set is loaded into archival storage, and the retriever naturally emerges via the archival storage search functionality (which performs embedding-based similarity search). In the fixed-context baselines, the top-\(K\) documents are fetched using the retriever independently of the LLM inference, similar to the original retriever-reader setup. We use a dump of Wikipedia from late 2018, following past work on NaturalQuestions-Open (Izacard and Grave, 2020; Izacard et al., 2021). We randomly sample a subset of 50 questions for each point in the graph.

The fixed-context baselines' performance is capped roughly at the performance of the retriever, as they can only use the information that is presented in their context window (e.g., if the embedding-search retriever fails to surface the gold article using the provided question, the fixed-context baselines are guaranteed to never see the gold article). By contrast, MemGPT is effectively able to make multiple calls to the retriever by querying archival storage, allowing it to scale to larger effective context lengths. MemGPT actively retrieves documents from its archival storage (and can iteratively page through results), so the total number of documents available to MemGPT is no longer limited by the number of documents that fit within the LLM processor's context window.

The document QA task is challenging for all methods due to the limitations of embedding-based similarity search. We observe that the gold document for a chosen question (as annotated by NaturalQuestions-Open) often appears outside of the first dozen retrieved results, if not even further. The retriever performance translates directly to the fixed-context baseline results: GPT-3.5 and GPT-4's accuracy is relatively low with few retrieved documents, and continues to improve as additional documents are added to the context window. While MemGPT is theoretically not limited by sub-optimal retriever performance (even if the embedding-based ranking is noisy, as long as the full retriever ranking contains the gold document, it can still be found with enough retriever calls via pagination), we observe that MemGPT will often stop paging through retriever results before exhausting the retriever database. For example, after sifting through a few pages of irrelevant results (missing the gold document), MemGPT will pause the pagination and ask the user to help narrow the query; in our evaluation, these questions are counted as failed answers, since there is no human-in-the-loop to answer MemGPT.

Figure 7: **Document QA and nested KV retrieval task performance.** In both tasks, MemGPT's performance is unaffected by increased context length. Methods such as truncation can extend the effective context lengths (past the dotted red line) of fixed-length models such as GPT-4, but such compression methods lead to performance degradation as the necessary compression grows (compression is particularly bad for key-value retrieval tasks, since it corrupts the key-value pairs).
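As a concrete illustration of this retrieval setup, the sketch below builds a Faiss index over document embeddings and fetches the top-\(K\) passages for a query. A flat exact index is shown for simplicity (Faiss also offers approximate index types); the helper names are ours, not the paper's.

```python
# Minimal Faiss retriever sketch: normalized embeddings + inner product
# gives cosine-similarity search over the document set.
import faiss
import numpy as np

def build_index(doc_vectors: np.ndarray) -> faiss.IndexFlatIP:
    vecs = np.ascontiguousarray(doc_vectors, dtype="float32")
    faiss.normalize_L2(vecs)                  # cosine similarity via inner product
    index = faiss.IndexFlatIP(vecs.shape[1])
    index.add(vecs)
    return index

def top_k(index: faiss.IndexFlatIP, query_vec: np.ndarray, k: int = 5):
    q = np.ascontiguousarray(query_vec.reshape(1, -1), dtype="float32")
    faiss.normalize_L2(q)
    scores, ids = index.search(q, k)          # exact nearest-neighbor search here
    return list(zip(ids[0].tolist(), scores[0].tolist()))
```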
There is also a tradeoff in retrieved-document capacity created by MemGPT's more complex operation: assuming MemGPT has the same token budget as a fixed-context baseline (using the same LLM), a non-trivial portion of MemGPT's token budget will be consumed by the system instructions required for MemGPT's OS components (e.g., function-call schemas for memory management), meaning that the total number of documents that can be held in-context at any given time is lower for MemGPT than for the baselines. This tradeoff is observed in Figure 7: MemGPT has a lower average accuracy than GPT-4 (though higher than GPT-3.5), but can trivially scale to larger numbers of documents. To evaluate the fixed-context baselines against MemGPT past their default context lengths, we truncate the document segments returned by the retriever to fit the same number of documents into the available context. As expected, document truncation reduces accuracy: as documents shrink, the chance of the relevant snippet (in the gold document) being omitted grows. We anticipate that MemGPT's performance on document QA can be further improved with additional task instructions that reduce the chance of MemGPT returning control to the user (e.g., pausing to ask questions) and increase the chance of MemGPT reading all documents ranked by the retriever.

#### 3.2.2 Nested key-value retrieval (KV)

We introduce a new task based on the synthetic key-value retrieval task proposed in prior work (Liu et al., 2023). The goal of this task is to demonstrate how MemGPT can collate information from multiple data sources. In the original KV task, the authors generated a synthetic dataset of key-value pairs, where each key and value is a 128-bit UUID (universally unique identifier). The agent is then given a key, and asked to return the associated value for the key. We create a version of the KV task, _nested KV retrieval_, where values themselves may be keys, thus requiring the agent to perform a multi-hop lookup. In our setup, we fix the total number of UUID pairs to 140, corresponding to roughly 8k tokens (the context length of our GPT-4 baseline). We vary the total number of nesting levels from 0 (the initial key-value pair's value is not a key) to 4 (i.e., 4 total KV lookups are required to find the final value), and sample 30 different ordering configurations, including both the initial key position and the nesting key positions.

While GPT-3.5 and GPT-4 have good performance on the original KV task, both struggle in the nested KV task. GPT-3.5 is unable to complete the nested variant of the task and has an immediate dropoff in performance, hitting 0 percent accuracy at 1 nesting level (we observe that its primary failure mode is to simply return the original value). GPT-4 is better than GPT-3.5, but also suffers from a similar dropoff, and hits 0 percent accuracy by 4 nesting levels. In GPT-4's case, we observe that it often fails to recurse beyond a particular nesting level and simply returns the nested value from a previous nesting level. MemGPT, on the other hand, is unaffected by the number of nesting levels and is able to perform the nested lookup by repeatedly accessing the key-value pairs stored in main memory via function queries.

Figure 8: **Illustration of the nested key-value task.** In the example shown, MemGPT repeatedly queries archival memory to search for the latest key. Once archival memory reveals that the current key's value is not also a key, MemGPT returns a message to the user with the final value.
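A minimal sketch of how such a nested KV dataset could be constructed and resolved is shown below; the construction details and the hop-counting convention are our reading of the setup described above, not the authors' exact generator.

```python
# Build UUID key-value pairs, then rewire a chain of values so that some
# values are themselves keys, forcing a multi-hop lookup.
import random
import uuid

def make_nested_kv(num_pairs: int = 140, nesting: int = 4):
    keys = [str(uuid.uuid4()) for _ in range(num_pairs)]
    values = [str(uuid.uuid4()) for _ in range(num_pairs)]
    store = dict(zip(keys, values))
    # Chain the lookups: each hop's value is the next key in the chain.
    chain = random.sample(keys, nesting + 1)
    for a, b in zip(chain[:-1], chain[1:]):
        store[a] = b
    return store, chain[0]   # initial key; its chain ends at a non-key value

def resolve(store: dict, key: str) -> str:
    while key in store:      # keep hopping while the current value is a key
        key = store[key]
    return key               # final value (reached after nesting+1 lookups)
```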
MemGPT's performance on the nested KV task demonstrates its ability to combine multiple queries to perform multi-hop lookups.

## 4 Related work

Recent works have looked at extending the context length that can be processed in each LLM invocation, improving search and retrieval for retrieval-augmented generation (RAG), and using language models to power interactive agents.

### Long-context LLMs

The management of long contexts in LLMs is crucial for coherent and engaging dialogues in conversational agents, and for corroborating and stitching together facts from diverse sources in LLMs used for question answering (QA). One approach to tackling the limitations of fixed-length context is recursive summarization (Wu et al., 2021). In recursive summarization, the LLM generates concise representations over a sliding window to fit them within the specified token length. This summarization process is inherently lossy and can lead to the unintentional loss of relevant details or subtle nuances. Given the limitations of context length on many LLM-based applications, there has been growing interest in improving the ability of LLMs to attend to longer sequences, such as Press et al. (2021); Guo et al. (2021); Dong et al. (2023); Beltagy et al. (2020). MemGPT exploits and benefits from improvements to context length, as it can store more information in its main memory (as an analogy: as GPU caches get bigger, processors compute faster because they benefit from higher cache hit rates).

### Search and retrieval

Search and retrieval mechanisms, especially for the retrieval-augmented generation (RAG) paradigm, have been incorporated into conversational agents for tasks ranging from document question answering and customer support to more general chatbots for entertainment. These mechanisms often utilize external databases or internal conversation logs to provide contextually relevant responses. Lin et al. (2023), for example, demonstrate how to train the retriever and LLM jointly during instruction-tuning to improve document recall. Other works have looked at optimizing the retriever or the LLM independently (Ram et al., 2023; Borgeaud et al., 2022; Karpukhin et al., 2020; Lewis et al., 2020; Guu et al., 2020). Trivedi et al. (2022) interleave retrieval with chain-of-thought reasoning to improve multi-step question answering. In this work, we are agnostic to the retrieval mechanism used; various retrieval mechanisms can be easily swapped or even combined as part of disk memory in MemGPT.

### LLMs as agents

Recent work has explored augmenting LLMs with additional capabilities to act as agents in interactive environments. Park et al. (2023) propose adding memory to LLMs and using the LLM as a planner, and observe emergent social behaviors in a multi-agent sandbox environment (inspired by _The Sims_ video game) where agents can perform basic activities such as doing chores/hobbies, going to work, and conversing with other agents. Nakano et al. (2021) train models to search the web before answering questions, and use pagination concepts similar to MemGPT's to control the underlying context size in their web-browsing environment. Yao et al. (2022) showed that interleaving chain-of-thought reasoning (Wei et al., 2022) can further improve the planning ability of interactive LLM-based agents; similarly, in MemGPT the LLM is able to 'plan out loud' when executing functions (see Figures 5 and 8 for examples).
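A minimal sketch of this 'plan out loud' agent pattern is shown below; `llm_step` and the `FUNCTIONS` registry are hypothetical stand-ins for an LLM call and a function table, not MemGPT's actual interface.

```python
# ReAct-style loop: the model emits a thought plus an optional function
# call, the result is fed back, and the loop continues until the model
# answers directly or the step budget runs out.
def agent_loop(task: str, max_steps: int = 10):
    history = [("user", task)]
    for _ in range(max_steps):
        thought, call = llm_step(history)   # hypothetical: returns (str, dict|None)
        history.append(("thought", thought))
        if call is None:                    # model chose to answer directly
            return thought
        result = FUNCTIONS[call["name"]](**call["arguments"])
        history.append(("function_result", result))
    return None                             # step budget exhausted
```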
Liu et al. (2023) introduced a suite of LLM-as-an-agent benchmarks to evaluate LLMs in interactive environments, including video games, thinking puzzles, and web shopping. In contrast, our work focuses on tackling the problem of equipping agents with long-term memory of user inputs.

## 5 Concluding remarks and future directions

In this paper, we introduced MemGPT, a novel LLM system inspired by operating systems to manage the limited context windows of large language models. By designing a memory hierarchy and control flow analogous to traditional OSes, MemGPT provides the illusion of larger context resources for LLMs. This OS-inspired approach was evaluated in two domains where existing LLM performance is constrained by finite context lengths: document analysis and conversational agents. For document analysis, MemGPT can process lengthy texts well beyond the context limits of current LLMs by effectively paging relevant context in and out of memory. For conversational agents, MemGPT enables maintaining long-term memory, consistency, and evolvability over extended dialogues. Overall, MemGPT demonstrates that operating system techniques like hierarchical memory management and interrupts can unlock the potential of LLMs even when constrained by fixed context lengths. This work opens numerous avenues for future exploration, including applying MemGPT to other domains with massive or unbounded contexts, integrating different memory-tier technologies like databases or caches, and further improving control flow and memory management policies. By bridging concepts from OS architecture into AI systems, MemGPT represents a promising new direction for maximizing the capabilities of LLMs within their fundamental limits.

### Limitations

Our reference implementation leverages OpenAI GPT-4 models that are fine-tuned specifically for function calling. While the inner workings of OpenAI's models are proprietary and not publicly disclosed, OpenAI's API documentation states that when using function fine-tuned models, the provided function schema is converted into a system message that the model is trained to interpret through a fine-tuning process. While GPT models that have been fine-tuned for function calling still require a parser to verify outputs as valid function syntax, we observed that GPT-4 function fine-tuned models rarely made syntactic or semantic errors on the MemGPT function set, whereas GPT-3.5 fine-tuned models consistently generated incorrect function calls or attempted to use functions incorrectly. Similarly, we also found that the most popular Llama 2 70B model variants (even those fine-tuned for function calling) would consistently generate incorrect function calls or even hallucinate functions outside the provided schema. At present, reasonable performance is only achieved using specialized GPT-4 models; however, we anticipate that future open-source models will eventually improve to the point of enabling MemGPT-style operation, either through improvements in fine-tuning (e.g., on larger or more specialized function-call datasets), prompt engineering, or improved quality of base models. Nonetheless, for the time being, reliance on the performance of proprietary closed-source models remains a significant limitation of this work.
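To illustrate the parser-side validation described above, here is a minimal sketch that checks a model's function-call output against a schema before execution; the schema and function name shown are illustrative examples, not MemGPT's actual function set.

```python
# Validate that an LLM's function-call output is well-formed JSON, names a
# known function, and passes arguments of the expected names and types.
import json

SCHEMAS = {"archival_memory_search": {"query": str, "page": int}}  # illustrative

def validate_call(raw: str):
    call = json.loads(raw)                   # raises on malformed JSON
    name, args = call["name"], call["arguments"]
    expected = SCHEMAS[name]                 # KeyError on hallucinated function
    if set(args) != set(expected):
        raise ValueError(f"wrong argument names for {name}")
    for arg, typ in expected.items():
        if not isinstance(args[arg], typ):
            raise TypeError(f"{arg} must be {typ.__name__}")
    return name, args
```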
2310.07653
Mini-DALLE3: Interactive Text to Image by Prompting Large Language Models
The revolution of artificial intelligence content generation has been rapidly accelerated by the booming text-to-image (T2I) diffusion models. Within just two years of development, state-of-the-art models can generate images of unprecedented quality, diversity, and creativity. However, a prevalent limitation persists in effectively communicating with these popular T2I models, such as Stable Diffusion, using natural language descriptions. This typically makes an engaging image hard to obtain without expertise in prompt engineering, with its complex word compositions, magic tags, and annotations. Inspired by the recently released DALLE3 - a T2I model built directly into ChatGPT that understands human language - we revisit the existing T2I systems endeavoring to align human intent and introduce a new task - interactive text to image (iT2I) - where people can interact with an LLM for interleaved high-quality image generation/editing/refinement and question answering, with stronger image and text correspondences, using natural language. In addressing the iT2I problem, we present a simple approach that augments LLMs for iT2I with prompting techniques and off-the-shelf T2I models. We evaluate our approach for iT2I in a variety of commonly used scenarios under different LLMs, e.g., ChatGPT, LLAMA, Baichuan, and InternLM. We demonstrate that our approach could be a convenient and low-cost way to introduce the iT2I ability for any existing LLMs and any text-to-image models without any training, while bringing little degradation in LLMs' inherent capabilities in, e.g., question answering and code generation. We hope this work can draw broader attention and provide inspiration for boosting user experience in human-machine interactions alongside the image quality of the next-generation T2I systems.
Zeqiang Lai, Xizhou Zhu, Jifeng Dai, Yu Qiao, Wenhai Wang
2023-10-11T16:53:40Z
http://arxiv.org/abs/2310.07653v2
# Mini DALL-E 3: Interactive Text to Image by Prompting Large Language Models

###### Abstract

The revolution of artificial intelligence content generation has been rapidly accelerated by the booming text-to-image (T2I) diffusion models. Within just two years of development, state-of-the-art models can generate images of unprecedented quality, diversity, and creativity. However, a prevalent limitation persists in effectively communicating with these popular T2I models, such as Stable Diffusion, using natural language descriptions. This typically makes an engaging image hard to obtain without expertise in prompt engineering, with its complex word compositions, magic tags, and annotations.

+ Footnote †: Preliminary version. Work in Progress.

Inspired by the recently released DALL-E 3 - a T2I model built directly into ChatGPT that understands human language - we revisit the existing T2I systems endeavoring to align human intent and introduce a new task - **interactive text to image** (iT2I) - where people can interact with an LLM for interleaved high-quality image generation/editing/refinement and question answering, with stronger image and text correspondences, using natural language. In addressing the iT2I problem, we present a simple approach that augments LLMs for iT2I with prompting techniques and off-the-shelf T2I models. We evaluate our approach for iT2I in a variety of commonly used scenarios under different LLMs, e.g., ChatGPT, LLAMA, Baichuan, and InternLM. We demonstrate that our approach could be a convenient and low-cost way to introduce the iT2I ability for any existing LLMs and any text-to-image models without any training, while bringing little degradation in LLMs' inherent capabilities in, _e.g._, question answering and code generation. We hope this work could draw broader attention and provide inspiration for boosting user experience in human-machine interactions alongside the image quality of the next-generation T2I systems.

## 1 Introduction

The evolution of artificial intelligence content generation has been significantly accelerated by the proliferation of text-to-image (T2I) diffusion models [18, 20, 41, 43]. Within just two years of rapid development since 2021, state-of-the-art T2I models [4, 13, 39, 40, 41, 43, 55] can generate images of unprecedented quality, diversity, and creativity. For the first time, "talk to paint" is no longer a daydream, and complex surrealistic arts can be generated via textual descriptions, with stronger expressive ability than previous unconditional and class-conditional image generation systems, as shown in Fig. 2.

However, it is unfortunate that most of the existing T2I models, such as Stable Diffusion [41], are still limited in understanding natural language. In other words, people have to learn to write complex text prompts to obtain the best results; such prompts fit the models being used but are not necessarily user-friendly or straightforward for humans, as illustrated by Fig. 6. As a result, an engaging image is typically hard to obtain without expertise in prompt engineering, with its proper word compositions and sometimes weird phrase organizations. Besides, there are dozens of different textual and numerical configurations in a diffusion-based T2I pipeline, such as the CFG scale, word weighting, negative prompts, and style keywords, which are also complicated for non-professional users.
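To give a sense of this configuration surface, the sketch below issues a basic call through the open-source `diffusers` library, which already exposes a prompt, a negative prompt, and a CFG scale; the model checkpoint and parameter values are illustrative, and this is not the pipeline used by the systems discussed here.

```python
# A basic diffusion T2I call: even this minimal invocation exposes several
# of the knobs mentioned above (prompt wording, negative prompt, CFG scale,
# step count), which non-experts typically find hard to tune.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="a cat wearing a spacesuit, highly detailed, studio lighting",
    negative_prompt="blurry, low quality, watermark",
    guidance_scale=7.5,            # the 'CFG scale' mentioned above
    num_inference_steps=30,
).images[0]
```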
To make it easier for users to utilize T2I models, Stable Diffusion (SD) WebUI [2] was first created to provide a user-friendly web UI for accessing the latest techniques without any coding. However, a typical workflow for generating a satisfactory image usually involves several stages, _e.g._, generation, variation, super-resolution, _etc._ This makes the tab-based interface of SD-WebUI somewhat awkward to use. Therefore, ComfyUI1 was designed around a graph/nodes interface that connects different stages via nodes and edges, which makes workflows clearer. Nevertheless, these software tools still could not solve the problem of the complicated configurations required for a charming image. This urged the development of Fooocus2 - a tool with a bunch of built-in optimizations and quality improvements. Fooocus frees users from complex parameter-tuning, but it still requires them to write a proper and precise text prompt for the desired images. However, this can be challenging in some cases, such as when the required scenes are artistic conceptions rather than specific objects, or when the users have no idea how to describe what they want to generate.

Footnote 1: [https://github.com/comfyanonymous/ComfyUI](https://github.com/comfyanonymous/ComfyUI)

Footnote 2: [https://github.com/lllyasviel/Fooocus](https://github.com/lllyasviel/Fooocus)

Generally, it might be difficult for users to come up with the right prompts and configurations at once, but it is much easier to tell what they want or do not want via natural language if the first version is unsatisfactory, _e.g._, "Don't be a sticker" and "Where is the dog?", as shown in Fig. 1. Moreover, it would be more straightforward to perform a multi-turn conversation with T2I models to iterate the images over and over again, mimicking the communication process between human designers and their customers. These analyses reveal a promising direction for building the next generation of T2I systems with a new human-machine interface using natural language - a system that is able to infer users' intentions and automatically generate the proper text prompts by leveraging the reasoning abilities of large language models (LLMs). This is not only because natural language is the easiest interface that everyone can master, but also because it frees users from brainstorming sophisticated textual descriptions and requires only simple instructions instead (see Fig. 6 for more illustrations).

Inspired by the recently released demo of DALL-E 3 [35] - a powerful T2I model built directly into ChatGPT that understands human language - we revisit existing techniques aimed at aligning human intent in generating images and introduce a new task called **interactive text to image** (iT2I). This task is characterized by several aspects, including 1) _Multi-Turn_: users are allowed to chat with the system (typically powered by LLMs) to progressively specify requirements, shortcomings, and suggestions for the expected/generated images; 2) _Consistency_: the ability to keep identity for consistent multi-turn image editing, serial character creation, _etc._; 3) _Composability_: the ability to be composed with or built into existing chat assistants for interleaved image generation and (visual) question answering for a seamless user experience.

Figure 2: The evolution of image generation systems.

All these properties make iT2I systems powerful tools for a wide range of applications, from content generation and design to interactive storytelling and more.
As an initial solution to address this problem, we propose a simple yet effective approach that enhances language models for iT2I using prompting techniques and pretrained text-to-image models. Specifically, we prompt the LLM, instructing it to generate an image via an intermediate textual description enclosed by special tags. After detecting the special tags, the description is parsed and transformed through a prompt refinement module. Then, a pre-trained T2I model is employed to generate the image. We evaluate our approach across various common use cases and different language models such as ChatGPT [7, 36], LLAMA [48], Baichuan [56] and InternLM [46]. Our results demonstrate that our approach can easily enable iT2I capabilities in any existing language model and text-to-image model without the need for additional training. Furthermore, it has minimal impact on the language models' inherent abilities in question answering and code generation. We hope this work could draw broader attention and provide inspiration for boosting user experience in human-machine interactions alongside the image quality of the next-generation T2I models.

## 2 Related Works

**Text-to-Image Generation.** Text-to-image (T2I) generation is a widely explored research area at the intersection of computer vision and natural language processing. Notable approaches include generative models like Variational Autoencoders (VAE) [22, 47], Generative Adversarial Networks (GAN) [17, 21], and autoregressive models [12], which enable image synthesis guided by textual descriptions. Recent multimodal models like CLIP [38] and DALL-E [39] have further improved alignment between text and generated images, while the birth and development of diffusion models [4, 13, 40, 41, 43, 55] have pushed the boundaries of text-image interactions.

**Image Generation Interface.** There are a variety of different approaches for image generation and editing, each with its own merits and drawbacks. The most straightforward ones are text-based approaches, where people write text prompts for either image generation [40, 41] or image editing [6, 61]. Besides, image-based approaches are also popular. In this case, people either provide a reference image asking the T2I models to generate image variations [40, 59], provide edge/depth maps to control the image layout [27, 34, 64], perform image translation with a style image [1, 45], or generate images of a given subject [25, 57]. To facilitate precise control, point-based approaches [31, 49] are widely adopted, utilizing state-of-the-art localization methods [23, 30]. Recently, drag-based approaches [11, 28, 29, 33, 37, 44, 62] have also been proposed for a more interactive experience. As for UX design, there are Rich-T2I [15] and DialogPaint [53], which share a similar spirit to ours. In the literature on integrating T2I into LLMs, there are NExT-GPT [54], GILL [24], DreamLLM [9], and SEED [16]. Although these methods also provide the capability for interleaved text-image generation, most are not specifically designed for iT2I and are limited in image quality and multi-turn correspondence.

**Prompting LLMs.** The in-context-learning capability [7] is one of the strongest advantages of LLMs. It enables users to freely customize LLMs for a particular task or enhance the capabilities of LLMs by simple prompting. For example, chain-of-thought [52] is the first prompting technique that enhances LLMs by asking them to generate a series of intermediate reasoning steps.
After that, there are also a number of improved prompting techniques that leverage heuristics such as majority voting [51], backtracking [58], and graphs of thoughts [5]. In this work, we also provide an approach to augment LLMs with iT2I ability via prompting, as it can be rapidly applied to any existing LLMs without any training.

Figure 3: Illustrations of different human-machine interfaces for T2I systems.

## 3 Interactive Text to Image

Interactive Text to Image (iT2I) aims to provide a user-friendly approach to generate images that meet user requirements in an interactive manner. Users can instantiate a multi-turn dialogue between humans and AI agents, where they can communicate requirements, shortcomings, and suggestions for the generated images or the expected ones with natural language.

### Problem Definition

Precisely, the iT2I problem can be defined as the task of generating images from textual descriptions in a way that the generated images closely align with the provided text, ensuring that the generated visual content accurately represents the textual information. There are some notable properties of iT2I systems:

**Multi-Turn** refers to the ability of the system to engage in a dynamic and iterative dialogue with the user. Unlike traditional text-to-image systems that may generate a single image based on a static textual input, multi-turn iT2I systems can accept multiple rounds of textual input, enabling users to refine and specify their visual requirements through an ongoing conversation. This property enhances the user experience and allows for more fine-grained control over the generated images.

**Consistency** means that these systems can automatically determine whether they should take into account not only the current textual input but also the previous visual context. It involves persisting the visual identity of images across different rounds of generation. This capability enables iT2I systems to perform consistent multi-turn image editing/refinement, produce personalized and contextually relevant objects/characters, _etc._

**Composability** relates to the ability to combine or integrate image generation with other tasks. This means that the ability of image generation should be modular and compatible with the inherent abilities of LLMs, allowing users to seamlessly incorporate them to perform interleaved conversations querying both text and visual content.

### Types of Instruction

As shown in Fig. 4, there are various instructions that could be found in an iT2I system, such as generation, editing, selecting, and refinement. Different instructions can have varying levels of complexity when it comes to interpretation. Some instructions can be effectively addressed by leveraging the capabilities of an LLM, such as selecting, which primarily involves textual decision-making. However, certain instructions may necessitate a deeper synergy between the LLM and the T2I models.

**Generation** refers to the process of generating entirely new images based on a given textual description. In this context, the iT2I system creates images or illustrations from scratch, attempting to capture the essence and details of the provided textual input. It essentially transforms queries into neural representations or prompts for T2I models. **Referring generation** is another variant of generation, where the system generates images that refer to or are inspired by existing objects, scenes, or concepts mentioned in the textual input and appearing in the context.

Figure 4: Illustration of 6 types of interactions in the interactive text-to-image workflow.
**Selecting** is a relatively straightforward instruction that involves choosing or picking from a set of pre-existing or previously generated images based on the textual input. **Editing** performs the task of modifying or refining existing images in response to textual instructions. This may involve altering specific attributes of an image, enhancing or diminishing certain features, or adapting the image to match the requirements outlined in the instruction. **Refinement** means further enhancing or optimizing an existing image to better align with the textual description. While editing involves making specific modifications, refinement often involves fine-tuning the visual output to achieve a higher level of detail, realism, or accuracy in accordance with the provided textual guidance. **Question Answering** is the inherent ability of LLMs. An iT2I system should preserve this ability as much as possible, as it is crucial for providing a coherent experience interleaving images and text for users.

### Discussion

In the literature on image editing and multi-modal LLMs, there are a number of works that are closely related to iT2I. Most of these related works can provide interactive interfaces. For example, InstructPix2Pix [6] and its follow-up works [63, 65] can be repeatedly applied to a single image to achieve multi-turn image editing. However, these interactive multi-turn abilities only apply to image-editing instructions. There are also multi-modal LLMs [9, 16, 24, 54] that can generate responses with interleaved text and images, but most of them focus more on (visual) question answering with multi-modal responses rather than interactive image generation. The key vision of iT2I is to build a chat-based system that can respond to all image generation/editing instructions in a multi-turn, consistent, and composable manner. This is the major difference between iT2I and all previous works/tasks.

## 4 Mini-DALLE3

In this section, we depict a blueprint of an iT2I system, which we refer to as Mini-DALLE3. The overall architecture of Mini-DALLE3 is illustrated in Fig. 5, and it comprises several key components: an LLM, a router, an adapter, and T2I models. The LLM can be an existing text-only LLM, such as ChatGPT [36] and LLaMA [48], or a multi-modal LLM [50]. It is responsible for analyzing user intentions and producing the proper outputs in text or neural representations. The router automatically dispatches the parsed image representations (if any exist in the LLM output) to the image generation module. The adapter transforms the image representations to better fit the back-end T2I models. Depending on the type of image representation, the adapter can be a neural network, if the image representations are neural embeddings, or a prompt refinement module built with handcrafted rules or an LLM. Next, we illustrate a simple yet effective instantiation of the Mini-DALLE3 architecture by prompting large language models.

### Multi-Turn Interaction by Prompting LLM

Multi-turn interaction lies at the heart of interactive text-to-image. It requires integrating textual/visual context and understanding instructional rather than descriptive messages. To address it, we propose to leverage the strong context-understanding ability of LLMs by prompting them to pretend to generate images via textual descriptions.
This intermediate textual description not only provides stronger flexibility to augment the system's capabilities with plug-and-play modules such as prompt variation/refinement, but also enables us to utilize numerous pre-trained LLMs and T2I models without heavy finetuning.

**Image Generation as Function Call.** Specifically, we utilize the few-shot prompt shown in Fig. 6 to transform the problem of multi-turn image generation into a problem of multi-turn textual description generation. Our prompt entails several key steps. Initially, we define the LLM's role and explicitly convey to it that it possesses the ability to generate images. Subsequently, we request the LLM to produce images by generating descriptive text enclosed within \(\langle\text{image}\rangle\) tags. If the generated images exhibit a high degree of correlation with previous ones, the LLM is instructed to generate an "edit" tag rather than an "image" tag. Finally, we provide a small number of few-shot examples to further guide the LLM's responses. Leveraging the robust in-context learning capabilities inherent in advanced LLMs, we observe that this approach yields favorable outcomes. The LLM successfully generates images accompanied by coherent textual responses, as illustrated in Fig. 1. Importantly, these capabilities can be harnessed without the need for specialized training and can be swiftly integrated into existing LLMs.

Figure 5: **Pipeline Overview.** Mini-DALLE3 consists of two stages, with 1) a router that analyzes the response from the prompted/finetuned LLM and dispatches the demand for image generation if needed, and 2) an adapter that transforms the image embedding or descriptions for subsequent T2I models.

**Prompt Refinement & Variations.** Although we can generate textual descriptions that integrate information from the context by prompting LLMs, these descriptions might not be sufficient to generate high-quality images. Therefore, we propose to leverage another round of prompt refinement to transform the vanilla descriptions to better fit subsequent T2I models. It is worth noting that prompt refinement can also apply to embeddings if the intermediate representation is an embedding. In this instantiation, we perform the text transformation by prompting the LLM again with a few-shot prompt. Furthermore, we can perform prompt variation by repeatedly applying different prompt refinements, which is useful for responding to requests to generate a list of images.
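A minimal sketch of this two-stage flow (tag detection followed by a refinement pass) is given below; `llm` and `t2i_generate` are hypothetical hooks standing in for an LLM completion call and a T2I backend, and the tag grammar is simplified relative to the actual prompt.

```python
# Detect <image>/<edit> spans in the LLM response, refine each enclosed
# description with a second LLM pass, and dispatch it to a T2I backend.
import re

TAG = re.compile(r"<(image|edit)>(.*?)</\1>", re.DOTALL)
REFINE = ("Rewrite this image description as a detailed text-to-image "
          "prompt with style, lighting, and composition details:\n{desc}")

def route_response(llm_text: str):
    parts, last = [], 0
    for m in TAG.finditer(llm_text):
        parts.append(("text", llm_text[last:m.start()]))
        refined = llm(REFINE.format(desc=m.group(2).strip()))  # refinement pass
        parts.append((m.group(1), t2i_generate(refined)))      # "image" or "edit"
        last = m.end()
    parts.append(("text", llm_text[last:]))
    return parts   # interleaved text segments and generated images
```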
### Hierarchical Content Consistency Control

Content consistency is another important aspect of an iT2I system. Although similar topics (subject-driven T2I, example-driven T2I, personalization, concept learning) are widely explored in the context of conventional T2I [26, 42, 57], only a few works explore multi-turn scenarios, and few explore integrating these abilities into a single unified system. Our decomposition makes it possible to utilize existing T2I models that were not designed for multi-turn scenarios. For example, the edited description for Prompt-to-Prompt [32] can be automatically generated through the LLM in an interactive manner. Specifically, we leverage off-the-shelf T2I models that take previous images as additional input to ensure consistent multi-turn generation. To better ensure image quality, we introduce a hierarchical control strategy that utilizes different models for different levels of content change. For small content changes that can be described in a few words, such as changing styles, word weighting, and simple object manipulation, we adopt the models of Prompt-to-Prompt [32] and MasaCtrl [8]. We utilize IP-Adapter [60] to perform large content changes, as these models are more flexible with respect to the input textual prompts.

### Composability

As we have not modified the original LLM, our system natively supports composing question answering and image generation in an interleaved manner.

## 5 Evaluation

**Will prompting harm the inherent abilities of LLM?** We provide a preliminary evaluation of whether the iT2I prompt harms the inherent abilities of the LLM. As previously shown in Fig. 1, our prompting technique does not cause severe degradation of the LLM's abilities. We can still ask LLMs for either question answering or code generation as before. To further investigate the impact of the iT2I prompt, we perform an ablation study on five subtasks of MMLU [19], comparing the models with and without the iT2I prompt. The results are provided in Tab. 1; it can be observed that the iT2I prompt brings only minor degradations.

\begin{table}
\begin{tabular}{l c c}
\hline \hline
**Task** & GPT3.5-Turbo (Original) & GPT3.5-Turbo (Mini-DALLE3) \\
\hline
Abstract Algebra & 42.42 & 43.43 \\
High School Physics & 40.00 & 38.67 \\
Marketing & 88.41 & 86.70 \\
Philosophy & 77.41 & 70.65 \\
College Computer Science & 48.48 & 42.42 \\
Average & 59.34 & 56.37 \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Evaluation results of different models on the subtasks of MMLU, using the script from chain-of-thought-hub [14].

**Comparison of different LLM.** We evaluate our approach with different LLMs, including the commercial services OpenAI GPT3.5 [7], GPT4 [36], and Claude3, and the open-source LLAMA2-13B-Chat [48], Baichuan2-13B-Chat [56], ChatGLM2-6B-Chat [10], Qwen-14B-Chat [3], and InternLM-20B-Chat [46]. As shown in Fig. 7, all commercial LLMs successfully generate the images with appropriate corresponding (interleaved) text responses. This indicates that our prompting approach could be a simple yet effective way to rapidly augment existing LLMs with iT2I ability. Nevertheless, the results are less satisfactory for the open-source LLMs. Overall, Baichuan2 [56] generates the best results, while Qwen and InternLM tend to refuse to generate images even when they are prompted to do so. ChatGLM2 can generate an image, but the correspondence is incorrect.

Footnote 3: [https://claude.ai](https://claude.ai)

**iT2I Examples.** Here, we show a number of iT2I examples, which cover different use scenarios from single-turn/multi-turn image generation to interleaved text-image storytelling. The results are shown in Fig. 8 and Tab. 2.

Figure 6: The few-shot prompt for iT2I generation.

Figure 7: Qualitative comparison of interactive text-to-image generation by prompting different LLMs.

Figure 8: Example use cases of interactive text-to-image generation.

## 6 Conclusion

In conclusion, this paper introduces the concept of interactive text-to-image (iT2I) and presents an approach to augmenting existing large language models for this task. Our evaluation shows that this approach enables convenient iT2I capabilities without significant degradation of the models' inherent capabilities.
This work has the potential to enhance user experiences in human-machine interactions and elevate the image quality of next-generation T2I models, offering promising directions for future research and development.

\begin{table}
\begin{tabular}{p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}}
**Interactive Image Generation** & **Storytelling** & **Concept Prototyping** & **Interactive Logo Design** \\
\hline
A vibrant illustration of a traditional Spring Festival scene. The street is filled with colorful decorations, lanterns, and people celebrating. Lion dancers and dragon parades add to the festive atmosphere, while families gather to enjoy delicious food together. The scene is filled with joy and excitement, capturing the spirit of the Spring Festival. & A Retro Spaceship with a bulbous shape, fins, and a silver and red color scheme. It has a streamlined design and a nostalgic feel, reminiscent of classic sci-fi movies. The spaceship is adorned with retro-style decals and has large windows to provide a view of the cosmos. It exudes a sense of adventure and exploration as it cruises through the vastness of space. & A minimalist logo with the text "Mini DALLE 3" written in a clean and modern font. The letters are arranged in the same line, with each letter having equal spacing. The color palette consists of vibrant shades of green and orange, representing creativity and innovation. The logo is placed on a white background, creating a clean and sleek look. & A minimalist logo with the text "Mini DALLE 3" written in a clean and modern font. The letters are arranged in the same line, with each letter having equal spacing. The color palette consists of vibrant shades of blue and green, representing creativity and innovation. \\
\end{tabular}
\end{table}
Table 2: Example prompts produced in different iT2I use scenarios.
2305.12956
Prediction for the interferometric shape of the first black hole photon ring
Black hole images are theoretically predicted (under mild astrophysical assumptions) to display a stack of lensed "photon rings" that carry information about the underlying spacetime geometry. Despite vigorous efforts, no such ring has been observationally resolved thus far. However, planning is now actively under way for space missions targeting the first (and possibly the second) photon rings of the supermassive black holes M87* and Sgr A*. In this work, we study interferometric photon ring signatures in time-averaged images of Kerr black holes surrounded by different astrophysical profiles. We focus on the first, most easily accessible photon ring, which has a larger width-to-diameter ratio than subsequent rings and whose image consequently lacks a sharply defined diameter. Nonetheless, we show that it does admit a precise angle-dependent diameter in visibility space, for which the Kerr metric predicts a specific functional form that tracks the critical curve. We find that a measurement of this interferometric ring diameter is possible for most astrophysical profiles, paving the way for precision tests of strong-field general relativity via near-future observations of the first photon ring.
Alejandro Cárdenas-Avendaño, Alexandru Lupsasca
2023-05-22T12:04:58Z
http://arxiv.org/abs/2305.12956v2
# Prediction for the interferometric shape of the first black hole photon ring

###### Abstract

Black hole images are theoretically predicted--under mild astrophysical assumptions--to display a stack of lensed "photon rings" that carry information about the underlying spacetime geometry. Despite vigorous efforts, no such ring has been observationally resolved thus far. However, planning is now actively under way for space missions targeting the first (and possibly the second) photon rings of the supermassive black holes M87* and Sgr A*. In this work, we study interferometric photon ring signatures in time-averaged images of Kerr black holes surrounded by different astrophysical profiles. We focus on the first, most easily accessible photon ring, which has a larger width-to-diameter ratio than subsequent rings and whose image consequently lacks a sharply defined diameter. Nonetheless, we show that it does admit a precise angle-dependent diameter _in visibility space_, for which the Kerr metric predicts a specific functional form that tracks the critical curve. We find that a measurement of this interferometric ring diameter is possible for most astrophysical profiles, paving the way for precision tests of strong-field general relativity via near-future observations of the first photon ring.

## I Introduction

Theoretical work [1; 2; 3; 4] predicts that--under some mild assumptions--images of an astrophysical Kerr black hole generically display a stack of nested "photon rings," each of which is a strongly lensed image of the main emission superimposed on top of the direct image. These rings may be labeled by the number \(n\) of half-orbits executed around the black hole by their constitutive photons on their way from source to observer. The full set of \(n\geq 1\) rings is often collectively referred to as "_the_ photon ring": a striking feature that dominates simulated black hole images, and a signature stamp of strong gravity (Fig. 1).

Despite vigorous efforts [5; 6; 7; 8], Event Horizon Telescope (EHT) observations from Earth of the supermassive black holes M87* and Sgr A* [9; 10] have yet to experimentally resolve any photon ring [11; 12]. Space missions targeting their first (\(n=1\))--and possibly second (\(n=2\))--photon rings are now being planned [13; 14; 15]. While a theoretical prediction for the interferometric signature of the \(n\geq 2\) rings has already been derived [16; 17; 15; 18] and explored [19; 20; 21], a sharp prediction for the interferometric signature of the more readily accessible \(n=1\) ring is still lacking. This paper formulates such a prediction (Sec. III.6).

## II Photon ring images

The lensing behavior of the Kerr geometry confers two properties on the appearance of the photon ring. First, since each subring is a mirror image of its predecessor, the full photon ring must exhibit a self-similar substructure, which in the limit \(n\to\infty\) is completely characterized by three critical exponents \(\gamma\), \(\delta\), and \(\tau\) that respectively control the demagnification, rotation, and time delay of successive images [3]. The analytically known parameters \((\gamma,\delta,\tau)\) depend only on the mass and spin of the black hole--as well as the photon orbital radius [2; 3]--and may in principle be measured from observations of light echoes or their characteristic pattern of autocorrelations [22; 23].
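To make the demagnification concrete, the snippet below evaluates the relative scale of successive subrings using the Schwarzschild value \(\gamma=\pi\); for Kerr, \(\gamma\) depends on spin and the photon orbital radius, per the references above.

```python
# Relative scale of successive photon subrings, using the Schwarzschild
# critical exponent gamma = pi as an illustrative example.
import math

gamma = math.pi
for n in range(5):
    print(f"n = {n}: relative width factor ~ {math.exp(-n * gamma):.2e}")
# Each subring is ~ e^{-pi}, i.e. roughly 1/23, of the previous one.
```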
Since successive subrings are exponentially demagnified by \(\sim e^{-\gamma}\), the large-\(n\) rings quickly become so narrow in the image plane of a distant observer that they may--to a very good approximation--be regarded as infinitely thin, mathematical curves \(\mathcal{C}_{n}\). In fact, this is an excellent approximation for \(n\geq 2\), as the second (\(n=2\)) photon ring already appears extremely thin; typically, only the first (\(n=1\)) ring displays a noticeable thickness (Fig. 1). The second property is closely tied to the exponential subring demagnification: the photon rings must converge (exponentially fast in \(n\)) to a theoretical "critical curve" in the image plane of an observer, which corresponds to the image of the black hole's (asymptotically) bound photon orbits. First derived by Bardeen [24], this analytically known curve--call it \(\tilde{\mathcal{C}}\)--delineates the apparent cross-section of a black hole in the sky. It is fully determined by the Kerr geometry (together with the observer inclination \(\theta_{\rm o}\)). Thus, the critical curve is the "\(n\to\infty\) photon ring" \[\tilde{\mathcal{C}}=\mathcal{C}_{\infty}\equiv\lim_{n\to\infty}\mathcal{C}_{n}, \tag{1}\] and indeed, photons that appear exactly on \(\tilde{\mathcal{C}}\) lie on null rays that were unstably trapped (in the far past) within a region of spacetime just outside the event horizon, which is now known as the "photon shell" [2; 3]; see also [25; 26; 27]. The preceding discussion leads to a simple description of the large-\(n\) subring images: they appear as thin curves \(\mathcal{C}_{n}\) that closely track the critical curve \(\tilde{\mathcal{C}}=\mathcal{C}_{\infty}\), with the deviations exponentially suppressed in \(n\). As illustrated in the bottom-right panel of Fig. 1, this description is already valid for \(n=2\): the second photon ring looks like a bright, narrow curve that sits exactly atop \(\tilde{\mathcal{C}}\) (drawn as a dashed black line). On the other hand, the bottom-left panel of Fig. 1 shows why this description fails for \(n=1\): the first ring has a significant width, and its shape visibly deviates from that of the (dashed black) critical curve. Moreover, the appearance of the \(n=1\) photon ring--both its thickness and deviation from \(\tilde{\mathcal{C}}\)--can significantly vary with the choice of astrophysical source. This leads to the central question of this paper: can one produce a sharp theoretical prediction for the \(n=1\) ring shape? ## III Interferometric ring signatures The key to predicting the \(n=1\) ring shape is to work in Fourier space. Interferometers sample the radio visibility \[V(\mathbf{u})=\int I_{\mathrm{o}}(\mathbf{x}_{\mathrm{o}})e^{-2\pi i\mathbf{u}\cdot\mathbf{x}_{\mathrm{o}}}\,\mathrm{d}^{2}\mathbf{x}_{\mathrm{o}}, \tag{2}\] which is the Fourier transform of the sky image \(I_{\mathrm{o}}(\mathbf{x}_{\mathrm{o}})\). The dimensionless baseline \(\mathbf{u}\) sampled by two elements is the distance separating them in the plane perpendicular to the line of sight, in units of the observation wavelength. An image of an infinitely thin ring produces a visibility with a characteristic ringing pattern, whose periodicity at polar angle \(\varphi\) in the baseline plane \(\mathbf{u}=(u,\varphi)\) is set by the (precise, well-defined) diameter of the ring at the corresponding angle \(\phi=\varphi\) in the image plane \(\mathbf{x}_{\mathrm{o}}=(\rho,\phi)\).
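To make (2) concrete, the following minimal numpy sketch (our own illustration with made-up numbers, not part of any observing pipeline) builds a toy thin-ring image, Fourier-transforms it, and reads off the characteristic ringing period \(\Delta u\approx 1/d\) along one baseline cut.

```python
import numpy as np

# Toy thin-ring image on a square grid; all numbers are illustrative.
npix, fov = 2048, 100.0                      # pixels; field of view (units of M)
x = np.linspace(-fov / 2, fov / 2, npix)
X, Y = np.meshgrid(x, x)
rho = np.hypot(X, Y)

d, w = 10.0, 0.3                             # ring diameter and width (units of M)
image = np.exp(-(rho - d / 2) ** 2 / (2 * w ** 2))

# Discrete version of the visibility (2): a 2D Fourier transform.
V = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(image)))
u = np.fft.fftshift(np.fft.fftfreq(npix, d=fov / npix))  # baselines (cycles/M)

# Visibility amplitude along a phi = 0 cut; locate its local minima (nulls).
cut = np.abs(V[npix // 2, npix // 2:])
u_cut = u[npix // 2:]
is_null = (cut[1:-1] < cut[:-2]) & (cut[1:-1] < cut[2:])
nulls = u_cut[1:-1][is_null]
print("mean null spacing:", np.diff(nulls[:10]).mean(), "vs 1/d =", 1 / d)
```

The printed null spacing approaches \(1/d\), the ringing periodicity discussed in the text.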
On the other hand, if the ring has some thickness, then its image lacks a well-defined diameter, but nevertheless its corresponding visibility still rings with a characteristic periodicity, from which a sharp notion of angle-dependent "interferometric ring diameter" \(d_{\varphi}\) can thus be derived. The main idea of this paper is to _define_ the diameter \(d_{\varphi}^{(1)}\) of the first (\(n=1\)) photon ring from the periodicity of its ringing interferometric signature. In the remainder of this section, we will describe precisely how \(d_{\varphi}^{(1)}\) may be recovered from the visibility (2) that is directly probed by an interferometer, and formulate a guess for its functional form. In the rest of the paper, we will then study a set of astrophysical source models around a Kerr black hole and show that the angle-dependent diameter \(d_{\varphi}^{(1)}\) of their first photon ring follows this functional form to high accuracy. ### Interferometric signature of a zero-width ring To make sense of the preceding remarks, the first step is to consider perfectly thin rings, or more generally, images that consist of an infinitely narrow, bright curve \(\mathcal{C}\). If \(\mathcal{C}\) is closed and convex,1 then its shape can always be parameterized in the Cartesian image plane \(\mathbf{x}_{\mathrm{o}}=(\alpha,\beta)\) by the normal angle \(\varphi\) to the curve [17], Footnote 1: If \(\mathcal{C}\) is not closed and convex, then it does not admit a single normal-angle parameterization and must be covered by multiple segments \((\alpha_{i}(\varphi),\beta_{i}(\varphi))\)[17]; we will not consider such cases here. \[\mathcal{C}=\left\{(\alpha(\varphi),\beta(\varphi))\,|\,\varphi\in[0,2\pi)\right\}. \tag{3}\] In practice, given another parameterization \((\alpha(\sigma),\beta(\sigma))\) of \(\mathcal{C}\), this parameterization may be obtained by solving \[\tan\varphi(\sigma)=-\frac{\alpha^{\prime}(\sigma)}{\beta^{\prime}(\sigma)} \tag{4}\] for the normal angle \(\varphi(\sigma)\) along the curve, and then plugging the inverse \(\sigma(\varphi)\) into the original parameterization. Thereafter, one can compute the _projected position_ of \(\mathcal{C}\), \[f(\varphi)\equiv\alpha(\varphi)\cos\varphi+\beta(\varphi)\sin\varphi. \tag{5}\] This function completely encodes the shape of \(\mathcal{C}\), which may still be recovered via the inverse relations \[\alpha(\varphi) =f(\varphi)\cos\varphi-f^{\prime}(\varphi)\sin\varphi, \tag{6a}\] \[\beta(\varphi) =f(\varphi)\sin\varphi+f^{\prime}(\varphi)\cos\varphi. \tag{6b}\] From an interferometric perspective, however, it is most natural to describe \(\mathcal{C}\) via its projected position (5), which turns out to be most closely connected to the visibility (2) of \(\mathcal{C}\) that an interferometer would directly sample. To connect \(f(\varphi)\) to interferometric observables, we first decompose it into its parity-even and parity-odd parts, \[d_{\varphi} \equiv f(\varphi)+f(\varphi+\pi), \tag{7a}\] \[C_{\varphi} \equiv\frac{1}{2}\left[f(\varphi)-f(\varphi+\pi)\right], \tag{7b}\] which are the angle-dependent projected diameter and projected centroid displacement at angle \(\varphi\) in the image of \(\mathcal{C}\), respectively--see [17] for further discussion of their geometric interpretation. Here, we simply note that \(d_{\varphi}\) and \(C_{\varphi}\) carry all the information about the shape of \(\mathcal{C}\) that was stored in the projected position function, since \[f(\varphi)=\frac{d_{\varphi}}{2}+C_{\varphi}.
\tag{8}\] While it may seem that we have now doubled the degrees of freedom needed to describe \(\mathcal{C}\), that is not in fact the case because, as defined in (7), \(d_{\varphi}\) and \(C_{\varphi}\) need only be specified for angles \(\varphi\in[0,\pi)\), repeating periodically thereafter. Geometrically, this makes sense since the diameter and centroid are only defined for pairs of points \((\varphi,\varphi+\pi)\) around the curve.2 Footnote 2: We also note that the \(\pi\)-periodicity of \(d_{\varphi}\), \(C_{\varphi}\), and \(\alpha_{\varphi}^{\mathrm{L,R}}\) ensures that the Fourier transform (9) satisfies \(V(u,\varphi+\pi)=V^{*}(u,\varphi)\), as required by its definition (2) for a real image \(I_{\mathrm{o}}(\mathbf{x}_{\mathrm{o}})=I_{\mathrm{o}}^{*}(\mathbf{x}_{\mathrm{o}})\). We now come to the key conclusion of [16]: the Fourier transform of an infinitely narrow curve \(\mathcal{C}\) with projected diameter \(d_{\varphi}\) and projected centroid \(C_{\varphi}\) is approximately \[V(\mathbf{u})\approx\frac{e^{-2\pi iC_{\varphi}u}}{\sqrt{u}}\left[\alpha_{\varphi}^{\mathrm{L}}e^{-\frac{i\pi}{4}+i\pi d_{\varphi}u}+\alpha_{\varphi}^{\mathrm{R}}e^{\frac{i\pi}{4}-i\pi d_{\varphi}u}\right], \tag{9}\] where the coefficients \(\alpha_{\varphi}^{\mathrm{L,R}}=\alpha_{\varphi+\pi}^{\mathrm{R,L}}>0\) encode the polar intensity profile around the curve, and the approximation holds for \(ud_{\varphi}\gg 1\). In particular, the visibility amplitude is a damped oscillation with radial periodicity \(\Delta u=1/d_{\varphi}\) inside an envelope with a weak \(1/\sqrt{u}\) power-law falloff, \[|V(\mathbf{u})|\approx\sqrt{\frac{\left(\alpha_{\varphi}^{\mathrm{L}}\right)^{2}+\left(\alpha_{\varphi}^{\mathrm{R}}\right)^{2}+2\alpha_{\varphi}^{\mathrm{L}}\alpha_{\varphi}^{\mathrm{R}}\sin\left(2\pi d_{\varphi}u\right)}{u}}, \tag{10}\] which depends only on the projected diameter \(d_{\varphi}\). On the other hand, the projected centroid \(C_{\varphi}\) is only encoded in the visibility phase, which we will henceforth ignore as it is significantly harder to measure, and beyond the reach of presently envisioned \(n=1\) ring observations. Figure 1: **Top left:** Adaptively ray-traced (with AART[21]) image of a stationary, axisymmetric, equatorial source with a radial profile given by (23) with \(\mu=3r_{+}/2\), \(\gamma=0\), and \(\vartheta=M\). The inset panels decompose the image into its photon-orbit layers: the direct (\(n=0\)) image and the first two (\(n=1\) and \(n=2\)) photon rings. **Top right:** The corresponding visibility amplitudes for a spin-perpendicular (\(\varphi=0^{\circ}\)) cut across the total image (black dashed line) and across each image layer. **Bottom left:** The image of the \(n=1\) photon ring only. Three characteristic diameters are measured along a horizontal cut of the intensity profile, corresponding to (from top to bottom): the distance between the location of the peaks in the intensity (\(9.82M\)), the distance between the inner edges of the “\(n=1\) lensing band [21]” (\(9.02M\)), and the distance between the outer edges of the band (\(12.03M\)). The diameter \(d_{0^{\circ}}^{(1)}\) inferred from the characteristic ringing of the total visibility amplitude in two baseline windows \([40,70]\,\mathrm{G}\lambda\) and \([70,100]\,\mathrm{G}\lambda\) is reported. **Bottom right:** Same as in the bottom-left panel, but for the \(n=2\) ring and with a projected diameter \(d_{0^{\circ}}^{(2)}\) inferred from the ringing in the baseline window \([285,315]\,\mathrm{G}\lambda\).
Here, the black hole spin is \(a/M=94\%\), the observer inclination is \(\theta_{\circ}=17^{\circ}\), and the critical curve has a diameter of \(9.73M\) along the considered horizontal cut. ### Interferometric signature of the photon ring So far, we have argued that an image-plane curve with angle-dependent diameter \(d_{\varphi}\) produces an interferometric response on long baselines \(u\gg 1/d_{\varphi}\) that is completely captured by the visibility (9). In particular, its visibility amplitude displays a characteristic ringing signature (10) whose periodicity at angle \(\varphi\) in the baseline plane encodes the image diameter \(d_{\varphi}\) of the curve at image angle \(\phi=\varphi\). Strictly speaking, this discussion only pertains to zero-width curves. Intuitively, however, if the curve were in fact a very narrow ring with a small width-to-diameter ratio \(w/d\ll 1\), then we would expect it to produce the same response in an interferometer limited to sampling only baselines \(uw\ll 1\) too short to resolve the ring width. This intuition was in fact proved in [16], which computed the Fourier transform of such a thin ring to leading order in \(w/d\ll 1\), and found that the same approximation (9) to the complex visibility still holds in the baseline range \[\frac{1}{d}\ll u\ll\frac{1}{w}. \tag{11}\] This range is aptly called the "universal regime" since all thin rings produce the same universal signature (9)-(10) on these baselines, regardless of their radial profile: it is only on even longer baselines \(u\gtrsim 1/w\) that a ring profile can be resolved and different rings can be distinguished. For a ring with a smooth radial profile, the visibility ought to decay very rapidly once its width is resolved. Therefore, as first noted in [2] and extensively reviewed in [3; 19; 21], the sequence of exponentially demagnified photon rings described in Sec. II must produce a cascade of damped oscillations on long baselines (see, e.g., Fig. 5 of [2]). Given that the \((n+1)^{\rm th}\) photon ring has width \[w_{n+1}(\varphi)\approx e^{-\gamma(\varphi)}w_{n}(\varphi)\approx e^{-n\gamma (\varphi)}w_{1}(\varphi), \tag{12}\] the \(n^{\rm th}\) subring ought to dominate the signal in the range \[\frac{1}{w_{n-1}}\ll u\ll\frac{1}{w_{n}}, \tag{13}\] in which the \((n-1)^{\rm th}\) ring has already been resolved out, but the \((n+1)^{\rm th}\) ring (whose flux is \(\sim e^{-\gamma}\) times weaker) has yet to take over. Hence, we expect that for large \(n\), the visibility amplitude in the range (13) must adopt a universal form (10) fixed by the \(n^{\rm th}\) ring diameter \(d_{\varphi}^{(n)}\). This expectation has been confirmed by simple models [19; 20; 21; 15] for which these statements already hold to very good approximation starting with the \(n=2\) ring, as expected since its typical width \(w_{2}\lesssim 0.1M\) and diameter \(d\sim 10M\) correspond to a width-to-diameter ratio \(\lesssim 1\%\) suitable for an expansion in \(w_{2}/d\ll 1\). Indeed, [15] found that the diameter \(d_{\varphi}^{(2)}\) of the \(n=2\) ring image could be inferred from its visibility amplitude in the range (13). In general, the \(n^{\rm th}\) ring must lie within the \(n^{\rm th}\) "lensing band": an exponentially narrow (in \(n\)) region of the image plane that is fully determined by the Kerr metric [21; 15]. Within this band, however, it generically has some width. Two important comments are now in order: 1. 
First, we reiterate that any ring of finite width does not have a unique, well-defined image diameter \(d_{\varphi}\), but rather a range of diameters that extends from a minimum diameter (between its inner boundaries) to a maximum diameter (between the outer ones). That is, the image diameter is only defined up to a precision of order the ring width \(w\) (Fig. 1). Yet, the corresponding visibility in the universal regime (11) does seem to pick out a unique periodicity--so how can this be? A resolution to this puzzle is partly that the exact periodicity of the ringing in the universal visibility (10) varies with the baseline length \(u\) within the regime (11). That is, the precise value of the diameter \(d_{\varphi}^{(n)}(u)\) depends on the choice of baseline window from which it is inferred [19]. For a thin \(n=2\) ring, the image diameters vary within a narrow range of \(\lesssim 1\%\) (Fig. 1), but such variation could be detected at the microarcsecond scale accessible on \(230\,\mathrm{GHz}\) Earth-Moon baselines. When observing near \(u\sim 300\,\mathrm{G}\lambda\), for instance, a unique periodicity emerges and yields a sharp \(n=2\) ring diameter \(d_{\varphi}^{(2)}(u)\), but this answer varies with the baseline length \(u\) within the regime (13). As longer baselines are sampled, higher-frequency components of the ring are progressively picked up and larger image gradients within its intensity profile are increasingly resolved. Intuitively, then, the inferred diameter \(d_{\varphi}^{(2)}\) receives contributions from image diameters connecting points across the ring's profile where the derivative of the intensity is greater. As a result, the inferred diameter \(d_{\varphi}^{(2)}(u)\) may exhibit a slight but still noticeable drift in \(u\). 2. Second, we emphasize that the universal signature (9)-(10) is only present when the universal regime (11) exists. That is, the \(n^{\rm th}\) photon ring produces its characteristic periodic ringing only within the range (13). All the photon rings have a diameter \(d\sim 10M\), which for \(230\,\mathrm{GHz}\) observations of M87* corresponds to an angular size of \(d\sim 40\,\mathrm{\mu as}\), and hence to a radial periodicity \(\Delta u\sim 1/d\approx 5\,\mathrm{G}\lambda\)[9]. As such, the number \(N_{n}\approx(\Delta u_{n})d\) of periods (or "hops") of the visibility amplitude within the regime (13) of width \(\Delta u_{n}=1/w_{n}-1/w_{n-1}\) is \(N_{2}\sim 100\), a sufficient number to obtain a good estimate of the periodicity \(\Delta u=1/d_{\varphi}^{(2)}\). The scaling (12) of the ring widths implies a scaling \(\Delta u_{n+1}\approx e^{\gamma}\Delta u_{n}\) of the baseline windows (13), so each ring produces an exponentially growing number \(N_{n+1}\approx e^{\gamma}N_{n}\) of hops in the range in which it dominates the signal. Therefore, every \(n\geq 2\) ring produces sufficiently many hops to enable an estimate of its diameter. ### Predicted interferometric shape of the higher (\(n\geq 2\)) photon rings The first property of Kerr lensing described in Sec. II (namely, the exponential demagnification of successive subrings) guarantees a small width-to-diameter ratio for all the \(n\geq 2\) subrings. As a result, their diameter \(d_{\varphi}^{(n)}\) is well-defined in the image, up to minute variations of order \(w_{n}/d\ll 1\) (with \(w_{2}/d\sim 1\%\) and the higher ratios \(w_{n}/d\) exponentially suppressed by factors of \(\sim e^{-\gamma}\)). 
Moreover, this guarantees--for each \(n\geq 2\) subring--the existence of a wide range of baselines in the "universal regime" (11). In the regime (13) dominated by the \(n^{\text{th}}\) ring, the visibility amplitude takes a "universal form" (10) that is fixed by the ring diameter \(d_{\varphi}^{(n)}\), and which extends over sufficiently many hops for its periodicity--and hence \(d_{\varphi}^{(n)}\)--to be precisely inferred. This interferometrically measured diameter \(d_{\varphi}^{(n)}(u)\) may vary slightly with the precise choice of baseline \(u\) within the range (11), but this variation is again limited to variations of order \(w_{n}/d\ll 1\). The outstanding question that remains is then: what is the interferometric shape of the \(n\geq 2\) rings? Or, more precisely: does general relativity make a prediction for the projected diameter \(d_{\varphi}^{(n)}\)? The second property of Kerr lensing described in Sec. II (namely, the exponential convergence of successive rings to the critical curve) answers in the affirmative: by (1), the rings \(\mathcal{C}_{n}\) converge to the critical-curve shape \(\tilde{\mathcal{C}}=\mathcal{C}_{\infty}\). In other words, as the large-\(n\) rings become increasingly narrow curves, their projected position functions tend to that of the critical curve, \(\tilde{f}(\varphi)=\frac{1}{2}\tilde{d}_{\varphi}+\tilde{C}_{\varphi}\). In particular, \[\lim_{n\to\infty}d_{\varphi}^{(n)}=\tilde{d}_{\varphi},\quad\lim_{n\to\infty}C_{\varphi}^{(n)}=\tilde{C}_{\varphi}. \tag{14}\] The analytic expression for the critical curve's projected position \(\tilde{f}(\varphi)\) is investigated in [17]. The exact formula is rather unwieldy, but it closely tracks the "phoval" shape \[\tilde{f}(\varphi)\approx R_{0}+\sqrt{R_{1}^{2}\sin^{2}\varphi+R_{2}^{2}\cos^{2}\varphi}+(X-\chi)\cos\varphi+\arcsin(\chi\cos\varphi), \tag{15}\] to better than 1 part in \(10^{5}\) for most black hole spins and observer inclinations, with the largest deviation from this functional form reaching 1 part in \(10^{3}\) in the extremal limit \(a\to M\) for an equatorial observer, when the critical curve is least circular and develops a vertical edge [28]. The 5 parameters \(R_{0}\), \(R_{1}\), \(R_{2}\), \(X\) and \(\chi\) in the phoval family of shapes admit a simple geometric interpretation. The offset \(X\) accounts for a spin-dependent translation of the centroid of the critical curve relative to Bardeen's Cartesian coordinate system \((\alpha,\beta)\). Together with the parameter \(\chi\in[-1,1]\), which is necessary to reproduce the asymmetry of the high-spin, high-inclination critical curve, it only enters into the projected centroid of \(\tilde{\mathcal{C}}\), \[\tilde{C}_{\varphi}\approx(X-\chi)\cos\varphi+\arcsin(\chi\cos\varphi). \tag{16}\] The projected diameter thus takes the functional form \[\frac{\tilde{d}_{\varphi}}{2}\approx R_{0}+\sqrt{R_{1}^{2}\sin^{2}\varphi+R_{2}^{2}\cos^{2}\varphi}, \tag{17}\] controlled by three characteristic radii \(R_{0}\), \(R_{1}\) and \(R_{2}\). When \(R_{1}=R_{2}=0\), this describes a curve of constant radius \(R_{0}\), such as a perfect circle, which is the shape of \(\tilde{\mathcal{C}}\) for an on-axis observer at any spin or for any observer at zero spin. When \(R_{0}=0\) instead, (17) describes a perfect ellipse with axes of length \(R_{1}\) and \(R_{2}\), which is exactly the shape of \(\tilde{\mathcal{C}}\) for all spins and low observer inclinations, or equivalently for all inclinations at small spin [17].
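For concreteness, here is a short numpy sketch (with arbitrary, illustrative parameter values that are our own assumptions, not a Kerr fit) that evaluates the phoval projected position (15), extracts the projected diameter (17) and centroid (16) via the parity split (7), and reconstructs the curve through the inverse relations (6).

```python
import numpy as np

# Illustrative phoval parameters; these are assumptions, not a Kerr fit.
R0, R1, R2, X, chi = 4.0, 0.6, 0.9, 0.1, 0.05

def f(phi):
    """Phoval projected position, eq. (15)."""
    return (R0 + np.sqrt(R1**2 * np.sin(phi)**2 + R2**2 * np.cos(phi)**2)
            + (X - chi) * np.cos(phi) + np.arcsin(chi * np.cos(phi)))

phi = np.linspace(0.0, 2.0 * np.pi, 4000, endpoint=False)

# Parity decomposition (7): projected diameter and centroid displacement.
d_phi = f(phi) + f(phi + np.pi)          # reduces to (17): 2*R0 + 2*sqrt(...)
C_phi = 0.5 * (f(phi) - f(phi + np.pi))  # reduces to (16)

# Curve reconstruction via (6), with a numerical derivative f'(phi).
fp = np.gradient(f(phi), phi)
alpha = f(phi) * np.cos(phi) - fp * np.sin(phi)
beta = f(phi) * np.sin(phi) + fp * np.cos(phi)

# Consistency check of (8): f = d/2 + C holds by construction.
assert np.allclose(d_phi / 2 + C_phi, f(phi))
```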
Based on the reasoning laid out above, [15] conjectured that the projected diameter of the \(n\geq 2\) rings of M87* (which due to its jet orientation is believed to be observed at a relatively low inclination of \(\theta_{0}\approx 17^{\circ}\)[9]) ought to follow the 4-parameter functional form of a "circlipse" \[\frac{d_{\varphi}^{(n)}}{2}\approx R_{0}+\sqrt{R_{1}^{2}\sin^{2}(\varphi- \bar{\varphi})+R_{2}^{2}\cos^{2}(\varphi-\bar{\varphi})}, \tag{18}\] where the additional offset angle \(\bar{\varphi}\) is meant to account for the uncertain image-plane orientation of the projected black hole spin (and in practice, the low-\(n\) subrings may also appear rotated relative to the critical curve). As checked in [19; 15; 20] for multiple astrophysical source profiles around a Kerr black hole, the visibility amplitude in the regime (11) dominated by the \(n=2\) ring really does follow the universal form (10), with a projected \(n=2\) ring diameter \(d_{\varphi}^{(2)}\) following the circlipse shape (18). We may therefore regard (18) as a generic prediction for the interferometric signature of the \(n\geq 2\) rings that follows from the Kerr hypothesis. As argued in [19; 20; 15; 21], a measurement of the interferometric \(n=2\) ring diameter \(d_{\varphi}^{(2)}(u)\) on long Earth-space baselines could deliver a stringent test of strong-field general relativity. ### Predicted interferometric shape of the first (\(n=1\)) photon ring Having reviewed the prediction for the interferometric shape of the \(n\geq 2\) photon rings, we now turn to the first photon ring, for which an analogous prediction has so far been lacking. In large part, this is because the preceding discussion generically breaks down for the first \(n=1\) subring: 1. First, because of its significant width \(w_{1}\sim M\) and large width-to-diameter ratio \(w_{1}/d\sim 10\%\), the first ring really lacks a sharply defined diameter \(d_{\varphi}^{(1)}\) in image space: at a given angle \(\varphi\) around the image, it has a wide range of possible diameters (Fig. 1). Moreover, unlike the higher-\(n\) rings, the \(n=1\) ring is not yet strongly constrained to track the critical curve shape \(\tilde{\mathcal{C}}\). As a result, there is no theoretical prediction for its image diameter--more than that, it is not even clear how such a diameter could be precisely defined from the \(n=1\) ring image. 2. Relatedly, even if we were able to define the \(n=1\) ring diameter \(d_{\varphi}^{(1)}\) in image space and then derive a prediction for it, we would have no reason to expect its visibility to adopt the universal form (9): first, because this form was derived in a leading-order expansion in \(w/d\ll 1\) and may receive significant corrections when \(w_{1}/d\sim 10\%\), and second, because the \(n=1\) ring typically fails to exhibit a "universal regime" in which it dominates the signal, since the range (11) closes off for thick rings with \(w/d\gtrsim 10\%\) (see App. B and App. C of [19] for more details). To get around all these issues, we propose to _define_ an interferometric diameter \(d_{\varphi}^{(1)}(u)\) from the periodicity of the visibility amplitude in the baseline range where the \(n=1\) ring dominates, namely \[\frac{1}{d}<u\lesssim\frac{1}{w_{1}}. \tag{19}\] We reiterate that this range is typically too narrow to contain a universal regime (11). 
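A quick back-of-the-envelope estimate shows just how few ringing periods this window contains; the numbers below are illustrative, M87*-like assumptions, and the next paragraph quotes the corresponding figures from the text.

```python
import numpy as np

# Illustrative hop count in the n=1 window; all numbers are assumptions.
uas_to_rad = np.pi / (180 * 3600 * 1e6)      # radians per microarcsecond
d_uas = 40.0                                 # ring diameter ~40 microarcseconds
delta_u = 1.0 / (d_uas * uas_to_rad) / 1e9   # ringing period, in Glambda

u_lo, u_hi = 25.0, 100.0   # window (Glambda) where the n=1 ring dominates
print(f"Delta u ~ {delta_u:.1f} Glambda, "
      f"~{(u_hi - u_lo) / delta_u:.0f} hops in the window")
```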
For observations of M87* at \(230\,\mathrm{GHz}\), this range usually stretches from \(\gtrsim 25\,\mathrm{G}\lambda\) to \(\lesssim 100\,\mathrm{G}\lambda\), and therefore contains only \(\sim 15\) "hops" of periodicity \(\Delta u\sim 5\,\mathrm{G}\lambda\). Nevertheless, even a handful of hops is already enough to estimate a periodicity, and thence infer a diameter \(d_{\varphi}^{(1)}\). Two final questions remain. First, we have no reason to expect the visibility amplitude in the (non-universal) \(n=1\) regime (19) to take the universal form (10), so there is no obvious functional form to fit to the visibility. How then can we best extract its periodicity? Second, assuming an interferometric diameter \(d_{\varphi}^{(1)}\) can be extracted from the visibility amplitude, what form should its angle-dependence take? Since this diameter would have no clear connection to any precise feature in the image, it is perhaps not evident what to expect. To tackle the first problem, we note that the universal visibility amplitude (10) can be generalized to [19] \[|V(\mathbf{u})|\approx\sqrt{\left(A_{\varphi}^{\mathrm{L}}\right)^{2}+\left(A_{\varphi}^{\mathrm{R}}\right)^{2}+2A_{\varphi}^{\mathrm{L}}A_{\varphi}^{\mathrm{R}}\sin\left(2\pi d_{\varphi}u\right)}, \tag{20}\] where, instead of decaying like \(1/\sqrt{u}\), the angle-dependent functions \(A_{\varphi}^{\mathrm{L/R}}\) may become general functions of \(u\), \[A_{\varphi}^{\mathrm{L/R}}(u)=\frac{e_{\mathrm{upper}}(u)\pm e_{\mathrm{lower}}(u)}{2}. \tag{21}\] Here, \(e_{\mathrm{upper}}(u)\) and \(e_{\mathrm{lower}}(u)\) respectively correspond to the upper and lower envelopes of the function (20), which oscillates between these envelopes with radial periodicity \(\Delta u=1/d_{\varphi}\). As shown in [19; 21], fitting a ringing signal to (20) is a more robust method for inferring its periodicity, even when it takes the universal form (10). Mathematically speaking, we know of no reason why it should always be possible to fit a generic ringing visibility to the functional form (20), but in practice we find that it is sufficiently general to work (see Sec. 3.2.2 of [19] for more discussion). As for the second question, the simplest guess is that the interferometrically defined \(n=1\) ring diameter \(d_{\varphi}^{(1)}\) still follows the circlipse shape (18), at least for the low observer inclinations relevant to M87* observations. At this stage of the discussion, this is merely a conjecture which may not necessarily be correct. As we will show in the remainder of the paper, however, it does turn out to be true (to about \(1\) part in \(10^{3}\)) in a wide range of simple phenomenological models of M87*. As such, we may also regard (18) as a prediction from the Kerr hypothesis for the interferometric shape of the \(n=1\) ring, and its measurement could provide a precise probe of general relativity in the strong-field regime. ### Comparison with the shadow and critical curve In certain highly fine-tuned scenarios, the photon ring and its subring substructure are not present in black hole images. This happens, for instance, when the black hole is immersed in a spherically symmetric accretion inflow: in that case, the observational appearance of the source consists of a bright ring that encircles a central brightness depression whose boundary precisely coincides with the critical curve--an effect known as the "black hole shadow" [29; 30].
In such a scenario, measuring the shadow--the shape of the central brightness deficit--yields a direct measurement of the critical curve \(\tilde{\mathcal{C}}\), and hence of the black hole geometry. Indeed, the shape of \(\tilde{\mathcal{C}}\) depends only on the black hole mass \(M\), its spin \(a\), and the inclination \(\theta_{\mathrm{o}}\) of the observer, and these three parameters can be directly recovered from the three radii \((R_{0},R_{1},R_{2})\) that parameterize the projected diameter (17) of \(\tilde{\mathcal{C}}\). Unfortunately, such an astrophysical scenario does not seem to be relevant for either M87* or Sgr A* [1; 2; 3; 4], which are instead expected to present the photon ring structure described in Sec. II. Thus, the analytically known critical curve, which directly encodes the black hole parameters \((M,a,\theta_{\mathrm{o}})\), is likely not observable in itself. On the other hand, the photon rings, which are in principle observable, do not have an analytically predicted shape that encodes \((M,a,\theta_{\mathrm{o}})\). Rather, their appearance is not entirely fixed by the Kerr geometry, but instead varies with the astrophysical details of the emitting source: indeed, two black holes with the same mass and spin, observed from the same inclination, can nonetheless display photon rings of noticeably different shapes if their emission differs [15]. In other words, while the shape of the photon rings does track that of the critical curve, in the sense that their projected diameters follow the same functional form (17)-(18), the radii parameterizing these functions differ in their interpretation: in (17), they can be mapped back to \((M,a,\theta_{\mathrm{o}})\), whereas in (18), this map itself depends on the source, with the astrophysical dependence vanishing as \(n\to\infty\). Hence, measuring the projected diameter (18) of the \(n^{\mathrm{th}}\) photon ring gives stronger constraints on \((M,a,\theta_{\mathrm{o}})\) the higher \(n\) is. For \(n=2\), these can likely be inferred within a few percent, but less precisely for \(n=1\). ### Summary of the predicted first ring shape To summarize, we predict that in the baseline range (19) dominated by the \(n=1\) ring, the visibility amplitude of the (time-averaged) image of a Kerr black hole displays a characteristic ringing with angle-dependent periodicity \(\Delta u=1/d_{\varphi}^{(1)}\), where \(d_{\varphi}^{(1)}\) follows the functional form (18) to high precision. This naturally extends a similar, earlier prediction [15] for the higher-\(n\) rings to the first and most easily accessible \(n=1\) subring. In contrast to the \(n\geq 2\) rings, for which \(d_{\varphi}^{(n)}\) may be associated with the diameter of the \(n^{\text{th}}\) ring in the image, the \(n=1\) ring diameter \(d_{\varphi}^{(1)}\) is defined purely interferometrically and lacks a sharp image-space interpretation. The diameter \(d_{\varphi}^{(1)}\) may be extracted from the visibility amplitude by fitting the latter to (20) and finding the best-fitting envelopes (21) and circlipse shape (18). The three circlipse radii \((R_{0},R_{1},R_{2})\) loosely track the black hole parameters \((M,a,\theta_{\text{o}})\), but their precise relation is not robust and depends on the astrophysics of the source. ## IV Phenomenological source model Having formulated a prediction for the interferometric shape of the \(n=1\) ring, we now wish to test whether it does indeed hold in a simple phenomenological model of M87*.
In this section, we give a lightning review of the source model introduced in [4; 15; 19; 21]. Then, we use our Adaptive Analytical Ray Tracing code AART[21]--which exploits the integrability of light propagation in the Kerr spacetime--to compute high-resolution black hole images of this model, together with their corresponding visibilities accessible on long space-ground baselines. We model the source as a stationary, axisymmetric, equatorial disk composed of emitters describing circular Keplerian orbits down to the radius \(r_{\text{ms}}\) of the innermost stable circular orbit (ISCO), past which they plunge into the hole following a prescription of Cunningham's [31]. To determine the observational appearance of the source at an image-plane position \((\alpha,\beta)\), we analytically trace the corresponding light ray back from the observer's image plane and into the emitting region, increasing the observed intensity \(I_{\text{o}}(\alpha,\beta)\) each time the ray intersects the accretion disk by an amount dictated by the local emissivity. The full procedure is efficiently implemented in our relativistic ray tracing code AART[21]. For completeness, we sketch the main ingredients of the calculation, referring the reader to [21] for the details of our implementation. Effectively, we compute \[I_{\text{o}}(\alpha,\beta)=\sum_{n=0}^{N(\alpha,\beta)-1}\zeta_{n}\cdot g^{3} \left(r_{\text{s}}^{(n)},\alpha\right)I_{\text{s}}\left(r_{\text{s}}^{(n)} \right), \tag{22}\] where \(r_{\text{s}}^{(n)}=r_{\text{s}}^{(n)}(\alpha,\beta)\) denotes the (analytically known) equatorial radius at which a ray intersects the equatorial plane for the \((n+1)^{\text{th}}\) time on its backward trajectory from image-plane position \((\alpha,\beta)\), up to a total number \(N(\alpha,\beta)\) along its maximal extension. Meanwhile, \(g\) is a redshift factor (which is determined by the motion of the emitters and also known analytically), \(I_{\text{s}}(r)\) is a radial emission profile, and \(\zeta_{n}\) is a "fudge" factor, which we assume to be equal to \(1\) for \(n=0\), and \(1.5\) for \(n\geq 1\). The inclusion of this factor is meant to account for the (otherwise neglected) effects of geometrical thickness. It improves the qualitative agreement between images of this simplified equatorial model and the time-averaged images obtained from state-of-the-art general-relativistic magnetohydrodynamic (GRMHD) simulations [4; 20]. Figure 2: Radial emission profiles (23) considered in this work (with parameters listed in Table 1). These profiles range from emission that peaks inside the horizon and then decays rapidly outside (dotted green), to emission that peaks past the innermost stable circular orbit (with ISCO radius \(r_{\text{ms}}\)) and decays very slowly thereafter (dash-dot blue). The radii of the outer and inner horizons are denoted by \(r_{\pm}=M\pm\sqrt{M^{2}-a^{2}}\) and indicated with vertical lines. The profile depicted with a solid blue line was considered in [15]. It is broadly consistent with the 2017 EHT observations of M87* on Earth-size baselines and qualitatively similar to time-averaged images of state-of-the-art general-relativistic magnetohydrodynamic (GRMHD) simulations (see, e.g., Fig. 1 of [2]). The insets display the images produced by each of these three highlighted profiles. 
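For orientation, a heavily simplified sketch of the sum (22) follows; the callables `r_s`, `g`, and `N` are hypothetical stand-ins for the analytic Kerr lensing functions implemented in AART, which we do not reproduce here.

```python
def observed_intensity(alpha, beta, I_s, r_s, g, N, zeta=(1.0, 1.5)):
    """Toy transcription of the intensity sum (22).

    r_s(n, alpha, beta), g(r, alpha), and N(alpha, beta) are hypothetical
    stand-ins for AART's analytic Kerr transfer functions; I_s is the radial
    emission profile, and zeta holds the n=0 and n>=1 "fudge" factors.
    """
    total = 0.0
    for n in range(N(alpha, beta)):
        fudge = zeta[0] if n == 0 else zeta[1]
        r = r_s(n, alpha, beta)            # n-th equatorial crossing radius
        total += fudge * g(r, alpha) ** 3 * I_s(r)
    return total
```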
\begin{table} \begin{tabular}{c c} \hline \hline Johnson SU parameter & Values \\ \hline \(\mu\) & \(r_{-},r_{+}/2,r_{+},3r_{+}/2,2r_{+}\) \\ \(\gamma\) & \(-2,-1,0,1,2\) \\ \(\vartheta/M\) & \(0.25,0.5,1.0,1.5\) \\ \hline \hline \end{tabular} \end{table} Table 1: Values of the parameters considered in our survey over the 100 radial emission profiles (23) shown in Fig. 2. The outer/inner event horizon radii are \(r_{\pm}=M\pm\sqrt{M^{2}-a^{2}}\). We consider a family of radial emission profiles derived from Johnson's Standard Unbounded (SU) distribution, \[I_{\mathrm{s}}(r_{\mathrm{s}})=J_{\mathrm{SU}}(r_{\mathrm{s}};\mu,\vartheta,\gamma)\equiv\frac{e^{-\frac{1}{2}\left[\gamma+\mathrm{arcsinh}\left(\frac{r_{\mathrm{s}}-\mu}{\vartheta}\right)\right]^{2}}}{\sqrt{\left(r_{\mathrm{s}}-\mu\right)^{2}+\vartheta^{2}}}, \tag{23}\] where the parameters \(\mu\), \(\vartheta\), and \(\gamma\) respectively control the location of the profile's peak, its width, and the profile asymmetry [19]. In our survey over emission profiles, we examine the same set of parameters as in [19], which we list in Table 1. We display the corresponding profiles in Fig. 2, which shows that these values span a wide range of possible emissivities. In particular, our survey includes a profile (solid blue line in Fig. 2) whose corresponding image is directly comparable to the time-averaged image in Fig. 1 of [2], for which the parameters in the underlying GRMHD simulation were chosen to ensure consistency with the 2017 EHT observations of M87*. ## V Survey over emission profiles Using AART[21], we perform a parameter survey over the emission profiles in Table 1 and Fig. 2, enabling us to verify how well our prediction for the interferometric shape of the \(n=1\) ring holds in our model. For each of the 100 considered emission profiles, we ray trace high-resolution images and compute the associated visibility amplitudes on very long baselines with AART--we assume throughout a black hole spin of \(a/M=94\%\) and an observer inclination \(\theta_{\mathrm{o}}=17^{\circ}\), but we expect our conclusions to hold more generally at low inclinations. Then, following the procedure introduced in [19], we determine the functional forms (20)-(21) that best fit the visibility amplitude in a given baseline range, allowing us to extract a characteristic periodicity for its ringing and thereby infer a projected diameter \(d_{\varphi}\). As discussed in [19; 15], on baselines of length \(\sim u\), the functional form (20) is approximately invariant under shifts \(d_{\varphi}\to d_{\varphi}+k/u\) for integer \(k\), creating a discrete degeneracy in the inferred diameter \(d_{\varphi}\). In principle, this degeneracy may be broken by counting the exact number of hops from \(u\) back to the origin \(u=0\), which would fix the radial periodicity \(\Delta u\) of the ringing, and hence \(d_{\varphi}\). In practice, this is not possible if we can only sample a fixed baseline window far from the origin. Instead, we fit \(d_{\varphi}\) at multiple baseline angles \(\varphi\) simultaneously, so as to obtain the best global fit \(d_{\varphi}^{\mathrm{obs}}\) for the interferometric diameter. This multi-fit procedure is explained in Sec. 3.2 of [19]. We carry out the fit on four different baseline windows of width \(30\,\mathrm{G}\lambda\), and assess how well the resulting interferometric diameters \(d_{\varphi}^{\mathrm{obs}}(u)\) match the prediction (18) in each of the windows as a function of the emission profile.
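The profile (23) and the parameter grid of Table 1 are simple to set up; here is a minimal sketch (ours, not the AART implementation).

```python
import numpy as np
from itertools import product

def johnson_su(r, mu, vartheta, gamma):
    """Radial emission profile (23), Johnson's SU distribution."""
    z = gamma + np.arcsinh((r - mu) / vartheta)
    return np.exp(-0.5 * z**2) / np.sqrt((r - mu)**2 + vartheta**2)

# Parameter grid of Table 1 (units of M = 1, spin a/M = 94%).
M, a = 1.0, 0.94
r_plus = M + np.sqrt(M**2 - a**2)    # outer horizon radius r_+
r_minus = M - np.sqrt(M**2 - a**2)   # inner horizon radius r_-
mus = [r_minus, r_plus / 2, r_plus, 3 * r_plus / 2, 2 * r_plus]
gammas = [-2, -1, 0, 1, 2]
varthetas = [0.25, 0.5, 1.0, 1.5]
grid = list(product(mus, gammas, varthetas))
print(len(grid), "emission profiles")   # 5 x 5 x 4 = 100
```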
In realistic observations, the sampled baseline window would likely vary in size and location, so this simulates a somewhat idealized experiment. We chose a fixed width of \(30\,\mathrm{G}\lambda\) to ensure that the baseline windows contain \(\sim 5\) hops in the visibility amplitude. The first three baseline windows that we examine are \([40,70]\,\mathrm{G}\lambda\), \([50,80]\,\mathrm{G}\lambda\), and \([70,100]\,\mathrm{G}\lambda\). These often lie in the range (19) dominated by the \(n=1\) ring, though sometimes they may also fall into a region where the \(n=0\) image still contributes significant power. Meanwhile, the fourth window spans the much longer baselines \([285,315]\,\mathrm{G}\lambda\), which were investigated in [15] and typically fall into a universal regime (11) dominated by the \(n=2\) ring. For some profiles, however, these baselines fall into a transition regime between the range (19) dominated by the \(n=1\) ring and the universal \(n=2\) range (11). As explained in Sec. 4 of [19], in such cases the inferred diameter \(d_{\varphi}^{\mathrm{obs}}\) may belong to neither ring, as the visibility amplitude receives significant contributions from both of them, resulting in interference. Or, it may happen that \(d_{\varphi}^{\mathrm{obs}}\) measures the \(n=1\) ring diameter \(d_{\varphi}^{(1)}\) at some angles \(\varphi\), and the \(n=2\) diameter \(d_{\varphi}^{(2)}\) for others. In principle, all these baselines are within reach of an eccentric orbiter whose Earth perigee is at \(\sim 4.5\times 10^{4}\,\mathrm{km}\) and its apogee at \(\sim 4\times 10^{5}\,\mathrm{km}\), provided it is equipped with receivers capable of observing on multiple frequency bands between \(83\,\mathrm{GHz}\) and \(345\,\mathrm{GHz}\). Indeed, a baseline extending from Earth to such an orbiter could sample the visibility from \(\sim 12\,\mathrm{G}\lambda\) (shortest baseline with the lowest frequency) to \(\sim 455\,\mathrm{G}\lambda\) (largest baseline with the highest frequency). Given an observed diameter \(d_{\varphi}^{\mathrm{obs}}\), we assess the quality of its best fit \(d_{\varphi}^{\mathrm{GR}}\) to the circlipse shape (18) by computing the ring-averaged normalized root-mean-square deviation \[\mathrm{RMSD}=\frac{\sqrt{\left\langle\left(d_{\varphi}^{\mathrm{obs}}-d_{ \varphi}^{\mathrm{GR}}\right)^{2}\right\rangle_{\varphi}}}{\left\langle d_{ \varphi}^{\mathrm{GR}}\right\rangle_{\varphi}}. \tag{24}\] The RMSD distributions resulting from the fits on the four different windows are shown in Fig. 3. As expected, the shortest baseline window is the one where fitting a circlipse shape to the inferred diameter is hardest: out of the 100 profiles studied, 53 provided an RMSD \(\leq 0.05\%\), which is our (conservative) cutoff. The number of models for which the RMSD \(\leq 0.05\%\) jumps to 85 for the longest baseline window considered. Here, it is important to clarify that the inability to obtain good circlipse fits within a fixed baseline window for some of the emission profiles does not mean that the diameters of their rings fail to follow the circlipse shape (18). In fact, for all of our profiles, each ring produces a clean ringing signature, as expected from our discussion in Sec. III. This can be seen, for instance, in Fig. 
4, where we display the total visibility amplitude--together with its decomposition into separate subring contributions--for two representative profiles: one with the best-fitting circlipse in the window \([70,100]\,\mathrm{G}\lambda\) (top two rows), and another with the worst fit that we nonetheless deemed acceptable (RMSD \(\leq 0.05\%\)) (bottom two rows). The difficulty, therefore, lies in whether the ring diameters can be extracted from the total visibility amplitude, which is of course the only one that is observable in experiment. Consistent with [15; 19], we find that for the \(n=2\) ring, one may always obtain a good fit to the circlipse shape (18) from the total visibility amplitude, though this may require going to extremely long baselines (sometimes as far as \(1000\,\mathrm{G}\lambda\)[19]). This is only possible to do while still remaining in the universal regime (11) dominated by the \(n=2\) ring because the width of this range is so large. By contrast, while the \(n=1\) ring image by itself also produces a clean interferometric ringing, this signature only dominates the total visibility in the comparatively much narrower range (19). As a result, in some models, there may not exist a single baseline range in which the \(n=1\) ring dominates at every baseline angle \(\varphi\) (so that its full angle-dependent diameter \(d_{\varphi}^{(1)}\) may be extracted). By fixing a baseline length (window size) and angle, we may thus obtain a visibility amplitude that is either dominated by a single ring or shows comparable power in multiple rings, as illustrated in Fig. 4. Here, the relevant factor is primarily the width of the profile, which determines the locations and widths of the ranges over which each ring dominates the signal by itself. On longer baselines \(\sim 300\,\mathrm{G}\lambda\), one may see interference between the \(n=1\) and \(n=2\) rings. This does not happen for the top model in Fig. 4, for which the visibility amplitude is completely dominated by the \(n=2\) ring in the purple window, resulting in an excellent circlipse fit. For the bottom model, on the other hand, the emission profile--and hence the \(n=1\) ring--are much narrower, so the \(n=1\) range (19) extends farther out and there is some interference between \(n=1\) and \(n=2\) in the purple window \([285,315]\,\mathrm{G}\lambda\), resulting in a slightly worse circlipse fit. Nonetheless, it is clear that going farther out to even longer baselines would further attenuate the power from the \(n=1\) ring and lead to a better circlipse fit to the \(n=2\) ring diameter in this model also. Neither of these models displays a transition between measuring purely \(d_{\varphi}^{(1)}\) or \(d_{\varphi}^{(2)}\) at different \(\varphi\), though this can also sometimes occur--particularly at higher inclinations--as shown in Fig. 6 of [19]. On shorter baselines \(\lesssim 100\,\mathrm{G}\lambda\), a clean diameter can be more challenging to extract because of more frequent interference between the \(n=0\) and \(n=1\) rings. When such interference occurs, the inferred diameter can differ more from that of a circlipse, which is expected since the \(n=0\) ring is not constrained to closely follow that shape. For the top model in Fig.
4, such interference never occurs in either the magenta window \([40,70]\,\mathrm{G}\lambda\) or the yellow window \([70,100]\,\mathrm{G}\lambda\), which are always dominated by the \(n=1\) ring and from which we can therefore extract diameters \(d_{\varphi}^{(1)}\) with excellent circlipse fits. For the bottom model in Fig. 4, in the yellow window \([70,100]\,\mathrm{G}\lambda\), there are baseline angles (such as \(\varphi=15^{\circ}\)) where the \(n=1\) ring dominates, but at other angles (such as \(\varphi=85^{\circ}\)) the \(n=0\) ring retains significant power. As a result, the circlipse fit is not as good at those angles. For extremely narrow profiles, the first three image layers (\(n=0\) through \(n=2\)) all consist of very thin rings, and one may even observe interference effects between all three--this is for instance the case for the bottom model in Fig. 4 in the magenta window \([40,70]\,\mathrm{G}\lambda\) at \(\varphi=85^{\circ}\). Even at other angles (such as \(\varphi=15^{\circ}\)) where the \(n=0\) signal has relatively died out, the \(n=1\) and \(n=2\) rings still have comparable power, which explains why the circlipse fit in that window is so poor overall. These angle-dependent effects ought to grow with the observer inclination, but we expect them not to pose an insuperable obstacle at the low-to-moderate inclinations \(\lesssim 30^{\circ}\) of likely relevance for M87*. As for the black hole spin, higher spins increase the angular variation of \(d_{\varphi}\) and should therefore facilitate a precise circlipse fit--we defer a more thorough investigation to future work. ## VI Resolving the photon ring In this section, we discuss some of the implications of our results for future measurements of the \(n=1\) ring, before concluding in Sec. VII. There already exist multiple promising ways to detect the photon ring via its distinctive polarimetric signature [32] or characteristic pattern of autocorrelations [23; 27]. Here, we set aside these potential avenues for detection and focus exclusively on the complex visibility (2) dual to the image intensity, and more precisely its amplitude. Figure 3: The normalized root-mean-square deviation (RMSD) distributions of the fits for four different windows of size \(30\,\mathrm{G}\lambda\) with overlaid kernel density estimates. The percentages of models for which we get an acceptable fit (RMSD \(\leq 0.05\%\)) are \(53\%\), \(70\%\), \(72\%\) and \(85\%\) for the windows \([40,70]\,\mathrm{G}\lambda\), \([50,80]\,\mathrm{G}\lambda\), \([70,100]\,\mathrm{G}\lambda\), and \([285,315]\,\mathrm{G}\lambda\), respectively. For the baseline window \([70,100]\,\mathrm{G}\lambda\), we display the best fit in the top panels of Fig. 4, while the worst fit (which still has an \(\mathrm{RMSD}\leq 0.05\%\)) is shown in the bottom panels of Fig. 4. In that case, it seems likely that the first unambiguous observation of the photon ring will occur via a detection of its characteristic "ringing" in the visibility amplitude. Moreover, this ringing will likely first be detected on the relatively shorter Earth-to-space baselines \(u\lesssim 100\,\mathrm{G}\lambda\) most easily accessible to observations, where the first subring dominates the visibility. Hence, our prediction (18) for the angle-dependent diameter \(d_{\varphi}^{(1)}\) of the \(n=1\) ring, which may be inferred from the angle-dependent radial periodicity of the ringing in the visibility amplitude, is especially timely. However, an important caveat is in order at this stage.
According to Fig. 5, a ringing in the visibility amplitude on baselines of length \(\sim 20-40\,\mathrm{G}\lambda\) does not by itself provide conclusive evidence for the presence of the photon ring. After all, the \(n=0\) image is also ring-like [9] (in line with theoretical expectations) and therefore it also produces a characteristic ringing on those baselines by itself (top row of Fig. 5). Naively, it seems necessary to probe longer baselines \(\gtrsim 40\,\mathrm{G}\lambda\) to determine whether there really is a signature of the photon ring, that is, of strong gravity's stamp (bottom row of Fig. 5). For M87* observations at \(230\,\mathrm{GHz}\), an interferometer with a space element appears to be indispensable to achieve the requisite baseline length. Of course, the precise threshold past which the \(n=0\) signal decays depends on the width of the \(n=0\) ring in the image. Likewise, the decay rate of the visibility amplitude in the regime (19) dominated by the \(n=1\) ring may well provide some information about its width. It would be interesting to determine whether this width--and hence the Kerr-predicted demagnification factor \(e^{-\gamma}\)--may be recovered from the falloff rate of the visibility, that is, whether the envelope of the damped oscillations constrains the ring width. We take some first steps in this direction in App. A, where we investigate the relation between the width \(w\) of a Gaussian ring and the falloff rate \(e^{-2\pi^{2}w^{2}u^{2}}\) of its visibility amplitude. Generalizing such relations--if possible--would be interesting, since a measurement of \(\gamma\) could yield a much-sought-after estimate of the black hole parameters, particularly its spin. Finally, we note that a measurement of the predicted shape (18) for the interferometric diameter of the \(n=1\) ring would yield a consistency test of strong-field general relativity, since measuring a different diameter would be incompatible with the Kerr hypothesis. At the same time, a measurement of the expected circlipse shape would not necessarily discriminate between general relativity and alternative theories of gravity predicting the same shape. On that note, [18] recently investigated the shape of the \(n=2\) photon ring in modified theories of gravity. They found that deviations from the circlipse shape were small unless the deviation from the Kerr geometry grew very large. This conclusion merits re-evaluation in the context of the \(n=1\) ring, whose deviations could perhaps be stronger. ## VII Conclusion In this paper, we examined the shape of the first \(n=1\) photon ring in time-averaged images of a simple model of M87* with an equatorial source around the black hole. We found that, even though this ring lacks a sharply defined diameter in the image domain, it is nevertheless possible to define its angle-dependent projected diameter from the periodic ringing of its interferometric signature in visibility space, which is the observable that proposed extensions of the EHT to space will directly probe. We showed in the context of our simple model of M87* that this interferometrically defined \(n=1\) ring diameter follows the shape (18) of a circlipse. We therefore regard this as a prediction from strong-field general relativity (and its Kerr hypothesis) for the shape \(d_{\varphi}^{(1)}\) of the first photon ring, which is most accessible to observation and will hopefully be measured soon.
The results of this work indicate that measuring \(d_{\varphi}^{(1)}\) within a few percent of the critical curve diameter \(\tilde{d}_{\varphi}\) is possible for most astrophysical profiles, and we have highlighted the main factors affecting the accuracy of such a measurement. As a final caveat: we have only studied time-averaged images and need to examine instantaneous snapshots with noise--both instrumental and astrophysical--to truly establish this as a robust prediction. That follow-up work is now ongoing, and we expect to report results soon. Overall, this line of research provides valuable insights into the interferometric structure of black hole images and lays the groundwork for future observations. ###### Acknowledgements. We thank Samuel Gralla and Daniel Marrone for their valuable comments. We are grateful to Will and Kacie Snellings for their generous support, and A.C.-A. also acknowledges support from the Simons Foundation. ## Appendix A Thick axisymmetric rings This appendix explores the relation between the decay rate of the visibility amplitude of a thick axisymmetric ring and its width. In polar coordinates, the radio visibility \(V(u,\varphi)\) of an image \(I(\rho,\phi)\)--that is, its Fourier transform (2)--is \[V(u,\varphi)=\int_{0}^{\infty}\!\!\int_{0}^{2\pi}I(\rho,\phi+\varphi)e^{-2\pi iu\rho\cos\phi}\rho\,\mathrm{d}\rho\,\mathrm{d}\phi. \tag{A1}\] For an axisymmetric image with a purely radial profile \(I(\rho)\), this is simply the zero-order Hankel transform \[V(u)=H_{0}[I(\rho)]=\int_{0}^{\infty}2\pi\rho J_{0}(2\pi u\rho)I(\rho)\,\mathrm{d}\rho, \tag{A2}\] which is self-inverse (\(H_{0}^{-1}=H_{0}\)), so \(I(\rho)=H_{0}[V(u)]\). ### Convolution theorem for the Hankel transform If two axisymmetric images \(I_{1}(\rho)\) and \(I_{2}(\rho)\) have the visibilities \(V_{1}(u)=H_{0}[I_{1}(\rho)]\) and \(V_{2}(u)=H_{0}[I_{2}(\rho)]\), then their product image has visibility \(V(u)=H_{0}[I_{1}(\rho)I_{2}(\rho)]\) given by \[V(u)=\int_{0}^{\infty}\!\!\int_{0}^{2\pi}V_{1}(U)V_{2}(u^{\prime})u^{\prime}\,\mathrm{d}u^{\prime}\,\mathrm{d}\varphi, \tag{A3}\] with \(U^{2}=u^{2}+u^{\prime 2}-2uu^{\prime}\cos\varphi\). Since the zero-order Hankel transform is self-inverse, this formula also holds with \(I\leftrightarrow V\) interchanged. ### Infinitely thin ring An infinitely thin ring of radius \(r\) (normalized to have unit total flux) has radial profile \[I_{\delta}(\rho)=\frac{1}{2\pi r}\delta(\rho-r), \tag{A4}\] and corresponding visibility \[V_{\delta}(u)=J_{0}(2\pi ru). \tag{A5}\] ### General thick axisymmetric ring Consider another image with some radial profile \(I_{w}(\rho)\) and associated visibility \(V_{w}(u)\). By (A3), the product visibility \(V(u)=V_{\delta}(u)V_{w}(u)\) corresponds to an image \[I(\rho)=\int_{0}^{\infty}\!\!\int_{0}^{2\pi}I_{w}(R)\frac{\delta(\rho^{\prime}-r)}{2\pi r}\rho^{\prime}\,\mathrm{d}\rho^{\prime}\,\mathrm{d}\phi=\int_{0}^{2\pi}\frac{\mathrm{d}\phi}{2\pi}I_{w}\left(\sqrt{\rho^{2}+r^{2}-2\rho r\cos\phi}\right), \tag{A6}\] since \(R^{2}=\rho^{2}+\rho^{\prime 2}-2\rho\rho^{\prime}\cos\phi\). This leads to a simple picture: any bump \(I_{w}(\rho)\) of width \(w\) at the origin, convolved in this sense with the thin ring \(I_{\delta}(\rho)\), creates a ring image \(I(\rho)\) of width \(w\), and vice versa. ### Example: Gaussian ring The Gaussian ring of width \(w\), diameter \(d\), and unit total flux (\(V(0)=1\)), has a visibility \[V(u)=J_{0}(2\pi ru)e^{-2\pi^{2}w^{2}u^{2}}.
\tag{A7}\] This is of the product form \(V(u)=V_{\delta}(u)V_{w}(u)\) with \(V_{w}(u)=e^{-2\pi^{2}w^{2}u^{2}}\), which corresponds to a unit-flux Gaussian bump of width \(w\): \[I_{w}(\rho)=\frac{1}{2\pi w^{2}}e^{-\frac{\rho^{2}}{2w^{2}}}. \tag{A8}\] By (A6), the Gaussian ring image has a radial profile \[I(\rho)=\frac{1}{(2\pi)^{2}rw^{2}}\int_{0}^{\infty}\!\!\int_{0}^{2\pi}e^{-\frac{\rho^{2}+\rho^{\prime 2}-2\rho\rho^{\prime}\cos\phi}{2w^{2}}}\delta(\rho^{\prime}-r)\rho^{\prime}\,\mathrm{d}\rho^{\prime}\,\mathrm{d}\phi=\frac{1}{2\pi rw^{2}}\int_{0}^{\infty}e^{-\frac{\rho^{2}+\rho^{\prime 2}}{2w^{2}}}I_{0}\left(\frac{\rho\rho^{\prime}}{w^{2}}\right)\delta(\rho^{\prime}-r)\rho^{\prime}\,\mathrm{d}\rho^{\prime}=\frac{1}{2\pi w^{2}}e^{-\frac{d^{2}}{8w^{2}}}I_{0}\left(\frac{d\rho}{2w^{2}}\right)e^{-\frac{\rho^{2}}{2w^{2}}}, \tag{A9}\] where \(I_{0}(x)\) is a modified Bessel function of the first kind. Its name is justified because as long as the ring diameter is large enough that the intensity is small near the origin, this profile is indistinguishable from a Gaussian of width \(w\) and radius \(r\): \[I(\rho)\overset{d\gg w}{\approx}\frac{1}{(2\pi)^{3/2}\sqrt{r\rho}\,w}e^{-\frac{(\rho-r)^{2}}{2w^{2}}}. \tag{A10}\] ### Example: Lorentzian ring The Lorentzian (Cauchy distribution) of width \(w\) is \[I_{w}(\rho)=\frac{1}{\rho^{2}+w^{2}}. \tag{A11}\] Famously, all of its moments diverge. In particular, this image has infinite flux and its visibility is logarithmically divergent at the origin: \[V_{w}(u)=2\pi K_{0}(2\pi wu)\overset{u\to 0}{\sim}2\pi\log\left(\frac{1}{2\pi wu}\right), \tag{A12}\] where \(K_{0}(x)\) is a modified Bessel function of the second kind. The Lorentzian ring of width \(w\) and radius \(r\) has visibility \[V(u)=2\pi J_{0}(2\pi ru)K_{0}(2\pi wu) \tag{A13}\] \[\overset{u\to\infty}{\approx}\frac{\pi J_{0}(2\pi ru)}{\sqrt{wu}}e^{-2\pi wu}, \tag{A14}\] which on long baselines decays like a linear exponential. ### Example: "Smooth bump" ring The "smooth bump" of width \(w\) is the function \[f_{w}(x)=\begin{cases}e^{-\frac{x^{2}}{w^{2}-x^{2}}}&x\in[-w,w],\\ 0&\text{otherwise}.\end{cases} \tag{A15}\] It defines a (normalizable) radial bump of width \(w\) \[I_{w}(\rho)=cf_{w}(\rho),\quad c^{-1}=2\pi\int_{0}^{\infty}f_{w}(\rho)\rho\,\mathrm{d}\rho, \tag{A16}\] whose convolution with \(I_{\delta}(\rho)\) produces a ring image \(I(\rho)\) with compact support localized in a band of width \(2w\). The Fourier transform of \(f_{1}(x)\) behaves asymptotically as [33] \[\tilde{f}_{1}(k)\overset{k\to\infty}{\approx}2\Re\left[\sqrt{\frac{-i\pi}{\sqrt{2i}\,k^{3/2}}}e^{ik-\frac{1}{4}-\sqrt{2ik}}\right]. \tag{A17}\] Therefore, we expect that on long baselines, the visibility \(V(u)\) of a "smooth bump" ring of unit width will scale as \[V(u)\overset{u\to\infty}{\propto}\frac{J_{0}(2\pi ru)}{u^{3/4}}e^{-\sqrt{u}}, \tag{A18}\] which indeed appears to be the case numerically. ### An observation In the three examples above, the visibility of a ring of width \(w\) asymptotically behaves as \(V(u)\sim e^{-c(wu)^{p}}\) for some constants \(c\) and \(p\). Mathematical properties of the profile can impose stringent restrictions on these constants. For instance, if the profile is analytic, then \(p\geq 1\) by the Paley-Wiener theorem, as exemplified by the Gaussian and Lorentzian rings (meanwhile, the smooth bump has \(p=0.5\) but is not analytic).
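As a numerical sanity check on these transform pairs, the following sketch (grid choices arbitrary) evaluates the Hankel transform (A2) of the Gaussian ring profile (A9) by direct quadrature and compares it against the closed form (A7).

```python
import numpy as np
from scipy.special import j0, i0e

r, w = 5.0, 0.5   # illustrative ring radius and width (d = 2r)

def gaussian_ring(rho):
    """Gaussian ring profile (A9); i0e(x) = exp(-x) I0(x) avoids overflow."""
    x = rho * r / w**2
    return i0e(x) * np.exp(x - (rho**2 + r**2) / (2 * w**2)) / (2 * np.pi * w**2)

rho = np.linspace(0.0, r + 20 * w, 40001)
u = np.linspace(0.01, 1.5, 100)

# Zero-order Hankel transform (A2) by trapezoidal quadrature.
V_num = np.array([np.trapz(2 * np.pi * rho * j0(2 * np.pi * uu * rho)
                           * gaussian_ring(rho), rho) for uu in u])
V_exact = j0(2 * np.pi * r * u) * np.exp(-2 * np.pi**2 * w**2 * u**2)  # (A7)
print("max |V_num - V_exact| =", np.abs(V_num - V_exact).max())  # tiny
```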
It seems worthwhile to explore the set of possible values of \(c\) and \(p\) found in phenomenological models and to investigate whether a robust connection to the ring width \(w\) can be established. ## Appendix B No universal regime for thick rings In Sec. III, we discussed how a relatively thicker ring (such as the \(n=1\) photon ring in many models) may not display a universal regime. In this appendix, we examine explicitly how the universal regime opens or closes up as a function of the width of a Gaussian ring. A ring of width \(w\) and diameter \(d\) has one dimensionless parameter: its width-to-diameter ratio, or thickness \[t=\frac{w}{d}\in\left[0,\tfrac{1}{2}\right). \tag{102}\] This thickness cannot be too large: \(t\lesssim\tfrac{1}{2}\) is necessary to have a ring rather than a disk. In the limit \(t\to 0\), we always recover the infinitely thin ring \(I_{\delta}(\rho)\) with visibility \(V_{\delta}(u)=J_{0}(2\pi ru)\). A generic ring \(I(\rho)\) lies in between these two extremes: * A thick ring has \(0\ll t\lesssim\tfrac{1}{2}\). * A thin ring has \(0<t\ll\tfrac{1}{2}\). Its visibility \(V(u)\) has two dimensionless scales \(U=du\) and \(W=wu\). These are not independent, but related by \[0<W=tU<U. \tag{103}\] It is best to view the visibility as a function \(V(U)\) that has a spacing of nulls \(\Delta U\approx 1\) and exhibits three regimes: 1. \(W<U<1\), or \(u<\tfrac{1}{d}<\tfrac{1}{w}\): in this regime, the visibility does not yet resolve the ring, as it has not even reached its first null at \(U\sim 1\) or \(u\sim\tfrac{1}{d}\). 2. \(W<1<U\), or \(\tfrac{1}{d}<u<\tfrac{1}{w}\): in this regime, the visibility resolves the ring (at least one null), but not yet its width. 3. \(1<W<U\), or \(\tfrac{1}{d}<\tfrac{1}{w}<u\): in this regime, the ring has been fully resolved out. This applies to both thick and thin rings, but for thin rings only, a qualitatively new behavior can emerge in the second regime. As usual, the reason is that if a system has a small dimensionless parameter, then it ought to exhibit a large separation of scales (and vice versa). Here, if \(0<t\ll\tfrac{1}{2}\), then it is possible to simultaneously have \(1\ll U\) and \(tU\ll 1\), opening up a new regime \(W\ll 1\ll U\), or \(\tfrac{1}{d}\ll u\ll\tfrac{1}{w}\): the smallness of \(t=\tfrac{W}{U}\) is equivalent to the large separation of scales needed to open up this new regime. This is the universal regime. It is universal because in it we may approximate \(W\approx 0\) and hence set \(t\approx 0\) while still keeping \(U\gg 1\), which means that it is possible to stay in this regime while forgetting about the width--and radial profile--of the ring. Therefore, the visibility of any thin ring in this regime must tend to \(V_{\delta}(u)\). ### Example: Gaussian ring We now explore how this works in the context of the Gaussian ring of width \(w\) and diameter \(d=2r\), with the visibility (14) or \[V(U)=J_{0}(\pi U)e^{-2\pi^{2}W^{2}}=J_{0}(\pi U)e^{-2t^{2}\pi^{2}U^{2}}. \tag{104}\] In regime 1, both \(U\) and \(W=tU\) are small, so we may expand in \(U\ll 1\) to find the second-order approximation \[V(U)\approx 1-\left(\frac{1}{4}+2t^{2}\right)\pi^{2}U^{2}+\left(\frac{1}{64}+\frac{t^{2}}{2}+2t^{4}\right)\pi^{4}U^{4}, \tag{105}\] which we expect to be valid for \(U<1\) and any \(t\). In Fig. 6, we confirm this by plotting the exact \(|V(U)|\) (blue) against its leading (orange) and subleading (green) approximations for \(t=5\%\), finding good agreement in the expected range \(U\ll 1\).
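As a sanity check on this expansion (in particular the quartic coefficient), a minimal sketch with illustrative values of \(t\) and \(U\):

```python
# Compare the exact Gaussian-ring visibility V(U) = J0(pi*U) e^{-2 t^2 pi^2 U^2}
# with its quartic truncation, Eq. (105); agreement should degrade as U -> 1.
import numpy as np
from scipy.special import j0

t = 0.05
for U in (0.1, 0.3, 0.5, 1.0):
    exact = j0(np.pi * U) * np.exp(-2 * t**2 * np.pi**2 * U**2)
    series = (1
              - (1 / 4 + 2 * t**2) * np.pi**2 * U**2
              + (1 / 64 + t**2 / 2 + 2 * t**4) * np.pi**4 * U**4)
    print(f"U = {U:3.1f}:  exact = {exact:+.6f},  series = {series:+.6f}")
```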
In regime 3, both \(U\) and \(W=tU\) are large, so we may expand in \(U\gg 1\) to obtain the leading approximation \[V(U)\approx\frac{\cos\pi U+\sin\pi U}{\pi\sqrt{U}}e^{-2t^{2}\pi^{2}U^{2}}, \tag{106}\] which we plot in Fig. 6 (orange) against the exact \(|V(U)|\) (blue) for a ring of thickness \(t=20\%\), again finding excellent agreement in the expected regime \(U\gg 1\). We note that the approximations obtained in these regimes are not in any sense universal: in particular, they depend on the thickness \(t\) of the ring and would differ for a non-Gaussian radial profile. Finally, in regime 2, \(U\) is large but \(W=tU\) is small, so we cannot expand in \(U\). We have no recourse but to expand in \(t\ll 1\) to find \[V(U)\approx J_{0}(\pi U)\left[1-2t^{2}\pi^{2}U^{2}+\ldots\right], \tag{20}\] which is only supposed to be a valid approximation for \[1<U\lesssim\frac{1}{2\pi t}. \tag{21}\] Thus we only expect this approximation to be good as \(t\to 0\), in which case it holds over a very large range and forgets about the width (radial profile) of the ring to take the universal form \[V(U)\approx J_{0}(\pi U)=V_{\delta}(U). \tag{22}\] In Fig. 6, we confirm this by plotting the exact visibility \(|V(U)|\) (blue) against its leading (orange) and subleading (green) approximations (20), first for a relatively thicker ring with \(t=8\%\) and next for a thinner ring with \(t=2\%\). Remarkably, we find that the agreement is good up to \(U\lesssim 2\) in the first case, and up to \(U\lesssim 8\) in the second case, exactly as predicted by (21). At the lower end, we find that for the Gaussian ring, the universal formula applies all the way down to \(U=0\), but this is merely a coincidence: this would not be true of the Lorentzian ring, for instance, whose visibility diverges at the origin.
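The range of validity (21) can also be probed numerically; in this minimal sketch, the 5% absolute-error threshold is an arbitrary choice of ours, and the estimated breakdown scales should only roughly track \(\frac{1}{2\pi t}\):

```python
# Estimate where the universal approximation V(U) ~ J0(pi*U) breaks down
# for Gaussian rings of thickness t (the 5% error threshold is arbitrary).
import numpy as np
from scipy.special import j0

U = np.linspace(0.01, 20.0, 4001)
for t in (0.02, 0.08):
    V = j0(np.pi * U) * np.exp(-2 * t**2 * np.pi**2 * U**2)
    err = np.abs(V - j0(np.pi * U))
    U_break = U[np.argmax(err > 0.05)]   # first U exceeding the threshold
    print(f"t = {t:.2f}: breakdown near U ~ {U_break:.1f} "
          f"(prediction 1/(2 pi t) = {1 / (2 * np.pi * t):.1f})")
```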
2310.11031
Domain Generalization Using Large Pretrained Models with Mixture-of-Adapters
Learning a robust vision model despite large distribution shifts is essential for model deployment in real-world settings. In particular, domain generalization (DG) algorithms aim to maintain the performance of a trained model on different distributions which were not seen during training. One of the most effective methods has been leveraging the already learned rich knowledge of large pretrained models. However, naively fine-tuning large models for DG tasks is often practically infeasible due to memory limitations, extensive time requirements for training, and the risk of learned knowledge deterioration. Recently, parameter-efficient fine-tuning (PEFT) methods have been proposed to reduce the high computational cost during training and efficiently adapt large models to downstream tasks. In this work, for the first time, we find that the use of adapters in PEFT methods not only reduces the high computational cost during training but also serves as an effective regularizer for DG tasks. Surprisingly, a naive adapter implementation for large models achieves superior performance on common datasets. However, in situations of large distribution shifts, additional factors, such as the optimal amount of regularization given the strength of the distribution shift, should be considered for a sophisticated adapter implementation. To address this, we propose a mixture-of-expert based adapter fine-tuning method, dubbed mixture-of-adapters (MoA). Specifically, we employ multiple adapters that have varying capacities, and by using learnable routers, we allocate each token to a proper adapter. By using both PEFT and MoA methods, we effectively alleviate the performance deterioration caused by distribution shifts and achieve state-of-the-art performance on diverse DG benchmarks.
Gyuseong Lee, Wooseok Jang, Jin Hyeon Kim, Jaewoo Jung, Seungryong Kim
2023-10-17T07:01:24Z
http://arxiv.org/abs/2310.11031v1
# Domain Generalization Using Large Pretrained Models with Mixture-of-Adapters ###### Abstract Learning a robust vision model despite large distribution shifts is essential for model deployment in real-world settings. In particular, domain generalization (DG) algorithms aim to maintain the performance of a trained model on different distributions which were not seen during training. One of the most effective methods has been leveraging the already learned rich knowledge of large pretrained models. However, naively fine-tuning large models for DG tasks is often practically infeasible due to memory limitations, extensive time requirements for training, and the risk of learned knowledge deterioration. Recently, parameter-efficient fine-tuning (PEFT) methods have been proposed to reduce the high computational cost during training and efficiently adapt large models to downstream tasks. In this work, for the first time, we find that the use of adapters in PEFT methods not only reduces the high computational cost during training but also serves as an effective regularizer for DG tasks. Surprisingly, a naive adapter implementation for large models achieves superior performance on common datasets. However, in situations of large distribution shifts, additional factors, such as the optimal amount of regularization given the strength of the distribution shift, should be considered for a sophisticated adapter implementation. To address this, we propose a mixture-of-expert based adapter fine-tuning method, dubbed mixture-of-adapters (MoA). Specifically, we employ multiple adapters that have varying capacities, and by using learnable routers, we allocate each token to a proper adapter. By using both PEFT and MoA methods, we effectively alleviate the performance deterioration caused by distribution shifts and achieve state-of-the-art performance on diverse DG benchmarks. ## 1 Introduction The goal of domain generalization (DG) is to predict well on domains that were unavailable during training (a.k.a. unseen domains) (Gulrajani and Lopez-Paz, 2020). In DG settings, the model is trained on multiple source domains and evaluated on one unseen target domain (Zhou et al., 2021; Gulrajani and Lopez-Paz, 2020; Cha et al., 2021). Unlike the domain adaptation setting, domain generalization is unable to access any information about the target domain. Therefore, DG algorithms should fully exploit the domain invariant features underlying the source domains to predict well on the target domain (Seo et al., 2020; Gulrajani and Lopez-Paz, 2020; Arjovsky et al., 2019). In recent times, the usage of large pretrained models has been gaining popularity in the domain generalization field (Cha et al., 2022; Mao et al., 2022; Lew et al., 2023; Li et al., 2023). Since large pretrained models already possess some extent of domain invariant knowledge (Cha et al., 2022), exploiting this knowledge for domain generalization has become a popular choice. A few studies have tried to train these models directly with empirical risk minimization (ERM) (Vapnik, 1998). Angarano et al. (2022) shows that the ERM algorithm performs competitively well when accompanied with proper backbones like EfficientNet (Tan and Le, 2019), ViT (Dosovitskiy et al., 2020), DeiT (Touvron et al., 2021), LeViT (Graham et al., 2021) and ConViT (d'Ascoli et al., 2021). This inspired us to leverage large pretrained models in DG settings.
However, it is widely known that naively fine-tuning a large model is impractical: not only does it demand large VRAM due to the high peak memory in the training phase, but it also requires a significant amount of training time due to the enormous number of parameters. Additionally, the pretrained feature extractor can be distorted by overfitting on source domains (Gao et al., 2021) and its contextual information can be harmed (Kumar et al., 2022; Mao et al., 2022; Wortsman et al., 2022), which overall results in a generalization failure on out-of-distribution data. Fig. 1 compares the accuracy of various fine-tuning methods, including naive full fine-tuning, linear probing, and other partial fine-tuning methods, which also highlights their incapability to handle such distribution shifts. In this paper, for the first time, we propose to adopt parameter-efficient fine-tuning (PEFT) methods in the context of domain generalization, expanding their usage beyond the traditional domain of transfer learning. PEFT methods (Houlsby et al., 2019; Karimi Mahabadi et al., 2021; He et al., 2022; Zaken et al., 2021; Hu et al., 2021; Ryu, 2023; Jia et al., 2022; Chen et al., 2022; Li & Liang, 2021; Sung et al., 2022) mitigate the high cost of full fine-tuning of large pretrained models and aim to reach or exceed the performance of full fine-tuning or zero-shot performance by tuning only some parts of the model (Zaken et al., 2021) or by employing a small number of external learnable parameters (Hu et al., 2021; Ryu, 2023; Houlsby et al., 2019; Karimi Mahabadi et al., 2021; He et al., 2022; Jia et al., 2022). We compare various aspects of different trainable parameter settings (e.g., full fine-tuning and adapter fine-tuning) and empirically verify that PEFT methods can act as an effective regularization during the training process, showing that they can largely outperform a fully fine-tuned model and reach comparable performance with recent state-of-the-art methods (Cha et al., 2021, 2022; Arpit et al., 2022; Li et al., 2023) in DG settings. We also discover that the optimal strength of PEFT regularization differs depending on the amount of distribution shift. As shown in Fig. 1, linear probing (Linear), bias tuning in the attention layer (Bias (MSA)), and bias tuning in both the attention and MLP layers (Bias (MSA+MLP)) show comparable performance on the PACS (Li et al., 2017) and VLCS (Fang et al., 2013) datasets. However, on datasets with large distribution shifts, such as TerraIncognita (Beery et al., 2018), Bias (MSA+MLP) shows a significant improvement compared to the Bias (MSA) and Linear methods. This shows that determining the proper amount of regularization by adjusting the trainable parameters is crucial to deal with different distribution shifts. To handle this, we introduce a mixture-of-expert based adapter architecture, called mixture-of-adapters (MoA), that adequately manipulates the magnitude of regularization by employing adapters that have different capacities and routing each token to the proper adapter. By doing so, we further improve the performance on DG tasks, employing MoA and learnable routers to handle the different intensities of distribution shift among various datasets, and achieve state-of-the-art performance on the DG benchmark. ## 2 Related Work Domain generalization. For the past decade, numerous learning methods on how to learn domain invariant representations have been proposed in the domain generalization field.
Empirical risk minimization (ERM) (Vapnik, 1998), one of the simplest approaches, just minimizes the loss on each domain and trains the model. In DomainBed (Gulrajani and Lopez-Paz, 2020), ERM still remains effective within the benchmark's restricted hyperparameter search space and model selection methods. DomainBed tested many DG methods such as IRM (Arjovsky et al., 2019), GroupDRO (Sagawa et al., 2019), MixUp (Xu et al., 2020), DANN (Ganin et al., 2016), and CORAL (Sun and Saenko, 2016) in a unified and contained experimental setup. SWAD (Cha et al., 2021) explored the relationship between flat loss surfaces and DG performance, achieving superior results. Ensemble-of-Averages (EoA) (Arpit et al., 2022) performs model averaging at training time and ensembles the averaged models at test time. Current research is focused on leveraging knowledge from large pretrained models, with MIRO (Cha et al., 2022) introducing an oracle model approximated by a large pretrained model to maximize the mutual information between the oracle and the target model. Recently, methods that ensemble diverse models show remarkable performance on the DG benchmark. SIMPLE (Li et al., 2023) utilizes many different pretrained models from a ModelPool, extracts outputs from the frozen pretrained models, and trains a shallow dispatcher using these outputs.

Figure 1: Results on the domain generalization benchmark with varying trainable parameters in ViT-Base (Dosovitskiy et al., 2020) pretrained with CLIP of OpenAI (Radford et al., 2021). The y-axis is accuracy. We use linear probing (denoted as Linear), bias tuning in the attention layer (Bias (MSA)), bias tuning in both attention and MLP layers (Bias (MSA+MLP)), and full fine-tuning to show the accuracy change according to the trainable parameter change when using PEFT methods with large models. OH, TI, DN denote OfficeHome (Venkateswara et al., 2017), TerraIncognita (Beery et al., 2018), and DomainNet (Peng et al., 2019), respectively.

Parameter efficient fine-tuning. Leveraging large pretrained models for specific tasks involves fine-tuning, but fine-tuning all parameters is impractical. Recent approaches such as parameter-efficient fine-tuning (PEFT) (Houlsby et al., 2019; He et al., 2021; Paul et al., 2022) focus on efficient fine-tuning by freezing most of the model parameters and optimizing only a few for a given task. Many successful PEFT approaches (Zaken et al., 2021; Hu et al., 2021; Ryu, 2023; Gao et al., 2020; He et al., 2021; Jia et al., 2022; Lester et al., 2021; Sung et al., 2022; Zhang et al., 2022; 2023) adapt popular pretrained models to various downstream tasks. Among these methods, adapters (Hu et al., 2021; Karimi Mahabadi et al., 2021; He et al., 2022) are widely adopted because of their high performance and efficient computation cost. Adapters are small modules trained on specific tasks which are inserted between network layers, where all the layers except for the adapters are frozen during training. LoRA (Hu et al., 2021), Compacter (Karimi Mahabadi et al., 2021) and KAdaptation (He et al., 2022) greatly reduced the number of trainable parameters by attaching low-rank or hypercomplex adapter layers to the transformer model and by decomposing the updated weight matrix into low-rank matrices, respectively.
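As a concrete reference point for the adapter methods above, the following is a minimal, framework-free sketch (our own illustration, not any paper's implementation; the dimensions and rank are assumed) of a LoRA-style low-rank update and its trainable parameter count:

```python
# Minimal LoRA-style sketch: a frozen weight W0 plus a trainable
# low-rank update B @ A. Dimensions and rank are illustrative.
import numpy as np

k, d, r = 768, 768, 8                    # output dim, input dim, inner rank
rng = np.random.default_rng(0)

W0 = rng.standard_normal((k, d))         # frozen pretrained weight
B = np.zeros((k, r))                     # trainable; zero init so dW = 0 at start
A = rng.standard_normal((r, d)) * 0.01   # trainable

def forward(x):
    # h = W0 x + B A x; only B and A would receive gradients.
    return W0 @ x + B @ (A @ x)

print(forward(rng.standard_normal(d)).shape)   # -> (768,)
full, lora = k * d, r * (k + d)
print(f"trainable: full={full:,}  low-rank={lora:,} ({lora / full:.2%})")
```

The same bookkeeping is what keeps adapter methods down to a few percent of the trainable parameters of a ViT-scale backbone.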
Mixture-of-Experts. Mixture-of-Experts (MoE) models were proposed to improve model performance by incorporating multiple subsets of parameters called 'experts' with routing algorithms conditioned on the input (Jacobs et al., 1991; Jordan and Jacobs, 1994; Eigen et al., 2013). Evolving from this, a type of model called sparse MoE has become popular in both NLP (Shazeer et al., 2017; Lepikhin et al., 2020; Zoph et al., 2022; Fedus et al., 2022; Du et al., 2022) and vision (Ahmed et al., 2016; Gross et al., 2017; Yang et al., 2019; Wang et al., 2020; Riquelme et al., 2021) tasks lately. This is due to its capability to enhance model capacity while avoiding a substantial increase in the computational resources required for training. In the field of domain generalization, Li et al. (2022) incorporated the MoE design into DG tasks and proposed a Generalizable Mixture-of-Experts (GMoE) architecture to effectively handle distribution shifts. ## 3 Preliminaries ### Domain generalization Let us denote the set of training domains as \(\mathcal{D}=\{\mathcal{D}^{i}\}_{i=1}^{K}\), where \(K\) denotes the total number of training domains and \(\mathcal{D}^{i}\) is a distribution over the input space. Also, define the set of target domains as \(\mathcal{T}=\{\mathcal{T}^{i}\}_{i=1}^{K^{\prime}}\), where \(K^{\prime}\) denotes the total number of target domains. The training dataset is composed of \(n_{\mathcal{D}^{i}}\) data points denoted as \((x_{j}^{i},y_{j}^{i})_{j=1}^{n_{\mathcal{D}^{i}}}\sim\mathcal{D}^{i}\), where \(x\) is the input and \(y\) is the target label for each training domain. In DG settings, the goal is to find the model parameter \(\theta\) of a classifier \(f_{\theta}\) which generalizes well on both \(\mathcal{D}\) and \(\mathcal{T}\). To be more precise, for the ERM algorithm, we define \[\mathcal{E}_{\mathcal{D}}(\theta)=\frac{1}{K}\sum_{i=1}^{K}\mathbb{E}_{(x^{i},y^{i})\sim\mathcal{D}^{i}}[l(f_{\theta}(x^{i}),y^{i})] \tag{1}\] where \(l(\cdot,\cdot)\) denotes a loss function, and define \(\mathcal{E}_{\mathcal{T}}\) in a similar manner. We minimize the empirical risk \(\hat{\mathcal{E}}_{\mathcal{D}}(\theta)=\frac{1}{K}\sum_{i=1}^{K}\frac{1}{n_{\mathcal{D}^{i}}}\sum_{j=1}^{n_{\mathcal{D}^{i}}}[l(f_{\theta}(x_{j}^{i}),y_{j}^{i})]\) during training and expect the optimal parameter \(\hat{\theta}=\arg\min_{\theta}\hat{\mathcal{E}}_{\mathcal{D}}(\theta)\) to be optimal for \(\mathcal{E}_{\mathcal{T}}(\theta)\). ### Parameter efficient adapter Parameter-efficient fine-tuning (PEFT) efficiently adapts enormous pretrained models to a downstream task and achieves significant performance, unlike direct fine-tuning methods, which require a high cost in memory and computation. Fine-tuning of large models has previously been accomplished by partial fine-tuning methods (Touvron et al., 2022). However, recent works (Houlsby et al., 2019; Hu et al., 2021; Karimi Mahabadi et al., 2021) reveal that employing a learnable layer alongside the frozen pretrained weights can surpass the performance of the partial fine-tuning method. More specifically, Aghajanyan et al. (2020) verifies that dense neural network layers with full-rank matrices can be reduced to lower-rank subspaces. From this finding, Hu et al.
(2021) constrains the updated weight \(\Delta\mathbf{W}=\mathbf{B}\mathbf{A}\in\mathbb{R}^{k\times d}\) to have a low intrinsic rank, which can be expressed as \(\mathbf{h}=\mathbf{W_{0}}\mathbf{x}+\Delta\mathbf{W}\mathbf{x}=\mathbf{W_{0}}\mathbf{x}+\mathbf{B}\mathbf{A}\mathbf{x}\), where \(\mathbf{x}\) denotes the input sequences of each module, \(\mathbf{W_{0}}\in\mathbb{R}^{k\times d}\) denotes the frozen weight matrix of a pretrained model, and \(\mathbf{B}\in\mathbb{R}^{k\times r}\) and \(\mathbf{A}\in\mathbb{R}^{r\times d}\) (\(r<k,d\)) are the trainable parameters during training. In addition, KAdaptation (He et al., 2022) and Compacter (Karimi Mahabadi et al., 2021) both exploit Kronecker products to decompose the weight matrices and reduce trainable parameters. The Kronecker product between a matrix \(\mathbf{A}\in\mathbb{R}^{m\times n}\) and \(\mathbf{B}\in\mathbb{R}^{p\times q}\) is denoted by \(\mathbf{A}\otimes\mathbf{B}\in\mathbb{R}^{mp\times nq}\) and can be expressed as the following: \[\mathbf{A}\otimes\mathbf{B}=\begin{pmatrix}a_{11}\mathbf{B}&\dots&a_{1n}\mathbf{B}\\ \vdots&\ddots&\vdots\\ a_{m1}\mathbf{B}&\dots&a_{mn}\mathbf{B}\end{pmatrix} \tag{2}\] where \(a_{ij}\) indicates the element in the \(i\)-th row and the \(j\)-th column of \(\mathbf{A}\). It decomposes the update weight matrix \(\Delta\mathbf{W}=\sum_{i=1}^{t}\mathbf{A}_{i}\otimes\mathbf{B}_{i}\in\mathbb{R}^{k\times d}\), where \(t\) is a hyperparameter that decides the number of Kronecker products. Furthermore, \(\Delta\mathbf{W}\) can be expressed as the following: \(\Delta\mathbf{W}=\sum_{i=1}^{t}\mathbf{A}_{i}\otimes\mathbf{B}_{i}=\sum_{i=1}^{t}\mathbf{A}_{i}\otimes(\mathbf{u}_{i}\mathbf{v}_{i}^{\top})\), where the slow weights \(\mathbf{A}_{i}\in\mathbb{R}^{t\times t}\) are shared across all layers, and the fast weights \(\mathbf{B}_{i}\in\mathbb{R}^{\frac{k}{t}\times\frac{d}{t}}\) are decomposed into low-rank matrices \(\mathbf{u}_{i}\in\mathbb{R}^{\frac{k}{t}\times r}\) and \(\mathbf{v}_{i}\in\mathbb{R}^{r\times\frac{d}{t}}\) with \(i\in\{1,\dots,t\}\). Compacter decomposes the weight matrix of the additional adapter layer, whereas KAdaptation decomposes the update matrix \(\Delta\mathbf{W}\) in the original layer. ## 4 Analysis of Adapters for Domain Generalization In this section, we investigate how adapters affect loss landscape flatness by analyzing the maximum Hessian eigenvalue spectra and loss landscapes of various trained models. It is widely known that finding a flat minimum during optimization is closely related to the generalization performance and robustness of trained models, as documented in prior studies (Izmailov et al., 2018; Keskar et al., 2016; Garipov et al., 2018; Foret et al., 2020; Cha et al., 2021; Park and Kim, 2022). Therefore, by comparing loss landscapes along with maximum Hessian eigenvalues, we can anticipate which model will generalize better to unseen domains. We show the results of our analysis in the following sections.

Figure 2: Flatness comparison of loss surfaces from models trained with full fine-tuning, LoRA, KAdaptation, and KAdaptation with Mixture-of-Adapters (denoted as KMoA) on the PACS dataset (Li et al., 2017). All visualizations are computed from the test environment 0 (Art) domain.

### Loss Landscapes As shown in (Park & Kim, 2022; Garipov et al., 2018; Li et al., 2018), by randomly perturbing the trained weights along random direction vectors, we can obtain variations of the loss value, which can be used to predict the model's generalization ability.
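The following minimal sketch illustrates this standard perturbation procedure (our own code; `loss_fn` and `theta` are hypothetical stand-ins for a trained model's objective and flattened parameters, and the scale normalization is one common choice):

```python
# Minimal sketch of a 1D loss-landscape slice: perturb trained weights
# along a normalized random direction and record the loss at each step.
import numpy as np

def loss_slice(loss_fn, theta, alphas, seed=0):
    rng = np.random.default_rng(seed)
    d = rng.standard_normal(theta.shape)
    d *= np.linalg.norm(theta) / np.linalg.norm(d)   # match parameter scale
    return [loss_fn(theta + a * d) for a in alphas]

# Toy quadratic loss standing in for the real objective:
theta = np.ones(10)
vals = loss_slice(lambda p: float(np.sum(p**2)), theta, np.linspace(-1, 1, 5))
print(vals)  # flatter curves around alpha = 0 suggest better generalization
```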
We evaluate on the PACS dataset (Li et al., 2017), one of the datasets in the DG benchmark (Gulrajani & Lopez-Paz, 2020). As shown in Fig. 2, compared to the case of full fine-tuning (Fig. 2a), where we trained all of the model parameters, the model parameters tuned with LoRA (Hu et al., 2021; Ryu, 2023) (Fig. 2b) and KAdaptation (He et al., 2022) (Fig. 2c) display a flatter loss surface. Additionally, KAdaptation shows a flatter loss surface than LoRA. We show further visualizations of the loss landscape on additional test environments on PACS in Appendix A.1. ### Maximum Hessian Eigenvalue Spectra Drawing loss landscapes from the optimal point along random directions sometimes fails to fully represent the shape of loss surfaces due to the high dimensionality of the loss surface (Garipov et al., 2018; Li et al., 2018). Therefore, following Park & Kim (2022), we calculate the top-5 Hessian eigenvalues and show their spectra. Analogous to the loss landscape results in the previous section, we observe the same phenomena with the use of adapters, namely LoRA and KAdaptation. As depicted in Fig. 3, the top-5 Hessian eigenvalues are more concentrated around zero than those of the fully fine-tuned model. This discrepancy implies that employing adapters with large pretrained models results in a flatter loss surface around the optimal point. This is because the Hessian matrix serves as an indicator of the curvature of the loss surface. Since the flatness of loss surfaces has a substantial impact on model performance in DG tasks (Cha et al., 2021), it can be concluded that adapter fine-tuning with large pretrained models effectively facilitates performance in domain generalization. We show visualizations of the maximum Hessian eigenvalue spectra on additional test environments on the PACS dataset in Appendix A.2.

[Table 1: Comparison with state-of-the-art DG methods (columns: Algorithm, Architecture, Pretraining, PACS, VLCS, OfficeHome, TerraInc., DomainNet, Avg., #Param., Trainable #Param.); the table body, covering MIRO, SMA, and related baselines, did not survive extraction and is omitted here.]

### Parameter-Efficient Adapter for Domain Generalization Previously, we demonstrated that models trained with parameter-efficient fine-tuning methods have the potential to reach a more generalizable optimization point. Nevertheless, it is challenging to come to a conclusive decision that a flatter loss surface directly translates to better generalization performance. Consequently, in the following section, we conduct a comprehensive evaluation of various fine-tuning methods and make comparisons between them. Finally, we discuss the most effective practical fine-tuning approach and explore strategies to further enhance performance. In Tab. 1 we show results from various papers on five datasets, namely PACS (Li et al., 2017), VLCS (Fang et al., 2013), OfficeHome (Venkateswara et al., 2017), TerraIncognita (Beery et al., 2018), and DomainNet (Peng et al., 2019), and based on the observations above, we evaluate ERM with various PEFT adapter methods in the standard DG benchmark from Gulrajani and Lopez-Paz (2020). Some works like Cha et al. (2022) report the results of fully fine-tuning a ViT-B/16 model with an ERM algorithm. However, just by training a small adapter layer or an attention layer, we observe a greater performance improvement.
Compared to the methods that use additional regularization or ensembling, our parameter-efficient training approach with the naive ERM algorithm reaches comparable accuracy, and even state-of-the-art accuracy on the OfficeHome dataset. Additionally, the KAdaptation method achieves the best average accuracy of all the methods. These results validate our observations in Sec. 4.1 and 4.2, demonstrating that a proper selection of an adapter method can boost the performance adequately. ## 5 Mixture-of-Adapters (MoA) In this section we show the effectiveness of our proposed method, mixture-of-adapters, which can tune large models at low cost and achieve great performance on DG tasks. As revealed in Li et al. (2022), mixture-of-expert model architectures with the use of cosine routing are more effective on DG tasks. Experts also handle some parts of the visual attributes, thereby reducing incorrect token allocation caused by intra-domain similarities. We bring this design to our adapter-based method, which maintains the computational efficiency and achieves better results in domain generalization. As demonstrated in Sec. 1, we utilize adapters to handle various amounts of distribution shift among the domains. Specifically, adapters with distinct capacities are employed at every MoA layer; the top-k adapter outputs are selected, weighted, and then integrated with the original layer's outputs. We manipulate the capacity of each adapter by adjusting its inner rank. The mixture-of-experts (MoE) layer utilizing the cosine router \(\mathrm{G}(\mathbf{x})\) with embedding \(\mathbf{E}\) and adapter \(\mathrm{A}_{r_{i}}\) with inner rank \(r_{i}\) can be denoted as: \[\begin{split} f_{\mathrm{MoE}}(\mathbf{x})&=\sum_{i=1}^{N}\mathrm{G}(\mathbf{x})_{i}\mathrm{A}_{r_{i}}(\mathbf{x})\\ &=\sum_{i=1}^{N}\mathrm{TOP}_{k}\left(\texttt{Softmax}\left(\frac{\mathbf{E}^{\mathsf{T}}\mathbf{W}\mathbf{x}}{\tau\|\mathbf{W}\mathbf{x}\|\|\mathbf{E}\|}\right)\right)_{i}\mathrm{A}_{r_{i}}(\mathbf{x})\end{split} \tag{3}\] where \(\mathrm{A}_{r_{i}}(\mathbf{x})\) is the output of an adapter with a different inner rank \(r_{i}\) and \(\tau\) is a learnable temperature term of the softmax layer. In detail, we incorporate adapters into the large pretrained model by attaching them to the attention sub-modules. Following the implementation of traditional mixture-of-experts (MoE) methods, routers are adopted to dispatch tokens to the appropriate expert adapters.

Figure 4: Architecture of the proposed Mixture-of-Adapters (MoA). \(\mathbf{W}_{0}\), \(\mathbf{x}_{\mathrm{in}}\), and \(\mathbf{x}_{\mathrm{out}}\) denote the original pretrained weight and the input and output tokens in multi-head self-attention (MHSA). The Adapter in Fig. 4(a) can be any adapter-based PEFT method, such as LoRA, Compacter, or KAdaptation, and the Router in Fig. 4(b) can be a linear or cosine router, as commonly used in mixture-of-experts methods.

An adapter can be of any form, such as LoRA (Hu et al., 2021), one of the most popular methods in PEFT, Compacter (Karimi Mahabadi et al., 2021), and KAdaptation (He et al., 2022). In the case of KAdaptation, we choose the inner rank \(r_{i}\) as the additional decomposition of \(\mathbf{B}_{i}\). A visualization of our architecture is shown in Fig. 4.
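To make the routing in Eq. (3) concrete, here is a minimal sketch for a single token (our own illustration, not the authors' implementation; the dimensions, ranks, temperature, and the per-expert normalization of \(\mathbf{E}\) are all assumptions):

```python
# Minimal sketch of one MoA layer, Eq. (3): N low-rank adapters with
# different inner ranks, a cosine router, and top-k dispatch per token.
# All hyperparameters below are illustrative assumptions.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

dim, ranks, k_top, tau = 64, [2, 4, 8, 16], 2, 0.1
rng = np.random.default_rng(0)
adapters = [(rng.standard_normal((dim, r)) * 0.01,   # B_i
             rng.standard_normal((r, dim)) * 0.01)   # A_i
            for r in ranks]
E = rng.standard_normal((dim, len(ranks)))           # expert embeddings
W = rng.standard_normal((dim, dim)) * 0.1            # router projection

def moa(x):
    h = W @ x
    # Cosine-similarity logits; E is normalized column-wise (an assumption).
    logits = (E.T @ h) / (tau * np.linalg.norm(h) * np.linalg.norm(E, axis=0))
    gates = softmax(logits)
    out = np.zeros(dim)
    for i in np.argsort(gates)[-k_top:]:             # TOP-k expert indices
        B, A = adapters[i]
        out += gates[i] * (B @ (A @ x))              # G(x)_i * A_{r_i}(x)
    return out  # residual branch, added to the frozen attention output

print(moa(rng.standard_normal(dim)).shape)           # -> (64,)
```

In a full model this residual would be added token-wise to the frozen attention output, with the routers and adapters being the only trainable parameters.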
## 6 Experiments Experimental details. We use the standard benchmark DomainBed (Gulrajani and Lopez-Paz, 2020) for training and evaluating the performance on the domain generalization task. Following this, we use fixed hyperparameters within the same backbone model for all the experiments. We train five types of models, namely full fine-tuning, attention tuning, LoRA (Hu et al., 2021), KAdaptation (He et al., 2022), and Compacter (Karimi Mahabadi et al., 2021). We employ a CLIP (Radford et al., 2021) trained ViT (Dosovitskiy et al., 2020) as our initialization model because CLIP carries strong zero-shot ability, which can be used to obtain favorable generalization performance. In detail, we use OpenCLIP (Ilharco et al., 2021), an open-source re-implementation of the CLIP model trained on the LAION-2B dataset (Schuhmann et al., 2022), for all our experiments. We describe additional implementation details in Appendix B.1. Evaluation protocols and datasets. For a fair comparison, we employ the DomainBed evaluation protocols (Cha et al., 2021; Gulrajani and Lopez-Paz, 2020). The following five benchmark datasets are used: PACS (Li et al., 2017), VLCS (Fang et al., 2013), OfficeHome (Venkateswara et al., 2017), TerraIncognita (Beery et al., 2018), and DomainNet (Peng et al., 2019). Using _leave-one-out cross-validation_, all performance scores are evaluated by averaging all the cases that use a single domain as the target domain and the others as the source domains. The experiment is repeated three times, and 20% of the source domain data is left out for validation purposes. Lastly, the model selection (training-domain validation) and hyperparameter search procedures follow DomainBed (Gulrajani and Lopez-Paz, 2020). We perform three runs with different random seeds for each setting and report their mean and standard deviation to show the training randomness. In ablation studies, we keep all the random seeds fixed and conduct the experiment. ### Results for mixture-of-adapters As described in Sec. 5, we implement Mixture-of-Adapters (MoA) with LoRA (Hu et al., 2021) and KAdaptation, and report the benchmark results in Table 2. In addition to the performance gains from applying PEFT, the results after employing MoA increase consistently on all datasets. In the case of LoRA-MoA, the average performance is increased by 1.2pp compared to the results when using only LoRA. Also, the KAdaptation-MoA method achieves state-of-the-art results on the VLCS, OfficeHome, and DomainNet datasets, exceeding the result of the original KAdaptation by 0.2pp on average, validating the analyses about the loss surface in Fig. 2. Additionally, following Arpit et al. (2022), by ensembling the three weights obtained from different seeds, we further enhance the performance by 0.6pp on average, achieving the best results on VLCS, OfficeHome, and DomainNet.

[Table 2: DG benchmark results (columns: Algorithm, Architecture, Pretraining, PACS, VLCS, OfficeHome, TerraInc., DomainNet, Avg., #Param., Trainable #Param.), grouped into baselines (ERM, MIRO, ERM+SWAD, MIRO+SWAD, SMA), methods with parameter-efficient fine-tuning (PEFT), and methods with mixture-of-adapters; the numeric entries did not survive extraction and are omitted here.]

Analysis on the router and experts. To understand why using multiple adapters with a router improves performance, we conduct experiments on what the router sees and what each expert learns. To investigate the role of an adapter, we monitor the routing path of each token and identify the expert to which it is directed. This visualization is performed on the TerraIncognita dataset. In Fig. 5, the image is divided into patches and each patch is assigned a number, where the number corresponds to the routed expert. We also reveal that the router tends to cluster tokens in areas with semantic information (e.g., object foreground or object outlines). As an example, consider the dog depicted in Fig. 5(a) and the cat in Fig. 5(b), which exhibit varying positions between the first and second rows. Images sharing the same location have consistent backgrounds, resulting in shared expert routing for background tokens. However, objects and their positions can differ, causing object-related tokens to be routed to different experts.
In conclusion, routing tokens to their respective adapters greatly enhances the model's ability to capture semantic information in images, thereby enabling the model to effectively navigate challenging distribution shift scenarios in domain generalization tasks. We show more visualizations of routed patches in Appendix A.3. ### Ablation Study Effect of fine-tuning in pretrained models. We observe that using a model fine-tuned on a smaller dataset shows degraded performance. Specifically, the LAION-2B (Schuhmann et al., 2022) pretrained CLIP-ViT model almost consistently outperforms the LAION-2B pretrained, ImageNet (Deng et al., 2009) fine-tuned CLIP-ViT model across all the adapter methods in terms of accuracy, except on the TerraIncognita dataset, as illustrated in Fig. 6. This can be attributed to the superior generalization capabilities of larger models and the adapter's capacity to preserve the knowledge of the pretrained model. These findings align with previous literature, such as Kumar et al. (2022), which suggests that fine-tuning entire large models can degrade the learned representations. Ablation study on routing strategy in Mixture-of-Adapters. We conduct an ablation study on three components proposed in Li et al. (2022), which are the cosine router, the auxiliary loss \(\mathcal{L}_{\mathrm{aux}}\), and the layer configuration for the location of the MoE layer. Li et al. (2022) demonstrates that employing a cosine router in their Generalizable Mixture-of-Experts (GMoE) architecture yields better performance when contrasted with employing a linear router.

Figure 5: Visualizations of routed indices of each patch in the TerraIncognita (Beery et al., 2018) dataset. The left column is the original image, and in the right column we indicate where each patch is routed. The upper and lower images were taken at the same location but at different times; therefore, they have the same background but different object (dog, cat) shapes and locations.

Our approach significantly deviates from theirs, as each of our expert adapters exhibits distinct capacities, and our MoA is attached to an attention layer. Thus we test both the linear and cosine routers. The auxiliary loss function (\(\mathcal{L}_{\mathrm{aux}}\)) balances the amount of token allocation to each expert, and the layer configuration called 'Every 2' or 'Last 2' in their work denotes attaching the MoE layer every two layers or only on the last two layers. Therefore, we also conduct experiments on these settings and report the whole results in Table 3. In our setting, employing a cosine router showed a consistent performance increase for all domains. Also, attaching an adapter every two layers brought better performance than using MoA only on the last two layers. While there was a minimal difference between using and not using \(\mathcal{L}_{\mathrm{aux}}\) (within the error margin), using the auxiliary loss yielded better results for four datasets (PACS, VLCS, OfficeHome, DomainNet). Thus we decided to incorporate the auxiliary loss \(\mathcal{L}_{\mathrm{aux}}\) when conducting our main experiment.

| Adapter | Changed component | PACS | VLCS | OfficeHome | TerraInc. | DomainNet | Avg. | #Param. | Trainable #Param. |
|---|---|---|---|---|---|---|---|---|---|
| KAdaptation-MoA | Original | **97.5** | **82.8** | **90.6** | 53.1 | **62.6** | 77.3 | 87.3M | 1.5M |
| KAdaptation-MoA | w/o \(\mathcal{L}_{\mathrm{aux}}\) | 97.3 | 82.6 | 90.5 | **54.0** | 62.6 | 77.4 | 87.3M | 1.5M |
| KAdaptation-MoA | Cosine→Linear | 97.5 | 82.3 | 90.3 | 51.5 | 62.6 | 76.9 | 86.1M | 0.33M |
| KAdaptation-MoA | Every→Last | 97.2 | 82.2 | 90.2 | 47.5 | 62.4 | 75.9 | 86.3M | 0.51M |

Table 3: Performance comparison on different mixture-of-adapter settings. We perform experiments dropping each component independently from our best setting (the first row, Original). 'w/o \(\mathcal{L}_{\mathrm{aux}}\)' denotes the results without the auxiliary loss used in (Li et al., 2022), 'Cosine→Linear' denotes the result when we changed the cosine router to a linear router, and 'Every→Last' denotes the result when we changed the adapter-attached layers from Every, which attaches adapters in every two layers, to Last, which attaches adapters in the last two layers.

Figure 6: Performance comparison between the original (not fine-tuned) CLIP and the ImageNet fine-tuned CLIP from timm (Wightman, 2019) (denoted as CLIP-FT) under different fine-tuning strategies. Full, Att., LoRA, and KA denote full fine-tuning, attention-only tuning (Touvron et al., 2022), LoRA, and KAdaptation, respectively.

## 7 Conclusion We have shown that using only parameter-efficient fine-tuning can outperform or be competitive with previous state-of-the-art domain generalization algorithms. Additionally, we propose integrating extra adapters with learnable routers to handle various distribution shift situations. This allows us to achieve state-of-the-art results compared to methods that do not utilize ensembling, and when we do employ ensembling, our results are on par with those obtained through ensemble-based approaches. In this work we have witnessed the remarkable effectiveness of parameter-efficient fine-tuning and the utilization of large models for domain generalization. We anticipate that these findings can serve as a source of inspiration for future research in the domain generalization field, especially encouraging the robust fine-tuning of large pretrained models.
2308.04110
Review of Contemporary Energy Harvesting Techniques and Their Feasibility in Wireless Geophones
Energy harvesting converts ambient energy to electrical energy providing numerous opportunities to realize wireless sensors. Seismic exploration is a prime avenue to benefit from it as energy harvesting equipped geophones would relieve the burden of cables which account for the biggest chunk of exploration cost and equipment weight. Since numerous energies are abundantly available in seismic fields, these can be harvested to power up geophones. However, due to the random and intermittent nature of the harvested energy, it is important that geophones must be equipped to tap from several energy sources for a stable operation. It may involve some initial installation cost but in the long run, it is cost-effective and beneficial as the sources for energy harvesting are available naturally. Extensive research has been carried out in recent years to harvest energies from various sources. However, there has not been a thorough investigation of utilizing these developments in the seismic context. In this survey, a comprehensive literature review is provided on the research progress in energy harvesting methods suitable for direct adaptation in geophones. Specifically, the focus is on small form factor energy harvesting circuits and systems capable of harvesting energy from wind, sun, vibrations, temperature difference, and radio frequencies. Furthermore, case studies are presented to assess the suitability of the studied energy harvesting methods. Finally, a design of energy harvesting equipped geophone is also proposed.
Naveed Iqbal, Mudassir Masood, Ali Nasir, Khurram Karim Qureshi
2023-08-08T07:48:18Z
http://arxiv.org/abs/2308.04110v1
# Review of Contemporary Energy Harvesting Techniques and their Feasibility in Wireless Geophones ###### Abstract Energy harvesting converts ambient energy to electrical energy providing numerous opportunities to realize wireless sensors. Seismic exploration is a prime avenue to benefit from it as energy harvesting equipped geophones would relieve the burden of cables which account for the biggest chunk of exploration cost and equipment weight. Since numerous energies are abundantly available in seismic fields, these can be harvested to power up geophones. However, due to the random and intermittent nature of the harvested energy, it is important that geophones must be equipped to tap from several energy sources for a stable operation. It may involve some initial installation cost but in the long run, it is cost-effective and beneficial as the sources for energy harvesting are available naturally. Extensive research has been carried out in recent years to harvest energies from various sources. However, there has not been a thorough investigation of utilizing these developments in the seismic context. In this survey, a comprehensive literature review is provided on the research progress in energy harvesting methods suitable for direct adaptation in geophones. Specifically, the focus is on small form factor energy harvesting circuits and systems capable of harvesting energy from wind, sun, vibrations, temperature difference, and radio frequencies. Furthermore, case studies are presented to assess the suitability of the studied energy harvesting methods. Finally, a design of energy harvesting equipped geophone is also proposed. ## 1 Introduction For decades, oil and gas companies have been relying on cable-based network architectures for transmitting data from geophones to the on-site data collection center. For seismic surveys, cables are accountable for almost \(50\%\) of the total cost and \(75\%\) of the total equipment weight [1]. Data is usually collected by a large number of geophones distributed over a region of more than \(20\) km\({}^{2}\) [2, 3]. There has recently been a growing interest in deploying wireless geophone networks for seismic acquisition, especially in large-scale land surveys [4, 5, 6]. The networks proposed in the aforementioned references consist of wireless geophones sending data to the data center directly or via gateways. However, these studies ignore the fact that connecting wires also supply power to geophones. Hence, removing the wires means that each geophone needs to be equipped with an external power supply. A commercial product available in the market [7] has a cable-less \(3\)-component geophone that weighs \(2.77\) lbs whereas its battery weighs \(2.4\) lbs. This means that \(75\%\) of the weight has been cut off by using cable-free geophones; however, the battery adds back \(86\%\) of the geophone's weight. Hence, batteries also account for a substantial proportion of the overall weight and size of seismic acquisition systems, negating the benefit of going cable-free. This proportion is expected to increase more as the technology scales down. More importantly, batteries must always be recharged/replaced, and ultimately disposed of. This is a serious limitation to acquisition paradigms in which dozens or hundreds of battery-powered geophones are to be maintained. Replacement of batteries can be cumbersome and time-consuming, which may affect the seismic acquisition process. In addition, batteries can hinder the scalability of geophone networks.
The main impediments to geophone advancements are the battery's limited energy capacity and erratic lifetime efficiency. According to Moore's Law, transistor counts double every one to two years [8]. However, the power density and life span of batteries are limited, and battery technology has evolved very slowly (see Fig. 1). Wireless geophones make it necessary to have a provision for self-powered operation. One of the most significant trends in electronic equipment technology since its inception has been the diminution in size and the boost in functionality. These days, small yet very powerful devices with wireless communication functionalities are commercially available. Over the past few decades, the size of the electronic circuit and the energy required for a single (binary) operation have been dramatically reduced. According to Moore's law, integrated circuit technology evolves following a transistor size shrinking trend. Along with this trend, the supply voltage is also reduced due to reliability reasons. The ultimate result is a decrease in energy consumption owing to the size reduction of parasitic components.

Figure 1: Improvements in portable computing between 1990 and 2010. Wireless connectivity only takes into account the IEEE 802.11 standard released in 1997 (courtesy of [9]).

For a reduction of the scale by a factor \(\alpha\) (\(\alpha>1\)), the energy consumed by a particular shrunk circuit performing a specific task is decreased by \(1/\alpha^{3}\) [10]. Advances in low-power design, therefore, open up the possibility of using energy from the environment to power electronic circuits. Hence, to meet the energy needs of wireless geophone systems, new sources of long-lasting and regenerative power need to be developed. Energy harvesting is a very appealing choice for driving the geophones, as a node's lifetime would be limited only by the failure of its own components. Energy harvesting is a mechanism of deriving energy from natural sources. This usually involves extracting some residual energy which could be a by-product of an automated process or a natural phenomenon and is, therefore, considered free energy [11]. Using the energy available in seismic fields would make it possible for wireless geophones to be fully self-sustaining, so that battery maintenance will eventually be eliminated. In this context, this work presents approaches that could be used to harvest energy in seismic fields to power geophones. To the best of the authors' knowledge, this is the first work that addresses energy harvesting techniques with regard to geophones. The electrical energy for operating a geophone can be obtained by tapping the energy from the electromagnetic field (using radio frequency (RF)), vibrations, sunlight, wind, and temperature gradients. These various sources of energy are abundantly available in seismic fields and can be advantageous. Hence, the harvested energy can be used to power up a geophone directly and/or charge a small battery (or a supercapacitor, see [12]) connected to it. Various energy sources and the duration of their availability are highlighted in Table 1. It can be noticed here that energies obtained using RF and temperature gradients (thermal) are available all day, so even if there is no seismic recording, these energies are still available and can be used to recharge the geophone batteries. Wind energy depends on the speed of the wind but in general, it is also available all the time.
The huge vibroseis truck (used to produce the seismic waveform) generates a tremendous amount of vibration energy that can be harvested. Vibration energy is available during the seismic shooting phases only. Therefore, the batteries may continue to charge all the time using the available energy harvesting source(s), and the stored energy is then used for seismic recording and data transmission. Hence, the usual operation mode of an energy harvesting system in the seismic field implies harvesting during the peak time slots of energy availability, while the storage devices must meet the demand and supply in the specified periods. The benefits of energy harvesting with regard to seismic acquisition networks are multifold and include: long-lasting operability, no chemical disposal (avoiding environmental contamination), cost saving, safety, maintenance-free operation, no charging points, operability at inaccessible sites, flexibility, scalability, ease of installation, increased lifetime, and complete removal of supply wires. \begin{table} \begin{tabular}{c l} \hline \hline **Energy source** & **Availability** \\ \hline Solar & During day time \\ Vibration & During seismic shooting time \\ RF & 24 hours a day \\ Thermal & 24 hours a day \\ Wind & Depends on wind speed \\ \hline \hline \end{tabular} \end{table} Table 1: Energy harvesting sources In brief, this paper brings the following novel contributions: * A detailed energy requirement analysis of the wireless geophone is provided in Section 2. The analysis incorporates all the battery-dependent tasks, e.g., sensing/recording, processing, and wireless communication. The analysis provides a baseline idea about the minimum energy to be harvested to enable continuous sensing, processing, and communication tasks. * Various possible energy harvesting mechanisms that can be utilized by wireless geophones are provided in Sections 3-7. Particularly, the solar energy harvesting method with its implementation and feasibility details is discussed in Section 3. The vibration energy harvesting method, with different types of such energy harvesters and their operation, design, and adequacy details, is outlined in Section 4. Next, Section 5 is devoted to providing different means of energy harvesting through the wind. Various wind energy harvesters are discussed and their applicability for self-powered geophones is studied. A brief survey of the thermal energy harvesting method and its workability in seismic fields is elaborated in Section 6. Finally, Section 7 considers the RF energy harvesting method along with highlighting its usefulness and implementation requirements for wireless geophone applications. * A novel design of a multi-source wireless energy harvesting geophone is proposed in Section 8, which incorporates a solar cell, an antenna, a piezoelectric element, an electromagnetic/electrostatic system, and a thermoelectric generator to exploit all means of energy harvesting, i.e., solar, RF, wind, vibration, and thermal energy harvesting. It is important to mention here that the proposed design of a multi-source energy harvesting geophone can be easily modified to devise multi-source energy harvesting based green wireless sensor networks, which can prolong the operating life of various IoT-based sensor networks in areas such as agriculture, smart cities, smart buildings, transportation systems, healthcare, and manufacturing. ## 2 Energy Requirement of a Geophone Wireless geophones must be equipped with sensing (recording), processing, and communicating abilities.
Therefore, geophones should have four key units: a sensing unit, a processing unit, a communication unit, and a power unit. The power consumed by the sensing and processing units is used for data collection and data processing. Commercial geophones require an adequate amount of power to operate. For example, the geophone produced by a leading manufacturer [7] requires a \(115\) Wh battery for continuous recording for \(30\) days (\(24\) hours per day). This corresponds to a power consumption of around \(159\) mW for sensing and processing (\(115\) Wh over \(720\) h).

For computing the power consumed by the communication unit, the following approach is adopted. The transmitted signal from a wireless geophone experiences a certain path-loss. Since geophones are deployed in an open-field or rural environment, the path-loss (in dB) can be modeled as follows [13]:

\[\text{PL}(d)=\begin{cases}\text{PL}_{1}(d),&10\text{ m}<d<d_{\text{BP}}\\ \text{PL}_{2}(d),&d_{\text{BP}}<d<10\text{ km}\end{cases} \tag{1}\]

where \(d_{\text{BP}}=\frac{2\pi h_{\text{BS}}h_{u}f_{c}}{c}\) is the breakpoint distance, \(d\) is the ground distance between the geophone and the base station (BS, either a gateway or a data center), \(h_{\text{BS}}\) is the BS antenna height, \(h_{u}\) is the height of the geophone antenna above the ground, \(f_{c}\) is the carrier frequency, \(c\) is the speed of light,

\[\text{PL}_{1}(d) = 20\log_{10}\left(\frac{4\pi d_{\text{3D}}f_{c}}{c}\right) \tag{2}\]
\[\text{PL}_{2}(d) = \text{PL}_{1}(d_{\text{BP}})+40\log_{10}\left(\frac{d_{\text{3D}}}{d_{\text{BP}}}\right), \tag{3}\]

and \(d_{\text{3D}}=\sqrt{d^{2}+(h_{\text{BS}}-h_{u})^{2}}\) is the 3D distance between the geophone and the BS. Considering the above path-loss and a certain transmit power \(P_{t}\) at the geophone, the received power \(P_{r}\) at the BS is given by

\[P_{r}\text{ (in dBm)}=P_{t}\text{ (in dBm)}-\text{PL}(d). \tag{4}\]

Considering typical values of carrier frequency \(f_{c}=1\) GHz, BS antenna height \(h_{\text{BS}}=10\) m and geophone antenna height \(h_{u}=1\) m, Fig. 2 depicts the BS received power \(P_{r}\) as a function of the distance \(d\) for different values of the geophone transmit power \(P_{t}\). As expected, the received power decreases as the distance \(d\) increases, due to the increase in the path-loss. Considering a typical noise power density of \(-174\) dBm/Hz, the noise power \(\sigma^{2}\) is given by

\[\sigma^{2}\text{ (in dBm)}=-174+10\log_{10}(B),\]

where \(B\) is the transmission bandwidth (BW). The above analysis allows us to calculate the signal-to-noise ratio (SNR) at the BS for decoding the wireless geophone signal, which is given by

\[\text{SNR}=P_{r}\text{ (in dBm)}-\sigma^{2}\text{ (in dBm)}.\]

**Remark 1**: _Fig. 2 shows that if we consider the transmit power \(P_{t}=0\) dBm (\(1\) mW) and a ground distance of \(1\) km, the received power at the BS is \(-106\) dBm. This leads to a received SNR of \(28\) dB under a transmission BW of \(10\) kHz (enough for achieving a data rate of \(12\) kbps). This SNR is adequate to decode the signal with a low bit-error rate. This implies that even assuming the quite far distance of \(1\) km, we can achieve an acceptable SNR of \(28\) dB to decode the received signal at the BS._

Fig. 3 plots the received SNR at the BS against different possible values of the transmit power \(P_{t}\). This figure shows that at an extreme distance of \(1\) km, the transmit power should be at least \(0\) dBm to ensure an SNR of \(28\) dB under a transmission BW of \(10\) kHz. It is clear from Fig. 3 that the typical value of \(P_{t}=0\) dBm, which is equal to \(1\) mW, is enough to ensure adequate SNR with sufficient coverage.
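For concreteness, the link-budget numbers above can be reproduced with a few lines of code. The sketch below implements Eqs. (1)-(4) and the noise model with the stated typical values (\(f_{c}=1\) GHz, \(h_{\text{BS}}=10\) m, \(h_{u}=1\) m, \(P_{t}=0\) dBm, \(d=1\) km, \(B=10\) kHz); it is a minimal illustration of the analysis, not part of any geophone firmware.

```python
import math

C = 3.0e8  # speed of light (m/s)

def path_loss_db(d, fc=1e9, h_bs=10.0, h_u=1.0):
    """Two-slope open-field path-loss model of Eqs. (1)-(3)."""
    d_bp = 2 * math.pi * h_bs * h_u * fc / C   # breakpoint (ground) distance
    d3d = math.hypot(d, h_bs - h_u)            # 3D geophone-to-BS distance
    pl1_at = lambda x: 20 * math.log10(
        4 * math.pi * math.hypot(x, h_bs - h_u) * fc / C)
    if d < d_bp:
        return pl1_at(d)                                 # PL1 branch
    return pl1_at(d_bp) + 40 * math.log10(d3d / d_bp)    # PL2 branch

# Typical values from the text: Pt = 0 dBm, d = 1 km, B = 10 kHz.
p_t_dbm, d, bw = 0.0, 1000.0, 10e3
p_r_dbm = p_t_dbm - path_loss_db(d)        # Eq. (4)
noise_dbm = -174 + 10 * math.log10(bw)     # thermal noise power
snr_db = p_r_dbm - noise_dbm

print(f"PL = {path_loss_db(d):.1f} dB, Pr = {p_r_dbm:.0f} dBm, SNR = {snr_db:.0f} dB")
# -> PL ~ 106 dB, Pr ~ -106 dBm, SNR ~ 28 dB, matching Remark 1.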
**Remark 2**: _It now remains to harvest sufficient energy from different means to allow continuous sensing and processing (power consumption of around 159 mW) and communication (transmit power requirement of around \(1\) mW from the geophone to cover up to \(1\) km distance). The following sections elaborate on the different means that can be employed to harvest energy at the geophone._

In the ensuing sections, various energy harvesting mechanisms are discussed and their feasibility for geophones is highlighted.

## 3 Solar Energy Harvesting

The presence of a significant amount of sunlight in outdoor environments makes it a vital energy source for geophones. A solar cell, or photovoltaic cell, converts light energy into electricity by the photovoltaic effect. A solar cell is a tiny device made of semiconductor material. The first useful solar cell was developed in 1954 by scientists at Bell Labs. Since then the field of harvesting solar energy has grown steadily, and a number of solar power harvesting facilities generating hundreds of megawatts of energy now exist across the world [14]. Solar power is considered the most feasible form of renewable energy. This is because the Sun provides the highest energy density [15] compared to other green energy sources such as wind and vibrations. Outdoor solar panels can deliver energy densities in the range of 7.5 mW/cm\({}^{2}\) [16]. Only a few wind energy harvesting methods have been shown to outperform this number, and only at very high wind speeds [17].

A number of different semiconductor materials/technologies have been used to develop solar cells. These are broadly divided into three generations as follows:

* First generation: made of crystalline silicon cells
* Second generation: made of thin-film cells
* Third generation: made of organic, dye-sensitized, Perovskite, and multijunction cells.

The most popular of these solar/photovoltaic cells belonging to the different generations, together with their efficiencies, are listed in Table 2. The solar cells based on the latest technologies of concentrated and Perovskite materials offer the highest efficiency as well as the highest energy density among all existing technologies according to lab experiments [26]. However, these technologies are still in their nascence and, therefore, have several stability limitations [28, 29]. They hold a huge potential for generating high energy but are not yet commercially viable. Therefore, upon their commercialization in the future, they should be considered for the application of geophones.

### Commercial Solar Cells

Commercial solar products use either first generation or second generation solar cells. The solar cells belonging to the third generation are still far from being commercialized due to the stability issues highlighted in the previous section. Therefore, to harvest solar power for our application of geophones, the products that are available in the market are surveyed. A number of well-known solar manufacturers currently produce solar cells having maximum efficiencies in the range of 19% - 23%. Some of these are listed in Table 3 [30].

Any solar cell used with geophones should be resilient and robust against rugged environments. Most often the geophones are exposed to extreme conditions such as high temperatures, moisture, rain, sandstorms, snow, hail, and wind,
which may result in corrosion, significant efficiency loss, and in some cases breakdown of the solar cells. Therefore, for a commercially viable solar harvesting solution, different characteristics in addition to the solar cell efficiency need to be compared. Most notably, the following characteristics are important.

\begin{table}
\begin{tabular}{c c c c c}
\hline \hline
**Generation** & **Solar Cell Type** & **Efficiency** & **Power Density** (W/m\({}^{2}\)) & **Ref.** \\
\hline
First & Mono-crystalline & \(17-18\%\) & \(111-142\) & [18] \\
First & Poly-crystalline & \(12-14\%\) & \(111-125\) & [19] \\
Second & Amorphous Silicon & \(4-8\%\) & \(50-77\) & [20] \\
Second & Copper Indium Gallium Selenide & \(16-23\%\) & \(91-111\) & [21] \\
Second & Cadmium Telluride & \(9-11\%\) & \(77-91\) & [20] \\
Third & Nano-crystal/Quantum Dot & \(7-9\%\) & - & [22] \\
Third & Polymer & \(3-10\%\) & - & [23] \\
Third & Dye Sensitized & \(9-12\%\) & - & [24] \\
Third & Concentrated & \(\approx 33-46\%\) & - & [25] \\
Third & Perovskite & \(\approx 28\%\) & - & [26, 27] \\
\hline \hline
\end{tabular}
\end{table} Table 2: Characteristics of different Solar Cell Types

#### 3.1.1 Power Tolerance

The power tolerance metric indicates the variation in the power output that could happen due to some unavoidable circumstances. These variations are measured as a percentage of the product's power rating. Most manufacturers listed in Table 3 have a 0 W negative power tolerance, which means that the actual power output will always be equal to or greater than the specified output. Any product that has a non-zero negative tolerance will deliver reduced power output compared to its rating and, therefore, may not be a good choice.

#### 3.1.2 Temperature Coefficient

Solar panels rely solely on the light from the Sun, which is also a source of heat. Interestingly, solar panels are also sensitive to high temperatures: their output may reduce significantly at high temperatures. The temperature coefficient indicates the rate at which the efficiency of a solar panel drops for every \(1^{\circ}\)C above \(25^{\circ}\)C. The temperature of \(25^{\circ}\)C is used as the reference point because all solar panel characteristics are tested at this temperature. The temperature coefficients of the solar panels from some top manufacturers are listed in Table 4.

\begin{table}
\begin{tabular}{c c}
\hline
**Solar Manufacturer** & **Temperature Coefficient Range** (\%/\({}^{\circ}\)C) \\
\hline
China Sunergy & \(-0.42\) to \(-0.39\) \\
Hanwha Q CELLS & \(-0.42\) to \(-0.37\) \\
Hyundai & \(-0.45\) to \(-0.41\) \\
LG & \(-0.42\) to \(-0.30\) \\
SunPower & \(-0.38\) to \(-0.29\) \\
Panasonic & \(-0.30\) to \(-0.29\) \\
\hline
\end{tabular}
\end{table} Table 4: Temperature Coefficients of some Commercial Solar Cells [31]

#### 3.1.3 Durability - Snow, Hail, and Wind Load Ratings

Our survey of several commercial products showed that, when it comes to robustness against snow, hail, and wind, almost all solar panels are certified to withstand extreme conditions. We summarize the corresponding ratings in Table 5. The International Electrotechnical Commission (IEC) has proposed two standards (IEC 61215 and IEC 61646) to evaluate the reliability of solar panels. Tests are designed following the guidelines of these standards to assess the wear and tear that solar panels will experience during their lifetime. Therefore, only those panels that are certified by the IEC should be selected, as they are guaranteed to withstand harsh environmental conditions.

\begin{table}
\begin{tabular}{c c c}
\hline
**Snow** & **Hail** & **Wind** \\
\hline
5400 Pa (550 kg/m\({}^{2}\)) & 25 mm at speed of 83 kph & 2400 Pa (225 kph) \\
\hline
\end{tabular}
\end{table} Table 5: Weather Ratings of Commercial Solar Cells

\begin{table}
\begin{tabular}{c c c c}
\hline
**Manufacturer** & **Min. Efficiency (\%)** & **Max. Efficiency (\%)** & **Avg. Efficiency (\%)** \\
\hline
SunPower & \(16.50\) & \(22.80\) & \(20.70\) \\
LG & \(18.40\) & \(21.70\) & \(19.80\) \\
REC Group & \(15.20\) & \(21.70\) & \(18.11\) \\
China Sunergy & \(14.98\) & \(21.17\) & \(17.68\) \\
Solaria & \(19.40\) & \(20.50\) & \(19.76\) \\
Panasonic & \(19.10\) & \(20.30\) & \(19.65\) \\
Silfab & \(17.80\) & \(20.00\) & \(18.93\) \\
Canadian Solar & \(15.88\) & \(19.91\) & \(17.88\) \\
CertainTeed Solar & \(15.40\) & \(19.90\) & \(18.46\) \\
Solartech Universal & \(19.00\) & \(19.90\) & \(19.45\) \\
JinkoSolar & \(15.57\) & \(19.88\) & \(17.50\) \\
JA Solar & \(15.80\) & \(19.80\) & \(17.83\) \\
Hanwha Q CELLS & \(17.10\) & \(19.60\) & \(18.44\) \\
Risen & \(16.30\) & \(19.60\) & \(18.12\) \\
Talesun Energy & \(16.20\) & \(19.50\) & \(17.52\) \\
\hline
\end{tabular}
\end{table} Table 3: Commercial Solar Cell Manufacturers and their Efficiencies [30]

While performing the survey, we found that the solar panels based on Maxeon technology (manufactured by SunPower) stood out among all other solar panels. This is mainly due to the structural difference between Maxeon and conventional solar cells [32, 33]. Conventional cells use busbars that run across the face of the cell to capture the electrical energy created by the cell. Maxeon cells, however, are backed with solid copper to capture the electrical energy, as shown in Fig. 4. This leaves more surface area for the cell to capture energy, which results in the higher efficiency evident from Table 3. Moreover, the use of copper at the back of the cell makes it resilient to corrosion and to daily wear and tear from thermal expansion.

Figure 4: Front views of Maxeon (left) and a Conventional Solar Cell (right) (courtesy to [34]).

In light of the detailed review and the discussion presented above, we conclude that the solar cells based on Maxeon technology are highly efficient and at the same time robust to the harmful effects of the environment. Therefore, we propose to equip geophones with solar cells based on Maxeon technology.

### Photodiodes

A photodiode is also made of semiconductor material and converts light into an electric current. A photodiode is smaller than a solar cell, and its output is much lower; it is therefore mainly used as a sensor to detect light. Photodiodes have also been used to power up small electronics [35], mainly wearable medical sensors. However, they are not used for larger electronic devices as the generated power is not sufficient for such devices. It is due to these reasons that we have not considered photodiodes as a possible sunlight energy harvesting mechanism.

### Discussion

The solar energy harvesting infrastructure is low-cost and noise-free. Sunlight is available to every geophone and, therefore, solar energy can be harvested by any geophone, anywhere in the world. Despite these advantages, there are some limitations. For example, sunlight is not available at night.
Similarly, different weather conditions may result in limited availability of energy. Furthermore, since geophones are placed on the ground, there is a risk that the solar panels would be covered by dust, which lowers their efficiency. Therefore, a reliable green system must not rely solely on solar energy. This implies that any reliable green solution must be hybrid, i.e., designed to harness the different forms of energy that are available throughout the year.

As a case study, we demonstrate the viability of solar energy harvesting in one of the major cities (Dammam) in the Eastern region of Saudi Arabia. Note that Saudi Arabia is chosen for the feasibility study of solar-powered wireless geophones as it is currently the largest oil producer and thus the largest consumer of geophones. The amount of harvested energy depends on the availability of the sunlight and the sky condition (whether it is clear or covered by clouds). In this regard, Fig. 5 plots the average number of sun hours per month and Fig. 6 plots the average cloud coverage (in percentage) during different months in Dammam, Saudi Arabia [36]. It can be observed from Fig. 5 that the sun is available for around 12 hours per day. Fig. 6 shows that the cloud coverage is also in an acceptable range. In particular, the cloud coverage is around \(10\%\) or even less during summer (June-October), which shows that solar energy harvesting is very suitable during summer days. However, the weather is hot for most of the year and the temperature can reach up to \(50~{}^{\circ}\)C in summer, which reduces the output of solar panels.

It is, therefore, concluded that the presence of sunlight across the world and the availability of high energy density solar cells make it feasible to equip geophones with solar cells. These solar cells could be placed around the geophone body. The surface area of a geophone exposed to sunlight might be small; however, the high energy density of the cells means a sizeable amount of energy could be harvested for the successful realization of wireless geophones. Furthermore, the weather is often windy in the Eastern region of Saudi Arabia, with maximum speeds of \(\approx 15\) m/s (\(54\) km/h). It can be seen in Table 5 that solar panels can withstand these harsh environmental conditions.
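As a rough back-of-the-envelope check, the sketch below combines the 7.5 mW/cm\({}^{2}\) outdoor panel density quoted earlier [16], the best temperature coefficient from Table 4, and an assumed sunlit area of 50 cm\({}^{2}\) on the geophone body; the area and the full-sun condition are our illustrative assumptions, not measured values.

```python
# Rough daytime solar budget for a geophone; assumptions are marked below.
PANEL_DENSITY_MW_PER_CM2 = 7.5   # outdoor panel output density from [16]
AREA_CM2 = 50.0                  # assumed sunlit area on the geophone body (assumption)
TEMP_COEFF_PER_C = -0.0029       # -0.29 %/degC, best SunPower value from Table 4
CELL_TEMP_C = 50.0               # hot summer day in Dammam (from the case study)

derating = 1.0 + TEMP_COEFF_PER_C * (CELL_TEMP_C - 25.0)  # relative to 25 degC rating
p_harvested_mw = PANEL_DENSITY_MW_PER_CM2 * AREA_CM2 * derating

print(f"Derating at {CELL_TEMP_C} degC: {derating:.3f}")
print(f"Estimated solar harvest: {p_harvested_mw:.0f} mW "
      f"(sensing/processing budget: ~159 mW)")
# -> roughly 350 mW under full sun, about twice the 159 mW budget, which is
#    why solar is attractive but must be paired with other sources at night.
```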
## 4 Vibration Energy Harvesting

Vibration energy harvesting has been a subject of interest over the last decade. Vibration energy can be transformed into electric energy through various mechanisms, e.g., electromagnetic induction, the electrostatic mechanism, or the piezoelectric approach. Wireless geophones can harvest the tremendous amount of vibration energy that is generated by huge vibroseis trucks. These trucks generate vibration energy at regular intervals and thus provide a reliable source of energy to geophones. Vibroseis trucks inject a sweep (of around \(8\) to \(10\) s duration) of low frequencies into the earth, typically in the range of \(1-100\) Hz, and it is therefore critical to tune the energy harvester's resonant frequency accordingly; a slight deviation could drastically reduce the amount of energy being harvested [37]. An example of a linear sweep is shown in Fig. 7. Since the range of vibration frequency is known in our seismic scenario, the energy harvester can be designed with high efficiency.
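Since the sweep parameters are what the harvester must be tuned to, a small synthetic example may help. The sketch below generates a vibroseis-style linear sweep (10 s long, 1-100 Hz, similar in spirit to Fig. 7); the sample rate and the unit amplitude are arbitrary illustration choices, not field acquisition parameters.

```python
import numpy as np
from scipy.signal import chirp

fs = 1000                       # sample rate in Hz (arbitrary, well above 2 x 100 Hz)
t = np.arange(0, 10.0, 1 / fs)  # 10 s sweep duration
sweep = chirp(t, f0=1.0, t1=10.0, f1=100.0, method="linear")

# Instantaneous frequency of a linear sweep: f(t) = f0 + (f1 - f0) * t / T.
# A resonant harvester tuned inside 1-100 Hz is excited only while the sweep
# passes through its resonance band, which is why broadband designs help.
f_inst = 1.0 + (100.0 - 1.0) * t / 10.0
print(f"Sweep covers {f_inst[0]:.0f}-{f_inst[-1]:.0f} Hz over {t[-1]:.1f} s, "
      f"RMS amplitude {np.sqrt(np.mean(sweep**2)):.2f}")
```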
Recently, new design approaches have been explored based on the fact that a harvesting device's power generation performance is confined to the resonance excitation. In numerous applications, ambient vibration is often broadband and random, and this type of excitation must be taken into account when designing energy harvesting devices. In other words, the operating frequency bandwidth of the harvester is usually confined to a specific range that cannot cover the random vibration frequencies of external sources. Researchers have therefore explored the concept of broadband energy harvesting, and many nonlinear power generators have been proposed in the literature [39, 40, 41, 42, 43, 44, 45, 46]. Table 6 shows a comparison of the various mechanisms that are used to convert vibration energy to electrical energy. In the sequel, the various vibration energy harvesting mechanisms are briefly discussed.

\begin{table}
\begin{tabular}{l l l l}
\hline \hline
 & **Electrostatic** & **Electromagnetic** & **Piezoelectric** \\
\hline
Material & Conductive capacitor & Neodymium iron boron & Lead zirconate titanate \\
\hline
\multirow{3}{*}{Advantages} & \(\bullet\) Smart material not needed & \(\bullet\) Simple construction on a large scale & \(\bullet\) Simple structure on a small scale \\
 & \(\bullet\) Very high output voltage (\(>100\) V) & \(\bullet\) Low output impedance & \(\bullet\) High output voltage (\(>5\) V) \\
 & \(\bullet\) Ease of voltage rectification and frequency tuning & \(\bullet\) Higher output current & \(\bullet\) High coupling coefficient \\
\hline
\multirow{3}{*}{Disadvantages} & \(\bullet\) Low output current & \(\bullet\) Small-scale limitations & \(\bullet\) Low output current \\
 & \(\bullet\) High impedance needed & \(\bullet\) Low output voltage (\(<1\) V) & \(\bullet\) Brittleness \\
 & \(\bullet\) Bias voltage required & \(\bullet\) Affected by electromagnetic field & \(\bullet\) Low strain limit \\
\hline \hline
\end{tabular}
\end{table} Table 6: Comparison of various mechanisms

### Piezoelectric-based Vibration Energy Harvester

Piezoelectric materials generate an electric charge from mechanical strain. This phenomenon is known as the direct piezoelectric effect. In the case of vibration energy harvesting, the ambient vibration around the energy harvesting unit/device induces the mechanical strain. Usually, piezoelectric energy harvesters of the cantilever type are developed with a proof mass located at the free end of the beam. The electric energy can then be generated from bending vibrations under excitation at the root of the beam. Among various energy harvesting structures, piezoelectric transducers are among the most widely known with nonlinear characteristics, and permanent magnets are often attached to the accompanying structures to reproduce the effect of external vibration forces. A piezoelectric energy harvester using a magnetic oscillator, as investigated in the literature [40, 47, 48, 49, 50, 51], is depicted in the schematic diagram of Fig. 8.

Figure 8: Nonlinear piezoelectric energy harvester.

The authors in [52] discovered that the harvester's resonant frequency range is influenced by the geometric nonlinearity (in the presence or absence of the external magnets) and the distance between the magnets. It has been demonstrated in [53] that hybrid vibration energy harvesters (consisting of electromagnetic and piezoelectric generators) with nonlinear magnetic forces can effectively boost output performance under random excitation.

Piezoelectric transducers are usually manufactured using aluminum nitride, lead zirconate titanate (PZT), quartz, and berlinite [54]. New lead-free piezoelectric transducers have been developed to make the technology more environmentally friendly, e.g., piezoelectric nanogenerators that contain zinc oxide (ZnO) nanowires [55, 56]. Piezoelectric materials are available in four types, namely thin films, single crystals, ceramics, and polymers. The piezoceramic materials PZT-5A, PZT-5J, and PZT-5H are used for low power applications, while for high power applications PZT-8 and PZT-4 are used [57]. Porous PZT material has the benefits of stiffness control and good capacitance. Piezoelectric polymers provide high power density, as do piezoelectric ceramics. However, polymers like polyvinylidene fluoride (PVDF) suffer from poor adhesion and a low electromechanical coupling coefficient. On the other hand, PZT is brittle and difficult to process, making it unsuitable for flexible devices despite its high coupling coefficient [37, 58].

Piezoelectric generation has received a great deal of attention due to the simple structure of the piezoelectric transducer, its compact size, and its power generation efficiency. The piezo patch is very thin, and therefore the entire system is simpler and smaller than other energy harvesters [59]. Small-scale piezoelectric transducers are also more robust and more effective than electromagnetic transducers, and hence are suitable for compact structures such as airflow [60] and wind turbine energy harvesters [61], sound wave energy harvesters [62], and energy harvesters based on raindrop impact [63, 64, 65]. In order to improve the output obtained from the piezoelectric element, the authors in [65] utilized a voltage multiplier circuit in the energy harvester system. One remaining issue, however, is achieving maximum power generation efficiency [66]; several works [67, 68, 69] have focused on frequency bandwidth extension to maximize the efficiency. The surveyed piezoelectric energy harvesters are listed in Table 7.
\begin{table}
\begin{tabular}{c|c|c|c|c|c}
\hline \hline
**Freq. (Hz)** & **Acceleration (m/s\({}^{2}\))** & **Load (k\(\Omega\))** & **Max. power output (\(\mu\)W)** & **AC voltage output (V)** & **Ref.** \\
\hline
\(126\) & \(5\) & \(50\) & \(5.3\) & \(2.6\) & [70] \\
\(113\) & \(2.5\) & \(200\) & \(115.2\) & \(8.6\) & [71] \\
\(976\) & \(10\) & \(5.1\times 10^{5}\) & \(2.45\times 10^{-5}\) & - & [72] \\
\(1.39\times 10^{4}\) & - & \(5.2\times 10^{3}\) & - & \(2.4\) & [73] \\
\(214\) & \(19.6\) & \(510\) & \(1.288\) & \(2.292\) & [74] \\
\(255.9\) & \(24.5\) & \(510\) & \(2.675\) & \(1.792\) & [74] \\
\(870\) & \(9.8\) & - & \(1.4\) & \(1.6\) & [75] \\
\(572\) & \(19.6\) & - & \(60\) & - & [76] \\
\(150\) & \(9.8\) & \(11\) & \(2.7\times 10^{4}\) & \(17\) & [77] \\
\(461.15\) & \(19.6\) & \(6\) & \(2.15\) & - & [78] \\
\(125\) & \(1.96\) & \(1\times 10^{4}\) & \(0.12\) & - & [79] \\
\(183.8\) & \(7.36\) & \(16\) & \(0.32\) & \(0.101\) & [80] \\
\(1700\) & - & \(5.6\) & \(650\) & - & [81] \\
\(97\) & \(1.96\) & \(2\times 10^{3}\) & \(0.136\) & \(1\) & [82] \\
\(608\) & \(9.8\) & \(21.4\) & \(2.16\) & \(0.898\) & [83] \\
\(107\) & \(2.5\) & \(55.90\) & \(222\) & \(3.428\) & [84] \\
\(107\) & \(2.5\) & \(11.91\) & \(586\) & \(2.627\) & [84] \\
\(150\) & \(5\) & \(5.2\times 10^{3}\) & \(1.01\) & \(2.4\) & [85] \\
\(2580\) & \(18.36\) & \(56\) & \(1.8\times 10^{3}\) & - & [86] \\
\(120\) & \(2.5\) & - & \(375\) & - & [87] \\
\(80\) & - & \(333\) & \(2\) & \(1.2\) & [39] \\
\(229\) & - & - & \(3.98\) & \(3.93\) & [88] \\
\hline \hline
\end{tabular}
\end{table} Table 7: Piezoelectric energy harvesters

### Electromagnetic-based Vibration Energy Harvesters

In an electromagnetic-based energy harvester, electrical energy is produced from the mechanical energy obtained by the relative motion between a coil and a conductive magnetized body. The design of an electromagnetic energy harvester consists of a pick-up coil, a magnet, a mechanical barrier arm, and a cantilever beam, and it is used for low-frequency range applications, i.e., \(1-10\) Hz. Downsizing electromagnetic harvesters while keeping the power output adequate for low-power micro-system applications is the main challenge [89]. A major limitation of such micro-systems is the limited number of turns of the coil. Performance improvement is achieved by adjusting the external excitation frequency [90]. The effective harvesting bandwidth can be increased by using an excitation structure with multiple degrees of freedom [91]. Another way of widening the bandwidth is to introduce nonlinearity into the energy harvesting system, which enhances the performance compared to a linear system. Coupling between tuning modes, hybrid transduction, and multi-modal arrays are several strategies used to improve efficiency through the incorporation of nonlinearity into the system [67, 92]. The authors in [93] propose a novel electromagnetic harvester designed to improve the operating frequency range using a dual resonator technique; the study comprised two separate resonator systems. Due to the multiple vibration modes, the frequencies of the various modes are tuned to a specific spectrum, resulting in a wider bandwidth [94, 95]. Electromagnetic energy harvesters generate a good amount of power from weak vibrations. Since the generated power is proportional to the operating frequency, frequency-up conversion can be used in order to obtain the desired amount of average power [96, 97]. In addition, a magnetoelectric transducer together with a rotary pendulum has been shown to have frequency-doubling characteristics, hence more power is produced from low frequencies [98]. Conversely, the resonant frequency may be altered by introducing switching damping at the expense of some power loss [99]. Optimal performance is observed by Kluger _et al._ [100] for small values of the electromagnetic damping in the case of linear systems and large damping values in the case of nonlinear systems. Electromagnetic energy harvesting systems typically occupy a comparatively large space in a device and suffer from magnetic deterioration and windage loss. Due to the size issue, the fabrication of magnetic coils at micro- and nano-scales is a challenging area.
Hence, the authors in [101, 102, 103] proposed designs based on the fact that power increases substantially with the input amplitude, particularly with low-frequency vibrations. In a realistic scenario, an electromagnetic energy harvester has been used to produce \(30.313\) mW of power from bus vibration [104]. Electromagnetic energy harvesters perform better with larger size and periodic excitation; in the case of random vibration, however, the performance is weak [105, 106]. The various electromagnetic energy harvesters present in the literature are listed in Table 8.

\begin{table}
\begin{tabular}{c|c|c|c|c|c}
\hline \hline
**Freq. (Hz)** & **Acceleration (m/s\({}^{2}\))** & **Load (\(\Omega\))** & **Max. power output (\(\mu\)W)** & **AC voltage output (V)** & **Ref.** \\
\hline
\(322\) & \(2.7\) & - & \(180\) & - & [107] \\
\(20.8\) & \(1.96\) & \(1.35\times 10^{3}\) & \(118.3\) & - & [108] \\
\(52\) & \(1.7\) & - & \(120\) & - & [109] \\
\(30\) & \(1.47\) & \(50\) & \(20\) & \(0.8\) & [110] \\
\(100\) & \(1.96\) & - & \(240\) & - & [111] \\
\(12\) & \(29.4\) & \(40.8\) & \(71.26\) & \(0.47\) & [112] \\
\(369\) & - & - & \(0.6\) & \(1.38\times 10^{-3}\) & [113] \\
\(62\) & - & - & \(1.77\) & - & [114] \\
\(30\) & - & - & \(254\) & - & [115] \\
\(40\) & - & - & \(153\) & - & [116] \\
\(52\) & \(0.59\) & \(4\) & \(46\) & - & [117] \\
\(128\) & - & \(6\) & \(404\) & - & [118] \\
\hline \hline
\end{tabular}
\end{table} Table 8: Electromagnetic energy harvesters

### Electrostatic-based Vibration Energy Harvesters

In electrostatic energy harvesters, charges are created by the relative motion between two charged capacitor plates. This results in a potential difference across the capacitor and thus static electricity. Triboelectrification refers to the transfer of charge between two surfaces in contact. Triboelectric nanogenerators based on electrostatic induction and the triboelectrification effect were invented by Fan _et al._ [119] in order to harvest mechanical energy from the ambient environment. Recently, Sequeira _et al._ [120] discovered an optimized capacitor plate pattern by utilizing a topological optimization method in order to enhance the average output power. In [121], the authors designed a single-electrode-mode nanogenerator for wearable products, in which silicone rubber and conductive thread are used as a negative dielectric material and an electrode, respectively. The electrical energy is produced by the interaction of the silicone layer and human skin. However, the output energy is observed to be very low. Improved efficiency is shown in the freestanding triboelectric setup [122, 123]. In this setup, one dielectric material is free while another pair of dielectric materials is fixed and attached to electrodes. Lateral sliding occurs between the free and paired electrodes in this configuration. Research has been carried out in recent years to achieve optimum power output by hybridizing triboelectric materials with electromagnetic and piezoelectric materials [124, 125]. The authors in [126] have succeeded in generating energy that can supply power to LED bulbs and supercapacitors. Electrostatic energy harvesters require an external voltage source. Their key benefit is the production of extremely high voltage, due to a high internal impedance compared to other energy harvesters [127]. Due to the absence of smart materials like optoelectronics, piezo patches, shape memory alloys, and magnetostrictive materials, triboelectric energy harvesters are long-lasting with an adjustable coupling coefficient and low system cost.
These harvesters are mostly used for small-scale purposes. Table 9 depicts the various electrostatic energy harvesters present in the literature.

\begin{table}
\begin{tabular}{c|c|c|c|c|c}
\hline \hline
**Freq. (Hz)** & **Acceleration (m/s\({}^{2}\))** & **Load (\(\Omega\))** & **Max. power output (\(\mu\)W)** & **AC voltage output (V)** & **Ref.** \\
\hline
\(20\) & - & \(6\times 10^{4}\) & \(37.7\) & \(150\) & [128] \\
\(45\) & \(0.08\) & - & \(0.12\) & - & [129] \\
\(120\) & \(2.25\) & - & \(116\) & - & [130] \\
\(2\) & - & \(7.0\times 10^{3}\) & \(40\) & - & [131] \\
\(200\) & - & - & \(1.6\) & - & [132] \\
\(96\) & \(9.8\) & \(1.34\times 10^{4}\) & \(0.15\) & - & [133] \\
\(4.76\) & - & \(1\times 10^{5}\) & \(58\) & \(24\) & [134] \\
\(6\) & - & - & \(36\) & - & [135] \\
\(63\) & \(9.8\) & \(2\times 10^{4}\) & \(1\) & \(11.2\) & [136] \\
\hline \hline
\end{tabular}
\end{table} Table 9: Electrostatic energy harvesters

### Discussion

For a geophone, a hybrid vibration energy harvester can be designed. The surface area and the internal space in a geophone allow us to use piezoelectric, electromagnetic, and electrostatic energy harvesters altogether. A schematic diagram of a geophone with the various types of energy harvesters is shown in Section 8. However, it should be noted that geophones close to the vibroseis truck experience the maximum vibration compared to the ones that are far away. Hence, nearby geophones benefit more from the vibration energy harvesters for a particular shot. It is also worth mentioning that the vibroseis truck moves within the seismic field and shots are carried out at various locations to cover the whole area. Roughly, we can say that every geophone gets approximately the same amount of vibration energy per day. Various commercial piezoelectric harvesters suitable for geophones are available in the market [137]. Among them, PPA-2011, PPA-2014, and PPA-4011 are best suited for the application at hand (see Table 10 for the PPA-4011 specifications). Furthermore, multiple piezo elements can be connected together for more power [138].

\begin{table}
\begin{tabular}{c|c|c|c|c|c|c}
\hline
**Tip Mass (gram)** & **Freq. (Hz)** & **Acce. Amp. (g)** & **Load (k\(\Omega\))** & **RMS Current (mA)** & **RMS Voltage (V)** & **RMS Power (mW)** \\
\hline
\(25.3\) & \(63\) & \(0.25\) & \(8.1\) & \(0.5\) & \(3.9\) & \(1.9\) \\
\(25.3\) & \(63\) & \(0.50\) & \(8.5\) & \(0.8\) & \(6.9\) & \(5.6\) \\
\(25.3\) & \(62\) & \(1.00\) & \(6.2\) & \(1.7\) & \(10.6\) & \(18.0\) \\
\(25.3\) & \(62\) & \(2.00\) & \(5.0\) & \(3.2\) & \(16.2\) & \(52.0\) \\
\(28.4\) & \(60\) & \(0.25\) & \(7.5\) & \(0.5\) & \(4.0\) & \(2.1\) \\
\(28.4\) & \(60\) & \(0.50\) & \(9.4\) & \(0.8\) & \(7.7\) & \(6.4\) \\
\(27.1\) & \(60\) & \(1.00\) & \(5.4\) & \(1.9\) & \(10.2\) & \(19.5\) \\
\(26.6\) & \(60\) & \(2.00\) & \(4.7\) & \(3.5\) & \(16.7\) & \(59.0\) \\
\hline
\end{tabular}
\end{table} Table 10: Specifications of PPA-4011 (Length \(=71\) mm, Width \(=25.4\) mm, Thickness \(=1.3\) mm)
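With the Table 10 operating points at hand, a rough sizing sketch follows. The choice of a 1 g excitation level and the reuse of the 159 mW sensing/processing budget from Section 2 are our illustrative assumptions; the actual excitation depends on the geophone's offset from the vibroseis truck.

```python
# RMS power of a single PPA-4011 at selected excitation levels (Table 10,
# 25.3 g tip-mass rows): acceleration amplitude (g) -> RMS power (mW).
ppa4011_mw = {0.25: 1.9, 0.50: 5.6, 1.00: 18.0, 2.00: 52.0}

BUDGET_MW = 159.0   # sensing/processing budget from Section 2
ACCEL_G = 1.00      # assumed excitation during a sweep (assumption)

per_unit = ppa4011_mw[ACCEL_G]
units_needed = -(-BUDGET_MW // per_unit)   # ceiling division
print(f"{per_unit} mW per unit at {ACCEL_G} g -> "
      f"{int(units_needed)} units to cover {BUDGET_MW} mW while shaking")
# -> 9 units at 1 g; since sweeps are intermittent, vibration harvesting is
#    better viewed as a battery top-up than as a sole supply.
```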
## 5 Wind Energy Harvesting

The presence of natural wind in outdoor environments makes it an important energy source for geophones. Wind energy has been used for centuries to perform different tasks. However, it is only recently that, with the advent of wireless sensor networks and the IoT, attention has been focused on miniature wind energy harvesting devices. In the past decade, the area of small-scale wind energy harvesting has gained attention and a number of designs have emerged. Small-scale wind energy harvesters can help achieve the goal of self-powered sensors and/or tiny devices. Therefore, these harvesters are of huge interest for realizing the idea of wireless geophones. Wind energy can be harvested using two different mechanisms. These include:

* the rotary movement of windmills/wind turbines, and
* the aeroelastic behavior of materials.

Most windmills and wind turbines work on the principle of electromagnetic induction to generate electricity. However, the rotary movement can be converted to electrical energy using other induction mechanisms as well. On the other hand, the harvesters utilizing the aeroelastic behavior of materials are mainly based on piezoelectric induction. In this section, we discuss some of the most important innovations/designs of miniature wind energy harvesters that could be successfully adopted to power geophones.

### Windmills and Wind Turbines

Windmills and wind turbines are used to convert the kinetic energy of wind into mechanical energy. The mechanical energy can then be converted to electrical energy using any of the three induction mechanisms (piezoelectric, electromagnetic, or electrostatic). Table 11 lists the major developments in the area of harvesting wind energy using wind turbines.

\begin{table}
\begin{tabular}{c c c c c c c}
\hline \hline
**Dimension (cm)** & **Power density (mW/cm\({}^{3}\))** & **Max. power (mW)** & **Max. power speed (m/s)** & **Cut-in speed (m/s)** & **Induction Method** & **Ref.** \\
\hline
4.2 & 9.38 & 130 & 11.8 & - & electromagnetic & [139] \\
2 & 1.368 & 4.3 & 10 & 4.5 & electromagnetic & [144] \\
4.5 & 3.9 & 62.5 & 15 & - & triboelectric & [145] \\
4 & 0.04377 & 0.55 & 20 & 4 & triboelectric & [146] \\
4 & 0.0159 & 0.2 & 10 & - & electrostatic & [147] \\
10 bimorphs, each \(6\times 2\times 0.06\) cm\({}^{3}\) & 0.0663 & 7.5 & 4.5 & 2.1 & piezoelectric & [148] \\
- & 0.014 & 1.2 & 5.4 & 2.1 & piezoelectric & [149] \\
\(5.08\times 11.6\times 7.62\) cm\({}^{3}\) & 0.0388 & 5 & 4.5 & 2.4 & piezoelectric & [150] \\
\(16.51\times 16.51\times 22.86\) cm\({}^{3}\) & 0.00318 & 1.2 & 4.0 & 0.9 & piezoelectric & [141] \\
\(8\times 8\times 17.5\) cm\({}^{3}\) & 0.0286 & 4 & 10 & 2 & piezoelectric & [61] \\
\hline \hline
\end{tabular}
* open-circuit measurements
\end{table} Table 11: Summary of some prominent designs of windmills and wind turbines

It can be observed that all of these designs are of large dimensions (several cm) and their power density is too low to be useful for geophones. The design proposed in [139] is an exception, with a power density of 9.38 mW/cm\({}^{2}\). However, the efficiency of this design reduces significantly at low wind speeds. In fact, the efficiency of all harvesters based on rotary motion reduces drastically at lower wind speeds [140]. The designs shown in Table 11 have cut-in wind speeds in the range of 2 - 4.5 m/s, with the exception of [141]. This indicates clearly that high wind speeds are needed to take advantage of windmills and wind turbines. However, high wind speeds are not always available. Consider the example of oil-rich Saudi Arabia, where geophones find most of their usage. The average wind speed in Saudi Arabia, in general, is 6.73 m/s at a height of 100 m [142]. However, specifically in the oil-rich region of Saudi Arabia (Dammam), the wind speeds vary from 0.2 - 5.5 m/s [143]. As geophones are placed at ground level, they will experience wind speeds that are much lower than the cut-in speeds. Thus, operating small-scale devices such as geophones and other sensors using such small-scale wind turbines is not a viable solution. However, other (non-rotary) designs for wind energy harvesting exist. Their merits and demerits are discussed from the point of view of geophones in the next subsection.

### Wind Energy Harvesters Utilizing Aeroelasticity

Wind energy can be harvested by taking advantage of the aeroelastic behavior of different materials. _Aeroelasticity_ refers to the tendency of an elastic body to vibrate when it is exposed to a fluid flow (a flow of wind/air in our case). These vibrations may be induced by various aerodynamic phenomena such as _flutter_, _vortex-induced vibrations_, _galloping_, and _buffeting_. These phenomena are undesired in most applications, such as in aircraft wings, bridges, and transmission lines. However, they can also be used to generate power.
In this case, a wind energy harvester is exposed to a flow field, which results in large limit-cycle oscillations. The kinetic energy of these oscillations may then be converted to electrical energy using either of the three transduction mechanisms. However, in almost all designs proposed in the literature, piezoelectric transduction is used due to the flexibility and efficiency that this method offers. In the following, we list the different types of harvesters that utilize various methods to take advantage of aeroelasticity. Each of these methods has been used in the literature in a number of harvester designs to efficiently harvest wind energy. These harvester types are:

* Vortex-induced vibration (VIV) wind energy harvester
* Galloping energy harvester
* Wake galloping energy harvester
* Flutter-based energy harvester
* Turbulence-induced vibration (TIV) wind energy harvester

The harvester designs that hold the potential to be most effective for our application of geophones are presented briefly in the following.

#### 5.2.1 Vortex-induced Vibration (VIV) Wind Energy Harvester

VIV is a phenomenon in which periodic vortices are shed by a bluff body when it is exposed to wind. These periodic vortices cause the body to oscillate [151, 152]. Therefore, VIV can be used to harvest wind energy by converting the oscillations into electrical energy; the piezoelectric transduction mechanism is usually used for this conversion. The design concept is shown in Fig. 9a. Using piezoelectric material could allow VIV-based harvesters to be miniaturized without losing their capability to harvest energy at low wind speeds; indeed, a microchip-level energy harvester has recently been reported in the literature [153]. The VIV energy harvesters suitable for small-scale wind energy harvesting are listed in Table 12.

\begin{table}
\begin{tabular}{c c c c c c c}
\hline \hline
**Dimensions (Bluff - dia, len)** & **Dimensions (Cantilever)** & **Power density (mW/cm\({}^{3}\))** & **Max. power (mW)** & **Max. power speed (m/s)** & **Cut-in speed (m/s)** & **Ref.** \\
\hline
2.91 cm, 3.6 cm & \(3.1\times 1.0\times 0.0202\) cm\({}^{3}\) & \(1.25\times 10^{-3}\) & 0.03 & 5 & 3.1 & [154] \\
2.5 cm, 11 cm & \(2.86\times 0.63\times 0.25\) cm\({}^{3}\) & 0.0918 & 5 & 5.5 & 2 & [155] \\
1.98 cm, 20.3 cm & \(26.7\times 3.25\times 0.0635\) cm\({}^{3}\) & \(1.47\times 10^{-3}\) & 0.1 & 1.192 & - & [156] \\
2 mm, 10 mm & \(2.4\times 2.4\times 0.01\) mm\({}^{3}\) & \(5.66\times 10^{-6}\) & \(1.6\times 10^{-6}\) & 4.48 & - & [153] \\
\hline \hline
\end{tabular}
\end{table} Table 12: VIV Energy Harvesters

#### 5.2.2 Galloping Vibrations Wind Energy Harvester

Galloping vibrations can be induced by replacing the smooth cylindrical bluff body in the schematic of Fig. 9a with a prismatic body, as shown in Fig. 9b. The prisms used to generate galloping vibrations can be of different shapes, for example rectangular, triangular, D-shaped, hexagonal, or rectangular with a V-shaped groove. Galloping-induced vibrations have large amplitudes compared to VIV and lend themselves to the stable acquisition of energy [157]. Galloping vibrations can be harvested using methods similar to those for VIV. Table 13 summarizes the performance of some prominent harvesters with different types and dimensions of the prismatic bluff bodies. It can be observed that wind energy harvesters based on galloping vibrations feature low cut-in speeds and are, therefore, useful in environments with slow wind.

\begin{table}
\begin{tabular}{c c c c c c c c}
\hline \hline
**Bluff Shape** & **Dimensions (Bluff - sides, len)** & **Dimensions (Cantilever - cm\({}^{3}\))** & **Power density (mW/cm\({}^{3}\))** & **Max. power (mW)** & **Max. power speed (m/s)** & **Cut-in speed (m/s)** & **Ref.** \\
\hline
V-shaped groove & 2.91 cm, 3.6 cm & \(3.1\times 1.0\times 0.0202\) & \(1.25\times 10^{-3}\) & 0.03 & 5 & 3.1 & [158] \\
Triangle & base = 50 cm, side = 160 cm & - & 4.464 & 2.4 & 15 & 0.336 & [159] \\
Triangle & 4 cm, 25.1 cm & \(16.1\times 3.8\times 0.0635\) & 0.281 & 50 & 5.2 & 3.6 & [160] \\
D-shape & 3 cm, 23.5 cm & \(9\times 3.8\times 0.0635\) & 0.0134 & 1.14 & 4.7 & 2.5 & [161] \\
Square & 4 cm, 15 cm & inner: \(5.7\times 3.03\) & 0.0162 & 4 & 5 & 1 & [162] \\
Square & 2 cm, 10 cm & \(13\times 2\times 0.06\) & 0.0782 & 3.25 & 7 & 3 & [163, 164] \\
\hline \hline
\end{tabular}
\end{table} Table 13: Galloping Vibration Wind Energy Harvesters

#### 5.2.3 Wake Galloping Vibrations Wind Energy Harvester

Among the various aerodynamic phenomena, wake galloping is the most suitable for a wind energy harvesting system. Such systems have low cut-in speeds and can operate over a wide range of wind speeds [165]. Wake galloping occurs when a fixed cylinder is placed in front of another cylinder with a flexible base. Due to the wakes from the windward cylinder, the flexible-base cylinder vibrates significantly; the vibrations of this downstream cylinder are called wake galloping. These vibrations are significantly larger than those occurring due to the galloping phenomenon discussed earlier. Fig. 9c shows the schematic of a wake galloping-based wind energy harvesting system. Some prominent wake galloping-based harvesting designs are summarized in Table 14.

\begin{table}
\begin{tabular}{c c c c c c c c}
\hline \hline
**Bluff Shape** & **Bluff Dimensions** & **Spacing** & **Power density (mW/cm\({}^{3}\))** & **Max. power (mW)** & **Max. power speed (m/s)** & **Cut-in speed (m/s)** & **Ref.** \\
\hline
Both: circular cylinder & Both: dia = 5 cm, len = 85 cm & 25 cm & 0.111 & 370.4 & 4.5 & 1.2 & [165] \\
Inner: square cylinder, outer: circular cylinder & Inner: len = 26.67 cm; Outer: dia = 1.25 cm, len = 27.15 cm & 24 cm & \(0.572\times 10^{-3}\) & 0.05 & 3.05 & 0.4 & [166] \\
Both: circular cylinder & Both: dia = 0.3 cm, len = 25 cm & 15 cm & - & - & - & 4 & [167] \\
\hline \hline
\end{tabular}
\end{table} Table 14: Wake Galloping Vibration Wind Energy Harvesters

#### 5.2.4 Flutter-induced Vibration Wind Energy Harvester

The schematic of Fig. 9d shows a typical system to harness wind energy using the aerodynamic phenomenon of flutter. In this system, a flap (airfoil) is connected at the end of a cantilever. The flow of air causes the flap to flutter, and the resulting vibrations can be converted to electrical energy. The cut-in wind speeds of flutter-based systems are usually high (\(>10\) m/s). Therefore, these systems are suitable for high-wind regimes in general. However, some designs with normal cut-in wind speeds (\(2-4\) m/s) exist and are summarized in Table 15.

\begin{table}
\begin{tabular}{c c c c c c c c}
\hline \hline
**Flap** & **Flutter Type** & **Dimensions** & **Power density (mW/cm\({}^{3}\))** & **Max. power (mW)** & **Max. power speed (m/s)** & **Cut-in speed (m/s)** & **Ref.** \\
\hline
Airfoil & Modal convergence & Airfoil: semichord 2.97 cm, span 13.6 cm; Cantilever: \(25.4\times 2.54\times 0.0381\) cm\({}^{3}\) & \(7.17\times 10^{-3}\) & 2.2 & 7.9 & 1.86 & [168] \\
Airfoil & Modal convergence & Flat plate \(6.7\times 3\) cm\({}^{2}\) & 2.56 & 4 & 15 & 4 & [169] \\
Airfoil & Modal convergence & Cantilever \(10.7\times 6.02\) cm\({}^{2}\) & 5.82 & 1.1 & 8 & 4 & [170] \\
\hline \hline
\end{tabular}
\end{table} Table 15: Flutter-based Wind Energy Harvesters

#### 5.2.5 Turbulence-induced Vibration (TIV) Wind Energy Harvester

Wind energy harvesters based on turbulence-induced vibrations are among the most practical small-scale solutions. Almost all of the above-mentioned designs work in laminar flow conditions; if the airflow is turbulent, these harvesters become extremely inefficient and most often cease to harvest energy. Therefore, it is natural to design harvesters that can still work in turbulent winds. Another limitation of the harvesters mentioned earlier is that they generate vibrations (and hence electricity) only if the wind speed is above a minimum limit (the cut-in speed). Interestingly, however, turbulence-induced vibrations (TIVs) occur even if the wind speed is very low. This phenomenon can be used to design efficient turbulence-induced vibration wind energy harvesters [171, 172, 173, 174, 175, 176]. Some of the prominent efforts in this direction are summarized in Table 16. It can be seen from the data provided in Table 16 that the designs in [174, 175, 176] are the most suitable for tiny devices. Moreover, the power density of these designs is also high when compared to the other designs.

\begin{table}
\begin{tabular}{c c}
\hline \hline
**Dimensions** & **Ref.** \\
\hline
Bluff: \(4.45\times 4.45\times 10.92\) cm\({}^{3}\); Cantilever: \(0.1016\times 0.0254\times(0.1016\times 10^{-3})\) cm\({}^{3}\) & - \\
Bluff: len = 1.2 m; Cantilever: \(3\times 1.6\times 0.02\) cm\({}^{3}\) & - \\
Whole body: \(2\times 3.3\times 0.4\) mm\({}^{3}\); PZT beam: \(3\times 0.3\times 0.008\) mm\({}^{3}\) & - \\
Bluff: \(3\times 7\times 0.51\) mm\({}^{3}\); Cantilever: \(3\times 8\times 0.035\) mm\({}^{3}\) & - \\
\hline \hline
\end{tabular}
\end{table} Table 16: mm-Scale TIV Energy Harvesters

### Discussion

It is important to note that, during this survey, we could not find any small/tiny-scale commercial solution for wind energy harvesting. However, the survey gives an interesting glimpse into the flurry of activity directed towards achieving tiny-scale wind energy harvesting solutions. These solutions are mainly targeted towards IoT sensors. We believe that the application of seismic energy harvesting provides a great incentive for the industry to seriously look into commercializing some of these wind energy harvesting ideas.

Further, the results of the survey show that most of the wind energy harvesting methods do not perform efficiently at low wind speeds, and such techniques are therefore not suitable for regions with low average wind speeds. As an example, the wind speed data of Dammam city in Saudi Arabia is presented. Figure 10 shows the maximum, minimum, and average wind speeds in Dammam for each day of the year 2019. It can easily be noted that while maximum wind speeds are as high as 15 m/s, the average speed on any given day is around 4 m/s. Similarly, Fig. 11 provides a snapshot of the average wind speeds over a period of three years (2017-2019), again for Dammam city. With this data, it is obvious that for a wind energy harvesting system to be effective in Dammam city, the cut-in wind speed must be less than 4 m/s. All wind data has been acquired from the Weather Underground website [177].

Figure 10: Wind speed data for Dammam city for the year 2019

Figure 11: Average wind speeds in Dammam city for three-year period (2017-2019)

Moreover, as the amount of energy generated by green energy harvesting solutions is not sufficient for the sustainable operation of a geophone, it is important to devise a hybrid system. Therefore, wind energy harvesting could be used along with the other energy harvesting methods discussed in this paper to provide a sustainable solution.
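A simple estimate shows why the ~4 m/s average matters so much. The kinetic power carried by wind per unit swept area is \(P/A=\tfrac{1}{2}\rho v^{3}\); the sketch below evaluates it at the Dammam wind speeds (standard air density; the 20% net conversion efficiency is our rough assumption for a miniature harvester, well below the 59% Betz limit).

```python
RHO = 1.225   # air density at sea level (kg/m^3)
EFF = 0.20    # assumed net conversion efficiency of a small harvester

for v in (2.0, 4.0, 15.0):            # typical cut-in, Dammam average, Dammam maximum
    p_area = 0.5 * RHO * v**3         # available kinetic power (W/m^2)
    p_net = 0.1 * p_area * EFF        # harvested, in mW/cm^2 (1 W/m^2 = 0.1 mW/cm^2)
    print(f"v = {v:>4.1f} m/s: {p_area:7.1f} W/m^2 available, "
          f"~{p_net:.3f} mW/cm^2 harvested")
# The cubic law means dropping from 15 m/s to 4 m/s cuts the available power
# by a factor of ~53, which is why low cut-in speed designs are essential.
```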
## 6 Thermal Energy Harvesting

Another possible solution to power geophones is energy harvesting from the temperature gradient that exists between the part of a geophone inserted into the ground and the part that is exposed to the open environment in the seismic field. Thermal energy harvesting refers to the reliable conversion of thermal energy to electricity with no moving parts. Various strategies of thermal energy harvesting are reported in the literature [178, 179, 180, 181, 182, 183, 184, 185]; the most notable are pyroelectric and thermoelectric generators. The first type, the pyroelectric generator, converts temperature fluctuations in a material into usable electrical energy. Thermoelectric generators, on the other hand, do not require temperature fluctuations; rather, they rely on temperature differences. In this mode of energy generation, the thermal gradient is converted into useful electrical energy utilizing the phenomenon termed the Seebeck effect, which is summarized as follows. When two dissimilar electrical conductors are joined together, a thermocouple is formed. An electromotive force is developed when a temperature difference is maintained between the two junctions; the induced voltage is proportional to the temperature gradient. The heat source provides an elevated temperature, from which the heat flows through a thermoelectric converter to a heat sink maintained at a temperature well below that of the source. Hence, the flow of charge carriers between the hot and cold bodies creates a voltage difference, leading to power generation.

Thermoelectric generators offer unique characteristics, such as a small footprint, light weight, solid-state operation with no moving parts, freedom from noise, resistance to mechanical damage (which means less maintenance), and long-term use in harsh environments. Table 17 shows applications where thermal energy is utilized for powering up different devices. It is apparent from Table 17 that harvesting power in the range of hundreds of milliwatts is possible using thermal sources and could potentially be used for various applications.

\begin{table}
\begin{tabular}{c c c c}
\hline \hline
**Application** & **Heat utilized** & **Harvested Energy** & **Ref.** \\
\hline
Seiko Thermic watch & Wrist and the environment at room temperature & \(22\,\mathrm{\SIUnitSymbolMicro W}\) & [178] \\
Nuclear Power Plant & Heat pipes & - & [179] \\
ThermoWatt & Heat of a candle at room temperature & \(500-800\) mW & [180] \\
DW-DF-10W Camp Stove & Heat of a propane stove & - & [181] \\
Radiator & Radiator at \(323\) K, air at \(294\) K & \(95.19\) mW & [182] \\
Pavement & Temperature difference between the pavement surface and the soil below & \(0.05\) mW & [183] \\
Aircraft & Cargo skin and cargo primary insulation & \(22.58\) mW & [184] \\
\hline \hline
\end{tabular}
\end{table} Table 17: Thermal energy harvesting applications

Thermoelectric generators are generally manufactured from either inorganic or polymer materials. The inorganic materials are mostly based on Bi-Te compounds. A flexible thermoelectric generator consists of inorganic bulk materials embedded in a flexible polymer and can also be attached to a curved surface. Table 18 shows the conversion efficiency for different thermoelectric generator materials. It is well known that the conversion efficiency of thermoelectric generators is very low, making them unsuitable for various standalone applications. The performance of a thermoelectric material is usually measured using a dimensionless figure of merit known as the ZT value, which is directly proportional to the Seebeck coefficient and the electrical conductivity. In order to improve the conversion efficiency of thermoelectric generators, a high value of ZT at room temperature is desired. The maximum conversion efficiency of \(84.5\%\) is achieved when the ZT of the material is \(1.8\) at \(298\) K, as indicated in Table 18.

\begin{table}
\begin{tabular}{c c c c c}
\hline \hline
**Material** & **ZT (at \(298\) K)** & \(\Delta\)**T (K)** & **Conversion efficiency (\%)** & **Ref.** \\
\hline
Bi\({}_{2}\)Te\({}_{3}\) & \(0.69\) & \(137\) & \(54.6\) & [186] \\
Bi\({}_{2}\)Te\({}_{3}\), Sb\({}_{2}\)Te\({}_{3}\) & - & \(15\) & \(43\) & [187] \\
(Bi,Sb)\({}_{2}\)Te\({}_{3}\), Sb\({}_{2}\)Te\({}_{3}\) & - & \(240\) & \(81.8\) & [188] \\
Bi\({}_{2}\)Te\({}_{3}\) & \(1\) & \(125\) & \(60.4\) & [189] \\
(Bi,Sb)\({}_{2}\)Te\({}_{3}\) & \(1.4\) & \(125\) & \(74\) & [189] \\
Bi\({}_{2}\)Te\({}_{3}\) super-lattices & \(1.8\) & \(125\) & \(84.5\) & [189] \\
\hline \hline
\end{tabular}
\end{table} Table 18: Conversion efficiency for different thermoelectric generator compositions

Thermoelectric generators generally require a temperature gradient of around \(5-10\) K to generate electrical power in the milliwatt range [186].
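A first-order sketch of what such a gradient yields is given below. The module Seebeck coefficient and internal resistance are illustrative assumptions for a small commercial Bi\({}_{2}\)Te\({}_{3}\) module (they are not taken from the cited works); the \(5-7\) K gradient anticipates the field measurements reported in Table 20.

```python
# First-order thermoelectric generator (TEG) model: Voc = S * dT, with
# maximum power transfer into a matched load: P = Voc^2 / (4 * R_internal).
S_MODULE = 0.05    # module Seebeck coefficient in V/K (assumed, typical Bi2Te3 module)
R_INTERNAL = 2.0   # module internal resistance in ohms (assumed)

for dT in (5.0, 7.0, 10.0):
    v_oc = S_MODULE * dT                           # open-circuit voltage (Seebeck)
    p_max_mw = 1000 * v_oc**2 / (4 * R_INTERNAL)   # matched-load power in mW
    print(f"dT = {dT:>4.1f} K -> Voc = {v_oc:.2f} V, P_max = {p_max_mw:.1f} mW")
# dT = 5 K -> ~7.8 mW; dT = 10 K -> ~31 mW: milliwatt-level output, consistent
# with the 5-10 K / milliwatt-range figure quoted above [186].
```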
Here, we propose to place thermoelectric generators on the outer surface of the geophones installed in the seismic field. As shown in Fig. 12, a part of the geophone is under the ground. This creates a temperature gradient due to the temperature difference between the ground surface and the subsurface. Usually, a significant temperature difference exists between the upper surface of the seismic field and a few centimeters below it. The energy harvested by the thermoelectric generator can be utilized to provide power to geophones installed in seismic fields. A closely related scenario has recently been studied by Sigrist et al. [185], who developed an end-to-end thermoelectric energy harvesting system to harvest energy from the temperature gradients found at the natural ground-to-air boundary on the earth's surface. Table 19 lists various thermoelectric generators that generate power using the ground-to-air temperature gradient.

Figure 12: Geophone with the thermoelectric generator.

\begin{table}
\begin{tabular}{c c c c}
\hline \hline
**Thermoelectric Generator characteristics** & **Power output (mW)** & **Power type** & **Ref.** \\
\hline
Area of \(144\) cm\({}^{2}\) & \(1.1\) & Average & [190] \\
Optimized source and load & \(8.1\times 10^{-4}\) & Average & [191] \\
Thermal guides of \(3.8\) cm diameter & \(1\) & Average & [192] \\
- & \(16\) & Average & [193] \\
Simulation setup only & Enough to power a wireless sensor & - & [194] \\
With active rectification circuit and electrical impedance matching & \(27.2\) (day), \(6.3\) (night) & Peak & [195] \\
- & \(1.1\) & Average & [195] \\
\hline \hline
\end{tabular}
\end{table} Table 19: Thermoelectric generators utilizing ground-to-air temperature gradient

### Discussion

In order to demonstrate the feasibility of thermal energy harvesters in seismic fields, we recorded the temperature during the month of November in Dammam, Saudi Arabia (see Table 20) and found that a temperature difference of about \(5-7\) K exists \(10\) cm below the surface. We strongly believe that this temperature difference will generate significant power to contribute to the energy required by geophones. This energy harvesting source is readily available \(24\) hours a day and can account for a significant portion of the energy harvested from the various sources. Moreover, thermal energy harvesters are also robust to high temperatures and dust.

\begin{table}
\begin{tabular}{c c c c c}
\hline \hline
**Time** & **Ambient temperature** & **Temperature 1 cm below surface** & **Temperature 5 cm below surface** & **Temperature 10 cm below surface** \\
\hline
Morning & \(293\) K & \(289\) K & \(289\) K & \(288\) K \\
Noon & \(306\) K & \(313\) K & \(307\) K & \(299\) K \\
Evening & \(293\) K & \(294\) K & \(298\) K & \(300\) K \\
Night & \(293\) K & \(290\) K & \(294\) K & \(298\) K \\
\hline \hline
\end{tabular}
\end{table} Table 20: Temperature measurements in the Eastern region of Saudi Arabia during the month of November

## 7 RF Energy Harvesting

Harvesting energy from RF sources, also known as wireless energy harvesting, has attracted a lot of interest in recent years due to its wide applicability as a substitute power source. Some interesting applications include battery-less power sources, RF tags, biomedical devices, and smart wireless sensor networks, which require nanowatt to microwatt input power. We believe that wireless geophones can also take advantage of this technology. Specifically, the presence of an on-site data center provides an opportunity to power wireless geophones through RF energy. A typical layout of geophones in the field and an on-site data center is shown in Fig. 13.

Figure 13: Geophones and data center in the seismic field.

Power is readily available at the data center and can be used to transmit energy to geophones over a wireless link.
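To gauge what such a data-center-to-geophone power link could deliver, the free-space sketch below applies the Friis relation; the transmit EIRP, the receive antenna gain, the distances, and the ~50% rectifier efficiency (a mid-range value from Table 22) are illustrative assumptions rather than measured system parameters.

```python
import math

FC = 915e6        # assumed carrier frequency (ISM band), Hz
C = 3e8           # speed of light (m/s)
EIRP_DBM = 36.0   # assumed data-center transmit EIRP (4 W)
G_RX_DBI = 2.0    # assumed geophone antenna gain
RECT_EFF = 0.50   # RF-to-DC efficiency (mid-range of Table 22)

def harvested_uw(d_m):
    """Free-space link: Pr = EIRP + Grx - FSPL, followed by rectification."""
    fspl_db = 20 * math.log10(4 * math.pi * d_m * FC / C)
    p_r_dbm = EIRP_DBM + G_RX_DBI - fspl_db
    return RECT_EFF * 10 ** (p_r_dbm / 10) * 1000   # dBm -> mW -> uW

for d in (10, 50, 100):
    print(f"d = {d:>3} m -> ~{harvested_uw(d):7.2f} uW harvested")
# Tens of microwatts at 10 m, falling off as 1/d^2: useful for trickle-charging
# nearby nodes, consistent with the nanowatt-to-microwatt regime noted above.
```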
## 7 RF Energy Harvesting

Harvesting energy from RF sources, also known as wireless energy harvesting, has attracted a lot of interest in recent years due to its wide applicability as a substitute power source. Some interesting applications include battery-less power sources, RF tags, biomedical devices, and smart wireless sensor networks that require nanowatt- to microwatt-level input power. We believe that wireless geophones can also take advantage of this technology. Specifically, the presence of an on-site data center provides an opportunity to power wireless geophones through RF energy. A typical layout of geophones in the field and an on-site data center is shown in Fig. 13. Power is readily available at the data center and can be used to transmit energy to geophones over a wireless link.

Figure 13: Geophones and data center in the seismic field.

### RF Energy Sources

In general, a wireless geophone can harvest RF energy from various different sources. Any device emitting radio waves can be considered a source for wireless energy harvesting. The frequency range of such sources depends on the type of transmitter. The most common radio sources are radio/TV broadcasting stations, satellites, wireless fidelity (Wi-Fi), global system for mobile communications (GSM), universal mobile telecommunications system (UMTS), and long term evolution (LTE) base stations. These sources cover a broad range of frequencies, starting from \(3\) kHz all the way up to \(300\) GHz of the electromagnetic spectrum.
These RF energy sources are ubiquitous and are available even in the most inaccessible places. A typical RF energy harvesting system consists of an antenna that receives the incident power, a matching network for maximizing the power transfer and minimizing the signal reflection, and an RF-to-DC rectifier [196]. RF energy harvesting can also be combined with data transfer in a communication system. Table 21 shows the power density of different RF sources. The power densities of these sources vary from \(0.45\) nW/cm\({}^{2}\) for a GSM900 mobile terminal to \(84\) nW/cm\({}^{2}\) for a GSM1800 base station.

\begin{table}
\begin{tabular}{c c c}
\hline \hline
**Band** & **Range (MHz)** & **Power Density (nW/cm\({}^{2}\))** \\
\hline
DTV & \(470-610\) & \(0.89\) \\
GSM900 (MT) & \(880-915\) & \(0.45\) \\
GSM900 (BT) & \(920-960\) & \(36\) \\
GSM1800 (MT) & \(1710-1785\) & \(0.5\) \\
GSM1800 (BT) & \(1805-1880\) & \(84\) \\
3G (MT) & \(1710-1785\) & \(0.46\) \\
3G (BT) & \(2110-2170\) & \(12\) \\
WiFi & \(2400-2500\) & \(0.18\) \\
\hline \hline
\end{tabular}
\end{table} Table 21: Power density of different RF sources (DTV: Digital TV; MT: Mobile Terminal; BT: Base Terminal)

Table 22 shows the reported conversion efficiencies of different RF schemes with various antenna types. In [197], an RF energy harvester with no antenna is reported, in which a uniform transmission line was used for impedance matching. It is evident that the dipole and microstrip antennas offered the best conversion efficiency, followed by schemes based on patch antennas.

\begin{table}
\begin{tabular}{c c c c c c c}
\hline \hline
**Frequency Band (GHz)** & **Type of antenna used** & **Input Power (dBm)** & **Output Voltage (V)** & **Load (\(\Omega\))** & **Conversion efficiency (\%)** & **Ref.** \\
\hline
\(0.47-0.86\) & Just rectifier, no antenna used & \(10\) & - & \(12200\) & \(60\) & [197] \\
\(0.9-2.45\) & Patch & \(-3\) & \(21\) & \(2400\) & \(50\) & [198] \\
\(0.876-0.959\) & Dipole & \(5.8\) & \(0.9\) & \(11000\) & \(84\) & [199] \\
\(0.9\) & Patch & \(-15\) & - & \(50000\) & \(45\) & [200] \\
\(1.5-2.0\) & Just rectifier, no antenna used & \(27\) & - & \(50\) & \(55\) & [201] \\
\(0.85\) & Patch & \(-20\) & - & \(2200\) & \(15\) & [202] \\
\(2.45\) & Microstrip & \(0\) & \(1\) & \(1400\) & \(83\) & [203] \\
\hline \hline
\end{tabular}
\end{table} Table 22: Conversion efficiency for different RF schemes
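To put these densities in perspective, the DC power available from an ambient source is roughly the incident power density multiplied by the antenna's effective aperture and the RF-to-DC conversion efficiency. In the sketch below, the aperture and efficiency are assumed values chosen purely for illustration:

```python
def harvested_power_uw(density_nw_per_cm2, aperture_cm2, rf_dc_efficiency):
    """DC power (microwatts) recovered from an ambient RF power density."""
    return density_nw_per_cm2 * aperture_cm2 * rf_dc_efficiency * 1e-3

# GSM1800 base-station density from Table 21 (84 nW/cm^2), with an assumed
# 50 cm^2 effective aperture and 50% RF-to-DC conversion efficiency.
print(harvested_power_uw(84.0, 50.0, 0.5))  # ~2.1 uW
```

Even under generous assumptions, the ambient budget stays in the microwatt range, which is why a dedicated transmitter at the data center is the more promising option for geophones.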
### Optimal Signal Design for RF energy harvesting

The signal waveform design also plays an important role in efficient RF energy harvesting. Various waveform designs based on single- or multiple-antenna transmissions are reported in the literature [204, 205]. It has been shown that an appropriate signal generation method that adapts as a function of the channel condition significantly boosts the amount of harvested energy [205]. In particular, the transmitted RF signal has been proposed to be a superposition of multiple sine waves of individually chosen amplitudes and phases, where the number of sine waves depends upon the number of channel subbands. Consider a general multiple-antenna transmitter with \(M\) transmit antennas and assume \(N\) channel subbands for a general frequency-selective channel. The transmit vector signal can be expressed as [205] \[\mathbf{x}(t)=\Re\left\{\sum_{n=0}^{N-1}\mathbf{w}_{n}e^{j2\pi f_{n}t}\right\} \tag{5}\] where \(\mathbf{x}(t)=[x_{1}(t),\cdots,x_{M}(t)]^{T}\) is the vector of signals transmitted from the \(M\) antennas, and \(\mathbf{w}_{n}=[w_{n,1}(t),\cdots,w_{n,M}(t)]^{T}\) with \(w_{n,m}(t)=s_{n,m}(t)e^{j\phi_{n,m}(t)}\) expresses the amplitude and phase of the subband signal on frequency \(f_{n}\) and transmit antenna \(m\) at time \(t\). If the frequency response of the multipath channel is given by \(h_{n,m}=A_{n,m}e^{j\psi_{n,m}}\), the optimal design of \(\mathbf{w}_{n}\) is given by [205] \[\mathbf{w}_{n}=\frac{\mathbf{h}_{n}^{H}}{\left\|\mathbf{h}_{n}\right\|}\left\|\mathbf{h}_{n}\right\|^{\beta}\sqrt{\frac{2P}{\sum_{n=0}^{N-1}\left\|\mathbf{h}_{n}\right\|^{2\beta}}} \tag{6}\] where \(\mathbf{h}_{n}=[h_{n,1},\cdots,h_{n,M}]\), \(\beta\) is a scaling factor whose optimal value is chosen to be \(3\) [205], and \(P\) is the transmit power budget. Under a single-antenna transmitter, the optimal design can be expressed as \[w_{n}=A_{n}^{\beta}\sqrt{\frac{2P}{\sum_{n=0}^{N-1}A_{n}^{2\beta}}}\,e^{-j\psi_{n}} \tag{7}\]

### Discussion

In the seismic exploration environment, we propose that wireless geophones tap the RF energy generated by the data center. In seismic acquisition, geophones transmit the acquired data to the data center, which in turn sends acknowledgments [4], usually in the form of small frames. The fact that only small frames are sent in the downlink makes it a perfect scenario for harvesting energy from the RF signals. Since the downlink channel is idle most of the time, special signals meant for energy harvesting (as discussed in Section 7.2) can be sent over it. Moreover, geophones can be powered up using RF signals during both the shooting intervals and the non-shooting periods; that is, RF signals can be used to power up geophones at any time. In [4], the authors proposed a scheme for seismic data transmission utilizing a wireless network based on the IEEE 802.11af standard. Usually the ambient energy from this RF source is not sufficient for powering the geophones and, therefore, other sources need to be added to the system. Nevertheless, we still believe that it can be utilized alongside other energy harvesting modes in a hybrid fashion. Furthermore, in the wireless sensor network literature there is a growing trend of using unmanned aerial vehicles (UAVs) to power up sensor nodes through RF signals [206, 207, 208]. The same concept can be applied to power up geophones located far away from the data center, where RF energy harvesting from the data center is not feasible. In such large-distance scenarios, as geophones cannot transmit the recorded data to the data center directly, UAVs are sent to collect the data [209]; thus, the UAVs can be used to simultaneously receive data from and transmit power to the geophones. As mentioned above, the downlink contains only acknowledgments. This _almost_ idle channel can be leveraged to intelligently design waveforms that are friendly for RF energy harvesting, improving the amount of energy harvested by the geophones. Another interesting design strategy could be to use these special waveforms, which maximize the RF energy harvesting efficiency, as acknowledgments (positive or negative) for a geophone. Finally, the waveform design in (6)-(7) suggests multiple antennas at the data center and a single antenna at a geophone.
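A direct transcription of the weight design in Eqs. (6)-(7) is sketched below; the channel matrix is randomly generated for illustration, and the function name is ours, not from [205].

```python
import numpy as np

def optimal_weights(H, P, beta=3.0):
    """Per-subband transmit weights of Eq. (6).
    H: (N, M) complex array of subband channel responses h_{n,m};
    P: transmit power budget; beta: scaling exponent (3 is optimal per [205])."""
    norms = np.linalg.norm(H, axis=1)                      # ||h_n||
    scale = np.sqrt(2.0 * P / np.sum(norms ** (2.0 * beta)))
    # Matched phase per antenna (h_n^H / ||h_n||), amplitude ||h_n||^beta.
    return (H.conj() / norms[:, None]) * (norms ** beta)[:, None] * scale

rng = np.random.default_rng(0)
H = (rng.standard_normal((8, 4)) + 1j * rng.standard_normal((8, 4))) / np.sqrt(2)
W = optimal_weights(H, P=1.0)
print(np.sum(np.abs(W) ** 2))  # = 2P, the normalization enforced by Eq. (6)
```

Stronger subbands receive disproportionately more power (amplitudes scale as \(\left\|\mathbf{h}_{n}\right\|^{\beta}\) with \(\beta=3\)), which is what makes the multisine waveform rectifier-friendly; setting \(M=1\) recovers Eq. (7). Note that all channel-dependent computation sits on the transmitter side, while the geophone only needs a single antenna and a rectifier.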
Such an asymmetric arrangement is perfect for a typical wireless seismic acquisition setup, since it relieves a limited-power geophone while shifting the heavy processing to the data center, where power requirements are relaxed.

## 8 Proposed Design of Energy Harvesting Geophone

Figure 14 illustrates the proposed design of an energy harvesting geophone. It consists of solar cells on the top surface, piezoelectric and electromagnetic/electrostatic harvesters on the sides/edges and inside, respectively, a coating of thermoelectric material on the whole body, and an antenna for RF energy harvesting. Based on this design, the average energy harvested per day can be approximated using the appropriate harvester designs highlighted in Sections 3-7 above. We have selected a suitable harvester per energy source based on the latest design, practicality, size, feasibility for geophones, maximum efficiency, and output power. From Table 23, it can be concluded that the average energy harvested from multiple sources can meet the power requirements of a geophone. Furthermore, the RF energy harvester is the least effective in this case, while solar energy harvesters contribute most of the harvested energy. Although the RF energy harvester adds very little to the overall harvested energy, we strongly believe that it can still be useful, as geophones can be powered up by RF energy (transmitted from the data center) \(24\) hours a day, especially during the evening/night when geophones are in sleep mode (no recording). In addition, there have recently been considerable improvements in the conversion efficiency of RF circuits (e.g., see [210, 211]), which could improve their performance in the future. The proposed design is a collective solution to harvest energy through various possible means; one can easily pick and choose some harvesters and drop others, depending upon the actual environmental situation. The hardware implementation and an extensive comparison of the various harvester designs in real scenarios are the focus of our future research.

Figure 14: Cross-section of a geophone with various energy harvesting systems.

## 9 Conclusion

This paper has presented a comprehensive survey of promising energy harvesting technologies for realizing self-powered geophones for seismic exploration. First, an overview of a typical wireless geophone with a focus on its energy requirements is provided. Next, detailed discussions of the state-of-the-art research contributions in various small-scale energy harvesting techniques suitable for geophones are presented. These include solar, vibration, wind, thermal, and RF energy harvesting methods. To this end, the characteristics and design of different energy harvesting methods, their limitations, the amount of harvested energy, comparisons, and various research challenges are discussed along with some real case studies. Finally, we have outlined the proposed design of a geophone equipped with the discussed energy harvesting mechanisms. It is concluded that energy harvesting and storage systems should be planned based on the combination of more than one alternative energy source. This paves the way for a paradigm shift from traditional wired geophones to a truly autonomous and sustainable geophone energy harvesting network. Hardware design and implementation issues were identified as future research directions. We believe these insights will motivate further research towards the use of energy harvesting in geophones.
2309.02890
Note on radical and prime E-ideals
We show that the ring of exponential polynomials is not Noetherian even with respect to prime E-ideals. Moreover, we give a characterization of exponential radical ideals.
Antongiulio Fornasiero, Giuseppina Terzo
2023-09-06T10:26:13Z
http://arxiv.org/abs/2309.02890v1
# Note on radical and prime E-ideals ###### Abstract. We show that \(\mathbb{C}[\bar{x}]^{E}\) is not Noetherian even with respect to prime E-ideals. Moreover, we give a characterization of exponential radical ideals. Key words and phrases:Exponential rings, exponential polynomials, exponential ideals 2000 Mathematics Subject Classification: 03C60; 03C98 ## 1. Introduction The notion of an exponential ideal (E-ideal) was introduced in the papers [6, 9]; it arose in the study of exponential functions after the problem posed by Tarski on the decidability of the reals with exponentiation. An exponential ring (E-ring) is a pair \((R,E)\) where \(R\) is a commutative ring with \(1\) and \(E\) is a homomorphism \(E:(R,+)\to(R^{*},\cdot)\). We will always assume that \(R\) is a \(\mathbb{Q}\)-algebra. The classical examples are the reals and the complex numbers with the usual exponentiation. Starting from an E-ring \(R\), we can construct the E-polynomial ring in the variables \(\bar{x}\) over \(R\) by induction (see [6, 9]); we denote it by \(R[\bar{x}]^{E}\). An E-ideal \(I\) of an E-ring \(R\) is an ideal with the property that if \(\alpha\in I\) then \(E(\alpha)-1\in I\); E-ideals coincide with the possible kernels of homomorphisms which preserve the exponential map \(E\). Some further contributions to the study of E-ideals were given in [10, 13, 15]. In the recent paper [3], together with P. D'Aquino, we studied E-ideals of E-rings, gave two notions of maximality for E-ideals, and related them to primeness. We proved that the three notions are independent, unlike in the classical case. Moreover, we showed that, for any exponential field \(K\), not all maximal E-ideals of \(K[\bar{x}]^{E}\) correspond to points of \(K^{n}\). In this paper we further investigate E-ideals of \(R[\bar{x}]^{E}\). It was known that the exponential Zariski topology on \(\mathbb{C}^{n}\) is not Noetherian (see [9]). We show that \(\mathbb{C}[\bar{x}]^{E}\) is not Noetherian even with respect to prime E-ideals. Moreover, we give a reasonable notion of E-radical E-ideals, characterize them, and prove some of their properties, using a technique introduced in [3] that allows one to extend prime ideals to prime E-ideals. ## 2. E-polynomial ring and basic results Before introducing the construction of the E-polynomial ring, it is useful to recall the notions of partial E-rings and partial E-ideals, together with some related results proved in [3]. ### Partial E-rings and E-ideals **Definition 2.1**.: A partial E-ring is a triple \(D=(D,V,E)\) where 1. \(D\) is a \(\mathbb{Q}\)-algebra; 2. \(V\) is a \(\mathbb{Q}\)-vector subspace of \(D\) containing \(\mathbb{Q}\); 3. \(E:(V,+)\to(D^{*},\cdot)\) is a group homomorphism. **Definition 2.2**.: A partial E-ideal of \(D\) is an ideal \(I\) of the ring \(D\) such that, for every \(v\in I\cap V\), \(E(v)-1\in I\). If \(D=V\) we say that it is an E-ideal. **Remark 2.3**.: If \(I\) is an ideal of \(D\) with \(I\cap V=(0)\), then \(I\) is a partial E-ideal of \(D\). **Definition 2.4**.: An E-ideal \(I\) of \(D\) is prime if it is prime as an ideal, i.e., if \(D/I\) is a domain. An E-ideal \(I\) of \(D\) is an E-maximal ideal if it is maximal among the E-ideals. It is strongly maximal if it is maximal as an ideal. ### Construction of the E-polynomial ring The construction of the E-polynomial ring in many variables is well known, see [6, 9]; for the reader's convenience, we briefly recall it here.
Starting from an E-ring \(R\), we construct the E-polynomial ring in the variables \(\bar{x}=(x_{1},\ldots,x_{n})\), denoted by \(R[\bar{x}]^{E}\), as a union of a chain of partial E-rings equipped with partial E-morphisms. Three chains are constructed by recursion: rings \((R_{k},+,\cdot)_{k\geq-1}\), abelian groups \((A_{k},+)_{k\geq 0}\), and partial E-morphisms \((E_{k})_{k\geq-1}\). Let \(R_{-1}=R\), and let \(R_{0}=R[\overline{x}]\) as a partial exponential ring (where the exponential is defined only on \(R\)). Let \(A_{0}=(\overline{x})\) be the ideal of \(R[\overline{x}]\) generated by \(\overline{x}\), so that \(R_{0}=R_{-1}\oplus A_{0}\). For \(k\geq 1\), let \(R_{k}=R_{k-1}[t^{A_{k-1}}]\) be the group ring, and let \(A_{k}\) be the \(R_{k-1}\)-submodule of \(R_{k}\) generated by the elements \(t^{a}\) with \(0\neq a\in A_{k-1}\), so that \(R_{k}=R_{k-1}\oplus A_{k}\). Define \(E_{k}:(R_{k},+)\to(R_{k+1}^{*},\cdot)\) by \(E_{k}(x)=E_{k-1}(r)\cdot t^{b}\) for \(x=r+b\) with \(r\in R_{k-1}\) and \(b\in A_{k}\). The E-polynomial ring is then the limit of this chain, i.e., \(R[\overline{x}]^{E}=\bigcup_{k}R_{k}\). Sometimes it is convenient to represent \(R[\bar{x}]^{E}\) as the group ring \(R[\bar{x}][t^{\bigoplus_{i\geq 0}A_{i}}]\). In [3] we generalized this construction to any partial E-ring \(R\); that is, we gave a free completion of a partial E-ring \(R\), denoted by \(R^{E}\). Moreover, we gave sufficient conditions on a subring \(S\) of \(R^{E}\) under which the free completion \(S^{E}\) of \(S\) is isomorphic to \(R^{E}\). We recall the result only for E-polynomial rings. **Lemma 2.5** ([3, Lemma 2.10]).: _Let \(R\) be an E-ring and \(S\) be a partial subring of \(R[\bar{x}]^{E}\), and assume that \(S=R[\bar{x}][e^{A}]\) for some \(\mathbb{Q}\)-linear subspace \(A\) of \(R[\bar{x}]\) which has trivial intersection with \(R\). Then, \(S^{E}=R[\bar{x}]^{E}\)._ **Lemma 2.6**.: _Let \(S\subseteq R[\bar{x}]^{E}\) be as in Lemma 2.5, and let \(I\) be an ideal of \(S\). If \(I\) is a partial prime E-ideal of \(S\), then \(I^{E}\) (the E-ideal of \(R[\bar{x}]^{E}\) generated by \(I\)) is a prime E-ideal of \(R[\bar{x}]^{E}\)._ ## 3. Noetherianity As Macintyre points out in [9], neither \(\mathbb{C}[x]^{E}\) nor \(\mathbb{R}[x]^{E}\) is Noetherian for E-ideals: the E-ideal \(I=(E(\frac{x}{2^{n}})-1)_{n\in\mathbb{N}}\) is not finitely generated (for details see also [15]). There is a notion of Noetherianity also for topological spaces: **Definition 3.1**.: A topological space \(X\) is Noetherian if it satisfies the descending chain condition for closed subsets, i.e., every descending sequence of closed subsets of \(X\) is eventually stationary. \(\mathbb{C}[\bar{x}]^{E}\) is very far from being Noetherian for E-ideals, since \(\mathbb{C}\) with the exponential Zariski topology is not a Noetherian space: the chain \[Z(E(x)-1)\supset Z\left(E\left(\frac{x}{2}\right)-1\right)\supset\ldots\supset Z\left(E\left(\frac{x}{n!}\right)-1\right)\supset\ldots,\] that is, \[2\pi i\,\mathbb{Z}\supset 4\pi i\,\mathbb{Z}\supset\ldots\supset 2\pi i\,n!\,\mathbb{Z}\supset\ldots,\] descends indefinitely. We show that \(\mathbb{C}[\bar{x}]^{E}\) is not Noetherian even with respect to prime E-ideals. **Theorem 3.2**.: _The ring \(\mathbb{C}[\bar{x}]^{E}\) does not satisfy the ascending chain condition for prime E-ideals._ Proof.: We consider the subring \(S:=\mathbb{C}[\bar{x},e^{\mathbb{C}\bar{x}}]=\mathbb{C}[\bar{x}][e^{\mathbb{C}\bar{x}}]\) of the ring \(\mathbb{C}[\bar{x}]^{E}\).
The idea is to construct an ascending chain of prime E-ideals of \(S\) and to extend it to an ascending chain of prime E-ideals of \(\mathbb{C}[\bar{x}]^{E}\): given prime ideals \(P_{i}<S\), we define \(Q_{i}=P_{i}\,\mathbb{C}[\bar{x}]^{E}\). For simplicity, we consider the case of only one variable, \(\bar{x}=x\). We construct the prime ideals in the following way: let \(B=(b_{j}:j<2^{\aleph_{0}})\) be a transcendence basis of \(\mathbb{C}\). For all \(i\in\mathbb{N}\), we define \[p_{i}:=e^{b_{3i}x}+e^{b_{3i+1}x}+e^{b_{3i+2}x}.\] Let \(A_{n}=(p_{0},\ldots,p_{n-1})\) be the ideal these elements generate in \(\mathbb{C}[e^{\mathbb{C}x}]\). We need the following result: **Lemma 3.3**.: _The ideal \(A_{n}\) is prime for all \(n\geq 1\)._ Proof.: We introduce new variables denoting the elements of the form \(e^{b_{i}x}\) for \(b_{i}\in B\), i.e., \(z_{i}=e^{b_{i}x}\), so that we can write \(A_{n}=(z_{0}+z_{1}+z_{2},z_{3}+z_{4}+z_{5},\ldots,z_{3n-3}+z_{3n-2}+z_{3n-1})\) as an ideal of \(\mathbb{C}[\overline{z}^{\mathbb{Q}}]\). We prove by induction on \(n\) that \(A_{n}\) is prime. For \(n=1\) we have \(A_{1}=(z_{0}+z_{1}+z_{2})\) as an ideal of \(\mathbb{C}[z_{0}^{\mathbb{Q}},z_{1}^{\mathbb{Q}},z_{2}^{\mathbb{Q}}]\). Assume that \(p(z_{0},z_{1},z_{2})\cdot q(z_{0},z_{1},z_{2})\in A_{1}\); then \(p(z_{0},z_{1},z_{2})\cdot q(z_{0},z_{1},z_{2})=r(z_{0},z_{1},z_{2})(z_{0}+z_{1}+z_{2})\). Let \(k\) be a common denominator of all exponents appearing in \(p,q,r\), so that we can consider \(p,q,r\in\mathbb{C}[z_{0}^{\pm\frac{1}{k}},z_{1}^{\pm\frac{1}{k}},z_{2}^{\pm\frac{1}{k}}]\). By replacing \(z_{i}\) with \(t_{i}^{k}\), we have that \(p,q,r\in\mathbb{C}[t_{0}^{\pm 1},t_{1}^{\pm 1},t_{2}^{\pm 1}]\). Notice that \(z_{0}+z_{1}+z_{2}\) becomes \(s(\bar{t}):=t_{0}^{k}+t_{1}^{k}+t_{2}^{k}\), and that \(s\) is irreducible in \(\mathbb{C}[\bar{t}]\); hence the ideal \(A_{1}^{\prime}\) generated by \(s\) inside \(\mathbb{C}[\bar{t}]\) is prime. By localization, the ideal \(A_{1}^{\prime\prime}\) generated by \(s\) inside \(\mathbb{C}[\bar{t}^{\,\mathbb{Z}}]\) is also prime; therefore, since \(pq\in A_{1}^{\prime\prime}\), we have either \(p\in A_{1}^{\prime\prime}\subseteq A_{1}\) or \(q\in A_{1}^{\prime\prime}\subseteq A_{1}\), proving that \(A_{1}\) is prime. For \(n=2\), \(A_{2}=(z_{0}+z_{1}+z_{2},z_{3}+z_{4}+z_{5})\) is prime since \[\frac{\mathbb{C}[z_{0}^{\mathbb{Q}},\ldots,z_{5}^{\mathbb{Q}}]}{A_{2}}\cong\frac{\mathbb{C}[z_{0}^{\mathbb{Q}},z_{1}^{\mathbb{Q}},z_{2}^{\mathbb{Q}}]}{A_{1}}\otimes\frac{\mathbb{C}[z_{3}^{\mathbb{Q}},z_{4}^{\mathbb{Q}},z_{5}^{\mathbb{Q}}]}{(p_{1})},\] so \(\frac{\mathbb{C}[z_{0}^{\mathbb{Q}},\ldots,z_{5}^{\mathbb{Q}}]}{A_{2}}\) is a domain, since the tensor product of \(\mathbb{C}\)-algebras which are domains is again a domain; see [2, Chapter V, §17]. In a similar way we can prove that \(A_{n}\) is prime for any \(n\). Since \(A_{n}\) is prime, \(P_{n}=A_{n}S\) is a partial prime E-ideal of \(S\), and so, by Lemma 2.5 and Lemma 2.6, \(Q_{n}\) is a prime E-ideal of \(\mathbb{C}[\bar{x}]^{E}\). We thus obtain an ascending chain of prime E-ideals. Now we give conditions for an E-ideal of a particular form to be prime. Let \(\bar{x},\bar{y}\) be tuples of variables of the same length. Given \(p(\bar{x},\bar{y})\in\mathbb{C}[\bar{x},\bar{y}]\), we denote \(\tilde{p}(\bar{x}):=p(\bar{x},E(\bar{x}))\in\mathbb{C}[\bar{x}]^{E}\). Let \(I\subseteq\mathbb{C}[\bar{x},\bar{y}]\) be an ideal, and let \(\tilde{I}:=\{\tilde{p}:p\in I\}\).
We denote by \(S=\mathbb{C}[\overline{x},e^{\mathbb{Q}\overline{x}}]\), by \(J\) the E-ideal of \(\mathbb{C}[\bar{x}]^{E}\) generated by \(\tilde{I}\), and by \(H=J\cap S\). **Proposition 3.4**.: \(J\) _is prime in \(\mathbb{C}[\overline{x}]^{E}\) iff \(H\) is prime in \(S\)._ Proof.: One direction is trivial. For the other one, assume that \(H\) is prime in \(S\); then by Lemma 2.5 and Lemma 2.6 we have that \(J\) is a prime E-ideal of \(\mathbb{C}[\overline{x}]^{E}\). **Proposition 3.5**.: _If the following hold:_ 1. _\(I\) is a prime ideal;_ 2. _\(I\) does not contain nonzero elements of the form \(a+\overline{q}\cdot\overline{x}\) with \(\overline{q}\in\mathbb{Q}^{n}\) and \(a\in\mathbb{C}\);_ 3. _\(I\) does not contain any element of the form \(\overline{y}^{\overline{q}}-a\) where \(\overline{q}\in\mathbb{Q}^{n}\) and \(a\in\mathbb{C}\);_ 4. _for all \(n\in\mathbb{N}\), the ideal \(I_{n}\) generated by \(I\) in \(\mathbb{C}[\bar{x},\bar{y}^{\frac{1}{n}}]\) is prime;_ _then \(J\) is a prime E-ideal._ Proof.: We denote by \(K\) the ideal generated by \(\tilde{I}\) in \(S\). If we prove that \(K\) is a prime partial E-ideal of \(S\), the proof is concluded, because by Corollary 3.13 of [3] we obtain that \(J\) is prime and \(K=H\). First we prove that \(K\) is a partial E-ideal of \(S\). We note that the domain of the exponential map on \(S\) is \(\mathbb{Q}\cdot\bar{x}+\mathbb{C}\), so it suffices to prove that \(K\cap(\mathbb{Q}\cdot\bar{x}+\mathbb{C})=(0)\). Identifying \(e^{\mathbb{Q}\bar{x}}\) with \(\bar{y}^{\mathbb{Q}}\), we have \(\mathbb{C}[\bar{x},\bar{y}]\subset\mathbb{C}[\bar{x},\bar{y}^{\mathbb{Q}}]\cong\mathbb{C}[\bar{x},e^{\mathbb{Q}\bar{x}}]=S\). We claim that \[K\cap\mathbb{C}[\bar{x},\bar{y}]=I.\] If \(a\in K\cap\mathbb{C}[\bar{x},\bar{y}]\), then \(a=\frac{p}{\bar{y}^{\bar{m}}}\) where \(p\in I\); then \(a\cdot\bar{y}^{\bar{m}}\in I\), and by (2)+(3) we conclude that \(a\in I\). In particular, \(K\cap(\mathbb{Q}\cdot\bar{x}+\mathbb{C})\subseteq I\cap(\mathbb{Q}\cdot\bar{x}+\mathbb{C})=(0)\) by (2), so by Remark 2.3 \(K\) is a partial E-ideal. Now we have to prove that \(K\) is prime. Since the ideal \(I\) of \(\mathbb{C}[\bar{x},\bar{y}]\) is prime, by commutative algebra (see [12]) the ideal \(I^{\prime}\) it generates in the localization \(\mathbb{C}[\bar{x},\bar{y}^{\pm 1}]\) is prime or trivial; it is prime because we are assuming (3). Consider now the ideal \(I^{\prime\prime}=K\) in the integral extension \(\mathbb{C}[\bar{x},\bar{y}^{\mathbb{Q}_{\geq 0}}]\) of \(\mathbb{C}[\bar{x},\bar{y}]\). We first observe that \(I^{\prime\prime}=\cup_{n}I_{n}\); by (4) each \(I_{n}\) is prime, and this implies that \(I^{\prime\prime}\) is prime, which concludes the proof. ## 4. Exponential radical ideals We give the notion of an E-radical ideal for any E-ring \(R\) as follows. **Definition 4.1**.: Let \(J\) be an E-ideal of an E-ring \(R\). We define the E-radical of \(J\) as \(\mathrm{E-rad}(J):=\bigcap_{P\supseteq J}P\), where \(P\) varies among the prime E-ideals containing \(J\). Let \(J\) be an E-ideal of the E-polynomial ring \(K[\bar{x}]^{E}\) and let \(F\) be an E-field containing \(K\). We can define \(\mathcal{I}(V(J))\) as follows: \(V_{F}(J):=\{\overline{a}\in F^{n}:f(\overline{a})=0\text{ for all }f(\bar{x})\in J\}\), \(V(J):=\bigcup V_{F}(J)\) as \(F\) varies among all E-fields containing \(K\), and \(\mathcal{I}(V(J))=\{p(\bar{x})\in K[\bar{x}]^{E}:p(\overline{a})=0\text{ for all }\overline{a}\in V(J)\}\). **Remark 4.2**.: Let \((R,E)\) be an E-domain and \(K\) be its fraction field. Then, there exists at least one way to extend the exponential function to all of \(K\).
Proof.: Let \(A\subset K\) be a complement of \(R\) as \(\mathbb{Q}\)-linear spaces, so that \(K=R\oplus A\). For every \(r\in R\) and \(a\in A\), define \(E^{\prime}(a+r):=E(r)\). Then, \(E^{\prime}\) is an exponential function on \(K\) extending \(E\). **Corollary 4.3**.: \(\mathcal{I}(V(J))=\{p(\bar{x})\in K[\bar{x}]^{E}:p(\overline{a})=0\text{ for all }\overline{a}\in V_{F}(J)\text{ as }F\text{ varies among all E-domains containing }K\}\)_._ **Lemma 4.4**.: _Let \(J\) be an E-ideal of the E-polynomial ring \(K[\bar{x}]^{E}\). Then \(\mathrm{E-rad}(J)=\mathcal{I}(V(J))\)._ **Remark 4.5**.: The E-ideal \(I=(xy)^{E}\) is not an E-radical ideal, unlike in the classical case. Indeed, \(I\) is not the intersection of prime E-ideals, since \((xy)^{E}\neq(x)^{E}\cap(y)^{E}\): we have \(x(E(y)-1)\in(x)^{E}\cap(y)^{E}\setminus(xy)^{E}\). **Remark 4.6**.: If \(J\) is not contained in any prime E-ideal, then \(\mathrm{E-rad}(J)=K[\bar{x}]^{E}\). We use the technique introduced in [3] to construct such an example. Let \(I=(xy,E(x)+1,E(y)+1)\) be an ideal of \(S=K[x,y,e^{\mathbb{Q}x},e^{\mathbb{Q}y}]\), where \(S\) is a subring of \(K[\bar{x}]^{E}\); in particular, \(I\) is a partial E-ideal of \(S\). Then \(I^{E}\) is an E-ideal of \(K[\bar{x}]^{E}\) which is not contained in any prime E-ideal; indeed, if \(I\subseteq P\) where \(P\) is E-prime, then \(xy\in I\subseteq P\). Since \(P\) is prime, w.l.o.g. \(x\in P\), and so \(E(x)-1\in P\). But \(E(x)+1\in I\subseteq P\), and thus \(2\in P\), a contradiction. ## 5. Characterization of radical E-ideals We consider an E-ring \(R\) and an E-ideal \(J\) of \(R\). We study prime E-ideals and E-radical ideals, i.e., E-ideals which are equal to their E-radical. We characterize \(\mathrm{E-rad}(J)\) using the following theory. Let \(\mathcal{L}=\{+,-,\cdot,e^{x},0,1\}\cup\{V\}\), where \(V\) is a unary predicate. We recall the following definition: **Definition 5.1**.: Given \(\mathcal{L}\)-terms \(p_{1},\ldots,p_{k},q\), the associated strict Horn clause is a formula of the type: \[V(p_{1})\wedge\ldots\wedge V(p_{k})\to V(q).\] We denote by \(H\) the set of all strict Horn \(\mathcal{L}\)-clauses as above. **Examples 5.2**.: \(V(x^{2})\to V(x)\) is in \(H\). Also \(V(x)\wedge V(y)\to V(x+y)\), \(V(x)\to V(e^{x}-1)\) and \(V(0)\) are in \(H\). In order to lighten the notation, we write \(p_{1}\wedge\ldots\wedge p_{k}\to q\) in place of \(V(p_{1})\wedge\ldots\wedge V(p_{k})\to V(q)\). We consider the following theories: \(T_{1}:=\{\) Horn clauses \(\alpha:\) for any E-ring \(R\) and E-ideal \(J\), \((R,J)\models\alpha\}\), \(T_{2}:=\{\) Horn clauses \(\alpha:\) for any E-ring \(R\) and prime E-ideal \(J\), \((R,J)\models\alpha\}\). Clearly, \(T_{1}\) is generated by the following Horn clauses: \[0,\quad x\wedge y\to x-y,\quad x\to xy,\quad x\to e^{x}-1.\] Our aim is to give an explicit description of \(T_{2}\) and relate it to the E-radical. Besides the clauses in \(T_{1}\), \(T_{2}\) contains others; for instance, the following clauses are in \(T_{2}\): \(x^{n}\to x\), \((xy\wedge e^{x}+1)\to y\). Let \(T\) be a set of Horn clauses. Let \(\mathcal{M}(T)\) be the following family of subsets of \(R\): \[\mathcal{M}(T):=\{J\subseteq R:(R,J)\models T\}\] **Remark 5.3**.: \(\mathcal{M}(T)\) is closed under arbitrary intersections and under increasing unions. Thus, we can consider the "radical" operator associated to \(T\). We will let \(X\) vary among subsets of \(R\).
We define \[\mathrm{T\mbox{-}rad}(X):=\bigcap\{J\in\mathcal{M}(T):X\subseteq J\}.\] \(\mathrm{T\mbox{-}rad}(X)\) is the smallest subset of \(R\) containing \(X\) and such that \((R,\mathrm{T\mbox{-}rad}(X))\models T\). In particular, \((R,X)\models T\) iff \(X=\mathrm{T\mbox{-}rad}(X)\). **Example 5.4**.: \(\mathrm{T_{1}\mbox{-}rad}(X)=(X)^{E}\), the E-ideal generated by \(X\). **Example 5.5**.: Let \(T_{0}:=\{0,x\wedge y\to x-y,x\to xy\}\). Then, \(\mathcal{M}(T_{0})\) is the family of ideals of \(R\), and \(\mathrm{T_{0}\mbox{-}rad}(X)\) is the ideal generated by \(X\). Let \(T_{3}:=\{0,x\wedge y\to x-y,x\to xy,x^{2}\to x\}\). Then, \(\mathcal{M}(T_{3})\) is the family of radical ideals of \(R\), and \(\mathrm{T_{3}\mbox{-}rad}(X)\) is the radical of the ideal generated by \(X\). We can build \(\mathrm{T\mbox{-}rad}(J)\) in a "constructive" way: i.e., we have a description of all elements of \(\mathrm{T\mbox{-}rad}(J)\). **Definition 5.6**.: Given a family \(\mathcal{F}\) of \(\mathcal{L}\)-structures, we denote its theory by \[Th(\mathcal{F}):=\{\alpha\in H:\forall M\in\mathcal{F}\ M\models\alpha\}.\] The deductive closure of \(T\) is \(\overline{T}:=Th(\mathcal{M}(T))\). We say that \(T\) is deductively closed if \(T=Th(\mathcal{F})\) for some family \(\mathcal{F}\), or equivalently if \(T=\overline{T}\). An axiomatization of \(T\) is a set of Horn clauses \(S\) such that \(\overline{S}=\overline{T}\). **Remark 5.7**.: \[\mathrm{T\mbox{-}rad}(X)=\{q(\bar{c}):p_{1}(\bar{x})\wedge\cdots\wedge p_{k}(\bar{x})\to q(\bar{x})\in\overline{T},\bar{c}\in R^{<\omega},p_{i}(\bar{c})\in X,i=1,\ldots,k\}.\] Proof.: Let us denote \(Y:=\{q(\bar{c}):p_{1}(\bar{x})\wedge\cdots\wedge p_{k}(\bar{x})\to q(\bar{x})\in\overline{T},\bar{c}\in R^{<\omega},p_{i}(\bar{c})\in X,i=1,\ldots,k\}\). It is clear that \(X\subseteq Y\subseteq\mathrm{T\mbox{-}rad}(X)\). We want to show that \(\mathrm{T\mbox{-}rad}(X)\subseteq Y\); it suffices to show that \(Y\in\mathcal{M}(T)\). Let \(p_{1}(\bar{x})\wedge\cdots\wedge p_{k}(\bar{x})\to q(\bar{x})\in T\) and \(\bar{c}\in R^{<\omega}\) be such that \(p_{i}(\bar{c})\in Y\), \(i=1,\ldots,k\). It suffices to show the following: _Claim 1_.: \(q(\bar{c})\in Y\). For simplicity of notation, we assume that \(k=1\) and \(p:=p_{1}\). Since \(p(\bar{c})\in Y\), by definition of \(Y\), there exist \(\beta:=r_{1}(\bar{x},\bar{x}^{\prime})\wedge\cdots\wedge r_{\ell}(\bar{x},\bar{x}^{\prime})\to p(\bar{x})\in\overline{T}\) and \(\bar{c}^{\prime}\in R^{<\omega}\) such that \(r_{j}(\bar{c},\bar{c}^{\prime})\in X\), \(j=1,\ldots,\ell\). Notice that every \(M\in\mathcal{M}(T)\) satisfies \[\gamma:=\bigwedge_{j=1}^{\ell}r_{j}(\bar{x},\bar{x}^{\prime})\to q(\bar{x}).\] Therefore, \(\gamma\in\overline{T}\), and hence, by definition, \(q(\bar{c})\in Y\). **Corollary 5.8**.: _For every \(b\in R\), writing \(Xb:=X\cup\{b\}\),_ \[\mathrm{T\mbox{-}rad}(Xb)=\{q(\bar{c}):p_{1}(\bar{x})\wedge\cdots\wedge p_{k}(\bar{x})\to q(\bar{x})\in\overline{T},\bar{c}\in R^{<\omega},p_{i}(\bar{c})\in X\lor p_{i}(\bar{c})=b,i=1,\ldots,k\}.\] The following theorem gives a more explicit description of \(\mathrm{T}_{2}\mbox{-}\mathrm{rad}(X)\).
**Theorem 5.9**.: _For every \(n\in\mathbb{N}\), define the operator \(\sqrt[n]{\;\cdot\;}\) on subsets of \(R\) inductively in the following way:_ \[\sqrt[0]{X}:=(X)^{E};\qquad\sqrt[n+1]{X}:=\sqrt[0]{\left\{a\in R:\exists\,b_{1},b_{2}\in R\ \ b_{1}\cdot b_{2}\in\sqrt[n]{X}\ \wedge\ a\in\sqrt[n]{Xb_{1}}\cap\sqrt[n]{Xb_{2}}\right\}}.\] _Define \(\sqrt[e]{X}:=\bigcup_{n\in\mathbb{N}}\sqrt[n]{X}\). Then, \(\mathrm{E\mbox{-}rad}\,X=\mathrm{T}_{2}\mbox{-}\mathrm{rad}(X)=\sqrt[e]{X}\)._ Proof.: We first need some results. **Lemma 5.10**.: _Let \(J\subseteq R\) and \(b_{1}\cdot b_{2}\in J\). Assume that \(a\in\mathrm{T}_{2}\mbox{-}\mathrm{rad}(Jb_{1})\cap\mathrm{T}_{2}\mbox{-}\mathrm{rad}(Jb_{2})\). Then, \(a\in\mathrm{T}_{2}\mbox{-}\mathrm{rad}(J)\)._ Proof.: If \(J\) were a prime E-ideal, the result would be clear. In general, there exist Horn clauses \(p_{i,1}(\bar{x})\wedge\cdots\wedge p_{i,k}(\bar{x})\wedge z\to q_{i}(z,\bar{x})\in\overline{T_{2}}\), \(i=1,2\), and \(\bar{c}\in R^{<\omega}\), such that \[a=q_{i}(b_{i},\bar{c}),\quad p_{i,j}(\bar{c})\in J,\quad i=1,2,\quad j=1,\ldots,k.\] Moreover, the Horn clause \[\alpha(w,z_{1},z_{2},\bar{x}):=z_{1}\cdot z_{2}\wedge w-q_{1}(z_{1},\bar{x})\wedge w-q_{2}(z_{2},\bar{x})\wedge\bigwedge_{i,j}p_{i,j}(\bar{x})\to w\] is in \(T_{2}\) (since it is satisfied by any prime E-ideal). Thus, \((R,\mathrm{T}_{2}\mbox{-}\mathrm{rad}(J))\models\alpha\), and the conclusion follows by considering \(\alpha(a,b_{1},b_{2},\bar{c})\). It follows that \(\mathrm{E\mbox{-}rad}\,X\supseteq\mathrm{T}_{2}\mathrm{\mbox{-}rad}(X)\supseteq\sqrt[e]{X}\). Thus, it suffices to show that \(\mathrm{E\mbox{-}rad}\,X\subseteq\sqrt[e]{X}\), or, equivalently, that for every \(a\in R\setminus\sqrt[e]{X}\), we have \(a\notin\mathrm{E\mbox{-}rad}\,X\). We need a further result. **Lemma 5.11**.: _Let \(a\in R\) and let \(P\) be an E-ideal of \(R\) maximal among the E-ideals \(J\) of \(R\) not containing \(a\) and such that \(\sqrt[e]{J}=J\). Then, \(P\) is prime._ Proof.: Suppose \(b_{1}\cdot b_{2}\in P\) and, by contradiction, assume that \(b_{1},b_{2}\notin P\). We define \(Q_{1}=\sqrt[e]{Pb_{1}}\) and \(Q_{2}=\sqrt[e]{Pb_{2}}\). We have that \(Q_{1},Q_{2}\in\mathcal{M}(T_{0})\). Notice that \(Q_{1},Q_{2}\supset P\), and therefore the maximality of \(P\) implies that \(a\in Q_{1}\) and \(a\in Q_{2}\). Moreover, \(b_{1}\cdot b_{2}\in P\); since \(\sqrt[e]{P}=P\), we have that \(a\in\mathrm{T}_{2}\mathrm{\mbox{-}rad}(P)=P\), a contradiction. We can now conclude the proof of Thm. 5.9. Let \(a\notin\sqrt[e]{X}\). By Lemma 5.11, there exists a prime E-ideal \(P\) containing \(X\) such that \(a\notin P\). Thus, \(a\notin\mathrm{E\mbox{-}rad}\,X\), and we are done. **Corollary 5.12**.: \(J\) _is E-radical iff, for every \(a,b_{1},b_{2}\in R\),_ \[b_{1}\cdot b_{2}\in J\wedge a\in\sqrt[e]{Jb_{1}}\cap\sqrt[e]{Jb_{2}}\to a\in J.\] From the above Corollary we can extract a recursive axiomatization of \(T_{2}\), using Thm. 5.9 to characterize \(\sqrt[e]{Jb_{i}}\). We can interpret the discussion above in terms of quasi-varieties. **Definition 5.13**.: An E-ring is E-reduced if \(\mathrm{E\mbox{-}rad}(0)=(0)\). By Thm. 5.9, the class E-red of E-reduced E-rings is a quasi-variety: i.e., it can be axiomatized via Horn formulae in the language \[(=,+,-,\cdot,e,0,1)\] (we replace the condition \(t\in(0)\) with the condition \(t=0\)). By [1, Thm.
2.25], E-red is then closed under isomorphisms, taking substructures, and reduced products. Conversely, it is not difficult to see directly that E-red is closed under isomorphisms, substructures, direct products, and ultraproducts: thus, by [1, Thm. 2.25], E-red is a quasi-variety. From this it is easy to deduce that \(\mathrm{E\mbox{-}rad}=\mathrm{T}_{2}\mathrm{\mbox{-}rad}\). With the proof we gave of Thm. 5.9 we obtained the extra information that \(\mathrm{E\mbox{-}rad}(J)=\sqrt[e]{J}\), together with Corollary 5.12 and the recursive axiomatization of \(T_{2}\).
2310.01222
Future-null singularity due to gravitational collapse
In the case of unhindered gravitational collapse of a matter cloud governed by the Lemaitre-Tolman-Bondi (LTB) spacetime, the end-state singularity is either locally visible, globally visible, or completely hidden. We have a past-null singularity in the first two cases and a future-spacelike singularity in the last case. Here, we show an example of a gravitational collapse model whose end-state is a future-null singularity (a causal property unlike the cases involving the LTB spacetime). We depict such a distinct causal structure of the singularity by conformal diagrams.
Ashok B. Joshi, Karim Mosani, Pankaj S. Joshi
2023-10-02T14:09:48Z
http://arxiv.org/abs/2310.01222v2
# Future-null singularity due to gravitational collapse ###### Abstract In the case of unhindered gravitational collapse of a matter cloud governed by the Lemaitre-Tolman-Bondi (LTB) spacetime, the end-state singularity is either locally visible, globally visible, or completely hidden. We have a past-null singularity in the first two cases and a future-spacelike singularity in the last case. Here, we show an example of a gravitational collapse model whose end-state is a future-null singularity (a causal property unlike the cases involving the LTB spacetime). We depict such a distinct causal structure of the singularity by conformal diagrams. **keywords**: Singularity, Causal structure of spacetime, Gravitational collapse ## I Introduction The contraction of an astronomical body under its own gravitational influence is called gravitational collapse. When the mass of the star is above a certain limit, one obtains an unhindered gravitational collapse that gives rise to a singularity. In addition to the existence of incomplete causal curves, we identify the singularity by the divergence of curvature-invariant quantities [1]. In 1939, Oppenheimer and Snyder studied the gravitational collapse of a spherically symmetric, spatially homogeneous matter field with zero pressure [2]. Such a collapse ends in a singularity covered by an event horizon. One could argue that such a singularity is merely an artefact of spherical symmetry and that dropping the assumption of spherical symmetry could resolve the singularity. However, Penrose and Hawking showed the occurrence of singularities (identified by the existence of incomplete causal curves) under generic conditions. Penrose proposed what is now known as the cosmic censorship conjecture [3]. The weak version of cosmic censorship states: a spacetime singularity can never be visible to asymptotic observers; in other words, it cannot be globally visible. The strong version states: all physically reasonable spacetimes are globally hyperbolic, i.e., a singularity cannot even be locally naked. Here, a globally naked singularity is identified by the existence of a past-incomplete causal geodesic that is future-complete, while a locally naked singularity is identified by the existence of a past-incomplete causal geodesic that is also future-incomplete. Let \(\mathcal{M}\) be a spacetime with Lorentzian signature \(+2\), and let \(\mathcal{O}\subset\mathcal{M}\) be open. A _congruence_ in \(\mathcal{O}\) is a family of curves such that \(\forall\;p\in\mathcal{O}\), \(\exists\) precisely one curve in this family that passes through \(p\). The tangents to a congruence yield a vector field in \(\mathcal{O}\). Conversely, every continuous vector field generates a congruence of curves. The congruence is smooth if the corresponding vector field is smooth [1; 4; 5]. Consider a smooth congruence of null geodesics yielding a vector field \(k^{\alpha}\). The quantity \(\theta=\nabla_{\alpha}k^{\alpha}\) is called the expansion of the congruence. A _trapped surface_ \(T\) is then defined in terms of congruences as a compact two-dimensional smooth spacelike submanifold with the property that the expansion of both families of future-directed null geodesics orthogonal to \(T\) is \(<0\) everywhere. A _marginally trapped surface_ is then defined in terms of the trapped surface with the relaxation that the expansion of both families of future-directed null geodesics orthogonal to \(T\) is \(\leq 0\) \(\forall\) points in \(T\), rather than strictly negative.
A _marginally outer trapped surface_ is then defined by relaxing the definition of a marginally trapped surface further, so that only the expansion of the "outgoing" future-directed null geodesics orthogonal to \(T\) is required to be \(\leq 0\) \(\forall\) points in \(T\) (outgoing family: the family of null geodesics orthogonal to \(T\) satisfying \(g(k,N)\geq 0\), where \(k\) is the normal to the null geodesics and \(N\) is the normal to \(T\) in \(S\) (a spacelike hypersurface containing \(T\)) that points outwards from \(T\)). A _trapped region_ is defined as a closed subset \(C\) of a partial Cauchy surface \(S\) (\(S\subset\mathcal{M}\): a spacelike hypersurface that intersects every causal curve \(\gamma\subset\mathcal{M}\) at most once) that forms a three-dimensional manifold with boundary, such that its two-dimensional boundary \(\partial C\) is a marginally outer trapped surface. Finally, the _apparent horizon_ is defined as the boundary \(\partial C\) of the trapped region. In the case of a dynamical spacetime that admits a singularity, e.g., gravitational collapse, the causal property of the spacetime relates to the evolution of the apparent and the event horizons. As far as dust collapse is concerned, the introduction of spatial inhomogeneity in the density of the collapsing cloud influences the evolution of these horizons in such a way that the end-state singularity becomes locally or globally naked [6; 7; 8]. A spacetime singularity can be classified into five types based on the following two characteristics: 1) the causal structure of the spacetime (globally hyperbolic or not) admitting the singularity, and 2) the existence of past/future complete/incomplete timelike/null geodesics [9]. The classification is as follows: 1. _Past spacelike singularity_: The spacetime is timelike as well as null geodesically past-incomplete and globally hyperbolic; e.g., the Schwarzschild white hole singularity. 2. _Future spacelike singularity_: The spacetime is timelike as well as null geodesically future-incomplete and globally hyperbolic; e.g., the Schwarzschild black hole singularity. 3. _Past null singularity_: The spacetime is timelike as well as null geodesically past-incomplete and not globally hyperbolic; e.g., locally/globally visible singularities in the LTB spacetime. 4. _Future null singularity_: The spacetime is timelike as well as null geodesically future-incomplete and globally hyperbolic. 5. _Timelike singularity_: The spacetime is timelike geodesically complete (though not null), but \(\exists\) at least one incomplete radial null geodesic or incomplete non-geodesic timelike curve [4]; additionally, the spacetime is not globally hyperbolic. Here, we show that a future null singularity can be obtained as an end-state of the gravitational collapse of a physically reasonable matter cloud. The structure of the paper is as follows. In section (II), we rederive the equations governing the dynamics of the apparent horizon, the singularity curve, and the event horizon curve for the gravitational collapse governed by the LTB spacetime with the exterior Schwarzschild metric. In section (III), we do the same for the gravitational collapse governed by the spacetime constructed by gluing two metrics: the spatially homogeneous FLRW metric with non-zero pressure and the static spacetime first discussed in [10]. Finally, we end the paper with concluding remarks. Throughout the paper, we take \(G=c=1\). ## II Lemaitre-Tolman-Bondi spacetime Consider the gravitational collapse of a spherically symmetric inhomogeneous perfect fluid.
The components of the stress-energy tensor in the coordinate basis \(\{dx^{\mu}\bigotimes\partial_{\nu}|0\leq\mu,\nu\leq 3\}\) of the comoving coordinates \((t,r,\theta,\phi)\) are given by \[T^{\mu}_{\nu}=\text{diag}\left(-\rho,p,p,p\right), \tag{1}\] where \(\rho=\rho(t,r)\) and \(p=p(t,r)\) are the density and the isotropic pressure of the collapsing matter cloud, respectively. The corresponding spacetime metric is written as follows: \[ds^{2}=-dt^{2}+R^{\prime 2}dr^{2}+R^{2}d\Omega^{2} \tag{2}\] where \(d\Omega^{2}\) is the line element of the two-sphere, \(R=R(t,r)\), and the superscript \({}^{\prime}\) denotes the partial derivative of the function with respect to the radial coordinate \(r\). For such a spacetime, we have \[\rho=\frac{F^{{}^{\prime}}}{R^{\prime}R^{2}}, \tag{3}\] and \[p=-\frac{\dot{F}}{\dot{R}R^{2}}, \tag{4}\] where \[F=\dot{R}^{2}R. \tag{5}\] The superscript dot denotes the partial derivative of the function with respect to the time coordinate \(t\). \(F=F(t,r)\) is called the Misner-Sharp mass function. A collapsing spherical solid ball contained in the initial data is made of concentric spherical shells, each of which is identified by a radial coordinate \(r\). In the case of dust, we have \(p=0\). Eq. (4) then implies that \(F=F(r)\). In such a scenario, one can integrate Eq. (5) to obtain \[R(t,r)=\left(r^{\frac{3}{2}}-\frac{3}{2}\sqrt{F(r)}t\right)^{\frac{2}{3}}. \tag{6}\] We define the _scaling function_ \(a(t,r)\) as the ratio \[a(t,r)=\frac{R(t,r)}{r}, \tag{7}\] and rewrite Eq. (6) to obtain the _time curve_ \[t(r,a)=\frac{2r^{3/2}}{3\sqrt{F(r)}}\left(1-a^{3/2}\right). \tag{8}\] Given a spherical shell of fixed radial coordinate, as one evolves the initial data, the physical radius of this shell decreases and becomes zero. The corresponding comoving time \(t_{s}(r)\) is obtained by substituting \(R(t_{s},r)=0\) in Eq. (6), or \(a=0\) in Eq. (8), as \[t_{s}(r)=\frac{2r^{\frac{3}{2}}}{3\sqrt{F(r)}}. \tag{9}\] We call this function the _singularity curve_. As mentioned in the introduction, the expansion of the outgoing null geodesic congruence vanishes on the apparent horizon. In terms of the metric components (refer to Eq. (2)), it is \[\theta=\frac{2}{R}\left(1-\sqrt{\frac{F}{R}}\right). \tag{10}\] By the above equation, \(\theta=0\) is equivalent to \(F=R\). Hence, the apparent horizon for such a spacetime is the set \[\mathcal{A}=\left\{a\in S\ \ :\ F|_{a}=R|_{a}\right\}, \tag{11}\] where \(S\) is a partial Cauchy surface (we choose \(S\) to be a constant-\(t\) time slice; in other words, \(S\) should be such that \(\partial_{t}\) is the (global) unit normal vector field). Geometrically, \(\mathcal{A}\) is a two-dimensional sphere satisfying \[H+{}^{\mathcal{A}}\text{tr}K=0. \tag{12}\] Here, \(H\) is the mean curvature of \(\mathcal{A}\) in \(\left(S,\,^{S}g\right)\), \(K\) is the extrinsic curvature of \(S\) in \(\mathcal{M}\), and \({}^{\mathcal{A}}\)tr\(K\) is the trace of \(K\) over \(\mathcal{A}\), or in other words, the trace of \(K|_{T\mathcal{A}\times T\mathcal{A}}\) with respect to the induced metric \({}^{\mathcal{A}}g\) on \(\mathcal{A}\). In the case of spherical symmetry, the apparent horizon is a spherical ball embedded in \(S\) whose points have a fixed radial coordinate. As one evolves the initial data \(S\), \(\mathcal{A}\) evolves (the radial coordinate of points in \(\mathcal{A}\) evolves). This evolution is obtained from (6) along with equating \(F=R\), to get \[t_{ah}(r)=\frac{2}{3}\left(\frac{r^{3/2}}{\sqrt{F}}-F\right). \tag{13}\]
We call this function the _apparent horizon curve_. It gives us the relation between the radial coordinate of points in \(\mathcal{A}\) and the comoving time \(t\). The collapsing perfect fluid spacetime Eq. (2) can be matched smoothly with the exterior Schwarzschild spacetime such that the union forms a valid solution of Einstein's field equations. By smooth matching, we mean the satisfaction of the Darmois-Israel junction conditions [11]: at the matching hypersurface (in our case, the timelike hypersurface identified by \(r=r_{c}\), where \(r_{c}\in\mathbb{R}^{+}\)), the induced LTB metric and the induced Schwarzschild metric should be the same, and the extrinsic curvature of this hypersurface in the LTB spacetime and in the Schwarzschild spacetime should be the same. In the case of a static spacetime, the event horizon is also the apparent horizon. Hence, one can obtain the evolution of the event horizon by solving the differential equation \[\frac{dt}{dr}=R^{{}^{\prime}}(t,r), \tag{14}\] with the initial condition that at the boundary of the collapsing cloud it should coincide with the apparent horizon, i.e., \(F|_{a}=R|_{a}\), where \(a\) has radial coordinate \(r=r_{c}\). The solution to this initial value problem gives us the _event horizon curve_, denoted by \(t_{eh}(r)\). Regarding the formation of an at least locally visible singularity, we have the following statement: consider an unhindered gravitational collapse of a spherically symmetric perfect fluid. The singularity formed as an end state of such a collapse is at least locally naked if and only if \(\exists\,X_{0}\in\mathbb{R}^{+}\) as a root of \(V(X)\), where \[V(X)=X-\frac{1}{\alpha}\left(X+\sqrt{\frac{F_{0}(0)}{X}}\left(\chi_{1}(0)+2r\chi_{2}(0)+3r^{2}\chi_{3}(0)\right)r^{\frac{5-3\alpha}{2}}\right)\left(1-\sqrt{\frac{F_{0}(0)}{X}}r^{\frac{3-\alpha}{2}}\right). \tag{15}\] Here \[\alpha\in\left\{\frac{2n}{3}+1;\quad n\in\mathbb{N}\right\},\quad\ \chi_{i}(v)=\frac{1}{i!}\frac{\partial^{i}t(r,v)}{\partial r^{i}}\bigg{|}_{r=0},\quad\ F(r,v)=\sum_{i=0}^{\infty}F_{i}(v)r^{i+3}, \tag{16}\] and \(t=t(r,v)\) is the time curve [12]. To depict examples of past null singularities, we consider two different Misner-Sharp mass functions \(F_{a}\) and \(F_{b}\) given by \[F_{a}(r)=F_{0}r^{3}+F_{3}r^{6};\ \ \ F_{0}>0,\ \ \ F_{3}<0, \tag{17}\] and \[F_{b}(r)=F_{0}r^{3}+F_{2}r^{5};\ \ \ F_{0}>0,\ \ \ F_{2}<0, \tag{18}\] respectively. The corresponding \(V(X)\) (Eq. (15)) for these mass functions are obtained as \[V_{a}(X)=2X^{2}+\sqrt{F_{0}}X^{3/2}-3\sqrt{F_{0}}\chi_{3}(0)\sqrt{X}+3F_{0}\chi_{3}(0), \tag{19}\] where \[\chi_{3}(0)=-\frac{1}{2}\int_{0}^{1}\frac{F_{3}/a}{\left(F_{0}/a\right)^{3/2}}\,da, \tag{20}\] and \[V_{b}(X)=4X^{2}-6\sqrt{F_{0}}\chi_{2}(0)\sqrt{X}, \tag{21}\] where \[\chi_{2}(0)=-\frac{1}{2}\int_{0}^{1}\frac{F_{2}/a}{\left(F_{0}/a\right)^{3/2}}\;da. \tag{22}\] Setting \(F_{0}=1\), Eq. (19) has positive real roots if and only if \(F_{3}<\sim-25.967\), in which case the singularity is visible. Similarly, Eq. (21) has a positive real root if and only if \(F_{2}<0\), in which case the singularity is visible. Figure 1: The gravitational collapse of spatially-homogeneous dust governed by the LTB spacetime. The solid yellow, blue, and black curves represent the event horizon, the boundary of the collapsing cloud, and the apparent horizon, respectively.
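The claimed threshold for \(F_{a}\) can be checked numerically. The sketch below evaluates \(V_{a}(X)\) from Eqs. (19)-(20) on a grid and looks for a sign change; the grid range and resolution are arbitrary choices.

```python
import numpy as np
from scipy.integrate import quad

def chi3(F0, F3):
    # chi_3(0) = -(1/2) * int_0^1 (F3/a) / (F0/a)^{3/2} da   (Eq. (20))
    val, _ = quad(lambda a: (F3 / a) / (F0 / a) ** 1.5, 0.0, 1.0)
    return -0.5 * val

def V_a(X, F0, F3):
    c3 = chi3(F0, F3)
    return (2 * X**2 + np.sqrt(F0) * X**1.5
            - 3 * np.sqrt(F0) * c3 * np.sqrt(X) + 3 * F0 * c3)

# Scan for a positive real root of V_a (Eq. (19)) with F0 = 1:
X = np.linspace(1e-6, 10.0, 200001)
for F3 in (-25.9, -26.0):
    has_root = np.any(V_a(X, 1.0, F3) <= 0)
    print(F3, "singularity visible" if has_root else "singularity hidden")
```

The sign change appears between \(F_{3}=-25.9\) and \(F_{3}=-26.0\), consistent with the threshold \(F_{3}\approx-25.967\) quoted above.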
For the singularity to be globally visible, apart from the above-mentioned criteria, the following equality should be satisfied: \[t_{eh}(0)=t_{s}(0). \tag{23}\] It can be seen numerically that in the former example (Eq. (17)), Eq. (23) is always satisfied whenever Eq. (19) has a positive real root; hence, the visible singularity is globally visible. In the latter example, if Eq. (21) has a positive real root, then the singularity is either locally visible or globally visible, depending on the choice of the coefficients \(F_{0}\) and \(F_{2}\). Figs. (1) and (2) depict the spacetime diagrams consisting of the evolution of the apparent horizon and the event horizon in the cases for which the LTB collapse leads to a hidden singularity, a locally visible singularity, and a globally visible singularity, respectively. We now proceed to depict the formation of a future-null singularity as an end-state of gravitational collapse. ## III Future-null singularity admitting spacetime Consider the gravitational collapse of a spherically symmetric, spatially homogeneous perfect fluid. The components of the stress-energy tensor in the coordinate basis \(\{dx^{\mu}\bigotimes\partial_{\nu}|0\leq\mu,\nu\leq 3\}\) of the comoving coordinates \((t,r,\theta,\phi)\) are given by \[T^{\mu}_{\nu}=\text{diag}\left(-\rho,p,p,p\right), \tag{24}\] where \(\rho=\rho(t)\) and \(p=p(t)\) are the spatially homogeneous density and the isotropic pressure of the collapsing matter cloud. The corresponding spacetime metric is written just as Eq. (2). In the case of dust, the Misner-Sharp mass function is conserved inside a shell of radial coordinate \(r\), and the exterior metric is the Schwarzschild spacetime. Hence, for the exterior spacetime to be non-vacuum, we choose the Misner-Sharp mass function such that it is not conserved inside a shell of fixed radial coordinate; hence, it is a function of both the \(t\) and \(r\) coordinates. The scaling function \(a\) is a function of only the \(t\) coordinate in the case of spatially homogeneous collapse. Unlike the collapsing dust LTB spacetime, the collapsing non-zero-pressure FLRW spacetime is matched with the exterior spherically symmetric, asymptotically flat, non-vacuum spacetime discussed in [10], instead of the Schwarzschild spacetime. This exterior spacetime metric is given in Schwarzschild coordinates by \[ds^{2}=-\frac{dt^{2}}{\left(1+\frac{M}{r}\right)^{2}}+\left(1+\frac{M}{r}\right)^{2}dr^{2}+r^{2}d\Omega^{2}, \tag{25}\] where \(M\) is a positive constant. In Eddington-Finkelstein coordinates, it is expressed as \[ds^{2}=-\left(1-\frac{2D(\mathcal{R})}{\mathcal{R}}\right)d\nu^{2}-2d\nu d\mathcal{R}+\mathcal{R}^{2}d\Omega^{2} \tag{26}\] where \[D(\mathcal{R})=\frac{M\mathcal{R}(M+2\mathcal{R})}{2(M+\mathcal{R})^{2}}. \tag{27}\] Here \(\mathcal{R}\) is the radial coordinate of the exterior spacetime, and \(\nu\) is the retarded null coordinate. Figure 2: The gravitational collapse of spatially-inhomogeneous dust governed by the LTB spacetime. The solid yellow, blue, and black curves represent the event horizon, the boundary of the collapsing cloud, and the apparent horizon, respectively: (a) \(F(r)=F_{0}r^{3}+F_{2}r^{5}\). The singularity is locally visible (\(F_{0}=1\) and \(F_{2}=-2\)). (b) \(F(r)=F_{0}r^{3}+F_{3}r^{6}\). The singularity is globally visible (\(F_{0}=1\) and \(F_{3}=-26\)). In the coordinate basis \(\{dx^{\mu}\bigotimes\partial_{\nu}|0\leq\mu,\nu\leq 3\}\) of the Schwarzschild coordinates, the stress-energy tensor corresponding to Eq.
(26) is given by \[T=\text{diag}\{-\epsilon,-\epsilon,\mathcal{P},\mathcal{P}\}, \tag{28}\] where \(\epsilon>0\). In the orthonormal basis \(\{e_{(i)}\bigotimes e_{(j)}\mid 0\leq i,j\leq 3\}\), where \[e_{(i)}=\frac{\partial_{i}}{\sqrt{g_{ii}}}, \tag{29}\] it is \[T=\text{diag}\{\epsilon,-\epsilon,\mathcal{P},\mathcal{P}\}. \tag{30}\] (Here \(g_{ii}\) is the \(ii\)-th component of the metric tensor in the coordinate basis \(dx^{i}\bigotimes dx^{j}\) of the Schwarzschild coordinates.) The stress-energy tensor of the generalized Vaidya spacetime in the above-mentioned orthonormal basis is written as \[T=\begin{pmatrix}\frac{\bar{\epsilon}}{2}+\epsilon&\frac{\bar{\epsilon}}{2}&0 &0\\ \frac{\bar{\epsilon}}{2}&\frac{\bar{\epsilon}}{2}-\epsilon&0&0\\ 0&0&\mathcal{P}&0\\ 0&0&0&\mathcal{P}\end{pmatrix}. \tag{31}\] Hence, the exterior spacetime with metric Eq. (25) is a special case of the generalized Vaidya spacetime with \(\bar{\epsilon}=0\) [13]. Matching the first and second fundamental forms for the interior and exterior metric on the matching surface \(\Sigma\) identified by the radial coordinate \(r=r_{c}\) gives the following four equations [14]: \[\mathcal{R}=R(t,r_{c})=r_{c}a(t), \tag{32}\] \[\left(\frac{d\nu}{dt}\right)_{\Sigma}=\frac{1+\dot{\mathcal{R}}}{\left(1- \frac{F(t,r_{c})}{\mathcal{R}}\right)}, \tag{33}\] \[F(t,r_{c})=2D(\mathcal{R}), \tag{34}\] and \[D(\mathcal{R})_{,\mathcal{R}}=\frac{F(t,r_{c})}{2\mathcal{R}}+\mathcal{R}\ddot {\mathcal{R}}. \tag{35}\] Eqs. (32) and (33) are obtained from matching the first fundamental forms, while Eqs. (34) and (35) are obtained from matching the second fundamental forms at \(\Sigma\) [15]. From Eqs. (5), (27), (32) and (34), and the fact that \(\dot{a}<0\), we obtain \[\dot{a}=-\frac{\sqrt{M\left(M+2r_{c}a\right)}}{r_{c}\left(M+r_{c}a\right)}. \tag{36}\] Solving this with the initial condition \(a(t=0)=1\) and the constraint \(\dot{a}<0\) for all \(t\in[0,t_{s})\) gives us \[a(t)=\frac{1}{2r_{c}}\left(-3M+\frac{M^{2}}{\psi(t)^{\frac{1}{3}}}+\psi(t)^{ \frac{1}{3}}\right), \tag{37}\] where \[\psi(t)=9M^{3}+24M^{2}r_{c}+4r_{c}^{3}-12r_{c}\sqrt{M(M+2r_{c})}t -24\sqrt{M^{3}(M+2r_{c})}t+18M(r_{c}^{2}+t^{2})+\] \[\sqrt{-M^{6}+(M^{3}+2(2M+r_{c})^{2}(M+2r_{c})-12(2M+r_{c})\sqrt{ M(M+2r_{c})}t+18Mt^{2})^{2}}.\] Eqs. (27), (34) and (35) give \[\ddot{\mathcal{R}}=-\frac{M\mathcal{R}}{(M+\mathcal{R})^{3}}, \tag{38}\] which is again satisfied by \(a(t)\) in Eq. (37). Figure 3: The gravitational collapse of spatially-homogeneous perfect fluid governed by the FLRW spacetime glued to an asymptotically flat non-vacuum spacetime (Eq. (25)). The solid blue curve represents the boundary of the collapsing cloud. The event horizon and the apparent horizon are absent. We now have an example of gravitational collapse (the interior spatially-homogeneous perfect fluid spacetime (2), with the scaling function given by Eq. (37), matched smoothly with the exterior specific example of the generalized Vaidya spacetime (26)) that gives rise to a future-null singularity. Such a singularity is obtained at comoving time \[t_{s}=\frac{1}{3}\left((2M+r_{c})\sqrt{M+2r_{c}}-2M^{3/2}\right), \tag{39}\] which is obtained by substituting \(a(t=t_{s})=0\) in Eq. (37). For the scaling function given by Eq. (37), we can obtain the explicit expression of \(F(t,r)\) using Eq. (5). We can then see that there is no \((t,r)\in(0,t_{s})\times(0,r_{c})\) satisfying \(F=R\). This implies the absence of the apparent horizon and, hence, the event horizon.
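The collapse time in Eq. (39) can be cross-checked by integrating Eq. (36) directly. A minimal sketch of ours, not from the paper, with the illustrative values \(M=r_{c}=1\):

```python
import numpy as np
from scipy.integrate import solve_ivp

M, rc = 1.0, 1.0  # assumed illustrative values

def adot(t, a):
    # Eq. (36): da/dt for the scaling function of the homogeneous cloud
    return -np.sqrt(M * (M + 2 * rc * a)) / (rc * (M + rc * a))

hit_zero = lambda t, a: a[0]   # a(t_s) = 0 marks the singular epoch
hit_zero.terminal = True

sol = solve_ivp(adot, [0.0, 10.0], [1.0], events=hit_zero, rtol=1e-10, atol=1e-12)
t_numeric = sol.t_events[0][0]
t_closed = ((2 * M + rc) * np.sqrt(M + 2 * rc) - 2 * M**1.5) / 3.0  # Eq. (39)
print(t_numeric, t_closed)  # both ~ 1.0654 for these values
```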
Fig. (3) depicts the spacetime diagram of the spatially homogeneous collapse giving rise to a future-null singularity. Fig. (4) depicts the conformal diagrams of four spacetimes with different causal structures (a future spacelike singularity; a past null singularity that is 1) locally visible or 2) globally visible; and, finally, a future null singularity). Figure 4: Conformal diagram of four different causal structures of the singularities formed due to gravitational collapse (depicted by brown dashed curve). (a) Schwarzschild singularity formed due to spatially homogeneous gravitationally collapsing dust glued to exterior Schwarzschild spacetime, (b) Locally naked singularity formed due to spatially inhomogeneous gravitationally collapsing dust glued to exterior Schwarzschild spacetime, (c) Globally naked singularity formed due to spatially inhomogeneous gravitationally collapsing dust glued to exterior Schwarzschild spacetime, (d) Future null singularity formed due to spatially homogeneous gravitationally collapsing perfect fluid glued to exterior asymptotically-flat non-vacuum spacetime (25). ## IV Concluding Remark In the collapsing spatially homogeneous LTB spacetime, the end state is a future-spacelike singularity. In the collapsing spatially inhomogeneous LTB spacetime, the end state is a past-null singularity. Here, we showed that gravitational collapse can also lead to the formation of a future-null singularity as an end state. An interior collapsing FLRW spacetime with a time-dependent Misner-Sharp mass function (\(F(t,r)=a\,\dot{a}^{2}r^{3}\), where \(a(t)\) is as shown in Eq. (37)), glued to an exterior asymptotically flat non-vacuum spacetime first discussed in [10], gives rise to such singularities.
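As a quick consistency check (our own step, not in the original text): with \(F(t,r)=a\,\dot{a}^{2}r^{3}\), \(\mathcal{R}=r_{c}a\), and \(\dot{a}\) from Eq. (36), the junction condition (34) holds identically, \[F(t,r_{c})=a\,\dot{a}^{2}r_{c}^{3}=\frac{Mr_{c}a\left(M+2r_{c}a\right)}{\left(M+r_{c}a\right)^{2}}=\frac{M\mathcal{R}\left(M+2\mathcal{R}\right)}{\left(M+\mathcal{R}\right)^{2}}=2D(\mathcal{R})\,,\] with \(D(\mathcal{R})\) as in Eq. (27).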
2307.04795
Multi-fractional instantons in $SU(N)$ Yang-Mills theory on the twisted $\mathbb T^4$
We construct analytical self-dual Yang-Mills fractional instanton solutions on a four-torus $\mathbb{T}^4$ with 't Hooft twisted boundary conditions. These instantons possess topological charge $Q=\frac{r}{N}$, where $1\leq r< N$. To implement the twist, we employ $SU(N)$ transition functions that satisfy periodicity conditions up to center elements and are embedded into $SU(k)\times SU(\ell)\times U(1)\subset SU(N)$, where $\ell+k=N$. The self-duality requirement imposes a condition, $k L_1L_2=r\ell L_3L_4$, on the lengths of the periods of $\mathbb{T}^4$ and yields solutions with abelian field strengths. However, by introducing a detuning parameter $\Delta\equiv (r\ell L_3L_4-k L_1 L_2)/\sqrt{L_1 L_2L_3L_4}$, we generate self-dual nonabelian solutions on a general $\mathbb{T}^4$ as an expansion in powers of $\Delta$. We explore the moduli spaces associated with these solutions and find that they exhibit intricate structures. Solutions with topological charges greater than $\frac{1}{N}$ and $k\neq r $ possess non-compact moduli spaces, along which the $O(\Delta)$ gauge-invariant densities exhibit runaway behavior. On the other hand, solutions with $Q=\frac{r}{N}$ and $k=r$ have compact moduli spaces, whose coordinates correspond to the allowed holonomies in the $SU(r)$ color space. These solutions can be represented as a sum over $r$ lumps centered around the $r$ distinct holonomies, thus resembling a liquid of instantons. In addition, we show that each lump supports $2$ adjoint fermion zero modes.
Mohamed M. Anber, Erich Poppitz
2023-07-10T18:00:06Z
http://arxiv.org/abs/2307.04795v2
# Multi-fractional instantons in \(SU(N)\) Yang-Mills theory on the twisted \(\mathbb{T}^{4}\) ###### Abstract We construct analytical self-dual Yang-Mills fractional instanton solutions on a four-torus \(\mathbb{T}^{4}\) with 't Hooft twisted boundary conditions. These instantons possess topological charge \(Q=\frac{r}{N}\), where \(1\leq r<N\). To implement the twist, we employ \(SU(N)\) transition functions that satisfy periodicity conditions up to center elements and are embedded into \(SU(k)\times SU(\ell)\times U(1)\subset SU(N)\), where \(\ell+k=N\). The self-duality requirement imposes a condition, \(kL_{1}L_{2}=r\ell L_{3}L_{4}\), on the lengths of the periods of \(\mathbb{T}^{4}\) and yields solutions with abelian field strengths. However, by introducing a detuning parameter \(\Delta\equiv(r\ell L_{3}L_{4}-kL_{1}L_{2})/\sqrt{L_{1}L_{2}L_{3}L_{4}}\), we generate self-dual nonabelian solutions on a general \(\mathbb{T}^{4}\) as an expansion in powers of \(\Delta\). We explore the moduli spaces associated with these solutions and find that they exhibit intricate structures. Solutions with topological charges greater than \(\frac{1}{N}\) and \(k\neq r\) possess non-compact moduli spaces, along which the \(\mathcal{O}(\Delta)\) gauge-invariant densities exhibit runaway behavior. On the other hand, solutions with \(Q=\frac{r}{N}\) and \(k=r\) have compact moduli spaces, whose coordinates correspond to the allowed holonomies in the \(SU(r)\) color space. These solutions can be represented as a sum over \(r\) lumps centered around the \(r\) distinct holonomies, thus resembling a liquid of instantons. In addition, we show that each lump supports 2 adjoint fermion zero modes. * 1 Introduction, summary, and outlook * 2 Review of 't Hooft's constant-flux solutions on \(\mathbb{T}^{4}\) * 3 Fermion zero modes in the \(Q=\frac{r}{N}\) constant-flux background * 3.1 The solution with topological charge \(Q=\frac{r}{N}\) * 3.2 Boundary conditions for the adjoint fermions * 3.3 Dotted-fermion zero modes * 3.4 Undotted-fermion zero modes * 3.4.1 The "diagonal": \(U(1)\), \(SU(\ell)\) and \(SU(k)\) undotted zero modes * 3.4.2 The "off-diagonal" \(k\times\ell\) and \(\ell\times k\) undotted zero modes. * 4 Deforming the self-dual torus: small-\(\Delta\) expansion for the bosonic background with \(Q=\frac{r}{N}\) * 5 The moduli of the \(Q=\frac{r}{N}\) bosonic solution: compact vs. noncompact * 6 Local gauge invariants of the \(Q=\frac{r}{N}\) solution and its "dissociation" * 6.1 Gauge-invariant local densities to order \(\Delta\) and their blow up for \(k\neq r\) * 6.2 Fractionalization of solutions with topological charges \(r>1\) * 6.2.1 Bosonic gauge invariant densities * 6.2.2 Fermionic zero modes and their localization * A Derivation of the off-diagonal fermion zero modes * A.1 The zero modes at zero holonomy * A.2 Turning on holonomies * B A useful identity * C Field strength and action of the multifractional instanton * D Blow up of the gauge invariant local densities along the noncompact moduli of the \(k\neq r\) solution * E Fermion zero modes on the deformed-\(\mathbb{T}^{4}\), for \(k=r\) ## 1 Introduction, summary, and outlook Instantons are prominent in studying many nonperturbative phenomena in Yang-Mills theory, including the vacuum structure, condensates, and confinement. Among the least-explored instantons are the _'t Hooft fluxes_ of \(SU(N)\) gauge theory on the 4-torus \(\mathbb{T}^{4}\) with twisted boundary conditions [1].
Such solutions, found by 't Hooft, carry fractional topological charges and have constant abelian field strength. While the field strength is abelian, for a general number of colors \(N\), the boundary conditions on \(\mathbb{T}^{4}\) are implemented via non-abelian transition functions (i.e. there exists no gauge where all transition functions commute). Although 't Hooft's solutions have been known since the 1980s, relatively little attention has been devoted to their study since [2]. The notable exception is the work of the Madrid group over many years, reviewed in [3]. The recent development of generalized global symmetries [4] resurrected the interest in this subject. It was shown in [5] that introducing background fields for the 1-form \(\mathbb{Z}_{N}^{(1)}\) center symmetry of Yang-Mills theory can lead to new 't Hooft anomalies, restricting the symmetry realizations and thus the infrared dynamics. The gauge field of the 1-form symmetry is a 2-form field whose nonvanishing holonomies implement the 't Hooft twist of the boundary conditions on \(\mathbb{T}^{4}\). The fractional 2-form flux is merely an external field that imposes kinematical constraints. On the other hand, finding the field configurations which minimize the action (or energy) in the presence of twists requires dynamical considerations. Recently, the authors questioned the role instantons in the presence of twists could play in determining the dynamics of the theory [6]. In particular, we examined the gaugino condensate in \(SU(2)\) super Yang-Mills theory with twists on \(\mathbb{T}^{4}\). The fractional topological charge \(Q=\frac{1}{2}\) of the \(SU(2)\) solution supports two gaugino zero modes and yields a non-vanishing condensate, which was found to be independent of the torus size. The computations were carried within the limit of the small-torus size, taken to be much smaller than the inverse strong scale, so we remained in the semi-classical domain. Thus, we could perform reliable computations and, thanks to supersymmetry, extract the numerical coefficient of the condensate. However, our computations gave twice the condensate's numerical value on \(\mathbb{R}^{4}\). Thus, our results warrant further examination of the situation for \(SU(2)\) and for a general number of colors. The current work is a continuation of the efforts in this direction. One of the crucial conditions for studying the dynamics is the self-duality of the fractional instantons. A non-self dual solution is not a minimum of the action; it has negative fluctuation modes and hence, is unstable. Insisting on the abelian solutions found by 't Hooft [1], the ratio between the periods of \(\mathbb{T}^{4}\) needs to satisfy a specific condition to respect the self-duality of the solutions. We call such \(\mathbb{T}^{4}\) a self-dual torus. However, in [6], it was found that instantons on the self-dual torus support extra fermion zero modes, more than needed to support the bilinear gaugino condensate. A way to lift the extra zero modes is to deform the self-dual \(\mathbb{T}^{4}\). The price to pay, insisting on the self-duality of the instantons, is to depart from the simple abelian solutions found by 't Hooft. One is then faced with the fact that a non-abelian analytical solution on a generic \(\mathbb{T}^{4}\) with general 't Hooft twists is not currently known. Furthermore, even a description of the moduli space and of its metric1 of such self-dual solutions is not available. 
Fortunately, the authors of [7] developed a systematic approach to obtaining approximate \(SU(2)\) nonabelian self-dual solutions as expansion in a small parameter \(\Delta\), measuring the deviation from the self-dual torus.2 The approach in [7] was generalized in [8] to the case of \(SU(N)\). Nevertheless, it was only used to obtain solutions with minimal topological charge \(Q=\frac{1}{N}\). Footnote 1: These data alone suffice to perform certain instanton computations in supersymmetric theories. Footnote 2: This is the solution used in [6], which, at \(\Delta>0\), supports exactly two zero modes needed to give rise to the bilinear condensate. In this paper, we carry out a systematic analysis to obtain self-dual solutions with generic topological charge \(Q=\frac{r}{N}\), with integer \(N>r>1\), on a non-self dual torus. The main effort of the present work is directed at exploring the structure of the bosonic moduli space of the solutions as well as the fermion zero modes in these backgrounds. Summary.The main findings of this rather technical paper are described below: We let \(L_{1},L_{2},L_{3},L_{4}\) be the lengths of the periods of \(\mathbb{T}^{4}\). Following 't Hooft [1], we embed the \(SU(N)\) transition functions and gauge fields in \(SU(k)\times SU(\ell)\times U(1)\subset SU(N)\), such that \(k\) and \(\ell\) are positive integers and \(k+\ell=N\). We choose the transition functions to give rise to 't Hooft twists on \(\mathbb{T}^{4}\) corresponding to topological charge \(Q=\frac{r}{N}\) (Section 2). Even though the transition functions are fully non-abelian, the original 't Hooft solution with topological charge \(Q=\frac{r}{N}\) has only an abelian gauge field \(A_{\mu}\) along the \(U(1)\) generator.3 The self-duality of this solution imposes the condition \(kL_{1}L_{2}=r\ell L_{3}L_{4}\). As already mentioned, a \(\mathbb{T}^{4}\) that satisfies this condition is said to be self-dual. Footnote 3: See Section 3.1: the \(Q=\frac{r}{N}\) transition functions are in (3.1) and the abelian solution is in (3.2). Next, we define a _detuning parameter_\(\Delta\), that measures the deviation from the self-dual \(\mathbb{T}^{4}\), as \(\Delta\equiv(r\ell L_{3}L_{4}-kL_{1}L_{2})/\sqrt{L_{1}L_{2}L_{3}L_{4}}\). Then, the self-dual non-abelian solution is obtained as an expansion in \(\Delta\), similar to [7; 8]. The solution now has nontrivial components along the abelian \(U(1)\) generator as well as the nonabelian subgroups \(SU(k)\times SU(\ell)\). We carry out our analysis to the leading order in \(\Delta\), from which we observe the following: 1. To the leading order in \(\Delta\), the solution of the self-dual Yang-Mills equations is in one-to-one correspondence with the solution to the Dirac equation of the gaugino zero modes on the self-dual \(\mathbb{T}^{4}\) (Section 3). Thus, one can borrow the latter's solutions and show that they satisfy the self-dual Yang-Mills equations to the leading order (Section 4). 2. Among all solutions with \(Q=\frac{r}{N}\), the ones with \(k=r\) stand out. For this case, we find \(4r\) arbitrary physical parameters that label the self-dual Yang-Mills solutions, in accordance with the index theorem. We interpret these parameters as the coordinates on the compact moduli space: these are the \(r\) (\(=k\)) holonomies in the \(SU(k)\) color space in each of the 4 spacetime directions (Section 5). 3. 
In addition, we find that gauge-invariant densities for the \(k=r\) solutions can be cast into the form of a sum over \(r\) identical lumps centered about the values taken by the \(r\) (\(=k\)) different holonomies. This indicates that a solution with topological charge \(Q=\frac{r}{N}\) can be thought of as composed of \(r\) "elementary," yet strongly overlapping ones--thus, resembling a liquid, rather than a dilute gas [3] (Section 6.2.1). See Figure 1 for an illustration. Further support for this interpretation follows from solving the Dirac equation in the background of the full non-abelian solution, showing that 2 fermion zero modes are centered about each of the \(r\) holonomies, giving a total of \(2r\) fermion zero modes as required by the index theorem (Section 6.2.2). Figure 1: A \(3D\) plot of the profile given by Eq. (6.11), with \(r=3\), as a function of \((x_{1},x_{2})\), for fixed \((x_{3},x_{4})\). For better visualization, we show double the periods in \(x_{1}\) and \(x_{2}\). We see three solutions, in red, yellow, and blue, lumped around three distinct centers. These lumps, however, are not well-separated, comprising a liquid rather than a dilute gas. Earlier [9], similar configurations were constructed numerically and used to study confinement, see [3]. 4. We also study the \(\Delta\)-expansion around the other \(Q=\frac{r}{N}\) solutions, the ones with \(k\neq r\) (Section 5). Here, we find that the moduli space becomes non-compact. To further understand the significance of this finding, we show that gauge-invariant local densities grow without limit in the noncompact moduli directions, clashing with the spirit of the \(\Delta\) expansion for \(k\neq r\) (Section 5 and Appendix D). This blow-up leads us to conjecture that the only self-dual \(Q=\frac{r}{N}\) solutions, obtained via the \(\Delta\)-expansion, are the ones with \(k=r\). **Outlook.** There are several directions in which this work can be applied or extended: The study of the present paper sets the stage for a forthcoming paper to shed light on a few dynamical and kinematical aspects of supersymmetric and non-supersymmetric \(SU(N)\) gauge theories. This includes the higher-order condensates, the cluster decomposition principle, and the exactness/holomorphy of supersymmetric results. We have yet to achieve a deeper understanding of the apparent failure of the \(\Delta\) expansion for \(k\neq r\) that we observed at leading order. This may require better control of the higher orders in the \(\Delta\)-expansion. Numerical studies of instantons on the twisted torus can also be used to study the convergence of the expansion as well as the approach to various large volume limits. ## 2 Review of 't Hooft's constant-flux solutions on \(\mathbb{T}^{4}\) This section quickly reviews the \(SU(N)\) 't Hooft twisted solutions on the four-torus \(\mathbb{T}^{4}\). We take the torus to have periods of length \(L_{\mu}\), \(\mu=1,2,3,4\), where \(\mu,\nu\) run over the spacetime dimensions. The gauge fields \(A_{\mu}\) are Hermitian traceless \(N\times N\) matrices,
The subscript \(\mu\) in \(\Omega_{\mu}\) means that the function \(\Omega_{\mu}\) does not depend on the coordinate \(x_{\mu}\). The boundary condition (1) means that the gauge fields \(A_{\mu}\) are periodic up to a gauge transformation. Let us for the moment use the short-hand-notation \([\Omega_{\mu}]A_{\nu}\) to denote \(\Omega_{\mu}A_{\nu}\Omega_{\mu}^{-1}-i\Omega_{\mu}\partial_{\nu}\Omega_{\mu} ^{-1}\). Then, the compatibility of (1) at the corners of the \(x_{\mu}-x_{\nu}\) plane of \(\mathbb{T}^{4}\) gives: \[A_{\lambda}(x+L_{\mu}\hat{e}_{\mu}+L_{\nu}\hat{e}_{\nu}) = [\Omega_{\mu}(x+L_{\nu}\hat{e}_{\nu})][\Omega_{\nu}(x+L_{\mu}\hat{ e}_{\mu})]A_{\lambda}(x) \tag{2}\] \[= [\Omega_{\nu}(x+L_{\mu}\hat{e}_{\mu})][\Omega_{\mu}(x+L_{\nu}\hat {e}_{\nu})]A_{\lambda}(x)\,,\] from which we obtain the periodicity conditions on the transition functions \(\Omega_{\mu}\) (now giving up the short-hand notation and going back to the original \(\Omega_{\mu}\) that appears in (1)) \[\Omega_{\mu}(x+\hat{e}_{\nu}L_{\nu})\;\Omega_{\nu}(x)=e^{i\frac{2\pi n_{\mu\nu }}{N}}\Omega_{\nu}(x+\hat{e}_{\mu}L_{\mu})\;\Omega_{\mu}(x). \tag{3}\] Equation (3) is the cocycle conditions on the transition functions \(\Omega_{\mu}\). The exponent \(e^{i\frac{2\pi n_{\mu\nu}}{N}}\), with integers \(n_{\mu\nu}=-n_{\nu\mu}\), is the \(\mathbb{Z}_{N}\) center of \(SU(N)\). The freedom to introduce the center stems from the fact that both the transition function and its inverse appear in (1). 't Hooft found a solution to the consistency conditions (3) carrying a fractional topological charge by embedding the \(SU(N)\) transition functions \(\Omega_{\mu}(x)\) in \(SU(k)\times SU(\ell)\times U(1)\subset SU(N)\), such that \(N=k+\ell\) and writing them in the form \[\Omega_{\mu}(x)=P_{k}^{s_{\mu}}Q_{k}^{t_{\mu}}\otimes P_{\ell}^{u_{\mu}}Q_{ \ell}^{v_{\mu}}\;e^{i\omega\frac{\alpha_{\mu\lambda}x_{\lambda}}{L_{\lambda}}}\,. \tag{4}\] Here, \(s_{\mu},t_{\mu},u_{\mu},v_{\mu}\) are integers, a sum over \(\lambda\) is implied in the exponent, and \(\alpha_{\mu\lambda}\) is a real matrix with vanishing diagonal components without any (anti-)symmetry properties. The matrices \(P_{k}\) and \(Q_{k}\) (similarly the matrices \(P_{\ell}\) and \(Q_{\ell}\)) are the \(k\times k\) (similarly \(\ell\times\ell\)) shift and clock matrices: \[P_{k}=\gamma_{k}\left[\begin{array}{cccc}0&1&0&...\\ 0&0&1&...\\...&&\\...&&0&1\\ 1&0&...&0\end{array}\right]\,,\quad Q_{k}=\gamma_{k}\;{\rm diag}\left[1,e^{ \frac{i2\pi}{k}},e^{2\frac{i2\pi}{k}},...\right]\,, \tag{5}\] which satisfy the relation \(P_{k}Q_{k}=e^{i\frac{2\pi}{k}}Q_{k}P_{k}\). The factor \(\gamma_{k}\equiv e^{\frac{i\pi(1-k)}{k}}\) ensures that \(\det Q_{k}=1\) and \(\det P_{k}=1\). In the rest of this paper, we take primed upper-case Latin letters to denote elements of \(k\times k\) matrices: \(C^{\prime},D^{\prime}=1,2,...,k\), and the unprimed upper-case Latin letters to denote \(\ell\times\ell\) matrices: \(C,D=1,2,..,\ell\). The matrices \(P_{k}\) and \(Q_{k}\) can then be written as \((P_{k})_{B^{\prime}C^{\prime}}=\delta_{B^{\prime},C^{\prime}-1\;(\text{mod}k)}\) and \((Q_{k})_{C^{\prime}B^{\prime}}=\gamma_{k}\;e^{i2\pi\frac{C^{\prime}-1}{k}} \delta_{C^{\prime}B^{\prime}}\). The matrix \(\omega\) is the \(U(1)\) generator. It is given by \[\omega=2\pi\text{diag}\left[\underbrace{\ell,\ell,...,\ell}_{k\;\text{times} },\underbrace{-k,-k,...,-k}_{\text{$\ell$ times}}\right]\,, \tag{6}\] and clearly commutes with \(P_{k},P_{\ell},Q_{k},Q_{\ell}\). 
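The algebra of the clock and shift matrices is easy to verify numerically. A minimal sketch of ours, not from the paper, with \(k=5\) as an assumed illustrative value:

```python
import numpy as np

# Build P_k and Q_k of Eq. (5) and check P_k Q_k = e^{i 2 pi / k} Q_k P_k
# as well as det P_k = det Q_k = 1.
def clock_shift(k):
    gamma = np.exp(1j * np.pi * (1 - k) / k)
    P = gamma * np.roll(np.eye(k), -1, axis=0)  # (P_k)_{B'C'} = gamma_k * delta_{B', C'-1 (mod k)}
    Q = gamma * np.diag(np.exp(2j * np.pi * np.arange(k) / k))
    return P, Q

k = 5
P, Q = clock_shift(k)
assert np.allclose(P @ Q, np.exp(2j * np.pi / k) * Q @ P)
assert np.isclose(np.linalg.det(P), 1) and np.isclose(np.linalg.det(Q), 1)
```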
Writing the twist matrix \(n_{\mu\nu}\) appearing in the cocycle condition (3) as \(n_{\mu\nu}=n_{\mu\nu}^{(1)}+n_{\mu\nu}^{(2)}\), the antisymmetric part of the coefficients \(\alpha_{\mu\nu}\) is taken to be \[\alpha_{\mu\nu}-\alpha_{\nu\mu}=\frac{n_{\mu\nu}^{(2)}}{N\ell}-\frac{n_{\mu \nu}^{(1)}}{Nk}\,. \tag{7}\] Recall that \(\alpha_{\mu\nu}\) have vanishing diagonal elements; it is convenient, see Section 3.1, to choose a particular form for their symmetric part, which amounts to a gauge choice. A solution for the transition functions (4) obeying the cocycle conditions (3) with \(\alpha_{\mu\nu}\) and \(n_{\mu\nu}\) related as in (7) can be obtained provided that \(s_{\mu},t_{\mu},u_{\mu},v_{\mu}\in\mathbb{Z}\) can be found such that \[n_{\mu\nu}^{(1)}=s_{\mu}t_{\nu}-s_{\nu}t_{\mu}+kA_{\mu\nu}\,,\quad n_{\mu\nu}^ {(2)}=u_{\mu}v_{\nu}-u_{\nu}v_{\mu}+\ell B_{\mu\nu}\,, \tag{8}\] where \(A_{\mu\nu}\) and \(B_{\mu\nu}\) are integers, and \[n_{\mu\nu}^{(1)}\tilde{n}_{\mu\nu}^{(1)}=0\;(\text{mod}\,k)\,,\quad n_{\mu \nu}^{(2)}\tilde{n}_{\mu\nu}^{(2)}=0\;(\text{mod}\,\ell)\,, \tag{9}\] and \(\tilde{n}_{\mu\nu}=\frac{1}{2}\epsilon_{\mu\nu\alpha\beta}n_{\alpha\beta}\). While the details of the derivation are not shown here (see [1]), the data we have given above suffice to check that upon plugging (7)-(9) into (4) one finds, using (6) and (5), that the cocycle conditions (3) are obeyed, with twist matrices \(n_{\mu\nu}=n_{\mu\nu}^{(1)}+n_{\mu\nu}^{(2)}\). An abelian gauge field configuration along the \(U(1)\) generator \(\omega\), which obeys the boundary conditions specified by the \(\Omega_{\mu}\) thus constructed, is given by the expression \[A_{\lambda}=-\omega\left(\frac{\alpha_{\mu\lambda}x_{\mu}}{L_{\mu}L_{\lambda} }+\frac{z_{\lambda}}{L_{\lambda}}\right)\,. \tag{10}\] The corresponding field strength \(F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}+i[A_{\mu},A_{\nu}]\) is constant everywhere on \(\mathbb{T}^{4}\): \[F_{\mu\nu}=-\omega\frac{\alpha_{\mu\nu}-\alpha_{\nu\mu}}{L_{\mu}L_{\nu}}\,. \tag{11}\] The constants \(z_{\mu}\) label the holonomies along the \(U(1)\) generator, which are translational moduli. This solution carries a fractional topological charge: \[Q=-\frac{1}{4N}n_{\mu\nu}\tilde{n}_{\mu\nu}=-\frac{n_{12}n_{34}+n_{13}n_{42}+n _{14}n_{23}}{N}\,. \tag{12}\] Without loss of generality, we can always assume \(n_{13}=n_{42}=n_{14}=n_{23}=0\). Thus, we only consider fluxes in the 1-2 and 3-4 planes. Then, a self-dual solution satisfies the relation \(F_{12}=F_{34}\), from which one can find the ratio \(\frac{L_{1}L_{2}}{L_{3}L_{4}}\) that defines the self-dual torus. The action of the self-dual solution is \[S_{0}=\frac{1}{2g^{2}}\int_{\mathbb{T}^{4}}\mathrm{tr}\left[F_{\mu\nu}F_{\mu \nu}\right]=\frac{8\pi^{2}|Q|}{g^{2}}\,. \tag{13}\] ## 3 Fermion zero modes in the \(Q=\frac{r}{N}\) constant-flux background In this Section, we study the zero modes of the adjoint fermions in the constant-flux abelian background with topological charge \(\frac{r}{N}\), described in Section 3.1 (see eqn. (3.2)). These results are useful when constructing the nonabelian self-dual solution with \(Q=\frac{r}{N}\) on the deformed \(\mathbb{T}^{4}\). We find that there are \(2\mathrm{gcd}(k,r)\) dotted (Section 3.3) and \(2\mathrm{gcd}(k,r)\) undotted (Section 3.4.1) _constant_ fermion zero modes. We also find \(2r\) undotted adjoint fermion zero modes with nontrivial \(x\)-dependence (Section 3.4.2, see eqns.
(3.18-3.21) for the explicit solution and Appendix A for the rather technical derivation). The latter are the ones determining the bosonic nonabelian self-dual background on the deformed torus in the \(\Delta\)-expansion. ### The solution with topological charge \(Q=\frac{r}{N}\) A solution with topological charge \(Q=\frac{r}{N}\) is obtained from Section 2 by taking \(n_{12}^{(1)}=-r\), \(n_{12}^{(2)}=0\), \(n_{34}^{(1)}=0\), \(n_{34}^{(2)}=1\), and, hence, \(n_{12}=-r\), \(n_{34}=1\). We also take \(s_{1}=-r\), \(t_{2}=1\), \(u_{3}=v_{4}=1\), set \(A_{\mu\nu}=B_{\mu\nu}=0\), and set the rest of \(s_{\mu}\), \(t_{\mu}\), \(u_{\mu}\), and \(v_{\mu}\) to zero. Thus, without loss of generality, we take \(\alpha_{12}=\frac{r}{Nk}\,,\alpha_{21}=0\,,\alpha_{34}=\frac{1}{N\ell}\,, \alpha_{43}=0\). The upshot is that the transition functions (4) now read \[\Omega_{1} = P_{k}^{-r}\otimes I_{\ell}\,e^{i\omega\frac{rx_{2}}{NkL_{2}}}=\left[ \begin{array}{cc}P_{k}^{-r}e^{i2\pi\ell r\frac{x_{2}}{NkL_{2}}}&0\\ 0&e^{-i2\pi r\frac{x_{2}}{NL_{2}}}I_{\ell}\end{array}\right],\] \[\Omega_{2} = Q_{k}\otimes I_{\ell}=\left[\begin{array}{cc}Q_{k}&0\\ 0&I_{\ell}\end{array}\right],\] \[\Omega_{3} = I_{k}\otimes P_{\ell}\,e^{i\omega\frac{x_{4}}{N\ell L_{4}}}=\left[ \begin{array}{cc}e^{i2\pi\frac{x_{4}}{NL_{4}}}I_{k}&0\\ 0&e^{-i2\pi k\frac{x_{4}}{N\ell L_{4}}}P_{\ell}\end{array}\right],\] \[\Omega_{4} = I_{k}\otimes Q_{\ell}=\left[\begin{array}{cc}I_{k}&0\\ 0&Q_{\ell}\end{array}\right], \tag{3.1}\] where we recall that \(\omega\) is given by (6), \(P\) and \(Q\) in (5), and \(I_{k}\) (\(I_{\ell}\)) denote \(k\times k\) (\(\ell\times\ell\)) unit matrices. Above, we introduced our \(k\times\ell\) block-matrix notation, to be used further in this paper. The reader can use (3.1), recalling that \(k+\ell=N\), with \(k\), \(\ell\) being positive integers, and that \(P\) and \(Q\) are the clock and shift matrices (5), to explicitly check that \(\Omega_{\mu}\) obey the cocycle conditions (3) with only \(n_{12}=-r\) and \(n_{34}=1\) being nonzero, and that these hold for any \(1\leq r\leq N\). Likewise, it is easy to check that the abelian gauge field and the field strength of the constant-flux background \[A_{1} = -\omega\frac{z_{1}}{L_{1}}\,,\,\,A_{2}=-\omega\left(\frac{rx_{1}} {NkL_{1}L_{2}}+\frac{z_{2}}{L_{2}}\right)\,,\,\,A_{3}=-\omega\frac{z_{3}}{L_{ 3}}\,,\,\,A_{4}=-\omega\left(\frac{x_{3}}{N\ell L_{3}L_{4}}+\frac{z_{4}}{L_{4}}\right)\] \[F_{12} = -\omega\frac{r}{NkL_{1}L_{2}}\,,\quad F_{34}=-\omega\frac{1}{N\ell L_{3}L_{4}} \tag{3.2}\] obey the boundary conditions (1) with the transition functions (3.1).4 Footnote 4: If one of \(k\) or \(\ell\) is unity, the cocycle conditions with \(n_{12}=-r\), \(n_{34}=1\) and the corresponding boundary conditions (1) are obeyed with the corresponding \(P\) and \(Q\) in \(\Omega_{\mu}\) replaced by unity. If we require self-duality of the solution, \(F_{12}=F_{34}\), we find that the sides of a self-dual torus have to obey the relation \[\frac{L_{1}L_{2}}{L_{3}L_{4}}=\frac{r\ell}{k}\,. \tag{3.3}\] ### Boundary conditions for the adjoint fermions In the rest of Section 3, we solve the Weyl equations \(D_{\mu}\bar{\sigma}_{\mu}\lambda=0\) and \(D_{\mu}\sigma_{\mu}\bar{\lambda}=0\) for massless adjoint fermions in the background (3.2).5 This will enable us to understand the fermionic zero modes in the background with topological charge \(Q=\frac{r}{N}\) on the self-dual torus. In subsequent sections, the results help the construction of the self-dual bosonic background on the deformed torus in the small-\(\Delta\) expansion.
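The cocycle conditions (3) for the transition functions (3.1) can also be checked numerically. A minimal sketch of ours, not from the paper, with the assumed illustrative values \(k=2\), \(\ell=3\), \(r=1\), and \(L_{\mu}=1\):

```python
import numpy as np

k, l, r = 2, 3, 1           # illustrative values; N = k + l
N, L = k + l, np.ones(4)

def clock_shift(q):
    g = np.exp(1j * np.pi * (1 - q) / q)
    return g * np.roll(np.eye(q), -1, axis=0), g * np.diag(np.exp(2j * np.pi * np.arange(q) / q))

Pk, Qk = clock_shift(k)
Pl, Ql = clock_shift(l)
bd = lambda A, B: np.block([[A, np.zeros((A.shape[0], B.shape[1]))],
                            [np.zeros((B.shape[0], A.shape[1])), B]])

# Transition functions of Eq. (3.1) as functions of x = (x_1, ..., x_4)
Om = [
    lambda x: bd(np.linalg.matrix_power(Pk, -r) * np.exp(2j * np.pi * l * r * x[1] / (N * k * L[1])),
                 np.exp(-2j * np.pi * r * x[1] / (N * L[1])) * np.eye(l)),
    lambda x: bd(Qk, np.eye(l)),
    lambda x: bd(np.exp(2j * np.pi * x[3] / (N * L[3])) * np.eye(k),
                 np.exp(-2j * np.pi * k * x[3] / (N * l * L[3])) * Pl),
    lambda x: bd(np.eye(k), Ql),
]

def cocycle_ok(mu, nu, n_munu, x):
    e = np.eye(4)
    lhs = Om[mu](x + L[nu] * e[nu]) @ Om[nu](x)
    rhs = np.exp(2j * np.pi * n_munu / N) * Om[nu](x + L[mu] * e[mu]) @ Om[mu](x)
    return np.allclose(lhs, rhs)

x = np.random.rand(4)
assert cocycle_ok(0, 1, -r, x) and cocycle_ok(2, 3, 1, x)  # n_12 = -r, n_34 = 1
```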
Before we begin, let us discuss the moduli of the solution. We first note that the constant holonomies \(z_{\mu}\) in the \(U(1)\) direction \(\omega\), appearing in (3.2), are the most general ones commuting with the transition functions (3.1), provided \(\gcd(k,r)=1\) (that this is so follows from the discussion below). However, when \(\gcd(k,r)>1\), there are \(\gcd(k,r)\) different holonomies permitted for each \(\mu\). To work them out for future use, we first note that the holonomies have to be in the Cartan subalgebra, because they have to commute with \(Q_{k}\) and \(Q_{\ell}\) from (3.1) in order that (1) be obeyed. Thus, the additional (to \(z_{\mu}\) from (3.2)) holonomies would add, to the background (3.2), \(\delta A_{\mu}=H^{a^{\prime}}\phi_{\mu}^{a^{\prime}}+H^{a}\phi_{\mu}^{a}\), with constant \(\phi\)'s, where \(H^{a^{\prime}}\) (\(a^{\prime}=1,...,k-1\)) and \(H^{a}\) (\(a=1,...,\ell-1\)) are the \(SU(k)\) and \(SU(\ell)\) Cartan generators, respectively. The generators \(H^{a^{\prime}}\), \(H^{a}\) are extended to have zero entries in their respective complement inside \(SU(N)\). In addition, \(H^{a^{\prime}}\) and \(H^{a}\) have to commute with the transition functions (3.1), which means that \(P_{k}^{-r}H^{a^{\prime}}P_{k}^{r}=H^{a^{\prime}}\) and \(P_{\ell}H^{a}P_{\ell}^{-1}=H^{a}\). Clearly, there are no nonzero \(SU(\ell)\) generators \(H^{a}\) allowed; thus, we set the corresponding holonomies to zero, \(\phi_{\mu}^{a}=0\). The condition for \(H^{a^{\prime}}\) only allows nonzero \(\phi_{\mu}^{a^{\prime}}\) if \(\gcd(k,r)>1\). If \(\gcd(k,r)=k\), any Cartan generator obeys \(P_{k}^{-r}H^{a^{\prime}}P_{k}^{r}=H^{a^{\prime}}\), and so there are \(k-1\) \(\phi_{\mu}^{a^{\prime}}\)'s allowed (for reasons that become clear later, we shall study this case in great detail in what follows). For generic values of \(\gcd(k,r)\), \(1<\gcd(k,r)\leq k\), there are only \(\gcd(k,r)\) holonomies along the \(SU(k)\) Cartan generators allowed. Let us now describe them in a manner useful for the future. For general values of \(\gcd(r,k)\), we combine the allowed holonomies in the \(SU(k)\) part of \(SU(N)\) with the \(z_{\mu}\) holonomies (the ones proportional to \(\omega\), see (3.2)). We use primed indices \(C^{\prime},B^{\prime}=1...k\) to denote the \(k\times k\) part of the components of the \(SU(N)\) gauge field and unprimed \(C,B=1,...\ell\) to denote the \(SU(\ell)\) components. Thus, we describe the general abelian background (3.2) as \[\hat{A}_{\mu}=\left(A_{\mu}\right)\big{|}_{\text{of eqn.~(3.2)}}+\delta A_{\mu}\,, \tag{3.4}\] where the additional holonomies, allowed when \(\gcd(k,r)>1\), appear
In (3.5) we also introduced the short-hand notation that we shall often use in this paper:6 Footnote 6: Notice that, to conform to (3.6), in (3.5) and further, since \(q(\text{mod}q)=0\), we take the range of the \(SU(k)\) index \(C^{\prime}\) to be \(0...k-1\) instead of \(1...k\). Likewise, we take the range of the unprimed \(SU(\ell)\) indices \(0...\ell-1\). \[\left[x\right]_{q}\equiv x(\text{mod }q). \tag{3.6}\] We now turn to the adjoint fermions (gauginos), which obey the boundary conditions (2.1) without the inhomogeneous term \[\lambda(x+L_{\mu}\hat{e}_{\mu})=\Omega_{\mu}\lambda(x)\Omega_{\mu}^{-1}\,, \tag{3.7}\] with \(\Omega_{\mu}\) from (3.1). Omitting the spinor index, we write the gaugino field, an \(N\times N\) traceless matrix, as a block of \(k\times k\), \(k\times\ell\), \(\ell\times k\) and \(\ell\times\ell\) matrices (recall \(N=k+\ell\)): \[\lambda=\left[\begin{array}{cc}||\lambda_{C^{\prime}B^{\prime}}| |&||\lambda_{C^{\prime}B}||\\ ||\lambda_{CB^{\prime}}|&||\lambda_{CB}||\end{array}\right]\,\ C^{\prime},B^{ \prime}\in\{0,...k-1\},\ C,B\in\{0,...\ell-1\}\, \tag{3.8}\] obeying the tracelessness condition \[\sum_{C^{\prime}=0}^{k-1}\lambda_{C^{\prime}C^{\prime}}+\sum_{C=0}^{\ell-1} \lambda_{CC}=0. \tag{3.9}\] The explicit form of the boundary conditions follows from (3.7) and (3.8). For \(\lambda_{C^{\prime}B^{\prime}}\), they are \[\lambda_{C^{\prime}B^{\prime}}(x+L_{1}\hat{e}_{1}) = \lambda_{[C^{\prime}-r]_{k}\;[B^{\prime}-r]_{k}}(x)\,,\] \[\lambda_{C^{\prime}B^{\prime}}(x+L_{2}\hat{e}_{2}) = e^{i2\pi\frac{C^{\prime}-B^{\prime}}{k}}\lambda_{C^{\prime}B^{ \prime}}(x)\,,\] \[\lambda_{C^{\prime}B^{\prime}}(x+L_{3}\hat{e}_{3}) = \lambda_{C^{\prime}B^{\prime}}(x)\,,\] \[\lambda_{C^{\prime}B^{\prime}}(x+L_{4}\hat{e}_{4}) = \lambda_{C^{\prime}B^{\prime}}(x)\,, \tag{3.10}\] while \(\lambda_{CB}\) obeys \[\lambda_{CB}(x+L_{1}\hat{e}_{1}) = \lambda_{CB}(x)\,,\] \[\lambda_{CB}(x+L_{2}\hat{e}_{2}) = \lambda_{CB}(x)\,,\] \[\lambda_{CB}(x+L_{3}\hat{e}_{3}) = \lambda_{[C+1]_{\ell}\;[B+1]_{\ell}}(x)\,,\] \[\lambda_{CB}(x+L_{4}\hat{e}_{4}) = e^{i2\pi\frac{C-B}{\ell}}\;\lambda_{CB}(x)\,, \tag{3.11}\] and \(\lambda_{C^{\prime}B}\): \[\lambda_{C^{\prime}B}(x+L_{1}\hat{e}_{1}) = \gamma_{k}^{-r}e^{i2\pi\frac{rx_{2}}{kL_{2}}}\;\lambda_{[C^{ \prime}-r]_{k}\;B}(x)\,,\] \[\lambda_{C^{\prime}B}(x+L_{2}\hat{e}_{2}) = \gamma_{k}e^{i2\pi\frac{(C^{\prime}-1)}{k}}\;\lambda_{C^{\prime} B}(x)\,,\] \[\lambda_{C^{\prime}B}(x+L_{3}\hat{e}_{3}) = \gamma_{\ell}^{-1}e^{i2\pi\frac{x_{4}}{kL_{4}}}\;\lambda_{C^{ \prime}[B+1]_{\ell}}(x)\,,\] \[\lambda_{C^{\prime}B}(x+L_{4}\hat{e}_{4}) = \gamma_{\ell}^{-1}e^{-i2\pi\frac{(B-1)}{\ell}}\;\lambda_{C^{ \prime}B}(x)\,. \tag{3.12}\] We also note that \(\lambda_{CB^{\prime}}\) obeys the h.c. conditions to (3.12). In addition, the dotted fermions \(\bar{\lambda}\) obey boundary conditions equal to the ones given above, written in terms of a decomposition of \(\bar{\lambda}\) in terms of \(\bar{\lambda}_{C^{\prime}B^{\prime}}\), \(\bar{\lambda}_{C^{\prime}B}\), \(\bar{\lambda}_{CB}\) and \(\bar{\lambda}_{CB^{\prime}}\), identical to the one in (3.8). We can now solve the Weyl equations \(D_{\mu}\bar{\sigma}_{\mu}\lambda=0\) and \(D_{\mu}\sigma_{\mu}\bar{\lambda}=0\) with the above boundary conditions. The covariant derivative is given by \(D_{\mu}=\partial_{\mu}+i[A_{\mu},\,]\) with \(A_{\mu}\) already given in (3.4) and (3.5). We solve for the zero modes of the Weyl equation in the abelian background, beginning with the simplest cases. 
### Dotted-fermion zero modes First, we solve the Weyl equation for the dotted fermions, \(D_{\mu}\sigma^{\mu}\bar{\lambda}=0\). Here, we ignore the allowed nonzero holonomies from (3.5), since (as we shall see later) they do not affect the solution in an interesting way. We find, keeping in mind the tracelessness condition (3.9), \[\partial_{\mu}\sigma^{\mu}\bar{\lambda}_{CB\;\dot{\alpha}}=0\,, \quad\partial_{\mu}\sigma^{\mu}\bar{\lambda}_{C^{\prime}B^{\prime}\;\dot{ \alpha}} = 0,\;\;\text{with}\;\dot{\alpha}=\dot{1},\dot{2},\] \[\left(\partial_{3}-i\partial_{4}-\frac{2\pi x_{3}}{\ell L_{3}L_{4} }\right)\bar{\lambda}_{C^{\prime}B\;\dot{1}}+\left(\partial_{1}-i\partial_{2} -\frac{2\pi rx_{1}}{kL_{1}L_{2}}\right)\bar{\lambda}_{C^{\prime}B\;\dot{2}} = 0\,,\] \[\left(\partial_{1}+i\partial_{2}+\frac{2\pi rx_{1}}{kL_{1}L_{2}} \right)\bar{\lambda}_{C^{\prime}B\;\dot{1}}+\left(-\partial_{3}-i\partial_{4} -\frac{2\pi x_{3}}{\ell L_{3}L_{4}}\right)\bar{\lambda}_{C^{\prime}B\;\dot{2}} = 0\,, \tag{3.13}\] and similar equations for \(\bar{\lambda}_{CB^{\prime}\;\dot{\alpha}}\). One can convince themselves that there exist no normalizable solutions for \(\bar{\lambda}_{C^{\prime}B\;\dot{\alpha}}\) and \(\bar{\lambda}_{CB^{\prime}\;\dot{\alpha}}\) obeying the boundary conditions. We shall not repeat the details here but only note that this follows from the analysis of [6] and the realization that normalizability of the solutions on the four torus (after expanding in eigenmodes) ends up requiring normalizability of simple-harmonic oscillator wavefunctions, the solutions of (3.13), in the infinite \(x_{1}\)-\(x_{3}\) plane (the two oscillators being in the \(x_{1}\) and \(x_{3}\) directions). The only normalizable solution involves the diagonal components \(\bar{\lambda}_{CC\;\dot{\alpha}}\) and \(\bar{\lambda}_{C^{\prime}C^{\prime}\;\dot{\alpha}}\) and is constant. This is because the boundary conditions (3.11, 3.10) only allow for constant diagonal solutions and also further restrict the solutions as we now discuss. The boundary conditions for the \(\ell\times\ell\)-components only permit the solution \[\bar{\lambda}_{CC\;\dot{\alpha}}=\bar{\vartheta}_{\dot{\alpha}},\ \forall C=0,...,\ell-1, \tag{3.14}\] with equal diagonal entries. Here \(\bar{\vartheta}_{\dot{\alpha}}\) are two Grassmann variables. The \(k\times k\) part of the dotted fermions, \(\bar{\lambda}_{C^{\prime}C^{\prime}\;\dot{\alpha}}\) allows for \(\gcd(k,r)\) such solutions (due to the first boundary condition in (3.10)), which can be written as \[\bar{\lambda}_{C^{\prime}C^{\prime}\;\dot{\alpha}}=\bar{\vartheta}_{\dot{ \alpha}}^{[C^{\prime}-r]_{k}}, \tag{3.15}\] for arbitrary Grassmann \(\bar{\vartheta}_{\dot{\alpha}}^{[C^{\prime}-r]_{k}}\). Clearly, for every value of \(\dot{\alpha}\), there are \(\gcd(k,r)\) such different \(\bar{\vartheta}_{\dot{\alpha}}^{[C^{\prime}-r]_{k}}\), which one can label \(\bar{\vartheta}_{\dot{\alpha}}^{0}\), \(\bar{\vartheta}_{\dot{\alpha}}^{1}\) to \(...\bar{\vartheta}_{\dot{\alpha}}^{\gcd(k,r)-1}\). The tracelessness condition (3.9), however, determines the \(SU(\ell)\) Grassmann variables (3.14) in terms of the \(SU(k)\) ones, (3.15). In conclusion, there are a total of \(2\mathrm{gcd}(k,r)\) dotted-fermion zero modes in the constant-flux instanton background. ### Undotted-fermion zero modes #### 3.4.1 The "diagonal": \(U(1)\), \(Su(\ell)\) and \(Su(k)\) undotted zero modes Now, we continue with the undotted fermions \(\lambda_{BC}\) and \(\lambda_{B^{\prime}C^{\prime}}\), i.e. 
their componets in the \(U(1)\), \(SU(k)\) and \(SU(\ell)\) directions. Because the abelian background (3.4, 3.5) commutes with the \(U(1)\), \(SU(k)\) and \(SU(\ell)\) generators, these "diagonal" components satisfy a free Dirac equation: \[\partial_{\mu}\bar{\sigma}_{\mu}\lambda_{C^{\prime}B^{\prime}} = 0,\] \[\partial_{\mu}\bar{\sigma}_{\mu}\lambda_{CB} = 0,\ \mathrm{with}\ \ \sum_{C^{\prime}=0}^{k-1}\lambda_{C^{\prime}C^{\prime}}+\sum_{C=0}^{\ell-1} \lambda_{CC}=0\, \tag{3.16}\] along with the \(SU(N)\) tracelessness condition (3.9). One needs to solve these equations with the boundary conditions (3.10) and (3.11). We now state the results, since the analysis is similar to that in [6; 11]. The first remark is that, following the steps outlined for the dotted zero modes, one finds that there are no normalizable solutions for the components of \(\lambda_{C^{\prime}B^{\prime}}\) and \(\lambda_{CB}\) with \(C^{\prime}\neq B^{\prime}\) and \(C\neq B\) obeying the boundary conditions. Next, we note that the only solution for \(\lambda_{CC}\) is the one where \(\lambda_{CC\;\alpha}=\eta_{\alpha}\), with a constant spinor \(\eta_{\alpha}\), for all \(C\) (this is needed to satisfy (3.11)). The tracelessness condition in (3.16), however, relates this to the \(\lambda_{B^{\prime}B^{\prime}}\) solutions on which we now focus. The boundary conditions (3.10) are satisfied by the constant solutions \[\lambda_{B^{\prime}C^{\prime}\;\alpha}=\delta_{B^{\prime}C^{\prime}}\sum_{j=0} ^{\rm gcd(k,r)-1}\vartheta_{\alpha}^{(j)}\sum_{n=0}^{\frac{k}{\rm gcd(k,r)}-1 }\delta_{B^{\prime},[j+nr]_{k}}, \tag{3.17}\] with \(\rm gcd(k,r)\) arbitrary constant Grassmann spinors \(\vartheta_{\alpha}^{(j)}\). We conclude that there are \(2\rm gcd(k,r)\) independent zero modes of \(\lambda_{B^{\prime}C^{\prime}}\) and, from the above remarks, of the all "diagonal" components of the undotted fermions considered in this Section. Note that the number of diagonal undotted zero modes is precisely the same as the number of the dotted fermion zero modes of Section 3.3. In particular, the contribution of the zero modes of Sections 3.3 and 3.4.1 to the index cancels out. #### 3.4.2 The "off-diagonal" \(k\times\ell\) and \(\ell\times k\) undotted zero modes. The zero modes most worthy of our attention, the ones which determine the nonabelian instanton solution to leading order in \(\Delta\), are the ones considered in this Section. Finding the off-diagonal undotted zero modes, the ones for \(\lambda_{C^{\prime}B}\) (\(k\times\ell\)) and \(\lambda_{CB^{\prime}}\) (\(\ell\times k\)), is the most important and least trivial part of our study. We find that there are \(r\) zero modes for \(\lambda_{C^{\prime}B}\) and \(r\) zero modes for \(\lambda_{CB^{\prime}}\), in agreement with the index theorem which requires that the number of undotted minus the number of dotted zero modes be \(2r\). The derivation of the results quoted in this Section is technically involved and the details are relegated to Appendix A. Here, we simply give the explicit formulae for the zero modes for \(\lambda_{C^{\prime}B}\), the \(k\times\ell\) ones.7 We find that in the background (3.4, 3.5), only one spinor component has \(r\) normalizable zero modes Footnote 7: Noting that the \(\ell\times k\) zero modes (which come with their own Grassmann parameters) are obtained by hermitean conjugation of \(\Phi^{(p)}\) in (3.18), as per the remark after (3.12). 
\[\lambda_{C^{\prime}B\;1} =\sum_{p=0}^{\frac{r}{\rm gcd(k,r)}-1}\eta^{[C^{\prime}+pk]_{r}} \;\Phi^{(p)}_{C^{\prime}B}(x,\hat{\phi}),\] \[\lambda_{C^{\prime}B\;2} =0. \tag{3.18}\] Here, \(\eta^{j}\), \(j=0,...,r-1\), are \(r\) Grassmann parameters associated with the zero modes (clearly, \([C^{\prime}+pk]_{r}\) takes \(r\) values). Notice that a given zero mode, proportional to \(\eta^{j}\) with some \(j\in\{0,...,r-1\}\), nontrivially intertwines the gauge indices in (3.18). Before giving the form of the functions \(\Phi^{(p)}\) governing the \(x\)-dependence of the zero modes (3.18), we introduce the notation \(\hat{\phi}^{C^{\prime}}_{\mu}\) to denote the way various gauge field holonomies appear in the equations governing the off diagonal zero modes. These combine the \(U(1)\)-holonomy \(z_{\mu}\) with the extra ones allowed when \(\gcd(k,r)>1\), as per the discussion around (3.5):8 Footnote 8: The reason that \(2\pi N\) (and not \(2\pi\ell\)) appears here is that \(\hat{\phi}^{C^{\prime}}\) encodes the action of the commutator on the off diagonal components \(\lambda_{C^{\prime}B}\). \[\hat{\phi}^{C^{\prime}}_{\mu}\equiv\phi^{C^{\prime}}_{\mu}-2\pi N \frac{z_{\mu}}{L_{\mu}},\ \text{with}\ \hat{\phi}^{C^{\prime}}_{\mu}=\hat{\phi}^{[C^{\prime}-r]_{k}}_{\mu}. \tag{3.19}\] The explicit solution for \(\hat{\phi}^{C^{\prime}}\) obeying the relations above (and from (3.5)) can be written out in a somewhat unwieldy form (which, however, serves to show that there are \(\gcd(k,r)\) independent holonomies for each \(\mu\))9 Footnote 9: We note that this is similar to eqn. (3.17) for the undotted diagonal zero modes of the next Section. \[\hat{\phi}^{C^{\prime}}_{\mu}=\sum_{j=0}^{\gcd(k,r)-1}\varphi^{j}_{\mu}\sum_{ n=0}^{\frac{k}{\gcd(k,r)}-1}\delta^{C^{\prime},[j+nr]_{k}}. \tag{3.20}\] Here, we use the notation (3.6), taking the range of \(C^{\prime}\) to be \(0...k-1\). The sum over \(n\) for each \(j\) simply incorporates the fact that the index \(C^{\prime}\) takes values an "orbit" of size \(\frac{k}{\gcd(k,r)}\). Each of the \(\gcd(k,r)\) "orbits," labelled by \(j\), has the same holonomy \(\phi^{j}_{\mu}\) and contains values of \(C^{\prime}\) jumping by \(r\) units, as required by commutativity of the holonomy with \(P_{k}\). Although (3.20) explicitly shows that, for each \(\mu\), there are \(\gcd(k,r)\) independent holonomies \(\varphi^{j}_{\mu}\), we prefer to further denote them as \(\hat{\phi}^{C^{\prime}}_{\mu}\), remembering the relations they obey. However, we make explicit use of (3.20) later on, see Section 5. The zero modes \(\lambda_{C^{\prime}B\,1}\) of (3.18) depend on \((x,\hat{\phi}^{C^{\prime}},\eta^{j})\). 
Their \(x\)- and \(\hat{\phi}\)-dependence is through the \(\frac{r}{\gcd(k,r)}\) functions \(\Phi^{(p)}\), given by (for derivation, see Appendix A): \[\Phi^{(p)}_{C^{\prime}B}(x,\hat{\phi}) =\sum_{m=p+\frac{rm^{\prime}}{\gcd(k,r)},\,m^{\prime}\in\mathbb{ Z}}\ \ \sum_{n^{\prime}\in\mathbb{Z}}e^{\frac{i2\pi z_{2}}{L_{2}}(m+\frac{2C^{\prime}-1-k}{2 k})}e^{\frac{i2\pi z_{4}}{L_{4}}(n^{\prime}-\frac{2B-1-\ell}{2\ell})}\] \[\ \ \ \ \times e^{-i\frac{\pi(1-k)}{k}}\big{(}C^{\prime}-\frac{1+k(1 -2m)}{2}\big{)}e^{i\frac{\pi(1-\ell)}{\ell}\big{(}B-\frac{1+\ell(2n^{\prime}+1 )}{2}\big{)}}\] \[\ \ \ \ \times e^{-\frac{\pi r}{kL_{1}L_{2}}\left[x_{1}-\frac{kL_{ 1}L_{2}}{2\pi r}(\hat{\phi}^{[C^{\prime}]r}_{2}-i\hat{\phi}^{[C^{\prime}]r}_{1 })-\frac{L_{1}}{r}\big{(}km+\frac{2C^{\prime}-1-k}{2}\big{)}\right]^{2}}\] \[\ \ \ \ \times e^{-\frac{\pi}{L_{3}L_{4}}\left[x_{3}-\frac{\ell L_{ 3}L_{4}}{2\pi}(\hat{\phi}^{[C^{\prime}]r}_{4}-i\hat{\phi}^{[C^{\prime}]r}_{3} )-L_{3}\big{(}\ell n^{\prime}-\frac{2B-1-\ell}{2}\big{)}\right]^{2}}. \tag{3.21}\] The explicit form of the functions \(\Phi^{(p)}\) will be useful later, in our study of the properties of the self-dual fractional instantons on the deformed torus. Eqns. (3.18, 3.19, 3.21) give the general normalizable solution of the massless undotted Weyl equation \(D_{\mu}\bar{\sigma}_{\mu}\lambda=0\) for \(\lambda_{C^{\prime}B\,\alpha}\) in the abelian constant-field strength background (14,15) of topological charge \(Q=\frac{r}{N}\). In summary of Section 3, we found that there is a number of dotted and undotted zero modes in the abelian background of topological charge \(\frac{r}{N}\). The total number is consistent with the index theorem. The solutions for the non-constant fermion zero modes will be used to construct the nonabelian self-dual solution of charge \(\frac{r}{N}\) on the deformed torus. Deforming the self-dual torus: small-\(\Delta\) expansion for the bosonic background with \(Q=\frac{r}{N}\) To remedy the zero modes problem we saw in the previous section, i.e., to lift the dotted zero modes, we now depart from the self-dual torus and search for a self-dual instanton solution with topological charge \(Q=\frac{r}{N}\) on a deformed \(\mathbb{T}^{4}\), following the strategy of [7; 8]. We write the general gauge field on the non-self-dual torus in the form \[A_{\mu}(x)=\hat{A}_{\mu}+\mathcal{S}^{\omega}_{\mu}(x)\;\omega+ \delta_{\mu}(x)\,. \tag{16}\] Here, \(\omega\) is the \(U(1)\) generator (6), \(\hat{A}_{\mu}\) is the abelian gauge field with constant field strength defined previously in (14) and \(\mathcal{S}^{\omega}_{\mu}(x)\) is the nonconstant field component along the \(U(1)\) generator. The non-abelian part \(\delta_{\mu}(x)\) is given by the \(N\times N\) matrix, which, as earlier in (13), (14), (15), is decomposed in a block form:10 Footnote 10: Here \(\mathcal{S}^{k}_{\mu}\) and \(\mathcal{S}^{l}_{\mu}\) are traceless \(su(k)\)- and \(su(l)\)-algebra elements, respectively, while \(\mathcal{W}^{k\times\ell}_{\mu}\) is a complex \(k\times\ell\) matrix with \(\mathcal{W}^{l\ell\times k}_{\mu}\) its hermitean conjugate. In the second (bracketed) term in (15) we have indicated the index notation used earlier in describing the zero modes of the adjoint fermions, recall (20). Here, we find it convenient to use the block matrix notation \(S^{k},S^{\ell},W^{k\times\ell},W^{\dagger\,\ell\times k}\) and will revert to using indices \(B^{\prime}C^{\prime},B^{\prime}C\), etc., when needed. 
\[\delta_{\mu}=\left[\begin{array}{cc}\mathcal{S}^{k}_{\mu}& \mathcal{W}^{k\times\ell}_{\mu}\\ \mathcal{W}^{l\ell\times k}_{\mu}&\mathcal{S}^{\ell}_{\mu}\end{array}\right] \quad\left(\equiv\left[\begin{array}{cc}||\mathcal{S}^{k}_{\mu\,B^{\prime}C ^{\prime}}||&||\mathcal{W}_{\mu\,B^{\prime}C}||\\ ||(\mathcal{W}^{\dagger}_{\mu})_{CB^{\prime}}||&||\mathcal{S}^{\ell}_{\mu\,BC }||\end{array}\right]\right)\,. \tag{17}\] The boundary conditions (1) with transition functions (13) imply that \(\mathcal{S}^{\omega}_{\mu}\) satisfy periodic boundary conditions in all directions (because \(\hat{A}_{\mu}\) absorbs the inhomogenous part of (1)): \[\mathcal{S}^{\omega}_{\mu}(x+\hat{e}_{\nu}L_{\nu})=\mathcal{S}^{ \omega}_{\mu}(x)\,. \tag{18}\] On the other hand, \(\mathcal{S}^{k}_{\mu}\), \(\mathcal{S}^{\ell}_{\mu}\), \(\mathcal{W}^{k\times\ell}_{\mu}\), and \(\mathcal{W}^{\dagger\ell\times k}_{\mu}\) satisfy exactly the same gaugino-field boundary conditions we discussed in the previous section, and we refrain from repeating (thus, the boundary conditions are given by equations (10), (11), (12), respectively, for \(\mathcal{S}^{k}_{\mu}\), \(\mathcal{S}^{\ell}_{\mu}\), \(\mathcal{W}^{k\times\ell}_{\mu}\), recalling (17) and Footnote 10). The field strength of (4.1), \(F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}+i[A_{\mu},A_{\nu}]\), is given by \[F_{\mu\nu} = \hat{F}_{\mu\nu}+F^{s}_{\mu\nu}\omega+\hat{D}_{\mu}\delta_{\nu}- \hat{D}_{\nu}\delta_{\mu}+i\left[\mathcal{S}^{\omega}_{\mu}\omega,\delta_{\nu} \right]+i\left[\delta_{\mu},\mathcal{S}^{\omega}_{\nu}\omega\right]+i[\delta_{ \mu},\delta_{\nu}]\,, \tag{4.4}\] \[\equiv \hat{F}_{\mu\nu}+F^{s}_{\mu\nu}\omega+\left[\begin{array}{cc}F^ {k}_{\mu\nu}&\mathcal{F}^{k\times\ell}_{\mu\nu}\\ \mathcal{F}^{\ell\times k}_{\mu\nu}&F^{\ell}_{\mu\nu}\end{array}\right]\,,\] where \(\hat{D}_{\mu}=\partial_{\mu}+i[\hat{A}_{\mu},\,]\) is the covariant derivative w.r.t. the gauge field \(\hat{A}_{\mu}\). 
Using (4.1, 4.2), we obtain: \[F^{s}_{\mu\nu} = \partial_{\mu}\mathcal{S}^{\omega}_{\nu}-\partial_{\nu}\mathcal{ S}^{\omega}_{\mu}\,,\] \[F^{k}_{\mu\nu} = \partial_{\mu}\mathcal{S}^{k}_{\nu}-\partial_{\nu}\mathcal{S}^{ k}_{\mu}+i[\mathcal{S}^{k}_{\mu},\mathcal{S}^{k}_{\nu}]+i\mathcal{W}^{k\times \ell}_{\mu}\mathcal{W}^{\dagger\ell\times k}_{\nu}-i\mathcal{W}^{k\times\ell }_{\nu}\mathcal{W}^{\dagger\ell\times k}_{\mu}\,,\] \[F^{\ell}_{\mu\nu} = \partial_{\mu}\mathcal{S}^{\ell}_{\nu}-\partial_{\nu}\mathcal{S} ^{\ell}_{\mu}+i[\mathcal{S}^{\ell}_{\mu},\mathcal{S}^{\ell}_{\nu}]+i\mathcal{ W}^{\dagger\ell\times k}_{\mu}\mathcal{W}^{k\times\ell}_{\nu}-i\mathcal{W}^{\dagger \ell\times k}_{\nu}\mathcal{W}^{k\times\ell}_{\mu}\,,\] \[\mathcal{F}^{k\times\ell}_{\mu\nu} = \hat{D}_{\mu}\mathcal{W}^{k\times\ell}_{\nu}-\hat{D}_{\nu} \mathcal{W}^{k\times\ell}_{\mu}+i\mathcal{S}^{k}_{\mu}\mathcal{W}^{k\times \ell}_{\nu}-i\mathcal{S}^{k}_{\nu}\mathcal{W}^{k\times\ell}_{\mu}+i \mathcal{W}^{k\times\ell}_{\mu}\mathcal{S}^{\ell}_{\nu}-i\mathcal{W}^{k\times \ell}_{\nu}\mathcal{S}^{\ell}_{\mu} \tag{4.5}\] \[+i2\pi N\left(\mathcal{S}^{\omega}_{\nu}\mathcal{W}^{k\times\ell }_{\nu}-\mathcal{S}^{\omega}_{\nu}\mathcal{W}^{k\times\ell}_{\mu}\right)\,,\] where \(\hat{D}_{\mu}\mathcal{W}^{k\times\ell}_{\nu}\) is understood as \[\hat{D}_{\mu}\mathcal{W}^{k\times\ell}_{\nu}=\left[\partial_{\mu}+i2\pi N\hat {A}^{\omega}_{\mu}\right]\mathcal{W}^{k\times\ell}_{\nu}\,, \tag{4.6}\] and we have written \(\hat{A}_{\mu}=\hat{A}^{\omega}_{\mu}\omega\), for \(\hat{A}_{\mu}\) from (3.4).11 Similarly, Footnote 11: For brevity, the nontrivial holonomies’ (allowed when \(\gcd(k,r)>1\)) are not explicitly shown here. They should, however, be included in the covariant derivatives in (4.6,4.7) and our final solution (4.21) does take these into account. \[\hat{D}_{\mu}\mathcal{W}^{\dagger\ell\times k}_{\nu}=\left[\partial_{\mu}-i2 \pi N\hat{A}^{\omega}_{\mu}\right]\mathcal{W}^{\dagger\ell\times k}_{\nu}\,. \tag{4.7}\] Next, we impose self duality on the background (4.1) on the deformed \(\mathbb{T}^{4}\). Imposing self-duality is equivalent (see e.g. [10]) to imposing the constraint on the field strength \[\bar{\sigma}_{\mu\nu}F_{\mu\nu}=0\,. \tag{4.8}\] where12\(\bar{\sigma}_{\mu\nu}=\frac{1}{2}(\bar{\sigma}_{\mu}\sigma_{\nu}-\bar{\sigma}_{ \nu}\sigma_{\mu})\). Now, we recall \(\hat{F}_{\mu\nu}=\hat{F}^{\omega}_{\mu\nu}\omega\), and use (3.2) to find \(\hat{F}^{\omega}_{12}=-\frac{r}{NkL_{1}L_{2}}\) and \(\hat{F}^{\omega}_{34}=-\frac{1}{N\ell L_{3}L_{4}}\). Recalling the properties of the self-dual \(\mathbb{T}^{4}\), eqn. (3.3), we also define the parameter \(\Delta\), which parametrizes the deviation from the self-dual torus: Footnote 12: Recall that the matrices \(\sigma_{\mu}\), \(\bar{\sigma}_{\mu}\) were defined in Footnote 5. \[\Delta\equiv\frac{r\ell L_{3}L_{4}-kL_{1}L_{2}}{\sqrt{V}}\,. \tag{4.9}\] We assume, without loss of generality, \(\Delta\geq 0\). Thus, we find that \[\hat{F}^{\omega}_{\mu\nu}\bar{\sigma}_{\mu\nu}=-\frac{2i\Delta}{Nk\ell\sqrt{V}} \sigma_{3}\,. \tag{4.10}\] To continue, for every four-vector \(\mathcal{V}_{\mu}\), we define the quaternions \(\mathcal{V}\equiv\sigma_{\mu}\mathcal{V}_{\mu}\) and \(\bar{\mathcal{V}}\equiv\bar{\sigma}_{\mu}\mathcal{V}_{\mu}\). 
Then, using (4.5) and (4.10), we find that self-duality (4.8) implies that \[\frac{1}{2}\bar{\sigma}_{\mu\nu}F_{\mu\nu}=\left(-\frac{i\Delta}{Nk\ell\sqrt{V} }\sigma_{3}+\bar{\partial}\mathcal{S}^{\omega}-\partial_{\mu}S^{\omega}_{\mu} \right)\omega+\left[\begin{array}{cc}\mathcal{A}^{k}&\mathcal{A}^{k\times \ell}\\ \mathcal{A}^{\dagger\ell\times k}&\mathcal{A}^{\ell}\end{array}\right]=0\,, \tag{4.11}\] where13 Footnote 13: Here and below, the terms that have sums over \(\mu\) should be multiplied by unit quaternion \(\sigma_{4}\), which we have omitted for brevity. Thus, temporarily not denoting explicitly that these are \(k\times\ell\) matrices, we warn the reader to keep in mind the difference between the quaternions, \(\mathcal{W}\equiv\mathcal{W}_{\mu}\sigma_{\mu}\), \(\bar{\mathcal{W}}=\bar{\sigma}_{\mu}W_{\mu}\), and the four-vector \(\mathcal{W}_{\mu}\) and, furthermore, note that \(\mathcal{W}^{\dagger}=\sigma_{\mu}W^{\dagger}_{\mu}\) and \(\bar{\mathcal{W}}^{\dagger}=\bar{\sigma}_{\mu}W^{\dagger}_{\mu}\). \[\mathcal{A}^{k} = \bar{\partial}\mathcal{S}^{k}-\partial_{\mu}\mathcal{S}^{k}_{\mu }-i\bar{\mathcal{S}}^{k}\mathcal{S}^{k}+i\mathcal{S}^{k}_{\mu}\mathcal{S}^{k} _{\mu}+i\bar{\mathcal{W}}^{k\times\ell}\mathcal{W}^{\dagger\ell\times k}-i \mathcal{W}^{k\times\ell}_{\mu}\mathcal{W}^{\dagger\ell\times k}_{\mu}\,,\] \[\mathcal{A}^{k\times\ell} = \bar{\bar{D}}\mathcal{W}^{k\times\ell}-\hat{D}_{\mu}\mathcal{W}^{k \times\ell}_{\mu}+i\bar{\mathcal{S}}^{k}\mathcal{W}^{k\times\ell}-i\mathcal{S }^{k}_{\mu}\mathcal{W}^{k\times\ell}_{\mu}+i\bar{\mathcal{W}}^{k\times\ell} \mathcal{S}^{\ell}-i\mathcal{W}^{k\times\ell}_{\mu}\mathcal{S}^{\ell}_{\mu} \tag{4.12}\] \[+i2\pi N\left(\bar{\mathcal{S}}^{\omega}\mathcal{W}^{k\times\ell} -\mathcal{S}^{\omega}_{\mu}\mathcal{W}^{k\times\ell}_{\mu}\right)\,,\] \[\mathcal{A}^{\ell} = \bar{\partial}\mathcal{S}^{\ell}-\partial_{\mu}\mathcal{S}^{\ell} _{\mu}-i\bar{\mathcal{S}}^{\ell}\mathcal{S}^{\ell}+i\mathcal{S}^{\ell}_{\mu} \mathcal{S}^{\ell}_{\mu}+i\bar{\mathcal{W}}^{\dagger\ell\times k}\mathcal{W}^{ k\times\ell}-i\mathcal{W}^{\dagger\ell\times k}_{\mu}\mathcal{W}^{k\times\ell}_{\mu}\,.\] In order to remove gauge redundancies, we impose the background gauge condition with respect to the field \(\hat{A}_{\mu}\): \[\hat{D}_{\mu}A_{\mu}=0 \tag{4.13}\] which in components reads: \[\partial_{\mu}\mathcal{S}^{\omega}_{\mu}=0\,,\partial_{\mu}\mathcal{S}^{k}_{ \mu}=0\,,\partial_{\mu}\mathcal{S}^{\ell}_{\mu}=0\,,\hat{D}_{\mu}\mathcal{W}^{ k\times\ell}_{\mu}=0\,,\hat{D}_{\mu}\mathcal{W}^{\dagger\ell\times k}_{\mu}=0\,. 
\tag{4.14}\] Using (4.14) in (4.11), we find the set of equations imposing the self-duality condition on the background (4.1): \[\left(-\frac{i2\pi\Delta}{Nk\sqrt{V}}\sigma_{3}+2\pi\ell\bar{ \partial}\mathcal{S}^{\omega}\right)I_{k}+\bar{\partial}\mathcal{S}^{k}-i\bar {\mathcal{S}}^{k}\mathcal{S}^{k}+i\mathcal{S}^{k}_{\mu}\mathcal{S}^{k}_{\mu}+i \bar{\mathcal{W}}^{k\times\ell}\mathcal{W}^{\dagger\ell\times k}-i\mathcal{W} ^{k\times\ell}_{\mu}\mathcal{W}^{\dagger\ell\times k}_{\mu} = 0\,,\] \[\left(\frac{i2\pi\Delta}{N\ell\sqrt{V}}\sigma_{3}-2\pi k\bar{ \partial}\mathcal{S}^{\omega}\right)I_{\ell}+\bar{\partial}\mathcal{S}^{\ell} -i\bar{\mathcal{S}}^{\ell}\mathcal{S}^{\ell}+i\mathcal{S}^{\ell}_{\mu} \mathcal{S}^{\ell}_{\mu}+i\bar{\mathcal{W}}^{\dagger\ell\times k}\mathcal{W}^{ k\times\ell}-i\mathcal{W}^{\dagger\ell\times k}_{\mu}\mathcal{W}^{k\times\ell}_{\mu} = 0\,,\] \[\bar{\bar{D}}\mathcal{W}^{k\times\ell}+i\bar{\mathcal{S}}^{k} \mathcal{W}^{k\times\ell}-i\mathcal{S}^{k}_{\mu}\mathcal{W}^{k\times\ell}_{\mu}+ i\bar{\mathcal{W}}^{k\times\ell}\mathcal{S}^{\ell}-i\mathcal{W}^{k\times\ell}_{\mu} \mathcal{S}^{\ell}_{\mu}+i2\pi N\left(\bar{\mathcal{S}}^{\omega}\mathcal{W}^{k \times\ell}-\mathcal{S}^{\omega}_{\mu}\mathcal{W}^{k\times\ell}_{\mu}\right) = 0\,.\] We note that here \(\bar{\tilde{D}}\equiv\bar{\sigma}_{\mu}\hat{D}_{\mu}\), precisely the Weyl operator for the undotted fermions, whose zero modes were studied in Section 3.4. The idea of the method introduced in [7] is that a solution of the self-duality conditions (4.15) can be obtained via series expansions in the deformation parameter \(\Delta\) of (4.9). The approximate solution of the self-duality equations thus obtained is then also an approximation to the minimal action solution of the equations of motion, i.e. a fractional instanton with \(Q=\frac{r}{N}\). Comparing the \(\Delta\) scaling of the various terms in (4.15), the \(\Delta\)-expansion is found to have the following form \[\mathcal{W}^{k\times\ell} = \sqrt{\Delta}\sum_{a=0}^{\infty}\Delta^{a}\mathcal{W}^{(a)k\times \ell}\,,\] \[\mathcal{S} = \Delta\sum_{a=0}^{\infty}\Delta^{a}\mathcal{S}^{(a)}\,, \tag{4.16}\] where \(\mathcal{S}\) accounts for \(\mathcal{S}^{\omega}\), \(\mathcal{S}^{k}\), and \(\mathcal{S}^{\ell}\). We proceed to leading order14 in \(\Delta\) by considering solutions of \(\mathcal{W}^{k\times\ell}\) to order \(\sqrt{\Delta}\) and \(\mathcal{S}\) to order \(\Delta\), thus keeping only the terms \(\mathcal{S}^{(0)}\) and \(\mathcal{W}^{(0)}\) in (4.16). Then, to this order, (4.15) gives Footnote 14: The \(\Delta\) expansion was tested to high orders, and found to converge (even to the infinite volume limit) in the two dimensional abelian Higgs model in [12]. Convergence is not well understood for the general case of \(SU(N)\) in four dimensions. For \(SU(2)\), the comparisons with the exact numerical solution (obtained by minimizing the lattice Yang-Mills action) of [7] give evidence for the convergence of the expansion for small \(\Delta\). It should be possible to analytically study the properties of higher orders in the expansion (4.16) of the solutions of (4.15); however, this rather formidable task is left for the future. 
\[\left(-\frac{i2\pi}{Nk\sqrt{V}}\sigma_{3}+2\pi\ell\bar{\partial }\mathcal{S}^{(0)\omega}\right)I_{k}+\bar{\partial}\mathcal{S}^{(0)k}+i\bar{ \mathcal{W}}^{(0)k\times\ell}\mathcal{W}^{\dagger(0)\ell\times k}-i\mathcal{W }^{(0)k\times\ell}_{\mu}\mathcal{W}^{\dagger(0)\ell\times k}_{\mu}=0\,,\] \[\left(\frac{i2\pi}{N\ell\sqrt{V}}\sigma_{3}-2\pi k\bar{\partial }\mathcal{S}^{(0)\omega}\right)I_{\ell}+\bar{\partial}\mathcal{S}^{(0)\ell}+i \bar{\mathcal{W}}^{\dagger(0)\ell\times k}\mathcal{W}^{(0)k\times\ell}-i \mathcal{W}^{\dagger(0)\ell\times k}_{\mu}\mathcal{W}^{(0)k\times\ell}_{\mu}= 0\,, \tag{4.17}\] and \[\bar{\bar{D}}\mathcal{W}^{(0)k\times\ell}=0\,. \tag{4.18}\] The strategy of solving the leading-order equations (4.17, 4.18) is as follows: 1. Solve (4.18) for the quaternions \(\mathcal{W}^{(0)k\times\ell}\). This equation has the form of two copies of the undotted fermion zero-mode equation, whose general normalizable solutions were already found in Section 3.4.2, recall (3.18). 2. Next, plug the general solution of (4.18) into (4.17). The result is a set of first-order differential equations for the quaternions \(\mathcal{S}^{(0)}\), with periodic boundary conditions for \(\mathcal{S}^{(0)\omega}\) and with \(\mathcal{S}^{(0)k},\mathcal{S}^{(0)\ell}\), obeying (3.10), (3.11), respectively. The resulting equations for \(\mathcal{S}^{(0)}\) have nonvanishing source terms, comprised of a constant piece (the one proportional to \(\sigma_{3}\) in (4.17)) and of terms quadratic in the just-found general solution of (4.18), \(\mathcal{W}^{(0)k\times\ell}\). Consistency of these equations requires that the source term be orthogonal to the zero modes of the differential operator acting on the various components of \(\mathcal{S}^{(0)}\). 3. One then needs to determine the zero modes of \(\bar{\partial}\), the operator acting on \(\mathcal{S}^{(\prime)}\), obeying the appropriate boundary conditions. This task was already accomplished in Section 3.4.1, since \(\bar{\partial}\) is simply the undotted diagonal Weyl operator. We then require orthogonality of these zero modes to the source terms in (4.17). On one hand, this will be shown to provide restrictions on the arbitrary coefficients appearing in the general solution of (4.18), \(\mathcal{W}^{(0)k\times\ell}\). The coefficients left arbitrary determine the moduli space of the multi-fractional instanton. On the other hand, imposing consistency of (4.17) allows one to determine \(\mathcal{S}^{(0)}\) by expanding both sides in a chosen basis of functions and equating the coefficients on both sides. The procedure outlined above can be, in principle, iterated to higher orders. The way this procedure works to higher orders was, in principle, studied in [12]. However, implementing it to determine the higher-order solution becomes technically challenging. Here, we shall only study the leading-order and determine the constraints of the arbitrary coefficients in \(\mathcal{W}^{(0)\;k\times\ell}\), which restrict the moduli space of the multi-fractional instantons. 
To begin implementing the above steps, we start with (4.18), written explicitly as \[\bar{\sigma}_{\mu}\hat{D}_{\mu}\left[\begin{array}{cc}\mathcal{W}^{(0)k \times\ell}_{4}+i\mathcal{W}^{(0)k\times\ell}_{3}&\mathcal{W}^{(0)k\times\ell }_{2}+i\mathcal{W}^{(0)k\times\ell}_{1}\\ -\mathcal{W}^{(0)k\times\ell}_{2}+i\mathcal{W}^{(0)k\times\ell}_{1}&\mathcal{ W}^{(0)k\times\ell}_{4}-i\mathcal{W}^{(0)k\times\ell}_{3}\end{array}\right]=0\,, \tag{4.19}\] where \(\hat{D}_{\mu}=\partial_{\mu}+i\;[\hat{A}_{\mu},]\) is the covariant derivative in the background (3.4). As already stated, (4.19) represent two copies of the undotted gaugino zero mode equations in the \(\Delta=0\) background \(A^{\omega}\), one for each column of the \(\mathcal{W}\)-quaternion given above. Further, as for the gauginos, one can show that normalizability on \(\mathbb{T}^{4}\) requires normalizability in the infinite \(x_{1},x_{3}\) plane of the simple harmonic oscillator wave functions, the solutions of (4.19). Thus, we borrow the solutions for the gauginos from Section 3.4.2, we find that equations (4.19) have normalizable solutions if and only if \[\mathcal{W}^{(0)k\times\ell}_{4}=i\mathcal{W}^{(0)k\times\ell}_{3}\,,\quad \mathcal{W}^{(0)k\times\ell}_{2}=i\mathcal{W}^{(0)k\times\ell}_{1} \tag{4.20}\] noting that these are nothing but the conditions of vanishing of \(\lambda_{C^{\prime}B\,2}\), recall (3.18). The solutions for \(\mathcal{W}_{4}^{(0)k\times\ell},\mathcal{W}_{2}^{(0)k\times\ell}\) are then borrowed from (3.18):15 Footnote 15: For further use, in (4.21), we also introduced the short-hand notation \(W_{2\;C^{\prime}C}\) and \(W_{4\;C^{\prime}C}\) for the general solutions of (4.18). \[\left(\mathcal{W}_{2}^{(0)k\times\ell}\right)_{C^{\prime}C} =V^{-1/4}\sum_{p=0}^{\frac{r}{\gcd(k,r)}-1}\mathcal{C}_{2}^{[C^{ \prime}+pk]_{r}}\Phi_{C^{\prime}C}^{(p)}(x,\hat{\phi})=:W_{2\;C^{\prime}C}\,,\] \[\left(\mathcal{W}_{4}^{(0)k\times\ell}\right)_{C^{\prime}C} =V^{-1/4}\sum_{p=0}^{\frac{r}{\gcd(k,r)}-1}\mathcal{C}_{4}^{[C^{ \prime}+pk]_{r}}\Phi_{C^{\prime}C}^{(p)}(x,\hat{\phi})=:W_{4\;C^{\prime}C}\,, \tag{4.21}\] where \(\Phi_{C^{\prime}C}^{(p)}(x,\hat{\phi})\) are given by (3.21) and the volume factor is included for future convenience. Thus, there are \(2r\) arbitrary coefficients \(\mathcal{C}_{2}^{[C^{\prime}+pk]_{r}}\) and \(\mathcal{C}_{4}^{[C^{\prime}+pk]_{r}}\), which are now complex bosonic variables. In the following, we shall discuss the physical significance of \(\mathcal{C}_{2,4}\). We now continue with the next step: imposing orthogonality to the various zero modes of \(\bar{\partial}=\bar{\sigma}_{\mu}\partial_{\mu}\), the solutions of the equation \(\bar{\partial}\mathcal{S}^{(0)}=0\). Notice that \(\bar{\partial}\) is precisely the Weyl operator for the diagonal undotted fermions discussed in Section 3.4.1 and that we shall borrow our results from that Section shortly. To continue, however, it is convenient to rewrite (4.17) using the index notation, recalling eqn. (4.2) and Footnote 10. This necessitates using (4.20) and the definition of the quaternions, in order to express everything through the general solutions of (4.18), denoted by \(W_{4\;(\text{or}2)\;C^{\prime}C}\) of (4.21). 
This produces, from the first equation of (4.17), an equation determining \(\mathcal{S}_{C^{\prime}B^{\prime}}\) (which includes the component \(\mathcal{S}^{\omega}\omega\) from (4.1)): \[\bar{\partial}\mathcal{S}_{C^{\prime}B^{\prime}}= \tag{4.22}\] \[i\left(\begin{array}{ccc}\frac{2\pi}{Nk\sqrt{V}}\delta_{C^{ \prime}B^{\prime}}-2\;(W_{2}W_{2}^{*}-W_{4}W_{4}^{*})_{C^{\prime}B^{\prime}}&4 \;(W_{2}W_{4}^{*})_{C^{\prime}B^{\prime}}\\ 4\;(W_{2}W_{4}^{*})_{C^{\prime}B^{\prime}}&-\frac{2\pi}{Nk\sqrt{V}}\delta_{C^{ \prime}B^{\prime}}+2\;(W_{2}W_{2}^{*}-W_{4}W_{4}^{*})_{C^{\prime}B^{\prime}} \end{array}\right)\;,\] where we introduced the shorthand notation, \((W_{2}W_{4}^{*})_{C^{\prime}B^{\prime}}\equiv W_{2\;C^{\prime}D}W_{4\;B^{ \prime}D}^{*}\), with a sum over \(D\) implied, and similar for the other contractions. Likewise, the equation for \(\mathcal{S}_{CB}\) obtained from the second of eqns. (4.17) reads: \[\bar{\partial}\mathcal{S}_{CB}= \tag{4.23}\] \[i\left(\begin{array}{ccc}-\frac{2\pi}{N\ell\sqrt{V}}\delta_{ CB}+2(W_{2}^{*}W_{2}-W_{4}^{*}W_{4})_{CB}&-4(W_{4}^{*}W_{2})_{CB}\\ -4(W_{2}^{*}W_{4})_{CB}&\frac{2\pi}{N\ell\sqrt{V}}\delta_{CB}-2(W_{2}^{*}W_{2} -W_{4}^{*}W_{4})_{CB}\end{array}\right)\;,\] using a similar shorthand (e.g. \((W_{2}^{*}W_{2})_{CB}\equiv W_{2\;D^{\prime}C}^{*}W_{2\;D^{\prime}B}\) with a sum over \(D^{\prime}\)). Next, we recall that the operator \(\bar{\partial}\) is the Weyl operator for the diagonal undotted fermions, whose zero modes were determined in Section 3.4.1. We also recall that \(\mathcal{S}\) is a quaternion, hence (similar to (4.18)), we can think of \(\mathcal{S}\) as of two sets of Weyl fermions, one for each column of the quaternion matrix. We can thus borrow the results for the zero modes, recalling (3.16) and (3.17), and then impose their orthogonality of the r.h.s. of (4.22, 4.23). As shown there, undotted fermions have \(2\mathrm{gcd}(k,r)\) constant zero modes. This implies that there are \(4\mathrm{gcd}(k,r)\) zero modes of \(\mathcal{S}\), which we label by an arbitrary _quaternionic_ coefficient \(\epsilon^{(j)}\), \(j=0,...,\mathrm{gcd}(k,r)-1\). The corresponding wave functions, which we denote \(s_{B^{\prime}C^{\prime}}\) and \(s_{BC}\), have only diagonal entries \[s_{B^{\prime}C^{\prime}} = \delta_{B^{\prime}C^{\prime}}\sum_{j=0}^{\mathrm{gcd}(k,r)-1} \epsilon^{(j)}\sum_{n=0}^{\frac{k}{\mathrm{gcd}(k,r)}-1}\delta_{B^{\prime},[j +nr]_{k}}\,,\] \[s_{BC} = -\frac{\delta_{BC}}{\ell}\sum_{B^{\prime}=0}^{k-1}s_{B^{\prime}B ^{\prime}}\,\ \forall B=0,...,\ell-1. \tag{4.24}\] The simplest condition is the orthogonality of \(s_{BC}\) (which is simply a constant quaternionic mode) to the source term in the equation for \(\mathcal{S}_{CB}\). Multiplying the source term by the \(s_{BC}\) zero mode, taking the trace, and integrating over \(\mathbb{T}^{4}\), we find that orthogonality implies that the integral of the trace of the r.h.s. over \(\mathbb{T}^{4}\) should vanish for every entry in the quaternion source on the r.h.s. of (4.23). Explicitly, this gives the conditions \[\int_{\mathbb{T}^{4}}(W_{2\,B^{\prime}C}^{*}W_{2\,B^{\prime}C}-W_ {4\,B^{\prime}C}^{*}W_{4\,B^{\prime}C}) = \frac{\pi}{N}\sqrt{V}\,\] \[\int_{\mathbb{T}^{4}}W_{4\,B^{\prime}C}^{*}W_{2\,B^{\prime}C} = 0\, \tag{4.25}\] with a sum over the full range of repeated indices implied. However, the conditions imposed by orthogonality to the \(4\mathrm{gcd}(k,r)\) zero modes \(s_{B^{\prime}B^{\prime}}\) labelled by \(\epsilon^{(j)}\) are more detailed than (4.25). 
Proceeding similar to the above, we find the \(\mathrm{gcd}(k,r)\) conditions: \[\sum_{B=0}^{\ell-1}\sum_{C^{\prime}=0}^{k-1}\sum_{n=0}^{\frac{k}{ \mathrm{gcd}(k,r)}-1}\delta_{C^{\prime},[j+nr]_{k}}\int_{\mathbb{T}^{4}}(W_{2 \,C^{\prime}B}W_{2\,C^{\prime}B}^{*}-W_{4\,C^{\prime}B}W_{4\,C^{\prime}B}^{*} )=\frac{\pi}{N\mathrm{gcd}(k,r)}\sqrt{V}\] \[\sum_{B=0}^{\ell-1}\sum_{C^{\prime}=0}^{k-1}\sum_{n=0}^{\frac{k}{ \mathrm{gcd}(k,r)}-1}\delta_{C^{\prime},[j+nr]_{k}}\int_{\mathbb{T}^{4}}W_{4 \,C^{\prime}B}^{*}W_{2\,C^{\prime}B}=0,\quad j=0,...,\mathrm{gcd}(k,r)-1\,. \tag{4.26}\] That the above \(\gcd(k,r)\) conditions are more general than (4.25) follows by observing that summing up the \(\gcd(k,r)\) conditions in each line of (4.26) (i.e., summing over \(j\)) we obtain (4.25). The importance of the conditions (4.26) is that they restrict the \(2r\) complex coefficients \(\mathcal{C}_{2}\) and \(\mathcal{C}_{4}\), and thus determine the moduli space of the multifractional instanton. Studying this is the subject of the next Section. ## 5 The moduli of the \(Q=\frac{r}{N}\) bosonic solution: compact vs. noncompact To study the constraints (4.25, 4.26) with \(W_{2}\) and \(W_{4}\) from (4.21), we now define, for each \(j=0,...,\gcd(k,r)-1\) and \(a,b\in\{2,4\}\): \[I^{ab}_{j}=\sum_{C^{\prime}=0}^{k-1}\sum_{n=0}^{\frac{k}{\gcd(k,r)}-1}\delta_ {C^{\prime},[j+nr]_{k}}\sum_{p,p^{\prime}=0}^{\frac{r}{\gcd(k,r)}-1}\frac{ \mathcal{C}_{a}^{[C^{\prime}+pk]_{r}}\ \mathcal{C}_{b}^{*\ [C^{\prime}+p^{\prime}k]_{r}}}{ \sqrt{V}}\int_{\mathbb{T}^{4}}\sum_{B=0}^{\ell-1}\Phi_{C^{\prime}B}^{(p)} \Phi_{C^{\prime}B}^{(p^{\prime})\,*}. \tag{5.1}\] In terms of \(I^{ab}_{j}\), the constraints (4.25, 4.26) take the form: \[I^{22}_{j}-I^{44}_{j} = \frac{\pi\sqrt{V}}{\gcd(k,r)N}, \tag{5.2}\] \[I^{42}_{j} = 0,\quad\text{where}\ \ j=0,...,\gcd(k,r)-1\.\] The expressions (5.1) are evaluated in Appendix B. Substituting \(I^{ab}_{j}\) from (B.6) in, we find the constraints (4.25, 4.26) expressed in terms of the undetermined complex coefficients \(\mathcal{C}_{2}^{A}\) and \(\mathcal{C}_{4}^{A}\) from the solution of the equations for \(\mathcal{W}_{\mu}\) (4.21):16 Footnote 16: We also note that the origin of the \((\varphi_{1,3}^{j})^{2}\)-terms on the r.h.s. is in the imaginary \(\hat{\phi}_{1},\hat{\phi}_{3}\)-terms appearing in the last two lines in \(\Phi^{(p)}\) from (3.21). One can show that they can be absorbed in the definition of the coefficients \(\mathcal{C}^{j}\) (or \(\eta^{j}\)). \[\sum_{A_{j}\in S_{j}}\mathcal{C}_{2}^{A_{j}}\ \mathcal{C}_{2}^{*\ A_{j}}- \mathcal{C}_{4}^{A_{j}}\ \mathcal{C}_{4}^{*\ A_{j}} = \frac{2\pi}{\gcd(k,r)N}\sqrt{\frac{rL_{1}L_{3}}{\ell kL_{2}L_{4}} }\ e^{-\frac{L_{1}L_{2}k}{2\pi r}(\varphi_{1}^{j})^{2}}\ e^{-\frac{L_{3}L_{4} \ell}{2\pi}(\varphi_{3}^{j})^{2}},\] \[\sum_{A_{j}\in S_{j}}\mathcal{C}_{2}^{A_{j}}\ \mathcal{C}_{4}^{*\ A_{j}} = 0. \tag{5.3}\] Here, \(S_{j}\) are \(\gcd(k,r)\) sets of integers (\(\in\{0,...,r-1\}\)), defined in (B.4) and repeated here for convenience: \[S_{j}=\bigg{\{}[[j+nr]_{k}+pk]_{r},\text{for}\ n=0,...\frac{k}{\gcd(k,r)}-1, \text{and}\ p=0,...,\frac{r}{\gcd(k,r)}-1\bigg{\}}.\] Repeated entries in \(S_{j}\) are identified so that each set has \(\frac{r}{\gcd(k,r)}\) elements. The union of all sets \(S_{j}\) is the set \(\{0,...,r-1\}\). As we shall shortly see, the structure of the "moduli space" of \({\cal C}^{A}_{2,4}\) defined by (5.3) is quite rich. 
Let us, however, first count the number of moduli for general \(k\) and \(r>1\), taking into account the constraints (5.3). First, there are \(4\gcd(k,r)\) Wilson lines \(\varphi^{j}_{\mu}\), as per (3.20). Then, there are \(2r\) real components of \({\cal C}^{A}_{2}\) and \(2r\) real components of \({\cal C}^{A}_{4}\). Thus the total number of real moduli is \(4r+4\gcd(k,r)\). These are subject to the constraints of eqn. (5.3): the \(\gcd(k,r)\) real constraints on the first line and \(2\gcd(k,r)\) real constraints on the second line. Thus, it would appear that the number of moduli minus the number of constraints is \(4r+\gcd(k,r)\). We notice, however, that the gauge conditions (4.14) are invariant under constant gauge transformations in the \(\gcd(k,r)\) Cartan directions, the ones along the allowed holonomies (3.20) (i.e. ones that commute with the transition functions).17 Thus, the total number of bosonic moduli for \(k\neq r>1\) is \(4r\), as required by the index theorem for a selfdual solution. Footnote 17: In the next Section, we shall explicitly see that no gauge invariant characterizing the instanton depends on these phases. We now consider the various cases in detail: 1. **The case \({\bf k}={\bf r}\)**. This case is singled out by the fact that there are \(k\) complex coefficients \({\cal C}^{A}_{2}\) (and \(k\)\({\cal C}^{A}_{4}\)). In addition, the \(r\) sets \(S_{j}\) are such that each contains a single element, one of the \(r\) allowed values of \(A\). Thus the \(r(=k)\) constraints become, with \(c\) a real number, determined by the r.h.s. of (5.3): \[{\cal C}^{A}_{2}\;{\cal C}^{*\,A}_{2}-{\cal C}^{A}_{4}\;{\cal C}^{* \,A}_{4} = c^{2}\;(\mbox{no sum over }A)\,,\] (5.4) \[{\cal C}^{A}_{2}\;{\cal C}^{*\,A}_{4} = 0\qquad\implies{\cal C}^{A}_{4}=0,\;{\cal C}^{A}_{2}=e^{i \alpha_{A}}c,\;\forall\;A\in\{0,...,r-1\}.\] Thus, all "moduli" \({\cal C}^{A}_{2,4}\) are fixed up to \(r\) undetermined phases \(\alpha_{A}\). These phases are unphysical and correspond to the already mentioned ability to perform \(r\) (=gcd\((k,r)\)) constant gauge transformations preserving the gauge conditions (4.14). Thus, the only moduli left are the \(r\) phases \(\varphi^{j}_{\mu}\), \(j=0,...,r\), recall (3.20). Thus the multifractional instanton obtained for \(k=r\), with \(Q=\frac{r}{N}\), has \(4r\) compact moduli, as expected from the index theorem. Further studies of the instantons for \(k=r\) and the interpretation of these moduli will be discussed in the next Section. 2. **The case \({\bf k}\neq{\bf r},{\bf r}>{\bf 1}\).**18 This case is quite different. Here the \(r\) sets \(S_{j}\) contain more than a single number each. Thus, the second equation in (5.4) does not set any modulus to zero (recall that it required that all \({\cal C}_{4}^{A}\) vanish for \(k=r\)). Instead, as we argue below, the constraints permit the moduli \({\cal C}_{2,4}\) to grow without bound, thus making the "moduli" space noncompact. To illustrate the noncompactness for \(k\neq r>1\), we abandon generality and focus on a simple example \(r=2,k=3\), a case with \(\gcd(k,r)=1\) (we shall further use this example in the following). Here, there is only a single set \(S_{j}\), \(S_{0}=\{0,1\}\) and after the following relabeling, with all \(x\)'s and \(y\)'s real,19 Footnote 19: A trivial rescaling setting the r.h.s. of the first equation in (5.3) to unity is not explicitly shown. 
\[{\cal C}_{2}^{0}\to x_{1}+iy_{1}\,,\quad{\cal C}_{4}^{0}\to x_{2}+iy_{2}\,, \quad{\cal C}_{2}^{1}\to x_{3}+iy_{3}\,,\quad{\cal C}_{4}^{1}\to x_{4}+iy_{4}\,, \tag{5.5}\] we obtain for eqns. (5.3): \[x_{1}^{2}+y_{1}^{2}+x_{3}^{2}+y_{3}^{2}-x_{2}^{2}-y_{2}^{2}-x_{4} ^{2}-y_{4}^{2} = 1\,,\] \[x_{1}x_{2}+y_{1}y_{2}+x_{3}x_{4}+y_{3}y_{4} = 0\,,\] \[x_{2}y_{1}-x_{1}y_{2}+y_{3}x_{4}-x_{3}y_{4} = 0\,. \tag{5.6}\] Conditions (5.6) eliminate 3 out of 8 real parameters, leaving 4 physical parameters that parameterize the moduli space in addition to the single arbitrary unphysical phase mentioned above (recall that here \(\gcd(k,r)\)=1). The moduli space spanned by the hypersurface given by the constraints (5.6) is non-compact. To see this, we set for simplicity \(x_{2}=y_{1}=y_{3}=x_{4}=0\). Then, the constraints become \[x_{1}y_{2}=-x_{3}y_{4}\,,\quad x_{1}^{2}-y_{2}^{2}+x_{3}^{2}-y_{4}^{2}=1\,. \tag{5.7}\] For every \(x_{3}=y_{4}\in(-\infty,\infty)\) we find \[x_{1}^{2}=\frac{x_{3}^{4}}{x_{1}^{2}}+1\,, \tag{5.8}\] which has at least two real solutions of \(x_{1}\). We also find that \(x_{1}\to\infty\) as \(x_{3}=y_{4}\to\infty\). We conclude that the moduli space is non-compact. For a later convenience, we parametrize the asymptotic region (\(u\to\infty\)) of this noncompact direction of the moduli space as \[{\cal C}_{2}^{0}\sim\pm u\,,\quad{\cal C}_{2}^{1}\sim u\,,\quad{\cal C}_{4}^ {0}\sim\mp iu\,,\quad{\cal C}_{4}^{1}\sim iu\,. \tag{5.9}\] It is easy to see, even without following the derivation, that (5.9) obey (5.3) with vanishing r.h.s., i.e. at \(u\to\infty\) The presence of noncompact moduli for the \(k\neq r\) instantons is difficult to interpret in a \(\mathbb{T}^{4}\) geometry. In the later Sections, we shall see that on this noncompact moduli space, \(\mathcal{O}(\Delta)\) gauge invariants characterizing the multifractional instanton grow without bounds--see the end of Section 6.1 for a brief discussion of the blowup and Appendix D for details of its derivation. This blow up clashes with the spirit of the \(\Delta\) expansion. As we mentioned in the Introduction, it would be nice to achieve a deeper understanding of this finding. ## 6 Local gauge invariants of the \(Q=\frac{r}{N}\) solution and its "dissociation" In this Section, we give expressions for local gauge invariant densities characterizing the multifractional instanton to order \(\Delta\). These expressions are evaluated in the Appendices. We use the results to, first, show that \(\mathcal{O}(\Delta)\) local gauge invariants grow without bound along the noncompact moduli directions found for \(k\neq r\), and, second, to argue for the fractionalization of the \(k=r\) multifractional instanton into \(r\) identical lumps located at positions on \(\mathbb{T}^{4}\) determined by the \(r\) distinct holonomies/moduli. ### Gauge-invariant local densities to order \(\Delta\) and their blow up for \(k\neq r\) The gauge-invariant local density of the lowest scaling dimension is \[\operatorname{tr}\left[F_{\mu_{1}\nu_{1}}F_{\mu_{2}\nu_{2}}\right]\,, \tag{6.1}\] where \[F_{\mu\nu}=\left(F_{\mu\nu}^{\omega}+F_{\mu\nu}^{s}\right)\omega+\left[ \begin{array}{cc}F_{\mu\nu}^{k}&\mathcal{F}_{\mu\nu}\\ \mathcal{F}_{\mu\nu}^{\dagger}&F_{\mu\nu}^{\ell}\end{array}\right]\,, \tag{6.2}\] and we recall that the components of (6.2) were already defined in (4.5).20 Footnote 20: For brevity, we have omitted the \(k\times\ell\) and \(\ell\times k\) superscripts in writing (6.2). 
In Appendix C, we compute the various field strength components appearing in (6.2) to order \(\Delta\) (shown in eqn. (C.13)) as well as the action density and action. Then, following the same steps used in deriving the action density there, we obtain for eqn. (6.1) to order \(\Delta\) \[\text{tr}\left[F_{\mu_{1}\nu_{1}}F_{\mu_{2}\nu_{2}}\right]=\] \[\text{tr}[\omega^{2}]\left\{\hat{F}^{\omega}_{\mu_{1}\nu_{1}}\hat{F }^{\omega}_{\mu_{2}\nu_{2}}+\Delta\hat{F}^{\omega}_{\mu_{1}\nu_{1}}\left( \partial_{\mu_{2}}{\cal S}^{(0)\omega}_{\nu_{2}}-\partial_{\nu_{2}}{\cal S}^{ (0)\omega}_{\mu_{2}}\right)+\Delta\hat{F}^{\omega}_{\mu_{2}\nu_{2}}\left( \partial_{\mu_{1}}{\cal S}^{(0)\omega}_{\nu_{1}}-\partial_{\nu_{1}}{\cal S}^{ (0)\omega}_{\mu_{1}}\right)\right\}\] \[+2\pi\ell\Delta\hat{F}^{\omega}_{\mu_{1}\nu_{1}}\text{tr}_{k} \left[\partial_{\mu_{2}}{\cal S}^{(0)k}_{\nu_{2}}-\partial_{\nu_{2}}{\cal S}^{ (0)k}_{\mu_{2}}\right]+2\pi\ell\Delta\hat{F}^{\omega}_{\mu_{2}\nu_{2}}\text{tr }_{k}\left[\partial_{\mu_{1}}{\cal S}^{(0)k}_{\nu_{1}}-\partial_{\nu_{1}}{\cal S }^{(0)k}_{\mu_{1}}\right]\] \[-2\pi k\Delta\hat{F}^{\omega}_{\mu_{1}\nu_{1}}\text{tr}_{\ell} \left[\partial_{\mu_{2}}{\cal S}^{(0)\ell}_{\nu_{2}}-\partial_{\nu_{2}}{\cal S }^{(0)\ell}_{\mu_{2}}\right]-2\pi k\Delta\hat{F}^{\omega}_{\mu_{2}\nu_{2}}\text {tr}_{\ell}\left[\partial_{\mu_{1}}{\cal S}^{(0)\ell}_{\nu_{1}}-\partial_{\nu_ {1}}{\cal S}^{(0)\ell}_{\mu_{1}}\right]\] \[+i2\pi N\Delta\hat{F}^{\omega}_{\mu_{1}\nu_{1}}\text{tr}_{k} \left[{\cal W}_{\mu_{2}}{\cal W}^{\dagger}_{\nu_{2}}-{\cal W}_{\nu_{2}}{\cal W }^{\dagger}_{\mu_{2}}\right]+i2\pi N\Delta\hat{F}^{\omega}_{\mu_{2}\nu_{2}} \text{tr}_{k}\left[{\cal W}_{\mu_{1}}{\cal W}^{\dagger}_{\nu_{1}}-{\cal W}_{ \nu_{1}}{\cal W}^{\dagger}_{\mu_{1}}\right]\] \[+\Delta\text{tr}_{k}\left({\cal F}_{\mu_{1}\nu_{1}}{\cal F}^{ \dagger}_{\mu_{2}\nu_{2}}\right)+\Delta\text{tr}_{\ell}\left({\cal F}^{ \dagger}_{\mu_{1}\nu_{1}}{\cal F}_{\ \mu_{2}\nu_{2}}\right)\,. \tag{6.3}\] Using \(\text{tr}_{\ell}{\cal S}^{(0\ell)}_{\mu}=\text{tr}_{k}{\cal S}^{(0k)}_{\mu}=0\), we obtain \[\text{tr}\left[F_{\mu_{1}\nu_{1}}F_{\mu_{2}\nu_{2}}\right]=\] \[\text{tr}[\omega^{2}]\left\{\hat{F}^{\omega}_{\mu_{1}\nu_{1}}\hat{ F}^{\omega}_{\mu_{2}\nu_{2}}+\Delta\hat{F}^{\omega}_{\mu_{1}\nu_{1}}\left( \partial_{\mu_{2}}{\cal S}^{(0)\omega}_{\nu_{2}}-\partial_{\nu_{2}}{\cal S}^{ (0)\omega}_{\mu_{2}}\right)+\Delta\hat{F}^{\omega}_{\mu_{2}\nu_{2}}\left( \partial_{\mu_{1}}{\cal S}^{(0)\omega}_{\nu_{1}}-\partial_{\nu_{1}}{\cal S}^ {(0)\omega}_{\mu_{1}}\right)\right\}\] \[+i2\pi N\Delta\hat{F}^{\omega}_{\mu_{1}\nu_{1}}\text{tr}_{k} \left[{\cal W}_{\mu_{2}}{\cal W}^{\dagger}_{\nu_{2}}-{\cal W}_{\nu_{2}}{\cal W }^{\dagger}_{\mu_{2}}\right]+i2\pi N\Delta\hat{F}^{\omega}_{\mu_{2}\nu_{2}} \text{tr}_{k}\left[{\cal W}_{\mu_{1}}{\cal W}^{\dagger}_{\nu_{1}}-{\cal W}_{ \nu_{1}}{\cal W}^{\dagger}_{\mu_{1}}\right]\] \[+\Delta\text{tr}_{k}\left({\cal F}_{\mu_{1}\nu_{1}}{\cal F}^{ \dagger}_{\mu_{2}\nu_{2}}\right)+\Delta\text{tr}_{\ell}\left({\cal F}^{ \dagger}_{\mu_{1}\nu_{1}}{\cal F}_{\ \mu_{2}\nu_{2}}\right)\,. \tag{6.4}\] In Appendix D, we compute (for definiteness) the gauge invariant density \(\text{tr}\left[F_{34}F_{34}\right]\) for the \(k\neq r\) solution and show that it grows without bounds along the noncompact moduli direction of (5.9). 
This local gauge invariant, from (6.4), is given by \[\text{tr}\left[F_{34}F_{34}\right]=\] \[\text{tr}[\omega^{2}]\left\{\hat{F}^{\omega}_{34}\hat{F}^{\omega}_ {34}+2\Delta\hat{F}^{\omega}_{34}\left(\partial_{3}{\cal S}^{(0)\omega}_{4}- \partial_{4}{\cal S}^{(0)\omega}_{3}\right)\right\}+i4\pi N\Delta\hat{F}^{ \omega}_{34}\text{tr}_{k}\left[{\cal W}_{3}{\cal W}^{\dagger}_{4}-{\cal W}_{4}{ \cal W}^{\dagger}_{3}\right]=\] \[\text{tr}[\omega^{2}]\left\{\hat{F}^{\omega}_{34}\hat{F}^{\omega}_ {34}+2\Delta\hat{F}^{\omega}_{34}\left(\partial_{3}{\cal S}^{(0)\omega}_{4}- \partial_{4}{\cal S}^{(0)\omega}_{3}\right)\right\}+8\pi N\Delta\hat{F}^{ \omega}_{34}\text{tr}_{k}\left[{\cal W}_{4}{\cal W}^{\dagger}_{4}\right]\,, \tag{6.5}\] and we used \({\cal W}_{3}=-i{\cal W}_{4}\). To show the blow up, we use the example \(r=2\), \(k=3\) studied in Section 5. In Appendix D, we show that in the noncompact direction (5.9) the \({\cal O}(\Delta)\) gauge invariant blows up as \(u\to\infty\). This runaway behaviour of local gauge invariant densities along the noncompact moduli space runs counter the spirit of the \(\Delta\)-expansion. Thus, in what follows, we concentrate on the properties of the \(k=r\) solutions with compact moduli space. ### Fractionalization of solutions with topological charges \(r>1\) #### 6.2.1 Bosonic gauge invariant densities In this section, we use the results for the local gauge invariants to argue that instantons with topological charges \(r>1\) dissociate into \(r\) identical components. It is clear from the discussion in the previous section that unless one takes \(k=r\), one faces the undesired runaway behavior of the gauge-invariant densities. Thus, we limit our discussion to the case \(k=r\), where we show that the gauge-invariant densities take the form of a sum over \(r\) independent lumps centered around \(r\) distinct holonomies. To this end, consider (108) taking \(\mu_{1}=\mu_{3}=1,\mu_{2}=\mu_{4}=2\). Thus, one obtains \[\operatorname{tr}\left[F_{12}F_{12}\right]=\operatorname{tr}[\omega^{2}]\left\{ \hat{F}_{12}^{\omega}\hat{F}_{12}^{\omega}+2\Delta\hat{F}_{12}^{\omega}\left( \partial_{1}\mathcal{S}_{2}^{(0)\omega}-\partial_{2}\mathcal{S}_{1}^{(0) \omega}\right)\right\}+8\pi N\Delta\hat{F}_{12}^{\omega}\text{tr}_{k}\left[ \mathcal{W}_{2}\mathcal{W}_{2}^{\dagger}\right]\,, \tag{109}\] where, using (110), we find \[\left(\partial_{1}\mathcal{S}_{2}^{(0)\omega}-\partial_{2} \mathcal{S}_{1}^{(0)\omega}\right) = -\left(\pi\ell k\Box\right)^{-1}\left(\partial_{1}^{2}+\partial_ {2}^{2}\right)\text{tr}_{k}\left[\mathcal{W}_{2}^{(0)}\mathcal{W}_{2}^{ \dagger(0)}\right]\,.\] Here, \[\mathcal{W}_{2\,C^{\prime},C}^{(0)}(x) = V^{-1/4}\mathcal{C}_{2}^{C^{\prime}}\Phi_{C^{\prime},C}^{(0)}(x, \hat{\phi})\,,\quad C^{\prime}=1,2,...,k=r\,,\quad C=1,2,..,\ell\,. 
\tag{110}\] It is more convenient to express \(\Phi_{C^{\prime},C}^{(0)}(x,\hat{\phi})\) in the form given in (107) \[\Phi_{C^{\prime},C}^{(0)}(x,\hat{\phi}) = e^{\frac{kL_{1}L_{2}}{2\pi r}\hat{\phi}_{1}^{C^{\prime}}\left(i \hat{\phi}_{2}^{C^{\prime}}+\hat{\phi}_{1}^{C^{\prime}}/2\right)}e^{\frac{L_{ 2}L_{4}}{2\pi}\hat{\phi}_{3}^{C^{\prime}}\left(i\hat{\phi}_{4}^{C^{\prime}}+ \hat{\phi}_{3}^{C^{\prime}}/2\right)}e^{-i\hat{\phi}_{1}^{C^{\prime}}x_{1}}e^{ -i\hat{\phi}_{3}^{C^{\prime}}x_{3}} \tag{111}\] \[\times\sum_{m^{\prime}\in\mathbb{Z}}\sum_{n^{\prime}\in\mathbb{Z} }e^{i\left(\frac{2\pi x_{2}}{L_{2}}+L_{1}\hat{\phi}_{1}^{C^{\prime}}\right) \left(m^{\prime}+\frac{2C^{\prime}-1-k}{2k}\right)}e^{i\left(\frac{2\pi x_{4} }{L_{4}}+\ell L_{3}\hat{\phi}_{3}^{C^{\prime}}\right)\left(n^{\prime}-\frac{2 C-1-\ell}{2\ell}\right)}\] \[\times e^{-i\frac{\pi(1-k)}{k}\left(C^{\prime}-\frac{1+k(1-2m)}{2 }\right)}e^{i\frac{\pi(1-\ell)}{\ell}\left(C-\frac{1+\ell(2n^{\prime}+1)}{2} \right)}\] \[\times e^{-\frac{\pi r}{L_{1}L_{2}}\left[x_{1}-\frac{L_{1}L_{2}}{2 \pi}\hat{\phi}_{2}^{C^{\prime}}-\frac{L_{1}}{k}\left(km^{\prime}+\frac{2C^{ \prime}-1-k}{2}\right)\right]^{2}}\] \[\times e^{-\frac{\pi}{L_{3}L_{4}}\left[x_{3}-\frac{\ell L_{3}L_{4}}{2 \pi}\hat{\phi}_{4}^{C^{\prime}}-L_{3}\left(\ell n^{\prime}-\frac{2C-1-\ell}{2} \right)\right]^{2}}\,.\] The above eqns. (109, 110) imply that the computation of the gauge-invariant density \(\operatorname{tr}\left[F_{12}F_{12}\right]\) requires finding the quantity \[\text{tr}_{k}\left[\mathcal{W}_{2}^{(0)}\mathcal{W}_{2}^{\dagger(0)}\right]= \sum_{C^{\prime}=1}^{r}\left(\sum_{C=1}^{\ell}|\mathcal{C}_{2}^{C^{\prime}}|^{ 2}|\Phi_{C^{\prime},C}^{(0)}(x,\hat{\phi})|^{2}\right)\,. \tag{112}\] To further study (112), we need to take into account the fact that the \(r\) coefficients \(\mathcal{C}_{2}\) are determined by the top equation in (107), as described in (107). It is important that \(\mathcal{C}_{2}\) do depend on the holonomies, which were absorbed into the coefficient \(c\) in (107). Taking this into account,21 we find, after some rearrangement, that the expression (6.10), which determines \(\mathrm{tr}\left[F_{12}F_{12}\right]\) to order \(\Delta\) has the following form:22 Footnote 21: The \(\hat{\phi}_{1,3}\)-dependence of \(\mathcal{C}_{2}\) cancels the \((\hat{\phi}_{1})^{2}\) and \((\hat{\phi}_{3})^{2}\) terms in the exponent on the first line of (6.9). This ensures that gauge invariant quantities have periodic dependence on the holonomies. Footnote 22: Up to an inessential \(L_{\mu},r,\ell,N\)-dependent constant which can be easily determined. 
\[\mathrm{tr}_{k}\left[\mathcal{W}_{2}^{(0)}\mathcal{W}_{2}^{(0)} \right]\sim\] \[\sum_{C^{\prime}=1}^{r}\bigg{|}\sum_{m^{\prime}\in\mathbb{Z}}\ e^{i \left(\frac{2\pi x_{2}}{L_{2}}+L_{1}\hat{\phi}_{1}^{C^{\prime}}\right)m^{ \prime}-\frac{\pi}{L_{1}L_{2}}\left[x_{1}-\frac{L_{1}L_{2}}{2\pi}\hat{\phi}_{2 }^{C^{\prime}}-\frac{L_{1}C^{\prime}}{r}-L_{1}(m^{\prime}-\frac{1+r}{2r}) \right]^{2}}\bigg{|}^{2}\] \[\qquad\times\bigg{|}\sum_{n^{\prime}\in\mathbb{Z}}\ e^{i\left( \frac{2\pi x_{4}}{L_{4}}+L_{3}\hat{\phi}_{3}^{C^{\prime}}\right)n^{\prime}- \frac{\pi}{L_{3}L_{4}}\left[x_{3}-\frac{\ell L_{3}L_{4}}{2\pi}\hat{\phi}_{4}^{ C^{\prime}}-L_{3}\left(\pi n^{\prime}+\frac{1+\ell}{2}\right)\right]^{2}}\bigg{|}^{2}\] \[=:\sum_{C^{\prime}=1}^{r}F(x_{1}-\frac{L_{1}L_{2}}{2\pi}\hat{ \phi}_{2}^{C^{\prime}}-\frac{L_{1}C^{\prime}}{r},\ x_{2}+\frac{L_{1}L_{2}}{2 \pi}\hat{\phi}_{1}^{C^{\prime}},\ x_{3}-\frac{\ell L_{3}L_{4}}{2\pi}\hat{\phi} _{4}^{C^{\prime}},\ x_{4}+\frac{\ell L_{3}L_{4}}{2\pi}\hat{\phi}_{3}^{C^{ \prime}})\,. \tag{6.11}\] As indicated on the last line above, for every \(C^{\prime}=1,2,..,r\), the summand is given by the same function \(F(x_{1},x_{2},x_{3},x_{4})\), implicitly defined above, but centered at a different point \(x_{\mu}\) on \(\mathbb{T}^{4}\). The position of each lump is determined by the moduli \(\hat{\phi}_{\mu}^{C^{\prime}}\), \(\mu=1,2,3,4\), \(C^{\prime}=1,...,r\). The size of the lumps is, of course, set by the size of \(\mathbb{T}^{4}\), the only scale of the problem. Thus, the "lumps" we find are not well isolated, but strongly overlapping, rather like a liquid than a dilute gas (see Figure 1 for an illustration). #### 6.2.2 Fermionic zero modes and their localization The conclusion of the above analysis is that the local gauge invariant density of the multifractional instanton, \(\mathrm{tr}\left[F_{12}F_{12}\right]\), takes the form of a sum of \(r\) identical lumps, each centered at \(r\) distinct holonomies. Thus, the solution of topological charge \(r/N\) can be thought of as composed of \(r\) distinct lumps. Each lump is expected to contribute \(1/N\)-th of the total topological charge. This expectation is strengthened by considering the fermion zero modes in the \(Q=\frac{r}{N}\) self-dual solution. In Appendix E, we show that there are \(2r\) zero modes, labeled by a 2-spinor \(\bar{\eta}_{\alpha}^{C^{\prime}}\), with \(C^{\prime}=1,...r\). To order \(\mathcal{O}(\sqrt{\Delta})\), the \(x\)-dependence of the zero modes appears in the off-diagonal components: \[\lambda_{1\,C^{\prime}D} \sim \bar{\eta}_{2}^{C^{\prime}}(\partial_{3}+i\hat{\phi}_{3}^{C^{ \prime}})\Phi_{C^{\prime},C}^{(0)}(x,\hat{\phi}))\equiv\bar{\eta}_{2}^{C^{ \prime}}\mathcal{G}_{3\,C^{\prime}D}^{(0)}(x,\hat{\phi}^{C^{\prime}}),\] \[\lambda_{2\,C^{\prime}D} = 0. \tag{6.12}\] with the expression for \({\cal G}^{(0)}_{3\,C^{\prime}D}(x,\hat{\phi}^{C^{\prime}})\) given in Appendix C, see (C.9). Likewise, the zero mode wave function in the other off-diagonal component is \[\lambda_{1\;DC^{\prime}} = 0,\] \[\lambda_{2\;DC^{\prime}} \sim \bar{\eta}_{1}^{C^{\prime}}{\cal G}^{*\;(0)}_{3\;C^{\prime}D}(x, \hat{\phi}^{C^{\prime}}). \tag{111}\] Even without consulting the explicit expression, it is clear that the \(C^{\prime}\)-th zero mode only depends on \(\hat{\phi}^{C^{\prime}}_{\mu}\), which, therefore, governs its \(x_{\mu}\)-dependence, similar to (110) above. 
Explicitly, one can construct \({\cal O}(\Delta)\) gauge invariants formed from the zero modes, to find, as for the bosonic invariants, that they are governed by a "lumpy" structure, with each of the \(r\) lumps supporting 2 zero modes located at a position governed by the moduli \(\hat{\phi}^{C^{\prime}}_{\mu}\). Explicitly, we find that the order-\(\Delta\) gauge invariants built from the fermion zero modes contain terms like \[\sum_{C^{\prime},D}\lambda_{1\;C^{\prime}D}\lambda_{2\;DC^{\prime }}\sim\] \[\sum_{C^{\prime}}\bar{\eta}_{1}^{C^{\prime}}\bar{\eta}_{2}^{C^{ \prime}}\bigg{|}\sum_{m}e^{i\frac{2\pi m}{L_{2}}(x_{2}+\frac{L_{1}L_{2}}{2\pi} \hat{\phi}^{C^{\prime}}_{1})-\frac{\pi}{L_{1}L_{2}}\left[x_{1}-\frac{L_{1}L_{ 2}}{2\pi}\hat{\phi}^{C^{\prime}}_{2}-\frac{L_{1}C^{\prime}}{r}+L_{1}\frac{1+r }{2r}-L_{1}m\right]^{2}}\bigg{|}^{2}\times\] \[\bigg{|}\sum_{n}\left(x_{3}-\frac{\ell L_{3}L_{4}}{2\pi}\hat{\phi }^{C^{\prime}}_{4}-L_{3}\ell n-L_{3}\frac{1+\ell}{2}\right)e^{i\frac{2\pi n}{ \ell L_{4}}(x_{4}+\frac{\ell L_{3}L_{4}}{2\pi}\hat{\phi}^{C^{\prime}}_{3})- \frac{\pi}{\ell L_{3}L_{4}}\left[x_{3}-\frac{\ell L_{3}L_{4}}{2\pi}\hat{\phi} ^{C^{\prime}}_{4}-L_{3}\left(\ell n+\frac{1+\ell}{2}\right)\right]^{2}}\bigg{|} ^{2}\,.\] This expression shows the same "localization" properties (determined by the holonomies \(\hat{\phi}^{C^{\prime}}\)) of the fermion zeromodes that were made evident for the bosonic solution in (110). It is also clear that the \(C^{\prime}\)th fermion zero mode vanishes at the position determined by the \(C^{\prime}\)th holonomy. **Acknowledgements:** We would like to thank F. David Wandler for comments on the manuscript. M.A. acknowledges the hospitality of the University of Toronto, where this work was completed. M.A. is supported by STFC through grant ST/T000708/1. E.P. is supported by a Discovery Grant from NSERC. ## Appendix A Derivation of the off-diagonal fermion zero modes ### The zero modes at zero holonomy Within this appendix, we present the derivation of one of the main results in the main text, denoted as Eq. (3.2). Our objective revolves around solving the off-diagonal fermion zero modes of the Dirac equation \(D_{\mu}\bar{\sigma}^{\mu}\lambda=0\). This equation pertains to the 't Hooft flux background, wherein the covariant derivative takes the form \(D_{\mu}=\partial_{\mu}+i[A_{\mu},\,]\). To streamline our approach, we commence by deactivating the holonomies. Subsequently, we can reintroduce them once we have obtained a general solution. Using (10) and writing \(A_{\mu}\equiv A_{\mu}^{\omega}\omega\), we find the commutator \[[A_{\mu},\lambda]=2\pi A_{\mu}^{\omega}\left[\begin{array}{cc}0&N||\lambda_ {C^{\prime}C}||\,\\ -N||\lambda_{CC^{\prime}}||&0\end{array}\right]\,, \tag{120}\] In this appendix we take the range of \(C\) and \(C^{\prime}\) to be \(C=1,2,...,\ell\) and \(C^{\prime}=1,2,...,k\). Substituting the above result into the Dirac equation, \(D_{\mu}\bar{\sigma}^{\mu}\lambda=0\), we obtain for \(\lambda_{C^{\prime}C}\) (and similarly for \(\lambda_{CC^{\prime}}\) after replacing \(+i2\pi N\rightarrow-i2\pi N\)): \[\bar{\sigma}^{\mu}\left[\partial_{\mu}\lambda_{C^{\prime}C}+i2\pi NA_{\mu}^{ \omega}\lambda_{C^{\prime}C}\right]=0\,. 
\tag{121}\] Writing \(\lambda_{C^{\prime}C}\) in terms of its two spinor components \(\lambda_{C^{\prime}C\;1}\) and \(\lambda_{C^{\prime}C\;2}\), the Dirac equation reads: \[\left(\partial_{1}-i\partial_{2}-\frac{2\pi rx_{1}}{kL_{1}L_{2}} \right)\lambda_{C^{\prime}C\;2}+\left(\partial_{3}+i\partial_{4}+\frac{2\pi x _{3}}{\ell L_{3}L_{4}}\right)\lambda_{C^{\prime}C\;1} = 0\,,\] \[\left(\partial_{1}+i\partial_{2}+\frac{2\pi rx_{1}}{kL_{1}L_{2}} \right)\lambda_{C^{\prime}C\;1}+\left(-\partial_{3}+i\partial_{4}+\frac{2\pi x _{3}}{\ell L_{3}L_{4}}\right)\lambda_{C^{\prime}C\;2}^{\boldsymbol{\beta}} = 0\,. \tag{122}\] A normalizable solution to the above equations can be found provided that we set \(\lambda_{C^{\prime}C\;2}=0\); an assertion that will be revisited below in the light of the most general normalizable solution on \(\mathbb{T}^{4}\) we shall construct. We proceed further by defining the functions \(U_{C^{\prime}C}\) via: \[\lambda_{C^{\prime}C\;1}\equiv e^{-\frac{\pi rx_{1}^{2}}{kL_{1}L_{2}}}e^{- \frac{\pi x_{3}^{2}}{\ell L_{3}L_{4}}}U_{C^{\prime}C}\,, \tag{123}\] which reduces (122) to the two simple equations \[\left(\partial_{1}+i\partial_{2}\right)U_{C^{\prime}C}=0\,,\quad\left( \partial_{3}+i\partial_{4}\right)U_{C^{\prime}C}=0\,. \tag{124}\] By defining the complex variables \(w_{1}\equiv x_{1}+ix_{2}\) and \(w_{2}\equiv x_{3}+ix_{4}\), we can cast (124) in the form \[\frac{\partial U_{C^{\prime}C}}{\partial\bar{w}_{1}}=0\,,\quad\frac{\partial U _{C^{\prime}C}}{\partial\bar{w}_{2}}=0\,, \tag{125}\] and, thus, we conclude that \(U_{C^{\prime}C}\) is an analytic function of \(w_{1}\) and \(w_{2}\): \[U_{C^{\prime}C}=U_{C^{\prime}C}(w_{1},w_{2})\,. \tag{126}\] We can also write the boundary conditions (3.12) as (the cyclic nature of the matrix elements, i.e., \(U_{C^{\prime}C}\equiv U_{C^{\prime}+k;C+\ell}\) will be imposed below): \[U_{C^{\prime}C}(w_{1}+L_{1},w_{2}) = \gamma_{k}^{-r}e^{\frac{\pi rL_{1}}{kL_{2}}+\frac{2\pi rw_{1}}{kL _{2}}}U_{C^{\prime}-r\,C}(w_{1},w_{2})\,,\] \[U_{C^{\prime}C}(w_{1}+iL_{2},w_{2}) = \gamma_{k}e^{i\frac{2\pi(C^{\prime}-1)}{k}}U_{C^{\prime}C}(w_{1}, w_{2})\,,\] \[U_{C^{\prime}C}(w_{1},w_{2}+L_{3}) = \gamma_{\ell}^{-1}e^{\frac{\pi L_{3}}{\ell L_{4}}+\frac{2\pi w_{ 2}}{\ell L_{4}}}U_{C^{\prime}\,C+1}(w_{1},w_{2})\,,\] \[U_{C^{\prime}C}(w_{1},w_{2}+iL_{4}) = \gamma_{\ell}^{-1}e^{-i\frac{2\pi(C-1)}{\ell}}U_{C^{\prime}C}(w_ {1},w_{2})\,.\] (A.8) We notice that the transformation properties under imaginary shifts of \(w_{1}\) by \(iL_{2}\) and \(w_{2}\) by \(iL_{4}\) are satisfied by writing \(U_{C^{\prime}C}(w_{1},w_{2})\) as the phase factor \[e^{\frac{w_{1}}{L_{2}}\frac{\pi}{k}(2C^{\prime}-1-k)-\frac{w_{2} }{L_{4}}\frac{\pi}{\ell}(2C-\ell-1)}\] (A.9) times an analytic function which is periodic w.r.t. these imaginary shifts, i.e., is a linear combination of functions \(e^{2\pi n\frac{w_{1}}{L_{2}}+2\pi m\frac{w_{2}}{L_{4}}}\) where \(n,m\in\mathbb{Z}\).23 Thus, the expression for \(U_{C^{\prime}C}\) has the form Footnote 23: The periodicity in imaginary shifts requires the exponential dependence, while the rest follows by holomorphy. The functions \(e^{2\pi n\frac{w_{2}}{L_{4}}}\) are normalizable on \(\mathbb{T}^{2}\), and the ones with different \(n\)’s are orthogonal, as enforced by the imaginary part of integrals over \(x_{2}\). 
\[U_{C^{\prime}C}(w_{1},w_{2})=e^{\frac{\pi w_{1}(2C^{\prime}-1-k) }{kL_{2}}}e^{-\frac{\pi w_{2}(2C-1-\ell)}{\ell L_{4}}}\sum_{m,n\in\mathbb{Z}} d_{C^{\prime},C,m,n}e^{2\pi m\frac{w_{1}}{L_{2}}+2\pi n\frac{w_{2}}{L_{4}}}\,.\] (A.10) Our next task is determining the coefficients \(d_{C^{\prime},C,m,n}\). Using the first and third BCs in (A.8), we obtain the recurrence relations \[d_{C^{\prime},C,m,n} = e^{-i\frac{\pi r(1-k)}{k}}e^{-\frac{\pi L_{1}(2C^{\prime}-1-k) }{kL_{2}}-\frac{2\pi mL_{1}}{L_{2}}+\frac{\pi rL_{1}}{kL_{2}}}d_{C^{\prime}-r,C,m,n}\,,\] (A.11) and \[d_{C^{\prime},C+1,m,n} = e^{i\frac{\pi(1-\ell)}{\ell}}e^{\frac{\pi(-2C+(2n+1)\ell)L_{3}} {\ell L_{4}}}d_{C^{\prime},C,m,n}\,.\] (A.12) These recurrence relations must be supplemented with boundary conditions that need to be satisfied to guarantee the cyclic nature of the solution, i.e., \(U_{C^{\prime}C}(w_{1},w_{2})=U_{C^{\prime}+k\,C}(w_{1},w_{2})=U_{C^{\prime}\, C+\ell}(w_{1},w_{2})\). First, using \(U_{C^{\prime}1}(w_{1},w_{2})=U_{C^{\prime}\,1+\ell}(w_{1},w_{2})\) along with the third equation in (A.8), we obtain the following relationship between the elements \(C=1\) and \(C=\ell\) in \(SU(\ell)\): \[U_{C^{\prime}\,C=\ell}(w_{1},w_{2}+L_{3})=\gamma_{\ell}^{-1}e^{ \frac{\pi L_{3}}{\ell L_{4}}}e^{\frac{2\pi w_{2}}{\ell L_{4}}}U_{C^{\prime}\, C=1}(w_{1},w_{2})\,,\] (A.13) which yields via (A.10): \[d_{C^{\prime},\ell,m,n}=e^{-i\frac{\pi(\ell-1)}{\ell}}e^{\frac{\pi( 1-2n)L_{2}}{L_{4}}}d_{C^{\prime},1,m,n-1}\,.\] (A.14) We can generalize (A.12) and (A.14) to \[d_{C^{\prime},C,m,n} = e^{-i\frac{\pi(1-\ell)}{\ell}}e^{\frac{-\pi(-2C+(2n+1)\ell)L_{ 3}}{\ell L_{4}}}d_{C^{\prime},C+1,m,n}\,,\quad\text{if $C+1<\ell$}\] \[d_{C^{\prime},C,m,n} = e^{-i\frac{\pi(1-\ell)}{\ell}}e^{\frac{-\pi(-2C+(2n+1)\ell)L_{ 3}}{\ell L_{4}}}d_{C^{\prime},C_{\text{new}},m,n-1}\,,\quad C_{\text{new}}=C+1 -\ell\quad\text{if $C+1>\ell$}\,.\] We must also find the boundary condition for the recurrence relation (A.11). Using \(U_{1C}(w_{1},w_{2})=U_{1+k\;C}(w_{1},w_{2})\) along with the first equation in (A.8), we obtain the following relationship between the elements \(C^{\prime}=1\) and \(C^{\prime}=k-(r-1)\) in \(SU(k)\): \[U_{C^{\prime}=1\;C}(w_{1}+L_{1},w_{2})=\gamma_{k}^{-r}e^{\frac{ \pi rL_{1}}{kL_{2}}}e^{\frac{2\pi r}{kL_{2}}w_{1}}U_{C^{\prime}=k-(r-1)\;C}(w_{ 1},w_{2})\,,\] (A.16) which yields via (A.10): \[d_{1,C,m,n}=e^{-i\frac{\pi r(1-k)}{k}}e^{\frac{\pi(r-1+k-2mk)}{k }\frac{L_{1}}{L_{2}}}d_{k-(r-1),C,m-1,n}\,.\] (A.17) Notice that we had to shift \(m\) by one unit since, according to the first equation in (A.8), a shift in the \(L_{1}\) direction relates the element \(C^{\prime}=1\) to the element \(C^{\prime}=1-r\). However, since \(1-r\leq 0\), we needed to replace \(C^{\prime}=1-r\) by a new \(C^{\prime}_{\text{new}}=k-(r-1)\). This replacement forces us to shift \(m\to m-1\) to obey the boundary condition (A.8) in the \(L_{1}\) direction. This shift in \(m\) always happens whenever the matrix elements have \(C^{\prime}-r\leq 0\). We may generalize (A.17) for any \(C^{\prime}\) whenever the first condition (A.11) yields \(d_{C^{\prime}=C-r,C,m,n}\) with \(C^{\prime}<0\). 
Demanding the cyclicity \(U_{C^{\prime}+k\;C}(x)=U_{C^{\prime}\;C}(x)\), we replace (A.11) and (A.17) with \[d_{C^{\prime},C,m,n} = e^{-i\frac{\pi r(1-k)}{k}}e^{-\frac{\pi L_{1}(2C^{\prime}-1-k)}{ kL_{2}}-\frac{2\pi mL_{1}}{L_{2}}+\frac{\pi rL_{1}}{kL_{2}}}d_{C^{\prime}-r,C,m,n} \,,\quad\text{if $C^{\prime}-r>0$}\,,\] \[d_{C^{\prime},C,m,n} = e^{-i\frac{\pi r(1-k)}{k}}e^{-\frac{\pi L_{1}(2C^{\prime}-1-k)}{ kL_{2}}-\frac{2\pi mL_{1}}{L_{2}}+\frac{\pi rL_{1}}{kL_{2}}}d_{C^{\prime}_{\text{ new}},C,m-1,n}\,,\] \[C^{\prime}_{\text{new}}=C^{\prime}-r+k\,,\quad\text{if $C^{\prime}-r\leq 0$}\,.\] (A.18) Now we come to the solution of the difference equation (A.15). This is a first-order difference equation, and thus, it yields a single solution. To this end, we substitute the following form \[d_{C^{\prime},C,m,n}=F(C^{\prime},m)e^{-\frac{\pi L_{3}}{\ell L _{4}}[C+S(n)]^{2}}\] (A.19) into the first equation in (115), to obtain \[S(n)=-\frac{1+(2n+1)\ell}{2}\,. \tag{119}\] Thus, \[d_{C^{\prime},C,m,n}=F(C^{\prime},m)e^{-\frac{\pi L_{3}}{\ell L_{4}} \left(C-i\frac{L_{4}(1-\ell)}{2L_{3}}-\frac{1+\ell(2n+1)}{2}\right)^{2}}\,. \tag{120}\] It is straightforward to check that the solution (120) obeys (115). On the other hand, the recurrence relation (118) is a difference equation of order \(r\), and thus, it should yield \(r\) independent solutions. To solve it, we parameterize it as \[d_{C^{\prime},C,m,n}=e^{-\frac{\pi L_{1}}{k\ell L_{2}}\left(C^{ \prime}+i\frac{L_{2}r(1-k)}{2L_{1}}+S^{\prime}(m)\right)^{2}}\,, \tag{121}\] and, inserting into the first equation in (118), we find \[S^{\prime}(m)=-\frac{1+k(1-2m)}{2}\,. \tag{122}\] We can check that (121, 122) satisfy (118). Combining (120) and (121), we obtain the final answer \[d_{C^{\prime},C,m,n}=e^{-\frac{\pi L_{3}}{\ell L_{4}}\left(C-i \frac{(1-\ell)L_{4}}{2L_{3}}-\frac{1+\ell(2n+1)}{2}\right)^{2}}e^{-\frac{\pi L _{1}}{k\ell L_{2}}\left(C^{\prime}+i\frac{r(1-k)L_{2}}{2L_{1}}-\frac{1+k(1-2m )}{2}\right)^{2}}\,. \tag{123}\] Notice that \(d_{C^{\prime},C,m,n}\to e^{-\frac{\pi L_{3}}{L_{4}}\ell n^{2}}e^{-\frac{\pi L _{1}}{rL_{2}}km^{2}}\) as \(n,m\to\infty\), and thus, the series (117) rapidly converges. Had we not set \(\lambda_{C^{\prime}C\;2}=0\) in (113), we would have obtained a divergent series in \(m,n\), and thus, no normalizable zero modes could be found. What is not immediately clear from (123) is that there are \(r\) independent solutions of \(U_{C^{\prime}C}\); this should be expected since (118) is a difference equation of order \(r\). The \(r\) independent solutions of \(U_{C^{\prime}C}\) follow from two distinct cases. 1. If \(\gcd(r,k)=r\), we can show that there are \(r\) independent coefficients \[d_{C^{\prime}=1,C,m,n},\;d_{C^{\prime}=2,C,m,n},\;...,d_{C^{ \prime}=r,C,m,n}\,,\] (124) and the sums over \(m,n\) in (117) are over all integers. The \(r\) independent coefficients yield \(r\) independent solutions. 2. If \(\gcd(r,k)=1\) and \(r>1\), then the set of integers \(m\) in (117) bifurcates into \(r\) sets such that the sum over \(m\in\mathbb{Z}\) in (117) is divided into \(m_{j}=n_{j}r+n\), \(n_{j}\in\mathbb{Z}\), \(n=0,1,..,r-1\). These form \(r\) independent orbits that correspond to \(r\) independent solutions. The general situation, \(1<\gcd(r,k)<r\), is a combination of both cases. To ease our discussion, we consider a few examples to understand the essence of each case. First, consider case 1 above, and take as an example \(k=6,r=2\), where \(\gcd(6,2)=2\). 
Using (A.18), we see that the coefficients \(d_{C^{\prime},C,m,n}\) are related via (here we ignore \(C\) and \(n\) since they do not play a role. Also the arrow indicates the relations between the coefficients as we traverse the \(L_{1}\) direction, without caring about the pre-coefficients): \[d_{1,m}\to d_{5,m-1}\to d_{3,m-1}\to d_{1,m-1}\to d_{5,m-2}\to...\,,\] \[d_{2,m}\to d_{6,m-1}\to d_{4,m-1}\to d_{2,m-1}\to d_{6,m-2}\to...\,.\] (A.26) Each line depicts a set of coefficients, and the coefficients of line 1 and line 2 are independent in that a coefficient in line 1 cannot be reached via a coefficient in line 2 and vice versa. Notice also, for example, as we start from \(d_{1,m}\) and traverse the \(L_{1}\) direction 3 times, we obtain the shifted \(d_{1,m-1}\) by one unit. Thus, we need to sum over all integers \(m\) in every line. This gives the two independent solutions. Next, consider case 2. For example, take \(k=6,r=5\), where \(\gcd(k,r)=1\). Applying (A.18) we find \[d_{1,m}\to d_{2,m-1}\to d_{3,m-2}\to d_{4,m-3}\to d_{5,m-4}\to d_{6,m-5}\to d _{1,m-5}\to d_{2,m-6}....\] (A.27) Thus, the fact that \(d_{1,m}\) shifts to \(d_{1,m-5}\) and \(d_{2,m-1}\) to \(d_{2,m-6}\), etc. means that the set of integers \(m\) bifurcates into 5 sets: \(m=5m^{\prime}+p\), \(p=0,1,2,3,4\) and \(m^{\prime}\in\mathbb{Z}\). Thus, we obtain 5 independent orbits corresponding to 5 independent solutions. Finally, consider the general case \(1<\gcd(r,k)<r\), and take, for example, \(k=6,r=4\), where \(\gcd(6,4)=2\). Here, we find \[d_{1,m}\to d_{3,m-1}\to d_{5,m-2}\to d_{1,m-2}\to....\,,\] \[d_{2,m}\to d_{4,m-1}\to d_{6,m-2}\to d_{2,m-2}\to...\,.\] (A.28) The two lines depict independent solutions. However, we also find that there are independent orbits within each line. For example, \(d_{1,m}\) shifts to \(d_{1,m-2}\), etc. Thus, the integers are divided into two sets, odd and even. We conclude that there are two orbits in each line, and in total, we have 4 independent solutions, as expected. In this general case, we find that a simple relation gives the \(r\) solutions: \[r=\underbrace{\gcd(k,r)}_{\text{number of vertical lines, case \eqref{eq:1}}}\times\underbrace{r}_{\text{gcd}(k,r)} \,.\] (A.29) It is best to cast the above findings in a more effective compact notation. To this end, we define the functions: \[\tilde{\Phi}^{(p)}_{C^{\prime}C}(x) \equiv e^{-\frac{\pi rx_{2}^{2}}{kL_{1}L_{2}}}e^{-\frac{\pi x_{2}^{2}}{ kL_{3}L_{4}}}\sum_{m=p+\frac{rm^{\prime}}{\gcd(k,r)},\,m^{\prime}\in\mathbb{Z}} \sum_{n^{\prime}\in\mathbb{Z}} \tag{111}\] \[\times e^{-\frac{\pi L_{4}}{kL_{4}}\left(C-i\frac{(1-\ell)L_{4}}{2 L_{3}}-\frac{1+\ell(2n^{\prime}+1)}{2}\right)^{2}}e^{-\frac{\pi L_{1}}{kL_{2}} \left(C^{\prime}+i\frac{r(1-k)L_{2}}{2L_{1}}-\frac{1+k(1-2m)}{2}\right)^{2}}\] \[\times e^{\frac{2\pi v_{1}}{2}(m+\frac{2C^{\prime}-1-k}{2k})}e^{ \frac{2\pi v_{2}}{kL_{4}}(n^{\prime}-\frac{2C-1-\ell}{2\ell})}\,,\] for \(p=0,1,...,\frac{r}{\gcd(k,r)}-1\). Thus, there are \(\frac{r}{\gcd(k,r)}\) independent solutions correspond to \(\frac{r}{\gcd(k,r)}\) independent orbits. We can also rewrite \(\tilde{\Phi}^{(p)}_{C^{\prime},C}\) conveniently as \[\tilde{\Phi}^{(p)}_{C^{\prime}C}(x) = \sum_{m=p+\frac{rm^{\prime}}{\gcd(k,r)},\,m^{\prime}\in\mathbb{Z} }\sum_{n^{\prime}\in\mathbb{Z}}\left\{e^{i\frac{2\pi x_{2}}{L_{2}}(m+\frac{2C^{ \prime}-1-k}{2k})}e^{i\frac{2\pi x_{4}}{L_{4}}(n^{\prime}-\frac{2C-1-\ell}{2\ell })}\right. 
\tag{112}\] \[\times e^{\frac{\pi r(1-k)^{2}L_{2}}{4kL_{1}}-i\frac{\pi(1-k)}{k} \left(C^{\prime}-\frac{1+k(1-2m)}{2}\right)}\times e^{\frac{\pi(1-\ell)^{2}L_ {3}}{4\ell L_{4}}+i\frac{\pi(1-\ell)}{\ell}\left(C-\frac{1+\ell(2n^{\prime}+1 )}{2}\right)}\] \[\left.\times e^{-\frac{\pi r}{kL_{1}L_{2}}\left(x_{1}-\frac{L_{1} (2mk+2C^{\prime}-1-k)}{2r}\right)^{2}}e^{-\frac{\pi}{L_{3}L_{4}}\left(x_{3}- \frac{L_{3}((2n^{\prime}+1)\ell-(2C-1))}{2}\right)^{2}}\right\}\,.\] Since the terms \(e^{\frac{\pi r(1-k)^{2}L_{2}}{4kL_{1}}}\) and \(e^{\frac{\pi(1-\ell)^{2}L_{3}}{4\ell L_{4}}}\) are independent of \(m,n,C,C^{\prime}\), we may drop them and define the function \(\Phi^{(p)}_{C^{\prime}C}(x)\) as: \[\Phi^{(p)}_{C^{\prime}C}(x) \equiv \sum_{m=p+\frac{rm^{\prime}}{\gcd(k,r)},\,m^{\prime}\in\mathbb{Z} }\sum_{n^{\prime}\in\mathbb{Z}}\left\{e^{i\frac{2\pi x_{2}}{L_{2}}(m+\frac{2C^ {\prime}-1-k}{2k})}e^{i\frac{2\pi x_{4}}{L_{4}}(n^{\prime}-\frac{2C-1-\ell}{2 \ell})}\right. \tag{113}\] \[\left.\times e^{-i\frac{\pi(1-k)}{k}\left(C^{\prime}-\frac{1+k(1-2m )}{2}\right)}\times e^{i\frac{\pi(1-\ell)}{\ell}\left(C-\frac{1+\ell(2n^{\prime }+1)}{2}\right)}\right.\] \[\left.\times e^{-\frac{\pi r}{kL_{1}L_{2}}\left(x_{1}-\frac{L_{1} (2mk+2C^{\prime}-1-k)}{2r}\right)^{2}}e^{-\frac{\pi}{L_{3}L_{4}}\left(x_{3}- \frac{L_{3}((2n^{\prime}+1)\ell-(2C-1))}{2}\right)^{2}}\right\}\,.\] The functions \(\Phi^{(p)}_{C^{\prime}C}(x)\) solve the equation \[\bar{\sigma}^{\mu}\left[\partial_{\mu}\Phi^{(p)}_{C^{\prime}C}+i2\pi NA^{\omega }_{\mu}\Phi^{(p)}_{C^{\prime}C}\right]=0\,, \tag{114}\] and satisfy the boundary conditions \[\Phi^{(p)}_{C^{\prime}C}(x+\hat{e}_{1}L_{1}) = e^{-i\frac{\pi r(1-k)}{k}}e^{i\frac{2\pi rx_{2}}{kL_{2}}}\Phi^{( p)}_{[C^{\prime}-r]_{k}\,C}(x)\,,\] \[\Phi^{(p)}_{C^{\prime}C}(x+\hat{e}_{2}L_{2}) = e^{i\frac{2\pi(2C^{\prime}-1-k)}{2k}}\Phi^{(p)}_{C^{\prime}C}(x)\,,\] \[\Phi^{(p)}_{C^{\prime}C}(x+\hat{e}_{3}L_{3}) = e^{-i\frac{\pi(1-\ell)}{\ell}}e^{i\frac{2\pi x_{4}}{\ell L_{4}}} \Phi^{(p)}_{C^{\prime}\,[C+1]_{\ell}}(x)\,,\] \[\Phi^{(p)}_{C^{\prime}C}(x+\hat{e}_{4}L_{4}) = e^{-i\frac{2\pi(2C-1-\ell)}{2\ell}}\Phi^{(p)}_{C^{\prime}C}(x)\,, \tag{115}\] which are the exact same boundary conditions (3.12). The entries with \(C^{\prime}=j,j+\gcd(k,r),j+2\gcd(k,r),...,j+k-\gcd(k,r)\), for every \(j=1,2,...,\gcd(k,r)\), are shuffled to each other as we traverse the \(L_{1}\) direction. Thus, the rows with \(C^{\prime}=1,2,...,\gcd(k,r)\) are independent. In total, there are \(\gcd(k,r)\times\frac{r}{\gcd(k,r)}=r\) independent solutions. In addition, \(\Phi^{(p)}_{C^{\prime}C}\) satisfy the cyclic properties: \[\Phi^{(p)}_{C^{\prime}+k\;C}(x)=\Phi^{(p+1)}_{C^{\prime}C}(x)\,,\] \[\Phi^{(p)}_{C^{\prime}C}(x)=\Phi^{\left(p+\frac{r}{\gcd(k,r)} \right)}_{C^{\prime}C}(x)\,.\] (A.35) Notice the intertwining between the shift in \(p\) by \(1\) and \(C^{\prime}\) by \(k\). We can use (A.35), noticing the intertwining between the shift in \(p\) and \(C^{\prime}\), to write the \(r\) independent zero modes of the Dirac equation as \[\lambda_{C^{\prime}C}(x)=\sum_{p=0}^{\frac{r}{\gcd(k,r)}-1}\left[ \begin{array}{c}\eta^{[C^{\prime}+pk]_{r}}\\ 0\end{array}\right]\Phi^{(p)}_{C^{\prime}C}(x)\,,\] (A.36) where \([x]_{r}\equiv x\,\text{mod}\,r\), and it is obvious that \(\eta^{[C^{\prime}+pk]_{r}}\) yields \(r\) independent coefficients. This is the desired equation (3.18) without holonomies. ### Turning on holonomies Next, we turn on the \(SU(k)\) space holonomies. 
In particular, the gauge field is now given by \[A_{\mu}=-\left[\hat{A}_{\mu}^{\omega}+\phi_{\mu}\right]\omega+H^{ a^{\prime}}\phi_{\mu}^{a^{\prime}}\,,\] (A.37) where \(\phi_{\mu}=z_{\mu}/L_{\mu}\) are the abelian holonomies, \(H^{a^{\prime}}\), \(a^{\prime}=1,2,...,k-1\) are the \(k-1\) Cartan generators of the \(su(k)\) algebra, and \(\phi_{\mu}^{a^{\prime}}\) are \(k-1\) holonomies in every direction \(\mu=1,2,3,4\). Next, we need to compute the commutator: \[[H^{a^{\prime}}\phi_{\mu}^{a^{\prime}},||\lambda||_{C^{\prime}C} ]=\left(H^{a^{\prime}}\phi_{\mu}^{a^{\prime}}\right)_{C^{\prime}C^{\prime}} \lambda_{C^{\prime}C}\equiv\phi_{\mu}^{C^{\prime}}\lambda_{C^{\prime}C}\,.\] (A.38) Recalling (A.1), we find it convenient to define \[\hat{\phi}_{\mu}^{C^{\prime}}=\phi_{\mu}^{C^{\prime}}-2\pi N\phi_ {\mu}\,.\] (A.39) Noticing that \(A_{\mu}\) has to commute with the transition functions, then out of \(k\) holonomies, there are at most \(\gcd(k,r)\) holonomies in every spacetime direction. Thus, we find that \(\hat{\phi}_{\mu}^{C^{\prime}}=\hat{\phi}_{\mu}^{C^{\prime}+r}\), or we can express this fact as \[\hat{\phi}_{\mu}^{C^{\prime}}=\hat{\phi}_{\mu}^{[C^{\prime}]_{r} }\,.\] (A.40) Using the above information in the Dirac equation \(\bar{\sigma}^{\mu}D_{\mu}\lambda=0\), we find (compare with (A.3)) \[\left(\partial_{3}+i\hat{\phi}_{3}^{C^{\prime}}+i\partial_{4}-\hat{ \phi}_{4}^{[C^{\prime}]_{r}}+\frac{2\pi x_{3}}{\ell L_{3}L_{4}}\right)\lambda_{ 1C^{\prime},C}=0\,,\] \[\left(\partial_{1}+i\hat{\phi}_{1}^{C^{\prime}}+i\partial_{2}- \hat{\phi}_{2}^{[C^{\prime}]_{r}}+\frac{2\pi rx_{1}}{kL_{1}L_{2}}\right) \lambda_{1C^{\prime},C}=0\,.\] (A.41) and we have set \(\lambda_{C^{\prime}C\;2}=0\), as in (A.3). Next, we use the field redefinition \[\lambda_{C^{\prime}C\;1}=e^{-\frac{\pi rx_{1}^{2}}{kL_{1}L_{2}}}e^{-\frac{\pi x _{3}^{2}}{L_{3}L_{4}}}e^{-ix_{\mu}\hat{\phi}_{\mu}^{[C^{\prime}]_{r}}}U_{C^{ \prime}C}\] (A.42) in (A.41) to find that \(U_{C^{\prime}C}\) obeys the equations \[\left(\partial_{1}+i\partial_{2}\right)U_{C^{\prime}C}=0\,,\quad\left(\partial _{3}+i\partial_{4}\right)U_{C^{\prime}C}=0\,.\] (A.43) These equations, as before, imply that \(U_{C^{\prime}C}\) is an analytic function of \(w_{1}\equiv x_{1}+ix_{2}\) and \(w_{2}\equiv x_{3}+ix_{4}\). The BCS (3.12) can be rewritten in terms of the functions \(U_{C^{\prime}C}\): \[U_{C^{\prime}C}(w_{1}+L_{1},w_{2}) = \gamma_{k}^{-r}e^{\frac{\pi rL_{1}}{kL_{2}}+\frac{2\pi r}{kL_{2} }w_{1}+iL_{1}\hat{\phi}_{1}^{[C^{\prime}]_{r}}}\;U_{C^{\prime}-r\;C}(w_{1},w_{ 2})\,,\] \[U_{C^{\prime}C}(w_{1}+iL_{2},w_{2}) = e^{i\frac{\pi}{k}(2C^{\prime}-1-k)+iL_{2}\hat{\phi}_{2}^{[C^{ \prime}]_{r}}}\;U_{C^{\prime}C}(w_{1},w_{2})\,,\] \[U_{C^{\prime}C}(w_{1},w_{2}+L_{3}) = \gamma_{\ell}^{-1}e^{\frac{\pi L_{3}}{\ell L_{4}}+\frac{2\pi}{ \ell L_{4}}w_{2}+iL_{3}\hat{\phi}_{3}^{[C^{\prime}]_{r}}}\;U_{C^{\prime}\;C+1 }(w_{1},w_{2})\,,\] \[U_{C^{\prime}C}(w_{1},w_{2}+iL_{4}) = e^{-i\frac{\pi}{\ell}(2C-\ell-1)+iL_{4}\hat{\phi}_{4}^{[C^{ \prime}]_{r}}}\;U_{C^{\prime}C}(w_{1},w_{2})\,.\] (A.44) Similar to (A.10), the transformation properties under imaginary shifts of \(w_{1}\) by \(iL_{2}\) and \(w_{2}\) by \(iL_{4}\) are satisfied by writing \(U_{C^{\prime}C}(w_{1},w_{2})\) as the phase factor \[e^{\frac{w_{1}}{L_{2}}\frac{\pi}{k}(2C^{\prime}-1-k)+w_{1}\hat{\phi}_{2}^{[C^{ \prime}]_{r}}-\frac{w_{2}}{L_{4}}\frac{\pi}{\ell}(2C-\ell-1)+w_{2}\hat{\phi}_{4 }^{[C^{\prime}]_{r}}}\] (A.45) times an analytic function which is periodic w.r.t. these imaginary shifts. 
Thus, the expression for \(U_{C^{\prime}C}\) takes the form

\[U_{C^{\prime}C}(w_{1},w_{2})=e^{w_{1}\hat{\phi}_{2}^{[C^{\prime}]_{r}}+w_{2}\hat{\phi}_{4}^{[C^{\prime}]_{r}}+\frac{\pi w_{1}}{kL_{2}}(2C^{\prime}-1-k)-\frac{\pi w_{2}}{\ell L_{4}}(2C-1-\ell)}\sum_{m,n\in\mathbb{Z}}d_{C^{\prime},C,m,n}\;e^{2\pi m\frac{w_{1}}{L_{2}}+2\pi n\frac{w_{2}}{L_{4}}}\,,\] (A.46)

which differs from (A.10) by the prefactor \(e^{w_{1}\hat{\phi}_{2}^{[C^{\prime}]_{r}}+w_{2}\hat{\phi}_{4}^{[C^{\prime}]_{r}}}\). As in the absence of holonomies, our next step is to determine the coefficients \(d_{C^{\prime},C,m,n}\) by utilizing the first and third boundary conditions in (A.44). These conditions lead to the following recurrence relations:

\[d_{C^{\prime},C,m,n} = e^{-i\frac{\pi r(1-k)}{k}}e^{-\frac{\pi L_{1}}{kL_{2}}(2C^{\prime}-r-1+(2m-1)k)}e^{iL_{1}(\hat{\phi}_{1}^{[C^{\prime}]_{r}}+i\hat{\phi}_{2}^{[C^{\prime}]_{r}})}\ d_{C^{\prime}-r,C,m,n}\,,\] (A.47)

and

\[d_{C^{\prime},C,m,n} = e^{-i\frac{\pi(1-\ell)}{\ell}}e^{\frac{\pi L_{3}}{\ell L_{4}}(2C-(2n+1)\ell)}e^{iL_{3}(\hat{\phi}_{3}^{[C^{\prime}]_{r}}+i\hat{\phi}_{4}^{[C^{\prime}]_{r}})}\ d_{C^{\prime},C+1,m,n}\,.\] (A.48)

We observe that (A.47) and (A.48) become identical to (A.11) and (A.12), respectively, when we replace:

\[m \longrightarrow m-\frac{iL_{2}}{2\pi}\left(\hat{\phi}_{1}^{[C^{\prime}]_{r}}+i\hat{\phi}_{2}^{[C^{\prime}]_{r}}\right)\,,\] \[n \longrightarrow n-\frac{iL_{4}}{2\pi}\left(\hat{\phi}_{3}^{[C^{\prime}]_{r}}+i\hat{\phi}_{4}^{[C^{\prime}]_{r}}\right)\,,\] (A.49)

in (A.11) and (A.12). Consequently, the solution to (A.47) and (A.48) is identical to (A.24) after making the replacement (A.49):

\[d_{C^{\prime},C,m,n} = e^{-\frac{\pi L_{3}}{\ell L_{4}}\left[C-i\frac{(1-\ell)L_{4}}{2L_{3}}-\frac{1+\ell(2n+1)}{2}+i\frac{\ell L_{4}}{2\pi}\left(\hat{\phi}_{3}^{[C^{\prime}]_{r}}+i\hat{\phi}_{4}^{[C^{\prime}]_{r}}\right)\right]^{2}}\] \[\times e^{-\frac{\pi L_{1}}{k\ell L_{2}}\left[C^{\prime}+i\frac{r(1-k)L_{2}}{2L_{1}}-\frac{1+k(1-2m)}{2}-i\frac{kL_{2}}{2\pi}\left(\hat{\phi}_{1}^{[C^{\prime}]_{r}}+i\hat{\phi}_{2}^{[C^{\prime}]_{r}}\right)\right]^{2}}\,,\] (A.50)

and we used the fact that \(\phi_{\mu}^{[C^{\prime}]_{r}}=\phi_{\mu}^{[C^{\prime}-r]_{r}}\). Then, all the analyses in the absence of holonomies repeat precisely, with \(\tilde{\Phi}_{C^{\prime}C}^{(p)}(x,\hat{\phi})\) now given by the expression

\[\tilde{\Phi}_{C^{\prime}C}^{(p)}(x,\hat{\phi}) \equiv e^{-ix_{\mu}\hat{\phi}_{\mu}^{[C^{\prime}]_{r}}}e^{w_{1}\hat{\phi}_{2}^{[C^{\prime}]_{r}}}e^{w_{2}\hat{\phi}_{4}^{[C^{\prime}]_{r}}}e^{-\frac{\pi rx_{1}^{2}}{kL_{1}L_{2}}}e^{-\frac{\pi x_{3}^{2}}{\ell L_{3}L_{4}}}\] (A.51) \[\times\sum_{m=p+\frac{rm^{\prime}}{\gcd(k,r)},\,m^{\prime}\in\mathbb{Z}}\sum_{n^{\prime}\in\mathbb{Z}}e^{\frac{2\pi w_{1}}{L_{2}}(m+\frac{2C^{\prime}-1-k}{2k})}e^{\frac{2\pi w_{2}}{L_{4}}(n^{\prime}-\frac{2C-1-\ell}{2\ell})}\] \[\times e^{-\frac{\pi L_{3}}{\ell L_{4}}\left[C-i\frac{(1-\ell)L_{4}}{2L_{3}}-\frac{1+\ell(2n^{\prime}+1)}{2}+i\frac{\ell L_{4}}{2\pi}\left(\hat{\phi}_{3}^{[C^{\prime}]_{r}}+i\hat{\phi}_{4}^{[C^{\prime}]_{r}}\right)\right]^{2}}\] \[\times e^{-\frac{\pi L_{1}}{k\ell L_{2}}\left[C^{\prime}+i\frac{r(1-k)L_{2}}{2L_{1}}-\frac{1+k(1-2m)}{2}-i\frac{kL_{2}}{2\pi}\left(\hat{\phi}_{1}^{[C^{\prime}]_{r}}+i\hat{\phi}_{2}^{[C^{\prime}]_{r}}\right)\right]^{2}}\,,\]

where the tilde serves as a reminder that these are not precisely the functions we define in the bulk of the paper. The latter will be defined momentarily.
Manipulating, we can rewrite \(\tilde{\Phi}^{(p)}_{C^{\prime}C}(x,\hat{\phi})\) in a form more convenient for taking derivatives:

\[\tilde{\Phi}^{(p)}_{C^{\prime}C}(x,\hat{\phi}) = \sum_{m=p+\frac{rm^{\prime}}{\gcd(k,r)},\,m^{\prime}\in\mathbb{Z}}\sum_{n^{\prime}\in\mathbb{Z}}e^{\frac{i2\pi x_{2}}{L_{2}}(m+\frac{2C^{\prime}-1-k}{2k})}e^{\frac{i2\pi x_{4}}{L_{4}}(n^{\prime}-\frac{2C-1-\ell}{2\ell})} \tag{113}\] \[\times e^{\frac{\pi r(1-k)^{2}L_{2}}{4kL_{1}}-i\frac{\pi(1-k)}{k}\left(C^{\prime}-\frac{1+k(1-2m)}{2}-i\frac{kL_{2}}{2\pi}\left(\hat{\phi}^{[C^{\prime}]_{r}}_{1}+i\hat{\phi}^{[C^{\prime}]_{r}}_{2}\right)\right)}\] \[\times e^{\frac{\pi(1-\ell)^{2}L_{3}}{4\ell L_{4}}+i\frac{\pi(1-\ell)}{\ell}\left(C-\frac{1+\ell(2n^{\prime}+1)}{2}+i\frac{\ell L_{4}}{2\pi}\left(\hat{\phi}^{[C^{\prime}]_{r}}_{3}+i\hat{\phi}^{[C^{\prime}]_{r}}_{4}\right)\right)}\] \[\times e^{-\frac{\pi r}{kL_{1}L_{2}}\left[x_{1}-\frac{kL_{1}L_{2}}{2\pi r}\left(\hat{\phi}^{[C^{\prime}]_{r}}_{2}-i\hat{\phi}^{[C^{\prime}]_{r}}_{1}\right)-\frac{L_{1}}{r}\left(km+\frac{2C^{\prime}-1-k}{2}\right)\right]^{2}}\] \[\times e^{-\frac{\pi}{\ell L_{3}L_{4}}\left[x_{3}-\frac{\ell L_{3}L_{4}}{2\pi}\left(\hat{\phi}^{[C^{\prime}]_{r}}_{4}-i\hat{\phi}^{[C^{\prime}]_{r}}_{3}\right)-L_{3}\left(\ell n^{\prime}-\frac{2C-1-\ell}{2}\right)\right]^{2}}\,.\]

The terms \(e^{\frac{\pi r(1-k)^{2}L_{2}}{4kL_{1}}}\), \(e^{-i\frac{\pi(1-k)}{k}\left(-i\frac{kL_{2}}{2\pi}\left(\hat{\phi}^{[C^{\prime}]_{r}}_{1}+i\hat{\phi}^{[C^{\prime}]_{r}}_{2}\right)\right)}\), \(e^{\frac{\pi(1-\ell)^{2}L_{3}}{4\ell L_{4}}}\), and \(e^{i\frac{\pi(1-\ell)}{\ell}\left(i\frac{\ell L_{4}}{2\pi}\left(\hat{\phi}^{[C^{\prime}]_{r}}_{3}+i\hat{\phi}^{[C^{\prime}]_{r}}_{4}\right)\right)}\) do not explicitly depend on \(C,C^{\prime},m,n^{\prime}\), and thus it is convenient to drop them24 and define the function \(\Phi^{(p)}_{C^{\prime}C}(x,\hat{\phi})\) as: Footnote 24: One can show that they can be absorbed into the coefficients \(\eta^{[C^{\prime}+pk]_{r}}\) of the general solution of the Dirac equation, see (111) or (113) below.
\[\Phi^{(p)}_{C^{\prime}C}(x,\hat{\phi}) \equiv \sum_{m=p+\frac{rm^{\prime}}{\gcd(k,r)},\,m^{\prime}\in\mathbb{Z}}\sum_{n^{\prime}\in\mathbb{Z}}e^{\frac{i2\pi x_{2}}{L_{2}}(m+\frac{2C^{\prime}-1-k}{2k})}e^{\frac{i2\pi x_{4}}{L_{4}}(n^{\prime}-\frac{2C-1-\ell}{2\ell})} \tag{114}\] \[\times e^{-i\frac{\pi(1-k)}{k}\left(C^{\prime}-\frac{1+k(1-2m)}{2}\right)}e^{i\frac{\pi(1-\ell)}{\ell}\left(C-\frac{1+\ell(2n^{\prime}+1)}{2}\right)}\] \[\times e^{-\frac{\pi r}{kL_{1}L_{2}}\left[x_{1}-\frac{kL_{1}L_{2}}{2\pi r}\left(\hat{\phi}^{[C^{\prime}]_{r}}_{2}-i\hat{\phi}^{[C^{\prime}]_{r}}_{1}\right)-\frac{L_{1}}{r}\left(km+\frac{2C^{\prime}-1-k}{2}\right)\right]^{2}}\] \[\times e^{-\frac{\pi}{\ell L_{3}L_{4}}\left[x_{3}-\frac{\ell L_{3}L_{4}}{2\pi}\left(\hat{\phi}^{[C^{\prime}]_{r}}_{4}-i\hat{\phi}^{[C^{\prime}]_{r}}_{3}\right)-L_{3}\left(\ell n^{\prime}-\frac{2C-1-\ell}{2}\right)\right]^{2}}\,.\]

We may also write \(\Phi^{(p)}_{C^{\prime}C}(x,\hat{\phi})\) in the form

\[\Phi^{(p)}_{C^{\prime}C}(x,\hat{\phi}) = e^{\frac{kL_{1}L_{2}}{2\pi r}\hat{\phi}^{[C^{\prime}]_{r}}_{1}\left(i\hat{\phi}^{[C^{\prime}]_{r}}_{2}+\hat{\phi}^{[C^{\prime}]_{r}}_{1}/2\right)}e^{\frac{\ell L_{3}L_{4}}{2\pi}\hat{\phi}^{[C^{\prime}]_{r}}_{3}\left(i\hat{\phi}^{[C^{\prime}]_{r}}_{4}+\hat{\phi}^{[C^{\prime}]_{r}}_{3}/2\right)}e^{-i\hat{\phi}^{[C^{\prime}]_{r}}_{1}x_{1}}e^{-i\hat{\phi}^{[C^{\prime}]_{r}}_{3}x_{3}} \tag{115}\] \[\times\sum_{m=p+\frac{rm^{\prime}}{\gcd(k,r)},\,m^{\prime}\in\mathbb{Z}}\sum_{n^{\prime}\in\mathbb{Z}}e^{i\left(\frac{2\pi x_{2}}{L_{2}}+\frac{L_{1}k}{r}\hat{\phi}^{[C^{\prime}]_{r}}_{1}\right)(m+\frac{2C^{\prime}-1-k}{2k})}e^{i\left(\frac{2\pi x_{4}}{L_{4}}+\ell L_{3}\hat{\phi}^{[C^{\prime}]_{r}}_{3}\right)(n^{\prime}-\frac{2C-1-\ell}{2\ell})}\] \[\times e^{-i\frac{\pi(1-k)}{k}\left(C^{\prime}-\frac{1+k(1-2m)}{2}\right)}e^{i\frac{\pi(1-\ell)}{\ell}\left(C-\frac{1+\ell(2n^{\prime}+1)}{2}\right)}\] \[\times e^{-\frac{\pi r}{kL_{1}L_{2}}\left[x_{1}-\frac{kL_{1}L_{2}}{2\pi r}\hat{\phi}^{[C^{\prime}]_{r}}_{2}-\frac{L_{1}}{r}\left(km+\frac{2C^{\prime}-1-k}{2}\right)\right]^{2}}\] \[\times e^{-\frac{\pi}{\ell L_{3}L_{4}}\left[x_{3}-\frac{\ell L_{3}L_{4}}{2\pi}\hat{\phi}^{[C^{\prime}]_{r}}_{4}-L_{3}\left(\ell n^{\prime}-\frac{2C-1-\ell}{2}\right)\right]^{2}}\,.\]

Finally, the fermion zero modes are given by (compare with (A.36))

\[\lambda_{C^{\prime}C}(x)=\sum_{p=0}^{\frac{r}{\gcd(k,r)}-1}\left[\eta^{[C^{\prime}+pk]_{r}}\atop 0\right]\Phi^{(p)}_{C^{\prime}C}(x,\hat{\phi})\,.\] (A.55)

## Appendix B A useful identity

Here, we evaluate the expression \(I^{ab}_{j}\) defined in (5.1), \(j=0,...,\gcd(k,r)-1\), repeated here

\[I^{ab}_{j}=\sum_{C^{\prime}=0}^{k-1}\sum_{n=0}^{\frac{k}{\gcd(k,r)}-1}\delta_{C^{\prime},[j+nr]_{k}}\sum_{p,p^{\prime}=0}^{\frac{r}{\gcd(k,r)}-1}\frac{{\cal C}_{a}^{[C^{\prime}+pk]_{r}}\ {\cal C}_{b}^{*\ [C^{\prime}+p^{\prime}k]_{r}}}{\sqrt{V}}\int_{\mathbb{T}^{4}}\sum_{B=0}^{\ell-1}\Phi^{(p)}_{C^{\prime}B}\Phi^{(p^{\prime})}_{C^{\prime}B}{}^{*}.\] (B.1)

For convenience, we also repeat the expression for \(\Phi^{(p)}\) (3.21):

\[\Phi^{(p)}_{C^{\prime}B}(x,\hat{\phi}) = \sum_{m=p+\frac{rm^{\prime}}{\gcd(k,r)},\,m^{\prime}\in\mathbb{Z}}\ \ \sum_{n^{\prime}\in\mathbb{Z}}e^{\frac{i2\pi x_{2}}{L_{2}}(m+\frac{2C^{\prime}-1-k}{2k})}e^{\frac{i2\pi x_{4}}{L_{4}}(n^{\prime}-\frac{2B-1-\ell}{2\ell})}\] (B.2) \[\times\ e^{-i\frac{\pi(1-k)}{k}\left(C^{\prime}-\frac{1+k(1-2m)}{2}\right)}e^{i\frac{\pi(1-\ell)}{\ell}\left(B-\frac{1+\ell(2n^{\prime}+1)}{2}\right)}\] \[\times\ e^{-\frac{\pi r}{kL_{1}L_{2}}\left[x_{1}-\frac{kL_{1}L_{2}}{2\pi
r}(\hat{\phi}_{2}^{[C^{\prime}]_{r}}-i\hat{\phi}_{1}^{[C^{\prime}]_{r}})-\frac{L_{1}}{r}\left(km+\frac{2C^{\prime}-1-k}{2}\right)\right]^{2}}\] \[\times\ e^{-\frac{\pi}{\ell L_{3}L_{4}}\left[x_{3}-\frac{\ell L_{3}L_{4}}{2\pi}(\hat{\phi}_{4}^{[C^{\prime}]_{r}}-i\hat{\phi}_{3}^{[C^{\prime}]_{r}})-L_{3}\left(\ell n^{\prime}-\frac{2B-1-\ell}{2}\right)\right]^{2}}\,.\]

To calculate \(I^{ab}_{j}\), we now make a few observations, which help evaluate (B.1):

1. The integral over \(x_{4}\) can be taken, yielding a factor of \(L_{4}\) and the condition \(\delta_{n^{\prime},\tilde{n}^{\prime}}\), where \(n^{\prime}\) is the summation index from \(\Phi^{(p)}\) and \(\tilde{n}^{\prime}\) the one from \(\Phi^{(p^{\prime})}\)\({}^{*}\).

2. The sum over \(B=0,...,\ell-1\) allows us to extend the range of the \(x_{3}\) integral to \((-\infty,+\infty)\), implying that the \(\hat{\phi}_{4}\)-dependence disappears.25 Footnote 25: However, some factor of \(\hat{\phi}_{3}\) remains which we will have to keep track of when evaluating the Gaussian integral over \(x_{3}\).

3. The integral over \(x_{2}\) can also be taken, yielding an overall factor of \(L_{2}\) and the constraint \(\delta_{m,\tilde{m}}\), where \(m\) is from \(\Phi^{(p)}\) and \(\tilde{m}\) is from \(\Phi^{(p^{\prime})}\)\({}^{*}\). Note that, in view of the definition of \(m\) (\(\tilde{m}\)) in (B.2), \(m=\tilde{m}\) implies, recalling the range of \(p,p^{\prime}\), that \(p=p^{\prime}\) and \(m^{\prime}=\tilde{m}^{\prime}\). Thus, at the end of this step, we are left with an expression that contains only sums over \(C^{\prime}\), \(n\), \(p\), and \(m^{\prime}\), and only an integral over the \(x_{1}\) direction of \(\mathbb{T}^{4}\).

4. We also note that, for each \(j\), only values of \(C^{\prime}\) equal to \([j+nr]_{k}\) enter the sum (B.1) defining \(I^{ab}_{j}\), with \(n\) taking values in the range given. It is now time to recall the relation (3.20) defining the independent holonomies. It shows that all these have the same \(\hat{\phi}^{C^{\prime}}_{\mu}\), and thus \(I^{ab}_{j}\) only depends on the \(\gcd(k,r)\) independent \(\varphi^{j}_{\mu}\), as we explicitly indicate in (B.3) below.

Explicitly performing the steps outlined in the above list, we obtain an intermediate result for (B.1),

\[I^{ab}_{j} = \sqrt{V}\sqrt{\frac{\ell L_{4}}{2L_{3}}}\ e^{\frac{L_{1}L_{2}k}{2\pi r}(\varphi^{j}_{1})^{2}}\ e^{\frac{L_{3}L_{4}\ell}{2\pi}(\varphi^{j}_{3})^{2}}\] (B.3) \[\times\sum_{n=0}^{\frac{k}{\gcd(k,r)}-1}\sum_{p=0}^{\frac{r}{\gcd(k,r)}-1}({\cal C}_{a}{\cal C}^{*}_{b})^{[[j+nr]_{k}+pk]_{r}}\] \[\times\sum_{m^{\prime}\in\mathbb{Z}}\ \int\limits_{0}^{1}dx\ e^{-\frac{2\pi rL_{1}}{kL_{2}}\left(x-\frac{kp+[j+nr]_{k}}{r}+\frac{1+k}{2r}-\frac{k}{\gcd(k,r)}m^{\prime}-\frac{kL_{2}}{2\pi rL_{1}}\varphi^{j}_{2}\right)^{2}}\,,\]

which only contains a single integral over \(x_{1}\), rescaled by \(L_{1}\) and denoted by \(x\). For brevity, we also denote \(({\cal C}_{a}{\cal C}^{*}_{b})^{A}\equiv{\cal C}^{A}_{a}\ {\cal C}^{*\ A}_{b}\). The next step is to rearrange the sum (B.3) for \(I^{ab}_{j}\) by grouping together terms where the "moduli" product \(({\cal C}_{a}{\cal C}^{*}_{b})^{A}\) carries the same index. Recall that a priori \(A\) can take values in the range \(A\in\{0,...,r-1\}\). However, it is important to realize that not all allowed values of \(A\) appear in the sum defining \(I^{ab}_{j}\) for a given \(j\).
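This is straightforward to check by direct enumeration. The following minimal Python sketch (the helper name `index_set` is ours, purely for illustration) lists the distinct values of \(A=[[j+nr]_{k}+pk]_{r}\) that occur for each \(j\); it reproduces the example sets quoted below:

```python
from math import gcd

def index_set(j, k, r):
    """Distinct values of A = [[j + n*r]_k + p*k]_r entering I^{ab}_j,
    with n = 0, ..., k/gcd(k,r) - 1 and p = 0, ..., r/gcd(k,r) - 1."""
    g = gcd(k, r)
    return sorted({((j + n * r) % k + p * k) % r
                   for n in range(k // g) for p in range(r // g)})

# Enumerate the sets for a few (k, r) pairs:
for k, r in [(5, 4), (6, 4), (4, 4), (15, 9)]:
    g = gcd(k, r)
    print(f"k={k}, r={r}:", [index_set(j, k, r) for j in range(g)])
```

In each case one finds \(\gcd(k,r)\) distinct sets of \(r/\gcd(k,r)\) elements each, in line with the counting that follows.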
One numerically finds that for any given \(j\), the index \(A\equiv[[j+nr]_{k}+pk]_{r}\) takes only \(\frac{r}{\gcd(k,r)}\) of its possible \(r\) values as \(n\) and \(p\) scan their possible values in the sum in (B.3). To proceed further, we denote by \(S_{j}\) each of the \(\gcd(k,r)\) sets of \(\frac{r}{\gcd(k,r)}\) values that \(A\) can take for a given \(j\):

\[S_{j} = \bigg{\{}[[j+nr]_{k}+pk]_{r},\ \text{for}\ n=0,...,\frac{k}{\gcd(k,r)}-1,\ \text{and}\ p=0,...,\frac{r}{\gcd(k,r)}-1\bigg{\}},\] \[|S_{j}| = \frac{r}{\gcd(k,r)}\,,\] (B.4)

where we stress that repeated values of \([[j+nr]_{k}+pk]_{r}\) are identified in \(S_{j}\) and that the set has \(r/\gcd(k,r)\) elements. The sets \(S_{j}\) are straightforward to generate numerically in each case (we have used numerics extensively to obtain our final answer (B.6) below). A few examples might be useful:

\[k=5,r=4\ (\gcd(k,r)=1):\ S_{0}=\{0,1,2,3\},\] \[k=6,r=4\ (\gcd(k,r)=2):\ S_{0}=\{0,2\},\ S_{1}=\{1,3\},\] \[k=4,r=4\ (\gcd(k,r)=4):\ S_{0}=\{0\},\ S_{1}=\{1\},\ S_{2}=\{2\},\ S_{3}=\{3\}, \tag{B.5}\] \[k=15,r=9\ (\gcd(k,r)=3):\ S_{0}=\{0,3,6\},\ S_{1}=\{1,4,7\},\ S_{2}=\{2,5,8\},\]

while, e.g., for \(k=9\), \(r=9\) (\(\gcd(k,r)=9\)), all 9 sets \(S_{j}\) have a single element, similar to the \(k=r=4\) case above. This illustrates a general feature of the \(k=r\) case, which will be important in our studies of the moduli space. The next step is the most important to obtain our final answer. For each different value of \(A\in S_{j}\) that appears in \(I^{ab}_{j}\), one also finds that \((\mathcal{C}_{a}\mathcal{C}_{b}^{*})^{A}\) is multiplied by an integral \(\int\limits_{0}^{1}dx\). The integral is, however, summed over \(\frac{k}{\gcd(k,r)}\) times, each time with a different constant term appearing in the exponent in the integrand, due to the \((kp+[j+nr]_{k})/r\) term. Remarkably, in each case one finds that, together with the sum over \(m^{\prime}\), these constant terms take precisely the values needed to extend the range of the integration over \(x\) to the entire real line.26 Performing the Gaussian integral over \(x\), the final answer for \(I^{ab}_{j}\) is then remarkably simple Footnote 26: Admittedly, we have only numerical checks of this claim rather than an analytic proof. However, the checks are fairly easy to automate and the result is the same in each of the many cases we have studied.

\[I^{ab}_{j} = \frac{\sqrt{V}}{2}\sqrt{\frac{\ell kL_{2}L_{4}}{rL_{1}L_{3}}}\ e^{\frac{L_{1}L_{2}k}{2\pi r}(\varphi_{1}^{j})^{2}}\ e^{\frac{L_{3}L_{4}\ell}{2\pi}(\varphi_{3}^{j})^{2}}\sum_{A_{j}\in S_{j}}\ (\mathcal{C}_{a}\mathcal{C}_{b}^{*})^{A_{j}}. \tag{B.6}\]

The complexity is, of course, hidden away in the definition of the \(S_{j}\) sets from (B.4).

## Appendix C Field strength and action of the multifractional instanton

Here, we compute the field strength \(F_{\mu\nu}\), which we shall use to compute the action density and to verify that the action of the self-dual solution satisfies the relation \(S=\frac{8\pi^{2}|Q|}{g^{2}}\). The non-zero components of \(\mathcal{F}^{(0)}_{\mu\nu}\) are

\[\mathcal{F}^{(0)}_{13\,C^{\prime},C}=-i\hat{D}_{1}\mathcal{W}_{4\,C^{\prime},C}+i\hat{D}_{3}\mathcal{W}_{2\,C^{\prime},C}\,,\quad\mathcal{F}^{(0)}_{14\,C^{\prime},C}=\hat{D}_{1}\mathcal{W}_{4\,C^{\prime},C}+i\hat{D}_{4}\mathcal{W}_{2\,C^{\prime},C}\,, \tag{C.1}\]

where \(\mathcal{W}^{(0)}_{2\,C^{\prime},C}\) and \(\mathcal{W}^{(0)}_{4\,C^{\prime},C}\) are from (4.21).
The covariant derivatives \(\hat{D}_{\mu}\) are given by

\[\hat{D}_{\mu}=\partial_{\mu}+i2\pi N\hat{A}_{\mu}+i\hat{\phi}_{\mu}^{[C^{\prime}]_{r}}\,, \tag{C.2}\]

or in terms of the components, with \(\hat{\phi}_{\mu}^{[C^{\prime}]_{r}}\) from (3.20),

\[\hat{D}_{1} = \partial_{1}+i\hat{\phi}_{1}^{[C^{\prime}]_{r}}\,,\quad\hat{D}_{2}=\partial_{2}-i\frac{2\pi rx_{1}}{kL_{1}L_{2}}+i\hat{\phi}_{2}^{[C^{\prime}]_{r}}\,,\] \[\hat{D}_{3} = \partial_{3}+i\hat{\phi}_{3}^{[C^{\prime}]_{r}}\,,\quad\hat{D}_{4}=\partial_{4}-i\frac{2\pi x_{3}}{\ell L_{3}L_{4}}+i\hat{\phi}_{4}^{[C^{\prime}]_{r}}\,.\] (C.3)

One can check that the following identities hold

\[i\hat{D}_{1}\Phi_{C^{\prime},C}^{(p)}=\hat{D}_{2}\Phi_{C^{\prime},C}^{(p)}\,,\quad i\hat{D}_{3}\Phi_{C^{\prime},C}^{(p)}=\hat{D}_{4}\Phi_{C^{\prime},C}^{(p)}\,.\] (C.4)

Then, one finds

\[-i\mathcal{F}_{14\,C^{\prime},C}^{(0)} = \mathcal{F}_{13\,C^{\prime},C}^{(0)}=iV^{-1/4}\sum_{p=0}^{\frac{r}{\gcd(k,r)}-1}\Big{\{}-\mathcal{C}_{4}^{[C^{\prime}+pk]_{r}}\mathcal{G}_{1,C^{\prime},C}^{(p)}(x,\hat{\phi})+\mathcal{C}_{2}^{[C^{\prime}+pk]_{r}}\mathcal{G}_{3,C^{\prime},C}^{(p)}(x,\hat{\phi})\Big{\}}\,,\] \[\mathcal{F}_{12\,C^{\prime},C}^{(0)} = \mathcal{F}_{34\,C^{\prime},C}^{(0)}=0\,,\] (C.5)

where the functions \(\mathcal{G}_{1,C^{\prime},C}^{(p)}(x,\hat{\phi})\) and \(\mathcal{G}_{3,C^{\prime},C}^{(p)}(x,\hat{\phi})\) are defined as

\[\mathcal{G}_{1,C^{\prime},C}^{(p)}(x,\hat{\phi}) = \hat{D}_{1}\Phi_{C^{\prime},C}^{(p)}(x,\hat{\phi})\] (C.6) \[= -\frac{2\pi r}{kL_{1}L_{2}}\sum_{m=p+\frac{rm^{\prime}}{\gcd(k,r)},\,m^{\prime}\in\mathbb{Z}}\sum_{n^{\prime}\in\mathbb{Z}}e^{\frac{i2\pi x_{2}}{L_{2}}(m+\frac{2C^{\prime}-1-k}{2k})}e^{\frac{i2\pi x_{4}}{L_{4}}(n^{\prime}-\frac{2C-1-\ell}{2\ell})}\] \[\times e^{-i\frac{\pi(1-k)}{k}\big{(}C^{\prime}-\frac{1+k(1-2m)}{2}\big{)}}e^{i\frac{\pi(1-\ell)}{\ell}\left(C-\frac{1+\ell(2n^{\prime}+1)}{2}\right)}\] \[\times\left(x_{1}-\frac{kL_{1}L_{2}\hat{\phi}_{2}^{[C^{\prime}]_{r}}}{2\pi r}-\frac{L_{1}}{r}\left(km+\frac{2C^{\prime}-1-k}{2}\right)\right)\] \[\times e^{-\frac{\pi r}{kL_{1}L_{2}}\left[x_{1}-\frac{kL_{1}L_{2}}{2\pi r}(\hat{\phi}_{2}^{[C^{\prime}]_{r}}-i\hat{\phi}_{1}^{[C^{\prime}]_{r}})-\frac{L_{1}}{r}\left(km+\frac{2C^{\prime}-1-k}{2}\right)\right]^{2}}\] \[\times e^{-\frac{\pi}{\ell L_{3}L_{4}}\left[x_{3}-\frac{\ell L_{3}L_{4}}{2\pi}(\hat{\phi}_{4}^{[C^{\prime}]_{r}}-i\hat{\phi}_{3}^{[C^{\prime}]_{r}})-L_{3}\left(\ell n^{\prime}-\frac{2C-1-\ell}{2}\right)\right]^{2}}\,,\] (C.7)

and

\[\mathcal{G}^{(p)}_{3,C^{\prime},C}(x,\hat{\phi}) = \hat{D}_{3}\Phi^{(p)}_{C^{\prime},C}(x,\hat{\phi}) \tag{C.8}\] \[= -\frac{2\pi}{\ell L_{3}L_{4}}\sum_{m=p+\frac{rm^{\prime}}{\gcd(k,r)},\,m^{\prime}\in\mathbb{Z}}\sum_{n^{\prime}\in\mathbb{Z}}e^{\frac{i2\pi x_{2}}{L_{2}}(m+\frac{2C^{\prime}-1-k}{2k})}e^{\frac{i2\pi x_{4}}{L_{4}}(n^{\prime}-\frac{2C-1-\ell}{2\ell})}\] \[\times e^{-i\frac{\pi(1-k)}{k}\left(C^{\prime}-\frac{1+k(1-2m)}{2}\right)}e^{i\frac{\pi(1-\ell)}{\ell}\left(C-\frac{1+\ell(2n^{\prime}+1)}{2}\right)}\] \[\times\left(x_{3}-\frac{\ell L_{3}L_{4}\hat{\phi}_{4}^{[C^{\prime}]_{r}}}{2\pi}-L_{3}\left(\ell n^{\prime}-\frac{2C-1-\ell}{2}\right)\right)\] \[\times e^{-\frac{\pi r}{kL_{1}L_{2}}\left[x_{1}-\frac{kL_{1}L_{2}}{2\pi r}(\hat{\phi}_{2}^{[C^{\prime}]_{r}}-i\hat{\phi}_{1}^{[C^{\prime}]_{r}})-\frac{L_{1}}{r}\left(km+\frac{2C^{\prime}-1-k}{2}\right)\right]^{2}}\] \[\times e^{-\frac{\pi}{\ell L_{3}L_{4}}\left[x_{3}-\frac{\ell L_{3}L_{4}}{2\pi}(\hat{\phi}_{4}^{[C^{\prime}]_{r}}-i\hat{\phi}_{3}^{[C^{\prime}]_{r}})-L_{3}\left(\ell n^{\prime}-\frac{2C-1-\ell}{2}\right)\right]^{2}}\,.\]

Owing to the self-duality of the solution, we also have:

\[\mathcal{F}^{(0)}_{23\,C^{\prime},C}=\mathcal{F}^{(0)}_{14\,C^{\prime},C}\,,
\quad\mathcal{F}^{(0)}_{24\,C^{\prime},C}=-\mathcal{F}^{(0)}_{13\,C^{\prime},C}\,. \tag{C.10}\]

In the following, we calculate the action density \(\operatorname{tr}\left[F_{\mu\nu}F_{\mu\nu}\right]\) of the twisted solution. Using (109), the square of the field strength is

\[F_{\mu\nu}F_{\mu\nu} = \omega^{2}\left(\hat{F}^{\omega}_{\mu\nu}+F^{s}_{\mu\nu}\right)^{2}+4\pi\left(\hat{F}^{\omega}_{\mu\nu}+F^{s}_{\mu\nu}\right)\left[\begin{array}{cc}\ell F^{k}_{\mu\nu}&\mathcal{F}^{k\times\ell}_{\mu\nu}\\ \mathcal{F}^{\dagger\,\ell\times k}_{\mu\nu}&-kF^{\ell}_{\mu\nu}\end{array}\right] \tag{C.11}\] \[+\left[\begin{array}{cc}F^{k}_{\mu\nu}F^{k}_{\mu\nu}+\mathcal{F}^{k\times\ell}_{\mu\nu}\mathcal{F}^{\dagger\,\ell\times k}_{\mu\nu}&F^{k}_{\mu\nu}\mathcal{F}^{k\times\ell}_{\mu\nu}+\mathcal{F}^{k\times\ell}_{\mu\nu}F^{\ell}_{\mu\nu}\\ \mathcal{F}^{\dagger\,\ell\times k}_{\mu\nu}F^{k}_{\mu\nu}+F^{\ell}_{\mu\nu}\mathcal{F}^{\dagger\,\ell\times k}_{\mu\nu}&F^{\ell}_{\mu\nu}F^{\ell}_{\mu\nu}+\mathcal{F}^{\dagger\,\ell\times k}_{\mu\nu}\mathcal{F}^{k\times\ell}_{\mu\nu}\end{array}\right]\,.\]

Then, the action density is given by the trace

\[\operatorname{tr}\left[F_{\mu\nu}F_{\mu\nu}\right] = \operatorname{tr}\left[\omega^{2}\right]\left(\hat{F}^{\omega}_{\mu\nu}+F^{s}_{\mu\nu}\right)^{2}\] \[+4\pi\ell\left(\hat{F}^{\omega}_{\mu\nu}+F^{s}_{\mu\nu}\right)\operatorname{tr}_{k}\left[F^{k}_{\mu\nu}\right]-4\pi k\left(\hat{F}^{\omega}_{\mu\nu}+F^{s}_{\mu\nu}\right)\operatorname{tr}_{\ell}\left[F^{\ell}_{\mu\nu}\right]\] \[+\operatorname{tr}_{k}\left[F^{k}_{\mu\nu}F^{k}_{\mu\nu}+\mathcal{F}^{k\times\ell}_{\mu\nu}\mathcal{F}^{\dagger\,\ell\times k}_{\mu\nu}\right]+\operatorname{tr}_{\ell}\left[F^{\ell}_{\mu\nu}F^{\ell}_{\mu\nu}+\mathcal{F}^{\dagger\,\ell\times k}_{\mu\nu}\mathcal{F}^{k\times\ell}_{\mu\nu}\right]\,. \tag{C.12}\]

To leading order in \(\Delta\) we have:

\[F^{s}_{\mu\nu} = \Delta\left(\partial_{\mu}S^{\omega(0)}_{\nu}-\partial_{\nu}S^{\omega(0)}_{\mu}\right)\,,\] \[F^{k}_{\mu\nu} = \Delta\left(\partial_{\mu}S^{k(0)}_{\nu}-\partial_{\nu}S^{k(0)}_{\mu}+i\mathcal{W}^{(0)k\times\ell}_{\mu}\mathcal{W}^{\dagger(0)\ell\times k}_{\nu}-i\mathcal{W}^{(0)k\times\ell}_{\nu}\mathcal{W}^{\dagger(0)\ell\times k}_{\mu}\right)\,,\] \[F^{\ell}_{\mu\nu} = \Delta\left(\partial_{\mu}S^{\ell(0)}_{\nu}-\partial_{\nu}S^{\ell(0)}_{\mu}+i\mathcal{W}^{\dagger(0)\ell\times k}_{\mu}\mathcal{W}^{(0)k\times\ell}_{\nu}-i\mathcal{W}^{\dagger(0)\ell\times k}_{\nu}\mathcal{W}^{(0)k\times\ell}_{\mu}\right)\,,\] \[\mathcal{F}^{k\times\ell}_{\mu\nu} = \sqrt{\Delta}\mathcal{F}^{(0)k\times\ell}_{\mu\nu}=\sqrt{\Delta}\left(\hat{D}_{\mu}\mathcal{W}^{(0)k\times\ell}_{\nu}-\hat{D}_{\nu}\mathcal{W}^{(0)k\times\ell}_{\mu}\right)\,.
\tag{C.13}\] Substituting (C.13) into (C.12), we find to \({\cal O}(\Delta)\):

\[{\rm tr}\left[F_{\mu\nu}F_{\mu\nu}\right] = {\rm tr}\left[\omega^{2}\right]\left(\hat{F}^{\omega}_{\mu\nu}\hat{F}^{\omega}_{\mu\nu}+2\Delta(\partial_{\mu}{\cal S}^{\omega(0)}_{\nu}-\partial_{\nu}{\cal S}^{\omega(0)}_{\mu})\hat{F}^{\omega}_{\mu\nu}\right)\] (C.14) \[+4\pi\ell\Delta\hat{F}^{\omega}_{\mu\nu}{\rm tr}_{k}\left[\partial_{\mu}{\cal S}^{k(0)}_{\nu}-\partial_{\nu}{\cal S}^{k(0)}_{\mu}\right]-4\pi k\Delta\hat{F}^{\omega}_{\mu\nu}{\rm tr}_{\ell}\left[\partial_{\mu}{\cal S}^{\ell(0)}_{\nu}-\partial_{\nu}{\cal S}^{\ell(0)}_{\mu}\right]\] \[+i4\pi\ell\Delta\hat{F}^{\omega}_{\mu\nu}{\rm tr}_{k}\left[{\cal W}^{(0)k\times\ell}_{\mu}{\cal W}^{\dagger(0)\ell\times k}_{\nu}-{\cal W}^{(0)k\times\ell}_{\nu}{\cal W}^{\dagger(0)\ell\times k}_{\mu}\right]\] \[-i4\pi k\Delta\hat{F}^{\omega}_{\mu\nu}{\rm tr}_{\ell}\left[{\cal W}^{\dagger(0)\ell\times k}_{\mu}{\cal W}^{(0)k\times\ell}_{\nu}-{\cal W}^{\dagger(0)\ell\times k}_{\nu}{\cal W}^{(0)k\times\ell}_{\mu}\right]\] \[+\Delta{\rm tr}_{k}\left[{\cal F}^{(0)k\times\ell}_{\mu\nu}{\cal F}^{\dagger(0)\ell\times k}_{\mu\nu}\right]+\Delta{\rm tr}_{\ell}\left[{\cal F}^{\dagger(0)\ell\times k}_{\mu\nu}{\cal F}^{(0)k\times\ell}_{\mu\nu}\right]\;.\]

Then, using the trace properties \({\rm tr}_{k}[{\cal S}^{(0)k}_{\mu}]={\rm tr}_{\ell}[{\cal S}^{(0)\ell}_{\mu}]=0\), along with

\[{\rm tr}_{k}\left[{\cal F}^{(0)k\times\ell}_{\mu\nu}{\cal F}^{\dagger(0)\ell\times k}_{\mu\nu}\right] = {\rm tr}_{\ell}\left[{\cal F}^{\dagger(0)\ell\times k}_{\mu\nu}{\cal F}^{(0)k\times\ell}_{\mu\nu}\right]\,,\] \[{\rm tr}_{k}\left[{\cal W}^{(0)k\times\ell}_{\mu}{\cal W}^{\dagger(0)\ell\times k}_{\nu}-{\cal W}^{(0)k\times\ell}_{\nu}{\cal W}^{\dagger(0)\ell\times k}_{\mu}\right] = -{\rm tr}_{\ell}\left[{\cal W}^{\dagger(0)\ell\times k}_{\mu}{\cal W}^{(0)k\times\ell}_{\nu}-{\cal W}^{\dagger(0)\ell\times k}_{\nu}{\cal W}^{(0)k\times\ell}_{\mu}\right]\,,\]

we find to \({\cal O}(\Delta)\)

\[{\rm tr}\left[F_{\mu\nu}F_{\mu\nu}\right] = {\rm tr}\left[\omega^{2}\right]\left(\hat{F}^{\omega}_{\mu\nu}\hat{F}^{\omega}_{\mu\nu}+2\Delta(\partial_{\mu}{\cal S}^{\omega(0)}_{\nu}-\partial_{\nu}{\cal S}^{\omega(0)}_{\mu})\hat{F}^{\omega}_{\mu\nu}\right)\] (C.16) \[+i4\pi N\Delta\hat{F}^{\omega}_{\mu\nu}{\rm tr}_{k}\left[{\cal W}^{(0)k\times\ell}_{\mu}{\cal W}^{\dagger(0)\ell\times k}_{\nu}-{\cal W}^{(0)k\times\ell}_{\nu}{\cal W}^{\dagger(0)\ell\times k}_{\mu}\right]\] \[+2\Delta{\rm tr}_{k}\left[{\cal F}^{(0)k\times\ell}_{\mu\nu}{\cal F}^{\dagger(0)\ell\times k}_{\mu\nu}\right]\,.\]

In the following, we perform the calculation of the action setting \({\cal C}^{[C^{\prime}]_{r}}_{4}=0\). Thus, recalling (5.4), we are particularly interested in the cases \(r=1\) and \(r=k,k>1\). However, the conclusion should hold in the general case.
Keeping only the non-zero entries and using \(-i{\cal F}^{(0)k\times\ell}_{14}={\cal F}^{(0)k\times\ell}_{13}\) along with the self-duality property, we arrive at

\[{\rm tr}_{k}\left[{\cal F}^{(0)k\times\ell}_{\mu\nu}{\cal F}^{\dagger(0)\ell\times k}_{\mu\nu}\right] = 2{\rm tr}_{k}\left[{\cal F}^{(0)k\times\ell}_{13}{\cal F}^{\dagger(0)\ell\times k}_{13}\right]+2{\rm tr}_{k}\left[{\cal F}^{(0)k\times\ell}_{14}{\cal F}^{\dagger(0)\ell\times k}_{14}\right]\] (C.17) \[+2{\rm tr}_{k}\left[{\cal F}^{(0)k\times\ell}_{23}{\cal F}^{\dagger(0)\ell\times k}_{23}\right]+2{\rm tr}_{k}\left[{\cal F}^{(0)k\times\ell}_{24}{\cal F}^{\dagger(0)\ell\times k}_{24}\right]\] \[= 8{\rm tr}_{k}\left[{\cal F}^{(0)k\times\ell}_{13}{\cal F}^{\dagger(0)\ell\times k}_{13}\right]\,.\]

Likewise:

\[\hat{F}^{\omega}_{\mu\nu}{\rm tr}_{k}\left[{\cal W}^{(0)k\times\ell}_{\mu}{\cal W}^{\dagger(0)\ell\times k}_{\nu}-{\cal W}^{(0)k\times\ell}_{\nu}{\cal W}^{\dagger(0)\ell\times k}_{\mu}\right]=-4i\hat{F}^{\omega}_{12}{\rm tr}_{k}\left[{\cal W}^{(0)k\times\ell}_{2}{\cal W}^{\dagger(0)\ell\times k}_{2}\right]\,.\] (C.18)

Thus, the action density is given by the expression

\[{\rm tr}\left[F_{\mu\nu}F_{\mu\nu}\right] = {\rm tr}\left[\omega^{2}\right]\left(\hat{F}^{\omega}_{\mu\nu}\hat{F}^{\omega}_{\mu\nu}+2\Delta(\partial_{\mu}{\cal S}^{\omega}_{\nu}-\partial_{\nu}{\cal S}^{\omega}_{\mu})\hat{F}^{\omega}_{\mu\nu}\right)\] (C.19) \[+16\pi N\Delta\hat{F}^{\omega}_{12}{\rm tr}_{k}\left[{\cal W}^{(0)k\times\ell}_{2}{\cal W}^{\dagger(0)\ell\times k}_{2}\right]\] \[+16\Delta{\rm tr}_{k}\left[{\cal F}^{(0)k\times\ell}_{13}{\cal F}^{\dagger(0)\ell\times k}_{13}\right]\,.\]

The action is

\[S=\frac{1}{2g^{2}}\int_{\mathbb{T}^{4}}\mathrm{tr}\left[F_{\mu\nu}F_{\mu\nu}\right]\,, \tag{102}\]

and upon integrating, the term \(\partial_{\mu}\mathcal{S}_{\nu}^{(0)\omega}\) drops out because \(\mathcal{S}_{\nu}^{(0)\omega}\) satisfies periodic boundary conditions. Thus, we finally have to \(\mathcal{O}(\Delta)\):

\[S = S_{0}+\frac{\Delta}{2g^{2}}\int_{\mathbb{T}^{4}}16\pi N\hat{F}_{12}^{\omega}\mathrm{tr}_{k}\left[\mathcal{W}_{2}^{(0)k\times\ell}\mathcal{W}_{2}^{\dagger(0)\ell\times k}\right]+\frac{\Delta}{2g^{2}}\int_{\mathbb{T}^{4}}16\mathrm{tr}_{k}\left[\mathcal{F}_{13}^{(0)k\times\ell}\mathcal{F}_{13}^{\dagger(0)\ell\times k}\right]\,,\]

where

\[S_{0} = \frac{1}{2g^{2}}\int_{\mathbb{T}^{4}}\mathrm{tr}\left[\omega^{2}\right]\left(\hat{F}_{\mu\nu}^{\omega}\hat{F}_{\mu\nu}^{\omega}\right)=\frac{1}{2g^{2}}\int_{\mathbb{T}^{4}}\mathrm{tr}\left[\omega^{2}\right]\left\{2\left(\hat{F}_{12}^{\omega}\hat{F}_{12}^{\omega}+\hat{F}_{34}^{\omega}\hat{F}_{34}^{\omega}\right)\right\} \tag{103}\] \[= (4\pi^{2}Nk\ell)\frac{1}{g^{2}N^{2}}\left(\frac{r^{2}}{k^{2}}\frac{L_{3}L_{4}}{L_{1}L_{2}}+\frac{1}{\ell^{2}}\frac{L_{1}L_{2}}{L_{3}L_{4}}\right)\,.\]

Using the definition of \(\Delta\) (101) we readily find

\[S_{0}=\frac{8\pi^{2}r}{Ng^{2}}+\mathcal{O}(\Delta^{2})\,.
\tag{104}\] Then, using \(\hat{F}_{12}^{\omega}=-\frac{r}{kNL_{1}L_{2}}\), we have

\[S=S_{0}+\frac{\Delta}{g^{2}}\left(-\frac{8\pi r}{kL_{1}L_{2}}\int_{\mathbb{T}^{4}}\mathrm{tr}_{k}\left[\mathcal{W}_{2}^{(0)k\times\ell}\mathcal{W}_{2}^{\dagger(0)\ell\times k}\right]+8\int_{\mathbb{T}^{4}}\mathrm{tr}_{k}\left[\mathcal{F}_{13}^{(0)k\times\ell}\mathcal{F}_{13}^{\dagger(0)\ell\times k}\right]\right)\,.\]

Finally, the remaining integrals are given by (we set all holonomies to \(0\), as the final answer will not depend on them):

\[\int_{\mathbb{T}^{4}}\mathrm{tr}_{k}\left[\mathcal{W}_{2}^{(0)k\times\ell}\mathcal{W}_{2}^{\dagger(0)\ell\times k}\right] = \sqrt{L_{1}L_{2}L_{3}L_{4}}\sum_{C=1}^{\ell}\sum_{C^{\prime}=1}^{k}|\mathcal{C}_{2}^{[C^{\prime}]_{r}}|^{2} \tag{105}\] \[\sum_{m=p+\frac{rm^{\prime}}{\gcd(k,r)},\,m^{\prime}\in\mathbb{Z}}\int_{0}^{1}d\tilde{x}_{1}e^{-\frac{2\pi rL_{1}}{kL_{2}}\left(\tilde{x}_{1}-\frac{2mk+2(j+nr)-1-k}{2r}\right)^{2}}\] \[\times\sum_{n^{\prime}\in\mathbb{Z}}\int_{0}^{1}d\tilde{x}_{3}e^{-\frac{2\pi L_{3}}{\ell L_{4}}\left(\tilde{x}_{3}-\frac{(2n^{\prime}+1)\ell-(2C-1)}{2}\right)^{2}}\,,\]

and

\[\int_{\mathbb{T}^{4}}\text{tr}_{k}\left[\mathcal{F}_{13}^{(0)k\times\ell}\mathcal{F}_{13}^{\dagger(0)\ell\times k}\right]\] \[= \frac{4\pi^{2}}{\ell^{2}}\sqrt{\frac{L_{1}L_{2}L_{3}}{L_{4}^{3}}}\sum_{C=1}^{\ell}\sum_{C^{\prime}=1}^{k}|\mathcal{C}_{2}^{[C^{\prime}]_{r}}|^{2}\] \[\sum_{m=p+\frac{rm^{\prime}}{\gcd(k,r)},\,m^{\prime}\in\mathbb{Z}}\int_{0}^{1}d\tilde{x}_{1}e^{-\frac{2\pi rL_{1}}{kL_{2}}\left(\tilde{x}_{1}-\frac{2mk+2(j+nr)-1-k}{2r}\right)^{2}}\] \[\times\sum_{n^{\prime}\in\mathbb{Z}}\int_{0}^{1}d\tilde{x}_{3}\left(\tilde{x}_{3}-\frac{\left((2n^{\prime}+1)\ell-(2C-1)\right)}{2}\right)^{2}e^{-\frac{2\pi L_{3}}{\ell L_{4}}\left(\tilde{x}_{3}-\frac{(2n^{\prime}+1)\ell-(2C-1)}{2}\right)^{2}}\,.\]

Now, collecting terms of \(\mathcal{O}(\Delta)\) and using \(r\ell L_{3}L_{4}=kL_{1}L_{2}\), thus ignoring corrections \(\mathcal{O}(\Delta^{2})\), we find:

\[S =S_{0}+8\pi\sqrt{\frac{r}{\ell k}}\frac{\Delta}{g^{2}}\sum_{C=1}^{\ell}\sum_{C^{\prime}=1}^{k}|\mathcal{C}_{2}^{[C^{\prime}]_{r}}|^{2}\] \[\sum_{m=p+\frac{rm^{\prime}}{\gcd(k,r)},\,m^{\prime}\in\mathbb{Z}}\int_{0}^{1}d\tilde{x}_{1}e^{-\frac{2\pi rL_{1}}{kL_{2}}\left(\tilde{x}_{1}-\frac{2mk+2(j+nr)-1-k}{2r}\right)^{2}}\] \[\times\sum_{n^{\prime}}\int_{0}^{1}d\tilde{x}_{3}\left\{-1+\frac{4\pi}{\ell}\frac{L_{3}}{L_{4}}\left(\tilde{x}_{3}-\frac{\left((2n^{\prime}+1)\ell-(2C-1)\right)}{2}\right)^{2}\right\}e^{-\frac{2\pi L_{3}}{\ell L_{4}}\left(\tilde{x}_{3}-\frac{(2n^{\prime}+1)\ell-(2C-1)}{2}\right)^{2}}\,.\]

One can check (using Mathematica) that27: Footnote 27: One can show that (C.27) is true by converting the combined infinite sum and the integral over the unit interval to an infinite integral.

\[\sum_{C=1}^{\ell}\sum_{n}\int_{0}^{1}d\tilde{x}_{3}\left\{-1+\frac{4\pi}{\ell}\frac{L_{3}}{L_{4}}\left(\tilde{x}_{3}-\frac{\left((2n+1)\ell-(2C-1)\right)}{2}\right)^{2}\right\}e^{-\frac{2\pi L_{3}}{\ell L_{4}}\left(\tilde{x}_{3}-\frac{(2n+1)\ell-(2C-1)}{2}\right)^{2}}=0\,,\] (C.27)

and thus, we conclude that, as expected,

\[S =S_{0}+\mathcal{O}(\Delta^{2})=\frac{r}{N}\frac{8\pi^{2}}{g^{2}}+\mathcal{O}(\Delta^{2})\,,\] (C.28)

i.e., the action of the multifractional instanton is, to the order in \(\Delta\) at which we are working, equal to \(\frac{r}{N}\) times the BPST instanton action.
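The identity (C.27) is also easy to confirm numerically without Mathematica. A short Python sketch (the truncation `nmax` and the sample values of \(\ell\), \(L_{3}\), \(L_{4}\) are our own choices; scipy is assumed to be available):

```python
import numpy as np
from scipy.integrate import quad

def check_identity(ell, L3, L4, nmax=40):
    """Truncated numerical check of (C.27): the sum over C and n of the
    unit-interval integrals of the Gaussian-weighted integrand vanishes."""
    a = 2.0 * np.pi * L3 / (ell * L4)      # Gaussian width parameter
    b = 4.0 * np.pi * L3 / (ell * L4)      # coefficient of the quadratic term (= 2a)
    total = 0.0
    for C in range(1, ell + 1):
        for n in range(-nmax, nmax + 1):
            c = ((2 * n + 1) * ell - (2 * C - 1)) / 2.0   # Gaussian center
            f = lambda x, c=c: (-1.0 + b * (x - c) ** 2) * np.exp(-a * (x - c) ** 2)
            total += quad(f, 0.0, 1.0)[0]
    return total

print(check_identity(ell=3, L3=1.0, L4=2.0))   # ~0 within quadrature error
```

The vanishing reflects the mechanism in footnote 27: the shifted unit intervals tile the real line, and \(\int_{\mathbb{R}}\left(-1+2a\,y^{2}\right)e^{-a\,y^{2}}dy=0\).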
## Appendix D Blow up of the gauge invariant local densities along the noncompact moduli of the \(k\neq r\) solution

To determine the gauge invariant density (100), we need to solve for \(\mathcal{S}_{\nu}^{(0)\omega}\). To this end, we use (102) (or the equivalent forms (104, 105)). Acting on these equations with \(\partial=\sigma^{\nu}\partial_{\nu}\) and using the identity \(\sigma^{\nu}\bar{\sigma}^{\mu}+\sigma^{\mu}\bar{\sigma}^{\nu}=2\delta_{\mu\nu}\), we find the expression

\[\Box\mathcal{S}^{(0)\omega}=-\frac{i}{\pi\ell k}\sigma^{\nu}\partial_{\nu}\mathcal{Y}\,, \tag{106}\]

where (once more, for brevity, we omit the \(k\times\ell\) and \(\ell\times k\) superscripts)

\[\mathcal{Y}=\left[\begin{array}{cc}\mathrm{tr}_{k}\left[\mathcal{W}_{2}^{(0)}\mathcal{W}_{2}^{\dagger(0)}-\mathcal{W}_{4}^{(0)}\mathcal{W}_{4}^{\dagger(0)}\right]&-2\mathrm{tr}_{k}\left[\mathcal{W}_{2}^{(0)}\mathcal{W}_{4}^{\dagger(0)}\right]\\ -2\mathrm{tr}_{k}\left[\mathcal{W}_{4}^{(0)}\mathcal{W}_{2}^{\dagger(0)}\right]&-\mathrm{tr}_{k}\left[\mathcal{W}_{2}^{(0)}\mathcal{W}_{2}^{\dagger(0)}-\mathcal{W}_{4}^{(0)}\mathcal{W}_{4}^{\dagger(0)}\right]\end{array}\right]\,. \tag{107}\]

Equating the components of (106), we arrive at the following set of equations:

\[i\pi\ell k\Box\left(\mathcal{S}_{4}^{(0)\omega}+i\mathcal{S}_{3}^{(0)\omega}\right)=\] \[\left(\partial_{4}+i\partial_{3}\right)\mathrm{tr}_{k}\left[\mathcal{W}_{2}^{(0)}\mathcal{W}_{2}^{\dagger(0)}-\mathcal{W}_{4}^{(0)}\mathcal{W}_{4}^{\dagger(0)}\right]-2\left(i\partial_{1}+\partial_{2}\right)\mathrm{tr}_{k}\left[\mathcal{W}_{4}^{(0)}\mathcal{W}_{2}^{\dagger(0)}\right]\,,\] \[i\pi\ell k\Box\left(\mathcal{S}_{4}^{(0)\omega}-i\mathcal{S}_{3}^{(0)\omega}\right)=\] \[-\left(\partial_{4}-i\partial_{3}\right)\mathrm{tr}_{k}\left[\mathcal{W}_{2}^{(0)}\mathcal{W}_{2}^{\dagger(0)}-\mathcal{W}_{4}^{(0)}\mathcal{W}_{4}^{\dagger(0)}\right]-2\left(i\partial_{1}-\partial_{2}\right)\mathrm{tr}_{k}\left[\mathcal{W}_{2}^{(0)}\mathcal{W}_{4}^{\dagger(0)}\right]\,,\] \[i\pi\ell k\Box\left(i\mathcal{S}_{1}^{(0)\omega}+\mathcal{S}_{2}^{(0)\omega}\right)=\] \[-2\left(\partial_{4}+i\partial_{3}\right)\mathrm{tr}_{k}\left[\mathcal{W}_{2}^{(0)}\mathcal{W}_{4}^{\dagger(0)}\right]-\left(i\partial_{1}+\partial_{2}\right)\mathrm{tr}_{k}\left[\mathcal{W}_{2}^{(0)}\mathcal{W}_{2}^{\dagger(0)}-\mathcal{W}_{4}^{(0)}\mathcal{W}_{4}^{\dagger(0)}\right]\,,\] \[i\pi\ell k\Box\left(i\mathcal{S}_{1}^{(0)\omega}-\mathcal{S}_{2}^{(0)\omega}\right)=\] \[-2\left(\partial_{4}-i\partial_{3}\right)\mathrm{tr}_{k}\left[\mathcal{W}_{4}^{(0)}\mathcal{W}_{2}^{\dagger(0)}\right]+\left(i\partial_{1}-\partial_{2}\right)\mathrm{tr}_{k}\left[\mathcal{W}_{2}^{(0)}\mathcal{W}_{2}^{\dagger(0)}-\mathcal{W}_{4}^{(0)}\mathcal{W}_{4}^{\dagger(0)}\right]\,.
\tag{108}\] Thus, we find

\[\pi\ell k\Box\mathcal{S}_{4}^{(0)\omega}=\] \[\partial_{3}\mathrm{tr}_{k}\left[\mathcal{W}_{2}^{(0)}\mathcal{W}_{2}^{\dagger(0)}-\mathcal{W}_{4}^{(0)}\mathcal{W}_{4}^{\dagger(0)}\right]-\left(\partial_{1}-i\partial_{2}\right)\mathrm{tr}_{k}\left[\mathcal{W}_{4}^{(0)}\mathcal{W}_{2}^{\dagger(0)}\right]-\left(\partial_{1}+i\partial_{2}\right)\mathrm{tr}_{k}\left[\mathcal{W}_{2}^{(0)}\mathcal{W}_{4}^{\dagger(0)}\right]\,,\] \[-\pi\ell k\Box\mathcal{S}_{3}^{(0)\omega}=\] \[\partial_{4}\mathrm{tr}_{k}\left[\mathcal{W}_{2}^{(0)}\mathcal{W}_{2}^{\dagger(0)}-\mathcal{W}_{4}^{(0)}\mathcal{W}_{4}^{\dagger(0)}\right]-\left(i\partial_{1}+\partial_{2}\right)\mathrm{tr}_{k}\left[\mathcal{W}_{4}^{(0)}\mathcal{W}_{2}^{\dagger(0)}\right]+\left(i\partial_{1}-\partial_{2}\right)\mathrm{tr}_{k}\left[\mathcal{W}_{2}^{(0)}\mathcal{W}_{4}^{\dagger(0)}\right]\,, \tag{109}\]

and

\[\left(\partial_{3}\mathcal{S}_{4}^{(0)\omega}-\partial_{4}\mathcal{S}_{3}^{(0)\omega}\right) = \left(\pi\ell k\Box\right)^{-1}\left\{\left(\partial_{3}^{2}+\partial_{4}^{2}\right)\mathrm{tr}_{k}\left[\mathcal{W}_{2}^{(0)}\mathcal{W}_{2}^{\dagger(0)}-\mathcal{W}_{4}^{(0)}\mathcal{W}_{4}^{\dagger(0)}\right]\right.\] \[+ \left.\left(-\partial_{1}\partial_{3}-\partial_{2}\partial_{4}+i\partial_{2}\partial_{3}-i\partial_{1}\partial_{4}\right)\mathrm{tr}_{k}\left[\mathcal{W}_{4}^{(0)}\mathcal{W}_{2}^{\dagger(0)}\right]\right.\] \[+ \left.\left(-\partial_{1}\partial_{3}-\partial_{2}\partial_{4}-i\partial_{2}\partial_{3}+i\partial_{1}\partial_{4}\right)\mathrm{tr}_{k}\left[\mathcal{W}_{2}^{(0)}\mathcal{W}_{4}^{\dagger(0)}\right]\right\}\,.\]

We are interested in the case \(r>1\) and \(\gcd(k,r)=1\). Let us consider the example \(r=2,k=3\). Then, using the parameterization of (5.9), taking the upper sign for definiteness,

\[\mathcal{C}_{2}^{0}=u\,,\quad\mathcal{C}_{2}^{1}=u\,,\quad\mathcal{C}_{4}^{0}=-iu\,,\quad\mathcal{C}_{4}^{1}=iu\,,\] (D.6)

we find28 Footnote 28: The sums over \(C^{\prime}\) and \(C\) should really be thought of as running over \(0,...,k-1\) and \(0,...,\ell-1\), respectively, to be consistent with the main body of the paper. We apologize to the reader for this slight mismatch.
\[\mathrm{tr}_{k}\left[\mathcal{W}_{2}^{(0)}\mathcal{W}_{2}^{\dagger(0)}\right] = u^{2}\sum_{C=1}^{\ell}\sum_{C^{\prime}=1}^{k}\left[\Phi_{C^{\prime},C}^{0}+\Phi_{C^{\prime},C}^{1}\right]\left[\Phi_{C^{\prime},C}^{*0}+\Phi_{C^{\prime},C}^{*1}\right]\] \[= u^{2}\sum_{C=1}^{\ell}\sum_{C^{\prime}=1}^{k}|\Phi_{C^{\prime},C}^{0}|^{2}+|\Phi_{C^{\prime},C}^{1}|^{2}+\Phi_{C^{\prime},C}^{0}\Phi_{C^{\prime},C}^{*1}+\Phi_{C^{\prime},C}^{*0}\Phi_{C^{\prime},C}^{1}\,,\] \[\mathrm{tr}_{k}\left[\mathcal{W}_{4}^{(0)}\mathcal{W}_{4}^{\dagger(0)}\right] = u^{2}\sum_{C=1}^{\ell}\sum_{C^{\prime}=1}^{k}\left[\Phi_{C^{\prime},C}^{0}-\Phi_{C^{\prime},C}^{1}\right]\left[\Phi_{C^{\prime},C}^{*0}-\Phi_{C^{\prime},C}^{*1}\right]\] \[= u^{2}\sum_{C=1}^{\ell}\sum_{C^{\prime}=1}^{k}|\Phi_{C^{\prime},C}^{0}|^{2}+|\Phi_{C^{\prime},C}^{1}|^{2}-\Phi_{C^{\prime},C}^{0}\Phi_{C^{\prime},C}^{*1}-\Phi_{C^{\prime},C}^{*0}\Phi_{C^{\prime},C}^{1}\,,\]

and

\[\mathrm{tr}_{k}\left[\mathcal{W}_{4}^{(0)}\mathcal{W}_{2}^{\dagger(0)}\right] = u^{2}\sum_{C=1}^{\ell}\left[\left(-\Phi_{1,C}^{0}+\Phi_{1,C}^{1}\right)\left(\Phi_{1,C}^{*0}+\Phi_{1,C}^{*1}\right)\right.\] (D.9) \[+\left.\left(\Phi_{2,C}^{0}-\Phi_{2,C}^{1}\right)\left(\Phi_{2,C}^{*0}+\Phi_{2,C}^{*1}\right)\right.\] \[+\left.\left(-\Phi_{3,C}^{0}+\Phi_{3,C}^{1}\right)\left(\Phi_{3,C}^{*0}+\Phi_{3,C}^{*1}\right)\right]\,,\] \[\mathrm{tr}_{k}\left[\mathcal{W}_{2}^{(0)}\mathcal{W}_{4}^{\dagger(0)}\right]= -iu^{2}\sum_{C=1}^{\ell}\left[\left(\Phi_{1,C}^{0}+\Phi_{1,C}^{1}\right)\left(-\Phi_{1,C}^{*0}+\Phi_{1,C}^{*1}\right)\right.\] \[+\left(\Phi_{2,C}^{0}+\Phi_{2,C}^{1}\right)\left(\Phi_{2,C}^{*0}-\Phi_{2,C}^{*1}\right)\] \[+\left(\Phi_{3,C}^{0}+\Phi_{3,C}^{1}\right)\left(-\Phi_{3,C}^{*0}+\Phi_{3,C}^{*1}\right)\right]\,. \tag{100}\]

It is not hard, using (3.21), to check that the combinations

\[|\Phi_{C^{\prime},C}^{0}(x)|^{2},|\Phi_{C^{\prime},C}^{1}(x)|^{2},\left(-\Phi_{1,C}^{0}+\Phi_{1,C}^{1}\right)(x)\left(\Phi_{1,C}^{*0}+\Phi_{1,C}^{*1}\right)(x),\] \[\left(\Phi_{2,C}^{0}-\Phi_{2,C}^{1}\right)(x)\left(\Phi_{2,C}^{*0}+\Phi_{2,C}^{*1}\right)(x),\left(-\Phi_{3,C}^{0}+\Phi_{3,C}^{1}\right)(x)\left(\Phi_{3,C}^{*0}+\Phi_{3,C}^{*1}\right)(x) \tag{101}\]

satisfy periodic boundary conditions.29 Then, we use the Fourier transform of these combinations, namely, Footnote 29: However, the component that carries the subscript \(C^{\prime},C\) is sent to one with subscript \(C^{\prime}-r,C+1\). Nevertheless, the combinations that give the gauge invariant density are periodic. Also, by linearity of the Fourier analysis below, superposing the various terms is justified. The difficulty in the analysis below is that numerical convergence is hard to achieve.
\[|\Phi_{C^{\prime},C}^{0}(x)|^{2} = \sum_{p_{\mu}\in\mathbb{Z}}e^{i\frac{2\pi p_{\mu}x_{\mu}}{L_{\mu}}}\mathcal{X}_{0;C^{\prime},C}(p)\,,\] \[|\Phi_{C^{\prime},C}^{1}(x)|^{2} = \sum_{p_{\mu}}e^{i\frac{2\pi p_{\mu}x_{\mu}}{L_{\mu}}}\mathcal{X}_{1;C^{\prime},C}(p)\,,\] \[\Phi_{C^{\prime},C}^{0}(x)\Phi_{C^{\prime},C}^{*1}(x) = \sum_{p_{\mu}}e^{i\frac{2\pi p_{\mu}x_{\mu}}{L_{\mu}}}\mathcal{X}_{2;C^{\prime},C}(p)\,,\] \[\left(-\Phi_{1,C}^{0}+\Phi_{1,C}^{1}\right)(x)\left(\Phi_{1,C}^{*0}+\Phi_{1,C}^{*1}\right)(x) = \sum_{p_{\mu}}e^{i\frac{2\pi p_{\mu}x_{\mu}}{L_{\mu}}}\mathcal{X}_{3;C}(p)\,,\] \[\left(\Phi_{2,C}^{0}-\Phi_{2,C}^{1}\right)(x)\left(\Phi_{2,C}^{*0}+\Phi_{2,C}^{*1}\right)(x) = \sum_{p_{\mu}}e^{i\frac{2\pi p_{\mu}x_{\mu}}{L_{\mu}}}\mathcal{X}_{4;C}(p)\,,\] \[\left(-\Phi_{3,C}^{0}+\Phi_{3,C}^{1}\right)(x)\left(\Phi_{3,C}^{*0}+\Phi_{3,C}^{*1}\right)(x) = \sum_{p_{\mu}}e^{i\frac{2\pi p_{\mu}x_{\mu}}{L_{\mu}}}\mathcal{X}_{5;C}(p)\,, \tag{102}\]

to find

\[\mathrm{tr}_{k}\left[\mathcal{W}_{4}^{(0)}\mathcal{W}_{4}^{\dagger(0)}\right]= \ u^{2}\sum_{p_{\mu}}e^{i\frac{2\pi p_{\mu}x_{\mu}}{L_{\mu}}}\mathcal{H}(p)\] \[\equiv u^{2}\sum_{p_{\mu},C,C^{\prime}}e^{i\frac{2\pi p_{\mu}x_{\mu}}{L_{\mu}}}\left(\mathcal{X}_{0;C^{\prime},C}(p)+\mathcal{X}_{1;C^{\prime},C}(p)-\mathcal{X}_{2;C^{\prime},C}(p)-\mathcal{X}_{2;C^{\prime},C}^{*}(p)\right)\,.\]

The function \({\cal H}(p)\), the Fourier transform of \({\rm tr}_{k}\left[{\cal W}_{4}^{(0)}{\cal W}_{4}^{\dagger(0)}\right]\) modulo \(u^{2}\), will play an important role below. In addition, we find

\[\left(\pi\ell k\Box\right)^{-1}\left\{\left(\partial_{3}^{2}+\partial_{4}^{2}\right){\rm tr}_{k}\left[{\cal W}_{2}^{(0)}{\cal W}_{2}^{\dagger(0)}-{\cal W}_{4}^{(0)}{\cal W}_{4}^{\dagger(0)}\right]\right\}\] \[=\frac{2u^{2}}{\pi\ell k}\sum_{p_{\mu},C,C^{\prime}}\frac{\left[\frac{p_{3}^{2}}{L_{3}^{2}}+\frac{p_{4}^{2}}{L_{4}^{2}}\right]e^{i\frac{2\pi p_{\mu}x_{\mu}}{L_{\mu}}}}{\frac{p_{1}^{2}}{L_{1}^{2}}+\frac{p_{2}^{2}}{L_{2}^{2}}+\frac{p_{3}^{2}}{L_{3}^{2}}+\frac{p_{4}^{2}}{L_{4}^{2}}}\left({\cal X}_{2;C^{\prime},C}(p)+{\cal X}_{2;C^{\prime},C}^{*}(p)\right)\,,\]

and

\[\left(\pi\ell k\Box\right)^{-1}\left\{\left(-\partial_{1}\partial_{3}-\partial_{2}\partial_{4}+i\partial_{2}\partial_{3}-i\partial_{1}\partial_{4}\right){\rm tr}_{k}\left[{\cal W}_{4}^{(0)}{\cal W}_{2}^{\dagger(0)}\right]+\left(-\partial_{1}\partial_{3}-\partial_{2}\partial_{4}-i\partial_{2}\partial_{3}+i\partial_{1}\partial_{4}\right){\rm tr}_{k}\left[{\cal W}_{2}^{(0)}{\cal W}_{4}^{\dagger(0)}\right]\right\}\] \[= \frac{u^{2}}{\pi\ell k}\sum_{p_{\mu},C}\frac{e^{i\frac{2\pi p_{\mu}x_{\mu}}{L_{\mu}}}}{\frac{p_{1}^{2}}{L_{1}^{2}}+\frac{p_{2}^{2}}{L_{2}^{2}}+\frac{p_{3}^{2}}{L_{3}^{2}}+\frac{p_{4}^{2}}{L_{4}^{2}}}\left\{\left(-i\frac{p_{1}p_{3}}{L_{1}L_{3}}-i\frac{p_{2}p_{4}}{L_{2}L_{4}}-\frac{p_{2}p_{3}}{L_{2}L_{3}}+\frac{p_{1}p_{4}}{L_{1}L_{4}}\right)\right.\] \[\left.\times({\cal X}_{3;C}(p)+{\cal X}_{4;C}(p)+{\cal X}_{5;C}(p))\right.\] \[\left.+\left(i\frac{p_{1}p_{3}}{L_{1}L_{3}}+i\frac{p_{2}p_{4}}{L_{2}L_{4}}-\frac{p_{2}p_{3}}{L_{2}L_{3}}+\frac{p_{1}p_{4}}{L_{1}L_{4}}\right)\right.\] \[\left.\times({\cal X}_{3;C}^{*}(p)+{\cal X}_{4;C}^{*}(p)+{\cal X}_{5;C}^{*}(p))\right\}\,.\] (D.15)

Finally, one can also define the Fourier components of \({\rm tr}[F_{34}F_{34}](x)\):

\[{\rm tr}[F_{34}F_{34}](x)=\sum_{p_{\mu}\in\mathbb{Z}}e^{i\frac{2\pi p_{\mu}x_{\mu}}{L_{\mu}}}{\cal Q}(p)\,.\] (D.16)

Using (6.5), we find, apart from an additive constant:

\[\frac{{\cal Q}(p)}{u^{2}\hat{F}_{34}^{\omega}\Delta} = 8\pi N{\cal H}(p)+\frac{4}{\pi\ell k}\sum_{C,C^{\prime}}\frac{\left[\frac{p_{3}^{2}}{L_{3}^{2}}+\frac{p_{4}^{2}}{L_{4}^{2}}\right]}{\frac{p_
{1}^{2}}{L_{1}^{2}}+\frac{p_{2}^{2}}{L_{2}^{2}}+\frac{p_{3}^{2}}{L_{3}^{2}}+\frac{p_{4}^{2}}{L_{4}^{2}}}\left({\cal X}_{2;C^{\prime},C}(p)+{\cal X}_{2;C^{\prime},C}^{*}(p)\right)\] (D.17) \[+\frac{2}{\pi\ell k}\sum_{C}\frac{1}{\frac{p_{1}^{2}}{L_{1}^{2}}+\frac{p_{2}^{2}}{L_{2}^{2}}+\frac{p_{3}^{2}}{L_{3}^{2}}+\frac{p_{4}^{2}}{L_{4}^{2}}}\left\{\left(-i\frac{p_{1}p_{3}}{L_{1}L_{3}}-i\frac{p_{2}p_{4}}{L_{2}L_{4}}-\frac{p_{2}p_{3}}{L_{2}L_{3}}+\frac{p_{1}p_{4}}{L_{1}L_{4}}\right)\right.\] \[\left.\times({\cal X}_{3;C}(p)+{\cal X}_{4;C}(p)+{\cal X}_{5;C}(p))\right.\] \[+\left(i\frac{p_{1}p_{3}}{L_{1}L_{3}}+i\frac{p_{2}p_{4}}{L_{2}L_{4}}-\frac{p_{2}p_{3}}{L_{2}L_{3}}+\frac{p_{1}p_{4}}{L_{1}L_{4}}\right)\] \[\left.\times({\cal X}_{3;C}^{*}(p)+{\cal X}_{4;C}^{*}(p)+{\cal X}_{5;C}^{*}(p))\right\}\,.\]

We need to check whether the expression on the R.H.S. vanishes for all values of \(p_{\mu}\). The easiest check to perform is to choose \(p_{\mu}=(0,p_{2},0,0)\). With this choice, all terms vanish except \(\mathcal{H}(p)\), the Fourier transform of \(\text{tr}_{k}\left[\mathcal{W}_{4}^{(0)}\mathcal{W}_{4}^{\dagger(0)}\right]\) modulo \(u^{2}\). One can check numerically that \(\mathcal{H}(p)\) is non-vanishing, indicating that the gauge-invariant density \(\text{tr}[F_{34}F_{34}](x)\) increases indefinitely as \(u\to\infty\).

## Appendix E Fermion zero modes on the deformed-\(\mathbb{T}^{4}\), for \(k=r\)

In this Appendix, we solve for the fermion zero modes in the background (4.1), which we rewrite in the familiar \(k/\ell\) block matrix form, using the notation of (3.5):

\[A_{\mu}=\left(\begin{array}{cc}||\left(2\pi\ell\;(A_{\mu}^{\omega}-\frac{z_{\mu}}{L_{\mu}})+\phi_{\mu}^{C^{\prime}}\right)\delta_{C^{\prime}B^{\prime}}+\epsilon^{2}\;\mathcal{S}_{\mu\;C^{\prime}B^{\prime}}||&||\epsilon\;\mathcal{W}_{\mu\;C^{\prime}B}||\\ ||\epsilon\;(\mathcal{W}_{\mu}^{\dagger})_{CB^{\prime}}||&||-2\pi k\;(A_{\mu}^{\omega}-\frac{z_{\mu}}{L_{\mu}})\delta_{CB}+\epsilon^{2}\;\mathcal{S}_{\mu\;CB}||\end{array}\right)\,.\] (E.1)

Here we consider exclusively the \(k=r\) case, where:

1. \(A_{\mu}^{\omega}\) is the constant flux background \(A_{1}^{\omega}=A_{3}^{\omega}=0\), \(A_{2}^{\omega}=-\frac{x_{1}}{NL_{1}L_{2}}\), \(A_{4}^{\omega}=-\frac{x_{3}}{N\ell L_{3}L_{4}}\).

2. \(\phi_{\mu}^{C^{\prime}}\) are the \(r-1\) allowed holonomies in \(SU(k=r)\) (thus obeying \(\sum_{C^{\prime}}\phi_{\mu}^{C^{\prime}}=0\)) from (3.5) and \(z_{\mu}\) are the holonomies in the \(U(1)\)-direction \(\omega\), eqn. (2.6). We also recall that these are, after computing the commutator in the Weyl equation, combined into the \(r\) independent \(\hat{\phi}_{\mu}^{C^{\prime}}\) of eqns. (3.19, 3.20) with no constraint on the trace.

3. \(\mathcal{W}_{\mu}\) is the leading-order \(k=r\) solution. Thus, \(\mathcal{W}_{3}=\mathcal{W}_{4}=0\) and \(\mathcal{W}_{1}=-i\mathcal{W}_{2}\), with \(\mathcal{W}_{2}\) given by (4.21) and the \(r\) coefficients \(\mathcal{C}_{2}^{A}\) fixed by solving eqn. (5.4).

4. The components of \(\mathcal{S}_{\mu}\) are obtained by solving (4.22, 4.23) (recall that they obey the tracelessness condition \(\sum_{C^{\prime}}S_{\mu\;C^{\prime}C^{\prime}}+\sum_{C}S_{\mu\;CC}=0\)).

5. Finally, to remind us of the powers of \(\sqrt{\Delta}\) appearing in the leading order solution for \(\mathcal{W}_{\mu}\) and \(\mathcal{S}_{\mu}\), we introduced a parameter \(\epsilon\) (\(\equiv 1\)).
Our goal is to solve the Weyl equation \(\partial_{\mu}\bar{\sigma}^{\mu}\lambda+i\bar{\sigma}^{\mu}[A_{\mu},\lambda]=0\) in the \(k=r\) background (E.1), using a series expansion in \(\epsilon\), to leading order. We take \(\lambda\) also in the block-diagonal form (3.8), obeying (3.9):

\[\lambda=\left[\begin{array}{cc}||\lambda_{C^{\prime}B^{\prime}}||&||\lambda_{C^{\prime}B}||\\ ||\lambda_{CB^{\prime}}||&||\lambda_{CB}||\end{array}\right]\,,\ C^{\prime},B^{\prime}\in\{0,...,k-1\},\ C,B\in\{0,...,\ell-1\}\,.\] (E.2)

Next, we write the Weyl equation, using the quaternionic notation of Section 4: \(\bar{\partial}=\partial_{\mu}\bar{\sigma}_{\mu}\) and \(\bar{A}=\bar{\sigma}_{\mu}A_{\mu}\) (and similar for all other vectors in (E.1), with \(\bar{\sigma}_{\mu}\) defined in Footnote 5), and obtain the following equations for the components of \(\lambda\) of (E.2), with a sum over repeated indices \(B,B^{\prime}\) implied:

\[\bar{\partial}\lambda_{C^{\prime}D^{\prime}} = -i\epsilon\bar{\sigma}_{\mu}({\cal W}_{\mu\;C^{\prime}B}\lambda_{BD^{\prime}}-\lambda_{C^{\prime}B}({\cal W}^{\dagger})_{\mu\;BD^{\prime}})-i\epsilon^{2}\bar{\sigma}_{\mu}({\cal S}_{\mu\;C^{\prime}B^{\prime}}\lambda_{B^{\prime}D^{\prime}}-\lambda_{C^{\prime}B^{\prime}}{\cal S}_{\mu\;B^{\prime}D^{\prime}}),\] \[\bar{\partial}\lambda_{CD} = -i\epsilon\bar{\sigma}_{\mu}(({\cal W}^{\dagger})_{\mu\;CB^{\prime}}\lambda_{B^{\prime}D}-\lambda_{CB^{\prime}}{\cal W}_{\mu\;B^{\prime}D})-i\epsilon^{2}\bar{\sigma}_{\mu}({\cal S}_{\mu\;CB}\lambda_{BD}-\lambda_{CB}{\cal S}_{\mu\;BD}),\] \[\bar{\partial}\lambda_{C^{\prime}D} = -i(2\pi N\bar{A}^{\omega}+\bar{\phi}^{C^{\prime}})\lambda_{C^{\prime}D}-i\epsilon\bar{\sigma}_{\mu}({\cal W}_{\mu\;C^{\prime}B}\lambda_{BD}-\lambda_{C^{\prime}B^{\prime}}{\cal W}_{\mu\;B^{\prime}D})\] \[-i\epsilon^{2}\bar{\sigma}_{\mu}({\cal S}_{\mu\;C^{\prime}B^{\prime}}\lambda_{B^{\prime}D}-\lambda_{C^{\prime}B}{\cal S}_{\mu\;BD}),\] \[\bar{\partial}\lambda_{CD^{\prime}} = i(2\pi N\bar{A}^{\omega}+\bar{\phi}^{C^{\prime}})\lambda_{CD^{\prime}}-i\epsilon\bar{\sigma}_{\mu}(({\cal W}^{\dagger})_{\mu\;CB^{\prime}}\lambda_{B^{\prime}D^{\prime}}-\lambda_{CB}({\cal W}^{\dagger})_{\mu\;BD^{\prime}})\] (E.3) \[-i\epsilon^{2}\bar{\sigma}_{\mu}({\cal S}_{\mu\;CB}\lambda_{BD^{\prime}}-\lambda_{CB^{\prime}}{\cal S}_{\mu\;B^{\prime}D^{\prime}})\,.\]

We now observe that we can consistently solve (E.3) in a series expansion in \(\epsilon\), assigning the following (leading-order only shown) \(\epsilon\)-scaling of the various components of \(\lambda\):

\[\lambda_{C^{\prime}D^{\prime}} = \epsilon^{0}\lambda_{C^{\prime}D^{\prime}}+{\cal O}(\epsilon^{2})\,,\] \[\lambda_{CD} = \epsilon^{0}\lambda_{CD}+{\cal O}(\epsilon^{2})\,,\] \[\lambda_{C^{\prime}D} = \epsilon\;\lambda_{C^{\prime}D}+{\cal O}(\epsilon^{3})\,,\] \[\lambda_{CD^{\prime}} = \epsilon\;\lambda_{CD^{\prime}}+{\cal O}(\epsilon^{3})\,.\] (E.4)

Substituting into (E.3) and keeping only the leading terms in \(\epsilon\) in each equation, we find the following equations for the leading order (in \(\sqrt{\Delta}\)) fermion zero modes in the background (E.1):

\[\bar{\partial}\lambda_{C^{\prime}D^{\prime}} = 0,\] \[\bar{\partial}\lambda_{CD} = 0,\] \[(\bar{\partial}+i(2\pi N\bar{A}^{\omega}+\bar{\phi}^{C^{\prime}}))\lambda_{C^{\prime}D} = -i({\cal W}_{\mu\;C^{\prime}B}\;\bar{\sigma}_{\mu}\lambda_{BD}-\bar{\sigma}_{\mu}\lambda_{C^{\prime}B^{\prime}}{\cal W}_{\mu\;B^{\prime}D}),\] \[(\bar{\partial}-i(2\pi N\bar{A}^{\omega}+\bar{\phi}^{C^{
\prime}}))\lambda_{CD^{\prime}} = -i({\cal W}_{\mu\;B^{\prime}C}^{*}\;\bar{\sigma}_{\mu}\lambda_{B^{\prime}D^{\prime}}-\bar{\sigma}_{\mu}\lambda_{CB}{\cal W}_{\mu\;D^{\prime}B}^{*}).\] (E.5)

Now, we recall that the first two equations were already solved in Section 3.4.1. From eqn. (3.17), taken with \(k=r\), we have the diagonal zero mode solutions

\[\lambda_{\alpha\;B^{\prime}C^{\prime}} = \delta_{B^{\prime}C^{\prime}}\;\theta_{\alpha}^{C^{\prime}},\] \[\lambda_{\alpha\;BC} = -\delta_{BC}\;\frac{1}{\ell}\sum_{C^{\prime}}\theta_{\alpha}^{C^{\prime}},\] (E.6)

where we momentarily restored the spinor index \(\alpha\). We first define the spinor

\[\eta^{C^{\prime}}\equiv\theta^{C^{\prime}}+\frac{1}{\ell}\sum_{B^{\prime}=0}^{k-1}\theta^{B^{\prime}},\] (E.7)

and then plug (E.6) into the last two equations in (E.5) to obtain:

\[(\bar{\partial}+i(2\pi N\bar{A}^{\omega}+\bar{\phi}^{C^{\prime}}))\lambda_{C^{\prime}D} = i{\cal W}_{\mu\;C^{\prime}D}\;\bar{\sigma}_{\mu}\;\eta^{C^{\prime}},\] \[(\bar{\partial}-i(2\pi N\bar{A}^{\omega}+\bar{\phi}^{C^{\prime}}))\lambda_{CD^{\prime}} = -i{\cal W}^{*}_{\mu\;D^{\prime}C}\;\bar{\sigma}_{\mu}\;\eta^{D^{\prime}}.\] (E.8)

We now recall that for the \(k=r\) solution, \({\cal W}_{3}={\cal W}_{4}=0\) and \({\cal W}_{1}=-i{\cal W}_{2}\), hence

\[{\cal W}_{\mu\;C^{\prime}D}\;\bar{\sigma}_{\mu} = (-i\bar{\sigma}_{1}+\bar{\sigma}_{2}){\cal W}_{2\;C^{\prime}D}={\cal W}_{2\;C^{\prime}D}\left(\begin{array}{cc}0&-2\\ 0&0\end{array}\right)\,,\] \[{\cal W}^{*}_{\mu\;D^{\prime}C}\;\bar{\sigma}_{\mu} = (i\bar{\sigma}_{1}+\bar{\sigma}_{2}){\cal W}^{*}_{2\;D^{\prime}C}={\cal W}^{*}_{2\;D^{\prime}C}\left(\begin{array}{cc}0&0\\ 2&0\end{array}\right)\,,\] (E.9)

and that \({\cal W}_{2\;C^{\prime},C}=V^{-1/4}{\cal C}_{2}^{C^{\prime}}\Phi^{(0)}_{C^{\prime},C}(x,\hat{\phi})\), where \({\cal C}_{2}^{C^{\prime}}\) is as determined in Section 5. The equation for \(\lambda_{C^{\prime}D}\) then takes the form, using the derivatives from (C.3) and noting that the equations for each \(C^{\prime}=1,...,r\) decouple:

\[(\hat{D}_{4}-i\hat{D}_{3})\lambda_{1\;C^{\prime}D}-(i\hat{D}_{1}+\hat{D}_{2})\lambda_{2\;C^{\prime}D} = \bar{\eta}_{2}^{C^{\prime}}\;\Phi^{(0)}_{C^{\prime}D}\,,\] \[(-i\hat{D}_{1}+\hat{D}_{2})\lambda_{1\;C^{\prime}D}+(\hat{D}_{4}+i\hat{D}_{3})\lambda_{2\;C^{\prime}D} = 0\,,\] (E.10)

where we absorb various inessential constants in the redefined \(\bar{\eta}_{2}^{C^{\prime}}\) coefficient. The solution of these equations is given by the function \({\cal G}^{(0)}_{3\;C^{\prime}D}\) defined in (C.8), explicitly

\[\lambda_{1\;C^{\prime}D} = \bar{\eta}_{2}^{C^{\prime}}{\cal G}^{(0)}_{3\;C^{\prime}D},\] \[\lambda_{2\;C^{\prime}D} = 0\,.\] (E.11)

Similarly, one finds that the other zero mode is

\[\lambda_{1\;CD^{\prime}} = 0,\] \[\lambda_{2\;CD^{\prime}} = \bar{\eta}_{1}^{D^{\prime}}{\cal G}^{*\;(0)}_{3\;D^{\prime}C}.\] (E.12)

Thus, there are in total \(2r\) zero modes labeled by \(\bar{\eta}_{1,2}^{C^{\prime}}\), with \(C^{\prime}=1,...,r\). The \(x\)-dependence of the zero mode labeled by a given \(C^{\prime}\) is governed only by the holonomies \(\hat{\phi}_{\mu}^{C^{\prime}}\), similar to the bosonic case discussed earlier.
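The matrix identities in (E.9) are easy to verify directly; a minimal numpy sketch, assuming the common convention \(\bar{\sigma}_{\mu}=(\mathbb{1},-i\vec{\tau})\) for the barred matrices (the actual convention is fixed in Footnote 5 of the paper, which should be compared against this assumption):

```python
import numpy as np

# Pauli matrices
tau1 = np.array([[0, 1], [1, 0]], dtype=complex)
tau2 = np.array([[0, -1j], [1j, 0]], dtype=complex)

# Assumed convention for the spatial barred matrices: sigma-bar_j = -i * tau_j
sb1, sb2 = -1j * tau1, -1j * tau2

print(-1j * sb1 + sb2)   # [[0, -2], [0, 0]], matching the first line of (E.9)
print(1j * sb1 + sb2)    # [[0,  0], [2, 0]], matching the second line of (E.9)
```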
2310.15257
Chemical Doppelgangers in GALAH DR3: the Distinguishing Power of Neutron-Capture Elements Among Milky Way Disk Stars
The observed chemical diversity of Milky Way stars places important constraints on Galactic chemical evolution and the mixing processes that operate within the interstellar medium. Recent works have found that the chemical diversity of disk stars is low. For example, the APOGEE "chemical doppelganger rate," or the rate at which random pairs of field stars appear as chemically similar as stars born together, is high, and the chemical distributions of APOGEE stars in some Galactic populations are well-described by two-dimensional models. However, limited attention has been paid to the heavy elements (Z > 30) in this context. In this work, we probe the potential for neutron-capture elements to enhance the chemical diversity of stars by determining their effect on the chemical doppelganger rate. We measure the doppelganger rate in GALAH DR3, with abundances rederived using The Cannon, and find that considering the neutron-capture elements decreases the doppelganger rate from 2.2% to 0.4%, nearly a factor of 6, for stars with -0.1 < [Fe/H] < 0.1. While chemical similarity correlates with similarity in age and dynamics, including neutron-capture elements does not appear to select stars that are more similar in these characteristics. Our results highlight that the neutron-capture elements contain information that is distinct from that of the lighter elements and thus add at least one dimension to Milky Way abundance space. This work illustrates the importance of considering the neutron-capture elements when chemically characterizing stars and motivates ongoing work to improve their atomic data and measurements in spectroscopic surveys.
Catherine Manea, Keith Hawkins, Melissa K. Ness, Sven Buder, Sarah L. Martell, Daniel B. Zucker
2023-10-23T18:06:14Z
http://arxiv.org/abs/2310.15257v1
Chemical Doppelgangers in GALAH DR3: the Distinguishing Power of Neutron-Capture Elements Among Milky Way Disk Stars ###### Abstract The observed chemical diversity of Milky Way stars places important constraints on Galactic chemical evolution and the mixing processes that operate within the interstellar medium. Recent works have found that the chemical diversity of disk stars is low. For example, the APOGEE "chemical doppelganger rate," or the rate at which random pairs of field stars appear as chemically similar as stars born together, is high, and the chemical distributions of APOGEE stars in some Galactic populations are well-described by two-dimensional models. However, limited attention has been paid to the heavy elements (Z > 30) in this context. In this work, we probe the potential for neutron-capture elements to enhance the chemical diversity of stars by determining their effect on the chemical doppelganger rate. We measure the doppelganger rate in GALAH DR3, with abundances rederived using _The Cannon_, and find that considering the neutron-capture elements decreases the doppelganger rate from \(\sim\)2.2% to 0.4%, nearly a factor of 6, for stars with -0.1 < [Fe/H] < 0.1. While chemical similarity correlates with similarity in age and dynamics, including neutron-capture elements does not appear to select stars that are _more_ similar in these characteristics. Our results highlight that the neutron-capture elements contain information that is distinct from that of the lighter elements and thus add at least one dimension to Milky Way abundance space. This work illustrates the importance of considering the neutron-capture elements when chemically characterizing stars and motivates ongoing work to improve their atomic data and measurements in spectroscopic surveys. ## 1 Introduction The recent decade has brought forth an exponential increase in available stellar spectroscopic data, enabling population-level analyses of the chemical compositions of Milky Way stars at unprecedented scale. Massive spectroscopic surveys such as Apache Point Observatory Galactic Evolution Experiment (APOGEE, Abdurro'uf et al., 2022), Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST, Cui et al., 2012), Gaia-European Southern Observatory (Gaia-ESO, Gilmore et al., 2022), Hectochelle in the Halo at High Resolution (H3, Conroy et al., 2019), the RAdial Velocity Experiment (RAVE, Steinmetz et al., 2020), and GALactic Archaeology with HERMES (GALAH, Buder et al., 2021) have provided the Galactic science community with _millions_ of stellar spectra. In the wake of this abundance of stellar spectroscopic data, recent work has begun to investigate _how_ much information is actually contained in these datasets. Does each element carry unique information, or are many of these abundances correlated? More specifically, do stars at a fixed metallicity tend to display similar chemical profiles in elements across the periodic table, or is there more diversity in their possible compositions? The literature refers to this notion as the _dimensionality_ of chemical abundance space. The dimensionality of Milky Way abundance space carries both physical and practical implications that affect a wide range of subfields of astronomy.
When viewed through the physical lens, the dimensionality of Milky Way chemical abundances traces the stability of nucleosynthetic yields across Galactic time and space and the scale and efficiency at which newly synthesized elements are dispersed and mixed into the interstellar medium. Furthermore, because stellar compositions dictate the architectures and compositions of their planetary systems (e.g., Fischer & Valenti, 2005; Nielsen et al., 2023), the dimensionality of abundance space dictates the expected diversity of planetary systems around Milky Way stars. When viewed through the practical lens, the question of chemical dimensionality seeks to assess whether one truly needs to measure tens of elements to fully understand a star's composition, or if measuring a few elements and inferring the rest produces the same quantity and quality of information, thereby significantly enhancing efficiency. Additionally, the dimensionality of Milky Way abundance space contributes to the validity of strong chemical tagging, the method of reconstructing dispersed stellar birth siblings using chemistry alone (e.g., Freeman & Bland-Hawthorn, 2002). Strong chemical tagging poses two requirements: (1) that stars born together are chemically homogeneous (e.g., Bovy, 2016; Hawkins et al., 2020; Nelson et al., 2021) and (2) that groups of stars born together are chemically unique (e.g., Lambert & Reddy, 2016; Price-Jones et al., 2020; Cheng et al., 2021). If chemical dimensionality is sufficiently high, then the strong chemical tagging requirement that each birth cluster possesses a unique chemical profile could be satisfied. Finally, this topic carries practical implications for extragalactic studies: if chemical abundances are low-dimensional, then one need not spend observing resources sampling entire galaxies if small fields with limited elements sampled are enough to extrapolate the chemical compositions of the remaining stars.

Ting et al. (2012) was among the first works to directly investigate the dimensionality of Milky Way chemical abundance space. They performed principal component analysis on combined data from several different high-resolution spectroscopic studies to find that stellar abundances tend to possess between six and nine principal components associated with various nucleosynthetic sites, a result they later validate in APOGEE (Ting & Weinberg, 2022). Other works, however, find that chemical abundances may have as few as two dimensions. For example, Ness et al. (2019) finds that for the low-\(\alpha\) disk, with a star's [Fe/H] and age, one can predict its remaining APOGEE abundances to within 0.02 dex. Furthermore, a star's abundance of elements produced in supernovae can be predicted to \(\approx\)0.015 dex using Fe, Mg, and age (Ness et al., 2022). This is on the order of the intrinsic scatter of these elements within open clusters, where stars are known to be born together (e.g., Bovy, 2016). This implies that these three dimensions link to birth radii, but individual elements produced in supernovae cannot be used to distinguish individual birth groups. Sharma et al. (2022) similarly find that the abundances of most GALAH-reported elements can be inferred to within 0.03 dex using just [Fe/H] and age, though they note that certain elements such as Y and Ba are exceptions to this and cannot be predicted well. Weinberg et al. (2022) and Griffith et al. (2022, 2023) also address this question.
They create a two-dimensional (also called a two-zone) model that describes the chemical evolution of the Milky Way according to global enrichment due to time as well as the relative ratio of Type Ia to Type II supernovae. They then subtract this model from the APOGEE and GALAH data and study the residual abundance patterns in the data. In APOGEE, the two-dimensional model is sufficient to predict a star's APOGEE abundances to within 0.03 dex for all well-measured elements (Weinberg et al., 2022), with the addition of a third dimension modelling asymptotic giant branch (AGB) star nucleosynthesis marginally improving the representation of the data (Griffith et al., 2023). In GALAH, the two-dimensional model produces abundance residuals less than 0.07 dex for most well-measured elements, and the addition of a third dimension, again associated with AGB star nucleosynthesis, further decreases abundance residuals (Griffith et al., 2022). The majority of the analysis that has been done in the context of APOGEE reports a low dimensionality of chemical abundance space. However, APOGEE is limited in the nucleosynthetic channels it samples. Therefore, the dimensionality of the Milky Way's chemical abundances is still an open question.

In this work, we seek to address the specific role that the neutron-capture elements play in the chemical dimensionality of Milky Way abundance space. The GALAH survey offers a more rigorous test of the underlying dimensionality of the Milky Way disk, as it is explicitly designed to capture five channels of nucleosynthetic enrichment for the purpose of testing the validity of strong chemical tagging (De Silva et al., 2015). Most critically, due to its optical coverage and high resolution (\(R\sim 28,000\)), GALAH measures the neutron-capture elements. These include elements formed in both the rapid (\(r\)-) and slow (\(s\)-) neutron capture processes, two nucleosynthetic families that are not presently well-measured in APOGEE but may add to the dimensionality of Milky Way abundance space (e.g., Lambert & Reddy, 2016; Griffith et al., 2022; Sharma et al., 2022).

We study the potentially unique information contained in neutron-capture elements through the lens of "chemical doppelgangers," stars that are highly chemically similar but otherwise dynamically unrelated. Ness et al. (2018) was the first to measure the so-called "chemical doppelganger rate," defined as the rate at which randomly drawn pairs of field stars are measured to be as chemically similar as stars born together. Stars that are born together, such as those in open clusters, have been found to be highly chemically similar, with intrinsic dispersions in APOGEE-measured elements ranging from 0.005 to 0.070 dex (e.g., Bovy et al., 2016; Poovelil et al., 2020). Thus, open cluster stars are typically considered to represent an upper limit for stellar chemical homogeneity and random field stars the lower limit. Ness et al. (2018) found that between 0.3 and 1% of randomly drawn field pairs in APOGEE DR13 (Holtzman et al., 2018) appear to be as chemically similar as stars residing in open clusters. While a fraction of these field pairs are likely true conatal pairs from dispersed open clusters, it is unlikely that all are conatal.
To place this result in perspective, if all massive star clusters that have formed in the disk had unique chemical abundance profiles, the expected recovery rate of true birth siblings would be closer to \(10^{-4}\) or \(10^{-5}\) given the star formation history of the Milky Way (e.g., Bland-Hawthorn et al., 2010). Thus, most doppelgangers are likely not true birth siblings, and stars can share remarkably similar chemical profiles despite being born of different star forming complexes due to a relatively homogeneous chemical evolution of the thin disk. These results are validated by de Mijolla et al. (2021) in APOGEE DR16 (Ahumada et al., 2020). Together, these works suggest that the diversity of Milky Way disk star abundances is qualitatively low.

In this work, we perform the first measurement of the doppelganger rate in GALAH. We center our investigation on whether the neutron-capture elements affect the measured doppelganger rate. If neutron-capture elements add at least one unique dimension to chemical abundance space at the available abundance precision, then we expect the doppelganger rate to decrease with the addition of this nucleosynthetic family. If neutron-capture elements do not add a unique dimension, then we expect the doppelganger rate to remain relatively unchanged with the addition of these elements. Through this test, we implicitly probe whether neutron-capture elements enhance the diversity of Milky Way stars or simply trace the lighter (light/odd-Z, \(\alpha\), and iron-peak) elements primarily produced in supernovae. We use the following questions to guide our analysis:

* What is the doppelganger rate in GALAH?
* How does the inclusion of neutron-capture elements specifically affect the doppelganger rate?
* Do there exist pairs of stars that are "doppelganger" in the light, \(\alpha\), and iron-peak elements but show differences in the neutron-capture elements? If so, are there any physical characteristics that differentiate these pairs from pairs that are "doppelganger" in all elements?

In Section 2, we describe the GALAH dataset and our choice of open cluster stars that serve as a reference in our doppelganger rate measurements. In Section 3, we use _The Cannon_ to re-derive abundance ratios in 16 elements (Fe, O, Al, Mg, Ca, Si, Cr, Cu, Zn, Mn, Zr, Y, Ba, Ce, Nd, and Eu) for the purpose of enhancing precision and ensuring well-constrained uncertainties. We then measure the doppelganger rate using two distinct approaches. In Section 4, we investigate the impact of the neutron-capture elements on the measured doppelganger rate and compare the physical characteristics of stars that are partial doppelgangers (doppelganger in the light, \(\alpha\), and iron-peak elements) with those that are complete doppelgangers (doppelganger in all measured elements). We discuss our results in the context of previous observational and simulation work in Section 5 and conclude in Section 6.

## 2 Data

GALAH Data Release 3 (DR3; Buder et al., 2021) serves as the basis of our investigation. GALAH is an optical (4710 Å \(<\lambda<\) 7890 Å, spread across four non-contiguous cameras), magnitude limited (V < 14) spectroscopic survey with high resolving power (\(R\sim 28,000\)) that targets Milky Way stars at \(|b|>10^{\circ}\) (De Silva et al., 2015). GALAH DR3 reports stellar parameters (e.g., T\({}_{\rm eff}\), log g, spectral broadening, [Fe/H], etc.)
and abundances for nearly 600,000 stars derived using Spectroscopy Made Easy (SME, Valenti & Piskunov, 1996; Piskunov & Valenti, 2017) and 1D MARCS model atmospheres (Gustafsson et al., 1975; Bell et al., 1976; Gustafsson et al., 2008). Departures from local thermodynamic equilibrium (NLTE) are accounted for during spectral line synthesis of 13 elements (H, Li, C, O, Na, Mg, Al, Si, K, Ca, Mn, Fe, and Ba) whereas LTE is assumed for the rest. For each star, GALAH DR3 reports its surface abundances in up to 30 elements spanning five major nucleosynthetic channels: the light (Li, C), \(\alpha\) (Mg, Si, Ca, O), odd-Z (Na, Al, K), iron-peak (Sc, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, Ti), and slow (\(s\)-) and rapid (\(r\)-) process neutron-capture (Rb, Sr, Y, Zr, Mo, Ru, Ba, La, Ce, Nd, Sm, and Eu) elements. In addition to the main catalog, DR3 provides the community with several other data products, such as a 1-dimensional, radial-velocity-corrected, continuum-normalized, combined spectrum for nearly every star sampled by the survey as well as a catalog of dynamical parameters and age estimates for nearly all GALAH targets. Dynamical parameters are determined using the Python package galpy (Bovy, 2015; see the GALAH survey webpage for details on their assumed Galactic potentials and properties). Age estimates for GALAH stars are determined using the Bayesian Stellar Parameter Estimation code (BSTEP) from Sharma et al. (2018), which uses PARSEC release v1.2S + COLIBRI stellar isochrones (Marigo et al., 2017) and Bayesian estimation to infer intrinsic stellar parameters from the observables T\({}_{\rm eff}\), log g, [Fe/H], [\(\alpha\)/Fe], parallax, and 2MASS J & H-band magnitudes.

Footnote 1: [https://www.galah-survey.org/dr3/the_catalogs/](https://www.galah-survey.org/dr3/the_catalogs/)

For this investigation, we aim to measure the doppelganger rate using abundances that we re-derive from the GALAH DR3 spectra using _The Cannon_ (Ho et al., 2017; see Section 3.1.1 for our motivation for this choice). We elect to perform our investigation on red giant and red clump stars in the GALAH survey, as they are on average brighter and thus probe a larger Galactic volume relative to dwarfs. This enables us to a) better compare to Ness et al. (2018), which also used giants, and b) understand the doppelganger rate on a broader spatial scale, expanding on work such as that of Bedell et al. (2018) which investigated the chemical diversity of stars within the local (100 pc) Solar neighborhood. To build our stellar sample, we apply a series of selections that we motivate in the following paragraph:

i. flag_sp = 0
ii. flag_fe_h = 0
iii. snr_c2_iraf > 20
iv. ruwe < 1.2
v. 1.5 < log g < 3.5
vi. 0.0033*teff - 13.6 < log g < 0.0036*teff - 13.9
vii. -1.20 < fe_h < 0.20
viii. -0.25 < Cr_fe < 0.15
ix. Cu_fe > -0.30
x. Zr_fe < 0.60
xi. Y_fe < 0.60
xii. Ba_fe < 0.80
xiii. -0.50 < Ce_fe < 0.40
xiv. Nd_fe < 0.60
xv. Eu_fe < 0.60

Though we re-derive abundances using _The Cannon_, the first two requirements ensure reliable GALAH-reported stellar parameters and [Fe/H] abundances, which consequently eliminates clearly problematic spectral data. The third requirement ensures that all spectra have a signal-to-noise ratio (SNR) above 20 in the \(\sim\) 5700 Angstrom spectral region, and the fourth requirement filters for potential spectroscopic binaries (e.g., Belokurov et al., 2020) that were missed by the flag_sp flag.
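As a concrete illustration, these cuts amount to a simple boolean mask over the catalog. The following is a minimal sketch, assuming the GALAH DR3 main catalog has been loaded into a pandas DataFrame with the column names quoted above; the helper name is ours, not part of any GALAH tooling.

```python
import pandas as pd

def select_giants(cat: pd.DataFrame) -> pd.DataFrame:
    """Apply cuts (i)-(xv) from Section 2 to isolate red giant/red clump stars."""
    m = (
        (cat["flag_sp"] == 0) & (cat["flag_fe_h"] == 0)
        & (cat["snr_c2_iraf"] > 20) & (cat["ruwe"] < 1.2)
        & (cat["logg"] > 1.5) & (cat["logg"] < 3.5)
        & (cat["logg"] > 0.0033 * cat["teff"] - 13.6)   # cut (vi), lower bound
        & (cat["logg"] < 0.0036 * cat["teff"] - 13.9)   # cut (vi), upper bound
        & (cat["fe_h"] > -1.20) & (cat["fe_h"] < 0.20)
        & (cat["Cr_fe"] > -0.25) & (cat["Cr_fe"] < 0.15)
        & (cat["Cu_fe"] > -0.30)
        & (cat["Zr_fe"] < 0.60) & (cat["Y_fe"] < 0.60) & (cat["Ba_fe"] < 0.80)
        & (cat["Ce_fe"] > -0.50) & (cat["Ce_fe"] < 0.40)
        & (cat["Nd_fe"] < 0.60) & (cat["Eu_fe"] < 0.60)
    )
    return cat[m]
```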
Requirements (v) and (vi) ensure we select red giant and red clump stars. The remaining requirements excise certain stars that have either extreme abundances (e.g., \(s\)-process enhanced stars, which are likely post-mass transfer systems and no longer reflect their natal composition) and/or GALAH-reported abundances that are not sampled by our high-quality training set and thus less reliably inferred by our model (see Section 3.1.4).

### Open Cluster Catalog

As mentioned in Section 1, we define doppelgangers to be pairs of field stars that appear as chemically similar to one another as stars residing in open clusters. As such, a reliable set of reference open cluster stars is critical for our investigation. We use the open cluster catalog of Spina et al. (2021) to build this reference set. Spina et al. (2021) builds off of the widely-used open cluster catalog of Cantat-Gaudin et al. (2018), specifically improving cluster membership determinations for GALAH-sampled open clusters. They make use of a Support Vector Machine classifier (see their footnote 2 for a detailed description) to re-assess cluster memberships using Gaia astrometry (Gaia Collaboration et al., 2020) and validate their results with careful inspection of the resulting cluster isochrones and radial velocity distributions. For each cluster star, they report a membership probability. We only consider open cluster stars that have a probability of membership that exceeds 50% (P\({}_{\rm mem}\) \(>\) 0.5). In practice, the vast majority of open cluster stars in our selection have membership probabilities between 90% and 100%, but this selection allows for a few additional stars with membership probabilities of around 75%.

## 3 Method

The primary goal of this work is to measure how often random pairs of field stars sampled by GALAH appear as chemically similar as GALAH stars in open clusters, referred to as the doppelganger rate. Throughout this work, we closely follow the method of Ness et al. (2018). They compute the doppelganger rate using 20 elements (Fe, C, N, O, Na, Mg, Al, Si, S, K, Ca, Ti, V, Mn, Ni, P, Cr, Co, Cu, and Rb) homogeneously derived from APOGEE DR13 spectra using _The Cannon_ (Ness et al., 2015). To compute the doppelganger rate, they draw random pairs of field stars unassociated with known clusters and compare their chemical similarity to that of random stellar pairs drawn from within open clusters, which they refer to as _intracluster pairs_. To quantify the abundance similarity of stars in a pair, they compute a \(\chi^{2}\) value for each pair, defined as:

\[\chi^{2}_{nn^{\prime}}=\sum_{i=1}^{I}\frac{(x_{ni}-x_{n^{\prime}i})^{2}}{\sigma^{2}_{ni}+\sigma^{2}_{n^{\prime}i}} \tag{1}\]

where the two stars in the pair are indexed as \(n\), \(n^{\prime}\) and \(x\), \(\sigma\) are their derived abundance and abundance uncertainty in element \(i\). This leads to a global chemical similarity metric for each pair that considers all sampled elements. Doppelgangers are defined as stellar pairs with \(\chi^{2}\) values less than the median of intracluster pairs.
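In code, Equation 1 reduces to a few lines. The following is a minimal sketch, assuming aligned per-element abundance arrays and 1\(\sigma\) uncertainty arrays for the two stars.

```python
import numpy as np

def pair_chi2(x_n, sig_n, x_np, sig_np):
    """Equation 1: chemical-similarity chi^2 between stars n and n',
    summed over all elements considered."""
    x_n, sig_n = np.asarray(x_n), np.asarray(sig_n)
    x_np, sig_np = np.asarray(x_np), np.asarray(sig_np)
    return np.sum((x_n - x_np) ** 2 / (sig_n ** 2 + sig_np ** 2))
```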
### The Cannon

As in Ness et al. (2018), we measure the doppelganger rate using abundances and abundance uncertainties re-derived using _The Cannon_ (Ho et al., 2017). _The Cannon_ is a data-driven method for determining parameters and abundances from stellar spectra. _The Cannon_ does not explicitly use atomic physics to determine these parameters: instead, it fits a suitably flexible model to the relationship between each spectral pixel's intensity and each input label (e.g., T\({}_{\rm eff}\), log g, [Fe/H], etc.) using a high-fidelity _training_ set. We use a second-order polynomial to model the relationship between spectral pixels and the following labels: T\({}_{\rm eff}\), log g, broadening velocity, O, Mg, Al, Si, Ca, Cr, Cu, Zn, Mn, Zr, Y, Ba, Ce, Nd, and Eu. We subsequently infer these labels at _test_ time in our implementation. This model is similarly used in several other works that require high precision abundances for hundreds of thousands of stars (e.g., Buder et al., 2018; Wheeler et al., 2020; Walsen et al., 2023, submitted). _The Cannon_ is a generative model, and constructs, for each label inference, a probability distribution function for the observed flux - that is, a theoretical spectrum for each star for which the labels are inferred. This enables the goodness-of-fit to be evaluated for the model spectrum versus the data, for each label and for each star. We direct the reader to Ness et al. (2015) for a thorough description of the methodology of _The Cannon_.
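As a rough illustration of this class of model, the sketch below fits an unregularized quadratic-in-labels model independently at each pixel. It is a schematic stand-in under simplifying assumptions, not the actual _Cannon_ implementation, which additionally fits a per-pixel intrinsic scatter and weights by flux uncertainties (Ness et al., 2015).

```python
import numpy as np
from itertools import combinations_with_replacement

def design_matrix(labels):
    """Quadratic vectorizer: columns [1, l_i, l_i*l_j] for a
    (n_stars, n_labels) label array."""
    n_stars, n_labels = labels.shape
    cols = [np.ones(n_stars)]
    cols += [labels[:, i] for i in range(n_labels)]
    cols += [labels[:, i] * labels[:, j]
             for i, j in combinations_with_replacement(range(n_labels), 2)]
    return np.column_stack(cols)

def train(labels, fluxes):
    """Per-pixel least-squares fit of the polynomial coefficients on the
    training set; fluxes has shape (n_stars, n_pixels)."""
    A = design_matrix(labels)
    theta, *_ = np.linalg.lstsq(A, fluxes, rcond=None)
    return theta  # shape (n_terms, n_pixels)

def predict(labels, theta):
    """Generative direction: model spectra for a set of labels."""
    return design_matrix(labels) @ theta
```

At test time, the inverse problem (optimizing labels so that the predicted spectrum matches an observed one) is solved per star; that step is omitted here.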
#### 3.1.1 Why Re-Derive GALAH DR3 Abundances Using The Cannon?

At the core of our investigation is the determination of the chemical similarity of pairs of stars. In an ideal setting, we could simply take the absolute difference in the elemental abundance ratios of each star in a pair to determine their degree of chemical similarity. However, this is impossible because all abundance measurements have an associated uncertainty, and thus, we must factor them into our determinations of chemical similarity. In this work, _The Cannon_ is critical for 1) providing improved precision of the measured abundances and 2) providing accurate uncertainty estimates for stars that we can validate are well fit by our model in spectral space. Abundance precision is a key extrinsic factor that influences the measured doppelganger rate. With large uncertainties in [X/Fe], it is more difficult to distinguish true doppelgangers from stars that show chemical differences unresolved at the current precision level. As such, maximizing precision is key to accurately constraining the doppelganger rate. As explained in Buder et al. (2021), _The Cannon_ is capable of outperforming GALAH DR3's SME-derived abundance precision. For GALAH DR3, elemental abundances were derived using on-the-fly spectral synthesis, and in order to limit computation time, the syntheses were only performed for a selection of unblended spectral regions associated with each element of interest. This means that some elemental abundances, such as [Mg/Fe], were derived using just one or two spectral lines, leading to larger uncertainties (e.g., Jofré et al., 2019). _The Cannon_ in part achieves increased abundance precision because it leverages the _entire_ spectrum to retrieve abundance information. This means it can use _all_ spectral features - strong lines, weak lines, blended lines, lines with uncertain line data, and even continuum effects - associated with each element to infer an abundance. It is known that changes in the chemical composition of a star's atmosphere will affect the atmospheric opacity profile of the star, particularly when the abundance of electron-donating atoms is altered (Ting et al., 2018). Changes in opacity will consequently affect several different parts of the spectrum, including the continuum and line strengths of other elements. _The Cannon_ is able to capture these trends, though careful consideration of non-physical correlations between abundances and spectral features must also be taken (see Section 3.1.4).

Well-constrained abundance uncertainties are also key in this investigation. Underestimated uncertainties will artificially decrease the doppelganger rate while overestimated uncertainties will artificially increase it. Abundance uncertainties from classical abundance determination methods are influenced by a collection of sources, including but not limited to uncertainties in the data reduction process, input atomic physics, and choice of continuum placement when fitting a spectrum (e.g., Jofré et al., 2017). Using _The Cannon_, we can determine a systematic cross-validation uncertainty for each label, which represents the fidelity with which we recover the training labels. This measurement uncertainty incorporates the uncertainty on the training labels that are inherited from GALAH. However, we can take advantage of repeat visits of individual stars to quantify our internal precision, which represents the overall systematic precision with which we can determine each stellar label, and this is what we ultimately adopt for our abundance uncertainties (see Section 3.1.5). It is important to highlight that in our investigation, abundance accuracy, which we inherit from the training set of stars and cannot control, is not important. Our only requirement is that spectra that are identical possess identical labels, so any global offsets in abundance do not impact our result, only relative offsets between different stars due to differing chemistry.

#### 3.1.2 Re-derived Stellar Parameters and Abundances

Instead of re-deriving the full array of elements reported by GALAH, we select a subsample of the elements. We select elements that enable us to ask our scientific question of the data but limit our selection to the set of abundances for which we can build a high-fidelity training set. Our selection samples elements from each major nucleosynthetic family, including a light/odd-Z element (Al), \(\alpha\) elements (O, Mg, Ca, Si), iron-peak elements (Fe, Cr, Cu, Zn, Mn), first (\(s\)-process) peak elements (Y, Zr), second \(s\)-process peak elements (Ba, Ce, Nd), and an \(r\)-process element (Eu). In addition to these elements, we re-derive T\({}_{\rm eff}\), log g, and v_broad (broadening velocity due to rotation, macroturbulence, etc.) for each spectrum.

#### 3.1.3 Additional Modifications to Spectra Prior to Input into The Cannon

Upon downloading all GALAH DR3 spectra that satisfy the conditions enumerated in Section 2, we interpolate each star's flux and flux error array over the shared wavelength grid recommended by the GALAH team. We perform several tests to assess how manipulating the spectra affects our resulting label precision and find that truncating the spectra to only include the first three CCDs (\(\lambda\leq\) 6730 Å) increases the performance of _The Cannon_. This increase in performance from neglecting the last CCD is likely due to the strong spikes in the redmost spectral segment that do not originate in the stellar photosphere and make fitting a model to that spectral region difficult. These spectral spikes in the redmost CCD are likely due to imperfect telluric subtraction, a common challenge in the spectral reduction of near-infrared spectra (e.g., Griffith et al., 2022). We are able to neglect this final CCD without a loss of precision because most of the spectral lines associated with our sampled elements lie in the first three CCDs. The only exception to this is O: by removing the fourth CCD, we lose access to the only available O lines in GALAH. Previous works using _The Cannon_ have successfully recovered O abundances without using O lines (Ting et al., 2018), so we proceed with inferring O, but we _do not_ consider it in our subsequent measurement of the doppelganger rate. Additionally, we remove three spectral segments containing strong diffuse interstellar bands (DIBs, e.g., Vogrinčič et al., 2023) near \(\lambda\)5798, \(\lambda\)5871, and \(\lambda\)6614. These features are caused by interstellar dust absorbing the star's light and are not intrinsic to the stellar photosphere. Thus, to ensure that _The Cannon_ does not use these features for inference, we remove them.

Footnote 2: GALAH DR3 spectra can be downloaded from [https://datacentral.org.au/services/download/](https://datacentral.org.au/services/download/)

Footnote 3: [https://github.com/svenbuder/GALAH_DR3/](https://github.com/svenbuder/GALAH_DR3/)
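A minimal sketch of these modifications follows, assuming a monotonically increasing wavelength array per star; `common_grid` and the 5 Å DIB mask half-width are illustrative choices of ours, not values from the text.

```python
import numpy as np

DIB_CENTERS = (5798.0, 5871.0, 6614.0)  # Å, the three DIB regions removed

def preprocess(wave, flux, flux_err, common_grid, dib_halfwidth=5.0):
    """Resample a spectrum onto a shared grid, keep only the first three
    CCDs (lambda <= 6730 Å), and mask DIB-contaminated pixels."""
    flux_i = np.interp(common_grid, wave, flux)
    err_i = np.interp(common_grid, wave, flux_err)
    keep = common_grid <= 6730.0            # drop the fourth (reddest) CCD
    for center in DIB_CENTERS:              # excise DIB windows
        keep &= np.abs(common_grid - center) > dib_halfwidth
    return common_grid[keep], flux_i[keep], err_i[keep]
```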
#### 3.1.4 Training Set

When building our high-fidelity training set, we experiment with various selections to identify one that allows for high output abundance precision while also sampling sufficient open cluster stars. Our final choice of training set is limited to stars that satisfy the following requirements, in addition to those mentioned in Section 2:

i. flag_X_fe = 0
ii. chi2_sp < 10
iii. snr_c2_iraf > 100

The first requirement makes use of an element-specific flag that controls for unreliable GALAH-measured abundances in specific elements. The second requirement reports the \(\chi^{2}\) fit of the best-fitting SME model to the GALAH data. Previous works, such as Nandakumar et al. (2022), have required that chi2_sp < 4 when building a GALAH-based training set. We found that relaxing our requirement to chi2_sp < 10 enables us to achieve the same precision while allowing better sampling of high-metallicity ([Fe/H] > 0) red giant stars. The final requirement ensures that we are training _The Cannon_ on high SNR data to maximize its ability to learn real correlations between spectral pixels and labels as opposed to learning from noise. In Figure 1, we present a Kiel (log g vs. T\({}_{\rm eff}\)) diagram of our final training set (black dots) consisting of 956 stars that sample the surface area of our chosen parameter space, which we define by the polygon in Figure 1. We note that our chosen polygon is not parallel to the red giant branch. This means that we are unable to sample cooler, metal-rich giants beyond [Fe/H] = 0.2. This is out of necessity: there are not enough high SNR stars with unflagged abundances to extend our training sample to this region of the Kiel diagram. However, this is not a problem as we conduct our entire analysis in the chosen polygon, and the polygon is well-sampled by the training set. To illustrate the parameter space sampled by our open cluster stars, we include them in the figure as open triangles, with M 67, a chemically homogeneous open cluster (e.g., Bovy, 2016; Ness et al., 2018; Poovelil et al., 2020), as orange filled triangles. We note that these open cluster stars are not in our training set.
The background distribution represents the full GALAH dataset, and all background stars that fall within the polygon and satisfy our quality cuts are re-analyzed using _The Cannon_.

Figure 1: Kiel diagram (GALAH-reported log g vs. T\({}_{\rm eff}\)) for the full GALAH sample (background distribution). Our parameter space of investigation, which was designed to contain red giant and red clump stars, is encapsulated within the black polygon. Black dots mark stars in our training set. We include open cluster stars as triangles to illustrate their parameter space coverage, with filled orange triangles highlighting members of chemically-homogeneous open cluster M 67.

In Figure 2, we plot [X/Fe] vs. [Fe/H] distributions for our training set stars atop the equivalent for our full sample, highlighting that the surface area of the [X/Fe] vs. [Fe/H] distribution of our full sample is fully covered by our training set. This is important for ensuring that the model need not extrapolate when inferring stellar abundances.

Figure 2: Density distributions in [X/Fe] vs. [Fe/H] for our training sample (color) atop our full sample (background gray). Note that the colormaps are colored by logarithmic stellar density. We ensure that the training set spans the parameter space of our full sample to ensure reliable output _Cannon_ abundances.

We assess the ability of our model to recover the GALAH labels of the training set by performing a series of ten leave-10%-out cross-validation tests. This involves training our model on 90% of the training data and assessing its ability to recover the GALAH-reported labels of the remaining 10% of the training data. In Figure 3, we plot the _Cannon_-recovered label as a function of input GALAH label for all stars in our training set, marking the one-to-one line for reference. The model is successful in recovering the input training data labels to high precision, with recovered labels agreeing with GALAH-reported labels within 0.04 to 0.08 dex for most elements. Exceptions to this are O, Zr, and Y, which we recover to within 0.11 to 0.14 dex. We note that this cross validation is an assessment of the fidelity with which we can determine the reference labels, but as it includes the GALAH label uncertainties, it is not a measurement of the internal precision of _The Cannon_ on these data (see Section 3.1.5).

Figure 3: Combined results of our 10 leave-10%-out cross-validation tests, where we plot output _Cannon_ label as a function of input GALAH DR3 label for all 19 labels-of-interest in our 956 star training set. The colormap corresponds to point density and the dashed line represents the one-to-one line. The offset from (\(\Delta\)) and scatter (\(\sigma\)) around the one-to-one line is printed in the corner of each panel. Our model is able to retrieve the GALAH-reported abundances of our training set within 0.07 dex for the majority of elements.
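Schematically, this procedure can be sketched as follows; `train` and `infer_labels` are hypothetical stand-ins for _The Cannon_'s training step (e.g., the quadratic fit sketched in Section 3.1) and its test-time label optimization, neither of which is shown here.

```python
import numpy as np

def cross_validate(labels, fluxes, train, infer_labels, n_folds=10, seed=0):
    """Leave-10%-out validation: every star's labels are re-inferred by a
    model trained on the other ~90% of the training set."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(labels))
    recovered = np.empty_like(labels)
    for held_out in np.array_split(order, n_folds):
        train_idx = np.setdiff1d(order, held_out)
        model = train(labels[train_idx], fluxes[train_idx])
        recovered[held_out] = infer_labels(fluxes[held_out], model)
    # comparing `recovered` to `labels` gives the per-label bias and scatter
    return recovered
```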
As a measure of robustness, we assess the fit of the output _Cannon_ model spectra to the input GALAH spectrum, both globally and around strong lines of measured elements, via a \(\chi^{2}\) goodness-of-fit metric that considers GALAH flux uncertainties. To determine goodness-of-fit to specific lines, we adopt the GALAH SME line masks presented in Buder et al. (2018). We flag and subsequently ignore all stars with global \(\chi^{2}\) and line-specific \(\chi^{2}\) values that exceed two times the degrees of freedom (i.e., the number of spectral pixels). In Figure 4, we show an example fit to a typical open cluster star to illustrate the quality of _The Cannon_'s model spectra fits.

Figure 4: A comparison of our _Cannon_ model fit (red) to the spectrum of a star in M 67 (black). The smaller top panels highlight segments of the spectra corresponding to strong lines of each element. The bottom panel shows a larger cutout of the spectrum. It is apparent that _The Cannon_ is capable of fitting the GALAH data well, both globally and around lines of interest.

When using data-driven algorithms such as _The Cannon_ to measure abundances from the full spectral range without the use of censoring, a procedure where _The Cannon_ is only allowed to learn abundance information from specified strong lines of each element, it is important to acknowledge that the model can infer abundances using correlations between spectral features not directly associated with the element. As mentioned in Section 3.1.1, in some cases, there are physical reasons for the existence of these correlations, as changes in a star's atmospheric composition in an element can influence the spectral behavior of other elemental lines or continuum regions (e.g., Ting et al., 2018). However, in other cases, this can lead to abundance inferences of certain elements that are instead primarily driven by a non-physical correlation that is introduced by the training set data. In the context of our high resolution spectra, we expect the primary abundance information for each element to be learned from strong, known lines of the element. We conduct two tests to confirm this. We first inspect the first-order _Cannon_ model coefficients, which describe the direct linear response of each spectral pixel to each label. In Figure A15, we plot the first-order _Cannon_ coefficients for each label as a function of wavelength. We mark strong, known lines of each element with a red dashed line and confirm that _The Cannon_ is drawing its primary abundance information from those line regions. Next, in Figure 5, we repeat this exercise by plotting the median spectrum of our sample and highlighting in orange spectral regions that correspond to the strongest 1% of first-order coefficients for a selection of five elements. We mark known strong lines of each element with a thick orange line.

Figure 5: Spectral windows, showing the continuum normalized flux as a function of wavelength, for the median stellar spectrum in our sample (black) containing regions of absorption lines of Al, Ca, Zr, Y, and Eu, from top to bottom. We highlight in orange spectral regions that correspond with the 1% largest first order _Cannon_ model coefficients in the labelled element. The black spectrum is the base _Cannon_ spectrum, which represents the median spectrum of the full dataset. Light orange vertical lines correspond to known lines of the element, and thin black vertical lines correspond to known lines of Fe. We note that, even in the most difficult to measure elements such as Zr, the _Cannon_ model successfully draws its primary abundance information from the relevant element's lines. See Appendix Figure A15 for the full array of first order coefficients as a function of wavelength for each label.

These two tests make evident that the primary abundance information retrieved by _The Cannon_ is coming from strong lines of each element, though it is also clear that _The Cannon_ is leveraging the full spectrum to extract abundance information.
This is by design and allows _The Cannon_ to achieve its enhanced precision.

#### 3.1.5 Abundance Uncertainties from The Cannon

To determine our final abundance uncertainties, we must take into account two sources of uncertainty. The first is that reported directly by _The Cannon_, which reflects the dispersion in the final likelihood function for each label. The second is the external, systematic uncertainty that is best parametrized as a function of spectral SNR (e.g., Ness et al., 2015; Wheeler et al., 2020; Nandakumar et al., 2022). To determine our model's systematic precision, we make use of repeat visit spectra: spectra taken of the same object and later coadded before being measured for the final main GALAH catalog. Repeat visit spectra present the opportunity to test our model's stability as a function of SNR. In an ideal case, when running spectra of the same source but with different SNRs through _The Cannon_, our model should always return the same labels regardless of the SNR of the spectrum. Thus, any change in the model's inferred labels between spectra of the same source but at different SNRs can quantify our SNR-dependent label uncertainty. For this test, we download the GALAH DR3 all_spec catalog, which reports stellar parameters for each individual observed spectrum. We then identify stars with more than one observation by filtering for repeated values in the dr3_source_id column. We then download 8 nights of data that have significant numbers of targets with repeat-visit spectra (150427, 150428, 150429, 150430, 170912, 170911, 170910, and 170909) and only consider the 387 targets that span the parameter space of our larger data set. We then produce several instances of each source's spectrum at various SNRs, starting first with single spectra populating the lowest SNR bins, followed by coadded versions of the spectra. For example, if a source has three total observations, we are able to produce three low-, three medium-, and one high-SNR version of its spectrum for the purpose of this experiment. We then run the spectra through _The Cannon_ and, taking the labels reported for the highest SNR spectrum as "truth," measure the dispersion in the difference in the inferred labels between the low- and high-SNR spectra as a function of SNR. As reported in Nandakumar et al. (2022), our precision increases with SNR exponentially and plateaus beyond a SNR of 40. We fit exponential functions to describe the relationship between SNR and label recovery precision and adopt the SNR-dependent dispersions as the external precision of our inferred abundances. We compute our final label uncertainties by taking the quadratic sum of the internal model uncertainties reported by _The Cannon_ and the external uncertainties from our SNR experiment.
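A hedged sketch of this two-part error budget follows, assuming per-label arrays of repeat-visit dispersions binned by SNR; the exponential form and the quadrature sum follow the text, while the initial parameter guesses are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def snr_model(snr, a, b, c):
    """Label dispersion falls exponentially with SNR and plateaus at c."""
    return a * np.exp(-snr / b) + c

def fit_external_uncertainty(snr_bins, dispersions):
    """Fit the SNR-dispersion relation for one label; returns a callable."""
    popt, _ = curve_fit(snr_model, snr_bins, dispersions,
                        p0=(np.max(dispersions), 20.0, np.min(dispersions)),
                        maxfev=10000)
    return lambda snr: snr_model(snr, *popt)

def total_uncertainty(internal_err, snr, external_fn):
    """Quadrature sum of the Cannon's internal error and the SNR term."""
    return np.sqrt(internal_err ** 2 + external_fn(snr) ** 2)
```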
In Figure 6, we compare our repeat-visit abundance dispersion (black triangles) with those of GALAH repeat-visit results (gray circles) and GALAH-reported uncertainties (red crosses) as a function of SNR. Our resulting abundance precision is improved relative to the GALAH-reported precision by up to a factor of three. We find that for most elements, the dispersion in the difference between GALAH-reported labels and our _Cannon_-inferred labels from our cross-validation (solid gray line) is similar to the GALAH uncertainty, indicating that the internal precision of _The Cannon_ is very high.

Figure 6: Results of our repeat visit spectrum investigation into the model's precision for each of the 16 elemental abundance ratios that we re-derive in this work. Solid black triangles present the standard deviation in the difference between the reported _Cannon_ label between low-SNR spectra and the highest SNR spectrum as a function of SNR for 387 objects with repeat observations. An exponential fit to this relationship, which is our final adopted precision, is shown as a dashed orange line. Gray circles represent the equivalent of the black triangles except for the GALAH-reported abundances. Red crosses display the mean uncertainty as a function of SNR. We mark the standard deviation from our cross-validation test (Figure 3) in light blue. This figure illustrates the enhanced precision achieved by _The Cannon_, highlighted by the difference in the red and black curves.

In Figure 7, we show the spread in abundance across stars in our sample (black histogram) and compare it to _The Cannon_'s mean achieved abundance precision (purple) and that of GALAH DR3 (navy). Elements such as Zr, Y, and Ce, for example, previously had uncertainties equivalent to the full sample's 1-\(\sigma\) abundance spread (orange line) in the element. Thus, any potential distinguishing power in these elements was thwarted by the GALAH precision. With our _Cannon_-enhanced abundance precisions, these elements can now be used to potentially distinguish between doppelgangers.

Figure 7: Distributions of the _Cannon_-measured elemental abundances in our final sample. We mark the mean GALAH-reported error (\(\mu_{\rm err,G}\)) in each abundance in dark blue, the mean _Cannon_-reported abundance error (\(\mu_{\rm err,C}\)) in lighter purple, and the spread in the _Cannon_-measured abundances (\(\sigma\)) in orange. For the majority of elements, _The Cannon_ achieves either comparable or lower abundance uncertainties compared to the reported GALAH uncertainties. Elements with the greatest potential for discriminating power in our sample possess small uncertainty-to-abundance-spread ratios.

### Final Catalog

Our final catalog consists of 28,120 stars with newly inferred values of v_broad, T\({}_{\rm eff}\), log g, [Fe/H], and [X/Fe] for O, Al, Mg, Si, Ca, Cr, Cu, Zn, Mn, Zr, Y, Ba, Ce, Nd, and Eu for each star that populates the polygon in Figure 1. In Figure 8, we present the [Fe/H] vs. T\({}_{\rm eff}\) distributions for the 14 open clusters in our sample, showing the GALAH-reported distribution in the top panel and the _Cannon_-inferred distribution in the bottom panel. The table schema for our final catalog is included in the Appendix Table A1 and the full table is available online. As mentioned in Section 3.1.4, we flag as unreliable all stars with global \(\chi^{2}\) and individual element \(\chi^{2}\) goodness-of-fit values that exceed two times the degrees of freedom of the spectrum or relevant line mask. We hereafter only consider stars with unflagged global and individual element abundances. To assess the chemical homogeneity of the open clusters in our sample in light of our re-derived abundances, we draw all possible intracluster pairs with \(\Delta\)T\({}_{\rm eff}\) < 100 K, \(\Delta\)log g < 0.1 dex, and unflagged abundances in all elements and present the distributions in absolute difference in abundance for all intracluster pairs in each element in Figure 9, with median values marked with a dashed line. Intracluster pairs in general tend to show small abundance differences, with the majority of elements showing median absolute differences in abundance between 0.024 (Zn) and 0.074 (Zr) dex. It is evident that some intracluster pairs, however, display abundance differences that are large (up to 0.2 dex for Zr, Y, and Ba).

### Measuring the Doppelganger Rate in GALAH DR3

Instead of measuring the doppelganger rate in our full sample, we measure it in a higher quality subset of our data.
This is because the doppelganger rate is sensitive to abundance precision and our choice of open cluster reference pairs. In general, in the low SNR regime, where abundance uncertainties are high, pairs of stars will tend to look more chemically similar within uncertainties. We find that snr_c2_iraf > 40 is an ideal SNR lower limit for measuring a meaningful doppelganger rate in our sample as it ensures we maximize our abundance precision. This SNR cutoff enables us to sample 47 stars across five clusters (NGC 2112, NGC 6253, NGC 2204, Collinder 261, and M 67) and 13,375 stars in the field. To build our reference sample of open cluster stars, we draw all possible combinations of intracluster pairs where stars in the pair have \(\Delta\)T\({}_{\rm eff}\) < 100 K and \(\Delta\)log g < 0.1 dex. The \(\Delta\)T\({}_{\rm eff}\) and \(\Delta\)log g requirement ensures that we avoid potential systematic trends between abundance and T\({}_{\rm eff}\), log g artificially enhancing or minimizing the abundance similarity of stars in a pair. We ultimately build a population of 122 intracluster pairs that serve as a reference point for the chemical similarity of stars born together in open clusters. For our field sample, we draw one million unique pairs of stars that are not members of Spina et al. (2021) open clusters and that satisfy the same \(\Delta\)T\({}_{\rm eff}\), \(\Delta\)log g requirements as those for open cluster pairs. These field pairs serve to sample the chemical diversity (or lack thereof) of the phase-mixed Galactic disk population. After building our intracluster and field pair samples, we measure the doppelganger rate using the method of Ness et al. (2018), computing a global \(\chi^{2}\) value using Equation 1 for each pair of stars and defining doppelgangers to be field pairs with \(\chi^{2}\) less than the median \(\chi^{2}\) value of intracluster pairs in the considered elements.
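A minimal sketch of this measurement, assuming a pandas DataFrame `stars` with columns `teff`, `logg`, a `cluster` column (empty for field stars), and, for each element in `elems`, an abundance column plus a matching `<elem>_err` column; all names are ours, and the default number of field pairs is kept small for runtime (the text draws one million).

```python
import numpy as np
from itertools import combinations

def chi2_pair(a, b, elems):
    """Equation 1 for two catalog rows with <elem> and <elem>_err columns."""
    num = np.array([(a[e] - b[e]) ** 2 for e in elems])
    den = np.array([a[e + "_err"] ** 2 + b[e + "_err"] ** 2 for e in elems])
    return float(np.sum(num / den))

def matched(a, b):
    """Teff/logg matching cuts applied to both cluster and field pairs."""
    return abs(a["teff"] - b["teff"]) < 100 and abs(a["logg"] - b["logg"]) < 0.1

def doppelganger_rate(stars, elems, n_field_pairs=100_000, seed=0):
    """Fraction of random, matched field pairs with chi^2 below the median
    chi^2 of intracluster pairs."""
    cl = stars[stars["cluster"].notna()]
    fld = stars[stars["cluster"].isna()].reset_index(drop=True)
    intra = [chi2_pair(a, b, elems)
             for _, grp in cl.groupby("cluster")
             for (_, a), (_, b) in combinations(grp.iterrows(), 2)
             if matched(a, b)]
    threshold = np.median(intra)
    rng = np.random.default_rng(seed)
    hits = drawn = 0
    while drawn < n_field_pairs:
        i, j = rng.integers(len(fld), size=2)
        if i == j or not matched(fld.iloc[i], fld.iloc[j]):
            continue
        drawn += 1
        hits += chi2_pair(fld.iloc[i], fld.iloc[j], elems) < threshold
    return hits / drawn
```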
## 4 Results

### Impact of neutron-capture elements on the doppelganger rate

Our primary question in this work asks whether the neutron-capture elements affect the doppelganger rate. In Figure 10, we use Equation 1 to determine the \(\chi^{2}\) distributions, a proxy for chemical similarity, of intracluster pairs (top panels) and field pairs (bottom panels). We compare the \(\chi^{2}\) distribution of the field pairs with that of the intracluster pairs when excluding (gray) and including (orange) the neutron-capture elements. Faint dashed lines correspond to the median \(\chi^{2}\) value for intracluster pairs when excluding (gray) and including (orange) neutron-capture elements. The left panel presents the aforementioned for the full SNR > 40 sample, while the right panel presents it for a subset of stars with -0.1 < [Fe/H] < 0.1. As found in Ness et al. (2018), stars born together in open clusters are far more chemically similar than random field pairs. However, there exist field star pairs that are just as, if not more, chemically alike than intracluster pairs, as quantitatively shown in their \(\chi^{2}\) values. These field pairs are deemed doppelgangers. As expected, adding neutron-capture elements (Zr, Y, Ba, Ce, Nd, and Eu) shifts the \(\chi^{2}\) distributions for each population due to adding degrees of freedom to the \(\chi^{2}\) calculation. However, the \(\chi^{2}\) distribution of open cluster pairs shifts comparatively _less_ than that of random field pairs. If neutron-capture elements had no further distinguishing power compared to the lighter elements, then the doppelganger rate would remain constant when including them. However, including neutron-capture elements decreases the doppelganger rate from 0.9% to 0.6% in our full sample and from 2.2% to 0.4% in our sample with a narrow [Fe/H] range. Figure 10 illustrates that the neutron-capture elements have a subtle but non-negligible effect on the doppelganger rate.

### Which elements matter most in distinguishing disk stars?

In Figure 11, we present the impact of each successive elemental family on the doppelganger rate via a _cumulative_ doppelganger rate (CDR) computed in the narrower -0.1 < [Fe/H] < 0.1 range. In the top panel, we report the measured CDR as a function of each elemental family and the elemental families lighter than it. For example, when we plot the CDR associated with the iron-peak elements, we plot the percentage of pairs that have \(\chi^{2}\) values computed using Fe, Cr, Cu, Mn, and Zn less than the median of intracluster pairs.

Figure 8: [Fe/H] as a function of T\({}_{\rm eff}\) for all 122 open cluster stars in our sample. Background shows the full sample's distribution in this plane. The top panel presents the GALAH DR3 abundances and the bottom panel presents the _Cannon_ results. The 86 stars in this figure with SNR > 40 and unflagged abundances in all elements (see Section 3.1.4) serve as the reference point for the chemical homogeneity of stars born together, an ingredient in our measurement of the doppelganger rate.

Next, for the light elements, we plot the same as for the iron-peak elements but this time also considering Al in our \(\chi^{2}\) calculations. Next, for \(\alpha\) elements, we plot the same but considering Al, Fe, Cr, Cu, Mn, Zn, Mg, Ca, and Si. This continues on as one moves rightward, ultimately terminating with the \(s\)-process elements. In the bottom panel, we illustrate the practical discriminating power of each elemental family by reporting the multiplicative factor with which the CDR changes upon the addition of each new family. We find that once doppelgangers are identified using the iron-peak elements, the \(s\)-process elements possess the greatest additional distinguishing power, followed by the \(\alpha\)-elements, the light elements, and the \(r\)-process elements. We note that this narrow [Fe/H] range contains primarily low-\(\alpha\) disk stars, and this may play a role in the relatively low distinguishing power of the \(\alpha\)-elements. As in Figure 10, we find that the doppelganger rate considering light, \(\alpha\), and iron-peak elements is 2.23% for stars with -0.1 < [Fe/H] < 0.1. With the addition of the neutron-capture elements (Eu, Zr, Y, Ba, Ce, and Nd), the doppelganger rate reduces to 0.39%, a factor of 5.75 (with \(r\)-process element Eu reducing it to 1.73% and the \(s\)-process elements reducing it to the final 0.39%).

Figure 11: _Top panel:_ The cumulative doppelganger rate (CDR) as a function of the addition of each new elemental family. The CDR represents the total doppelganger rate when considering all elemental families successively. That is, on the far left, we determine the doppelganger rate considering just the iron-peak elements, while on the far right, we determine the doppelganger rate with the light, \(\alpha\), iron-peak, \(r\)-, and \(s\)-process elements. The introduction of neutron-capture elements reduces the doppelganger rate by a factor of 5.75. It is possible that \(s\)-process elements have a greater influence on the CDR than the \(r\)-process elements, but we remind the reader that we only use one \(r\)-process element, Eu, but five \(s\)-process elements (Zr, Y, Ba, Ce, and Nd), so the comparison is not straightforward. _Bottom panel:_ The factor with which the CDR changes upon adding each new elemental family. The y-axis reports the factor with which the doppelganger rate changes as a function of each added elemental family. The bottom panel illustrates that once pairs are selected by chemical similarity in iron-peak elements, the \(s\)-process elements possess the greatest additional distinguishing power, followed by the \(\alpha\) elements, the light elements, and the \(r\)-process elements.
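The cumulative rate in the top panel of Figure 11 can then be sketched by successively widening the element set passed to the hypothetical `doppelganger_rate()` helper above; the family groupings follow the text (O is inferred but excluded, and Eu is the lone \(r\)-process element).

```python
# Element families, ordered as in Figure 11; column names follow the
# GALAH-style convention assumed in the earlier sketch.
FAMILIES = [
    ("iron-peak", ["fe_h", "Cr_fe", "Cu_fe", "Zn_fe", "Mn_fe"]),
    ("light",     ["Al_fe"]),
    ("alpha",     ["Mg_fe", "Ca_fe", "Si_fe"]),
    ("r-process", ["Eu_fe"]),
    ("s-process", ["Zr_fe", "Y_fe", "Ba_fe", "Ce_fe", "Nd_fe"]),
]

def cumulative_doppelganger_rate(stars):
    """CDR: the doppelganger rate recomputed as each family is added."""
    elems, rates = [], {}
    for family, members in FAMILIES:
        elems += members           # successively widen the element set
        rates[family] = doppelganger_rate(stars, elems)
    return rates
```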
Figure 10: \(\chi^{2}\) (see Equation 1) distributions for intracluster pairs (top panels) and those drawn from the field (bottom panels). Gray histograms present the \(\chi^{2}\) distributions when considering the light (here, Al), \(\alpha\) (Mg, Si, Ca), and iron-peak (Fe, Cr, Cu, Zn, Mn) elements only. The orange histograms present the \(\chi^{2}\) distributions when considering the aforementioned elements as well as the neutron-capture elements (Zr, Y, Ba, Ce, Nd, Eu). Vertical lines mark the median \(\chi^{2}\) value for open cluster pairs without (gray) and with (orange) the consideration of neutron-capture elements. Left panels contain the results for our full SNR > 40 sample while the right panels contain results for stars within the -0.1 < [Fe/H] < 0.1 range. The addition of neutron-capture elements reduces the doppelganger rate by a third for all field pairs and by nearly a factor of six for field pairs in the narrow [Fe/H] range.

Figure 9: Distributions of absolute difference in _Cannon_-derived abundance for 122 intracluster pairs with \(\Delta\)T\({}_{\rm eff}\) < 100 K, \(\Delta\)log g < 0.1 in our sample for each element. The median absolute abundance difference value for all pairs is marked with a dashed line and printed in each panel. We see that in general, open clusters in our sample are highly chemically similar but display signs of abundance variation, particularly in the light \(s\)-process neutron capture elements (Zr, Y, Ba).

### Partial vs. Complete Doppelgangers

Upon discovering that neutron-capture elements affect the doppelganger rate, we isolate pairs of field stars that are _partial_ doppelgangers (that is, they satisfy doppelganger requirements in the light, \(\alpha\), and iron-peak elements) from pairs of field stars that are _complete_ doppelgangers (that is, they satisfy doppelganger requirements in all elements). In Figure 12, we show the distributions in absolute difference in abundance for partial (open violin plots) and complete (filled violin plots) doppelgangers and also include as reference the median absolute difference in abundance for intracluster pairs (orange crosses). We note that this plot reports abundance differences between stars in a pair, not \(\chi^{2}\) values, which we use in the actual computation of the doppelganger rate. This figure illustrates that there exist pairs of field stars that are as chemically similar in the light, \(\alpha\), and iron-peak elements as stars born together but deviate in the neutron-capture elements by up to 0.6 dex.

Figure 12: Violin plots showing the absolute difference in abundance (\(\Delta\)[Fe/H] for Fe, \(\Delta\)[X/Fe] for the remaining elements) between stars in doppelganger pairs for each element. The abundance difference distributions for partial doppelgangers (those that are doppelganger exclusively in light, \(\alpha\), and iron-peak elements) are represented by the empty violin plots. The same distributions for complete doppelgangers (doppelganger in all measured elements) are represented by the filled violin plots. The median abundance difference in each element for intracluster pairs is represented by the orange crosses. This figure illustrates that random pairs of field stars can appear as chemically similar as open cluster stars in the lighter elements but show strong deviations in the heavier elements.

In Figure 13, we show example spectra of partial doppelgangers (black) and complete doppelgangers (red). Note that despite both pairs of stars satisfying doppelganger requirements in the lighter elements, the partial doppelganger spectra show deviations in the neutron-capture lines (highlighted in faint orange) while the complete doppelgangers do not.
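A hedged sketch of this partial/complete split, reusing the hypothetical `chi2_pair()` helper from the earlier sketch; the thresholds would be the intracluster median \(\chi^{2}\) values for the corresponding element sets.

```python
# Element sets follow the text; column names are the same assumed convention.
LIGHT_ALPHA_IRON = ["fe_h", "Cr_fe", "Cu_fe", "Zn_fe", "Mn_fe",
                    "Al_fe", "Mg_fe", "Ca_fe", "Si_fe"]
NEUTRON_CAPTURE = ["Zr_fe", "Y_fe", "Ba_fe", "Ce_fe", "Nd_fe", "Eu_fe"]

def classify_pair(a, b, thr_light, thr_all):
    """'complete' if doppelganger in all elements, 'partial' if only in the
    light/alpha/iron-peak set, else None."""
    light = chi2_pair(a, b, LIGHT_ALPHA_IRON) < thr_light
    full = chi2_pair(a, b, LIGHT_ALPHA_IRON + NEUTRON_CAPTURE) < thr_all
    if light and full:
        return "complete"
    if light:
        return "partial"
    return None
```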
We note here that since neutron-capture elements can distinguish partial from complete doppelgangers, the results of Ness et al. (2018) may overestimate the doppelganger rate due to the exclusion of this elemental family. We explore this possibility in Manea et al. (in prep.). It is of physical interest to investigate why some pairs are partial doppelgangers while others are complete doppelgangers. Adopting the dynamical parameters and age estimates from the associated GALAH value-added catalogs (see Section 2), we compare the similarity in these characteristics for pairs of stars that are partial versus complete doppelgangers. In Figure 14, we plot the running mean (left column) and standard deviation (right column) in the (absolute for the left column) difference in age (top panel) and a series of dynamical characteristics (J\({}_{\rm R}\), L\({}_{\rm Z}\), J\({}_{\rm Z}\), eccentricity, z\({}_{\rm max}\), and energy) for pairs as a function of their chemical similarity, which we measure using a reduced \(\chi^{2}\), defined by the formula in Equation 1 but further divided by the number of degrees of freedom (i.e., the number of elements considered: 9 when ignoring the neutron-capture elements, marked by the gray curves, and 15 when including them, marked by the orange curves). Dashed lines represent the median reduced \(\chi^{2}\) value for intracluster pairs. Shading represents the uncertainty, which for the left panels is \(\pm\sigma/\sqrt{N}\) and for the right panels is \(\pm\sigma/\sqrt{2N}\), where N is the number of stars in the bin. \(\chi^{2}\) correlates strongly with similarity in age or dynamical characteristics. However, we do not find clear evidence that complete doppelgangers (orange curve leftward of the dashed lines) possess more similar age and dynamical parameters relative to partial doppelgangers (gray curve leftward of dashed lines) using the \(\chi^{2}\) metric.
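The reduced \(\chi^{2}\) used in Figure 14 is simply Equation 1 normalized by the number of elements considered; as a one-line sketch reusing the hypothetical `chi2_pair()`:

```python
def reduced_chi2(a, b, elems):
    """Equation 1 divided by the degrees of freedom (9 without the
    neutron-capture elements, 15 with them)."""
    return chi2_pair(a, b, elems) / len(elems)
```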
However, these results should be followed up using higher precision ages and dynamical characteristics, as any potential signature may be concealed beneath large uncertainties in age and orbital properties. Furthermore, this may be due to our definition of \(\chi^{2}\), which is a sum of chemical similarity in all elements. Treating each element individually may yield different results, but we leave this for a future investigation.

## 5 Discussion

Our investigations into the doppelganger rate in GALAH DR3 conclude that the neutron-capture elements possess subtle but non-negligible distinguishing power in the Milky Way disk that is important to consider when chemically characterizing stars. As captured in Figures 10 and 11, these results suggest that the neutron-capture elements contain information distinct from that of the lighter elements and thus add to the dimensionality of Milky Way abundance space.

Figure 13: Example spectra of the two types of doppelganger pairs: partial doppelgangers (black) and complete doppelgangers (red). In each panel, we highlight spectral regions around neutron-capture element lines, with the faint orange line marking the line core of the labelled element. Note how the complete doppelganger spectra match in the highlighted neutron-capture features while the partial doppelgangers show deviations, as expected given our definitions of these two populations.

For example, when ignoring the neutron-capture elements, we identify over 6,700 pairs of apparently-unrelated field stars that are as chemically similar as stars in open clusters. However, when introducing the neutron-capture elements, we find that a third of those pairs are not, in fact, as chemically similar as stars in open clusters once the neutron-capture elements are considered. When restricting ourselves to a narrow [Fe/H] range (-0.1 < [Fe/H] < 0.1), we identify over 600 pairs of doppelgangers when omitting the neutron-capture elements, and upon introducing the neutron-capture elements, we are left with just \(\sim\)108 pairs (a reduction by a factor of 5.75). These results directly support those of Griffith et al. (2022), which found that introducing a chemical dimension corresponding to AGB star nucleosynthesis is required to reproduce the chemical distribution of the GALAH survey. Additionally, these results support those of Lambert & Reddy (2016), which found that open clusters deviate most significantly in the neutron-capture elements even when sharing light (Z < 30) element compositions, suggesting that neutron-capture elements contain additional distinguishing power not captured by the lighter elements.

### Possible Physical Origins of Enhanced Distinguishing Power of Neutron-capture Elements

There are potential physical explanations for why neutron-capture elements could add to the chemical dimensionality of Milky Way stars. Their unique production and dispersal sites may embed neutron-capture elemental abundances with temporal and spatial information that is distinct from that contained in the lighter (Z < 30) elements. For example, several works have identified that \(s\)-process element abundances, when compared to \(\alpha\)-element abundances, can be effective chemical clocks that probe stellar age both within and outside the Milky Way (e.g., Feltzing et al., 2017; Skúladóttir et al., 2019; Ratcliffe et al., 2023).
This is due to their primary nucleosynthetic origins in AGB stars, which are lower mass (M \(<8-10\)M\({}_{\odot}\)) than the high mass (M \(>\) 10M\({}_{\odot}\)) stars that produce the \(\alpha\) elements (e.g., Karakas & Lattanzio, 2014). Because stellar mass strongly influences stellar lifetime, \(s\)-process elements have a longer delay time for enrichment in the Galaxy with respect to the \(\alpha\)-elements. When stars form from gas that is enriched in products of both AGB star and core collapse supernova nucleosynthesis, this delay-time difference enables \(s\)-process elements to trace the star's age when compared to \(\alpha\) elements, though the relationship between the [\(s\)-process/\(\alpha\)] ratio and stellar age is neither universal nor simple (e.g., Casali et al., 2020). In addition to age information, \(s\)-process elements may also add information about stellar birth position that is not captured by supernova-produced elements. Simulations suggest that the \(s\)-process elements have shorter correlation lengths in the Milky Way interstellar medium relative to the lighter elements due to the nature of their dispersal (e.g., Armillotta et al., 2018; Krumholz & Ting, 2018; Emerick et al., 2020). The \(s\)-process elements are dispersed via relatively gentle, localized AGB star winds, contrary to the highly energetic dispersal of the lighter elements via supernovae (e.g., Cox et al., 2012). The more localized nature of their dispersal may thus allow for greater variation in \(s\)-process composition as a function of location within the interstellar medium. Thus, stars born together in pockets of this enriched gas will be chemically similar in these heavier elements, but stars across different birth groups, even at fixed metallicity, may differ in their \(s\)-process composition. We indeed see evidence of this when studying open clusters (e.g., Lambert & Reddy, 2016). This could imply that either a) the timescale between the enrichment of _s_-process products into the interstellar medium and the formation of stars from this material is shorter than the timescale with which the interstellar medium mixes away gas-phase abundance variations, or b) the interstellar medium is less efficient at mixing _s_-process products (again potentially due to their less-energetic dispersal compared to supernova products, Krumholz and Ting, 2018; Emerick et al., 2020). The _r_-process elements also have physical reasons to add a unique dimension to Milky Way abundance space. Figure 14: Running mean (left column) and standard deviation (right column) in (absolute, for left column) age and dynamical property difference for random pairs of field stars as a function of their reduced \(\chi^{2}\), a measure of their chemical similarity normalized by the number of elements considered (i.e., the degrees of freedom). The gray curve omits the neutron-capture elements while the orange curve includes the neutron-capture elements, and the thick dashed line represents the median reduced \(\chi^{2}\) for intracluster pairs. While chemical similarity globally correlates with age or dynamical similarity, complete doppelgangers (orange curve that lies leftward of the dashed line) do not appear to be more similar in age and dynamical parameters than partial doppelgangers (gray curve that lies leftward of dashed line) using our defined \(\chi^{2}\) metric.
The _r_-process elements are believed to be formed in stochastic events such as magnetorotational supernovae (e.g., Siegel and Metzger, 2017; Halevi and Mösta, 2018; Siegel et al., 2019) and compact object mergers (e.g., Korobkin et al., 2012). The stochastic nature of their synthesis could allow them to carry additional information about stellar birth position or age. We note that the origin of the _r_-process elements is still a major open question in the field (e.g., Lian et al., 2023; Kobayashi et al., 2023). The results of our analysis of the CDR (Figure 11) appear to suggest that the _r_-process elements hold less distinguishing power than the _s_-process elements. We caution that Eu is the only pure _r_-process element in our analysis that represents this nucleosynthetic family, whereas we have five elements representing the _s_-process family, one of which also has significant _r_-process contribution (Nd, e.g., Kobayashi et al., 2020). It is thus possible that much of the distinguishing power of _r_-process elements is overshadowed by the numerous _s_-process elements we sample. However, the _s_-process elements possessing greater distinguishing power than the _r_-process elements also has physical support. Leading potential origins of _r_-process elements involve dispersal via energetic supernovae, and this may cause them to behave differently than _s_-process elements in the ISM. Further work into the dependence of the doppelganger rate on _r_-process elements must be conducted to clarify whether they truly carry less distinguishing power than _s_-process elements, or if this is just a consequence of the choice of elements considered. Given the neutron-capture elements that we consider in this work, the doppelganger rate reduces from \(\sim\)2.2% to 0.4%. Whilst this is a substantial drop in the doppelganger rate, and informative for nucleosynthetic sources and mixing, it is still prohibitive for the prospect of strong chemical tagging in the Milky Way Galaxy. This rate, which measures the probability with which random stars are chemically as similar as stars born together, is still a factor of \(\approx\)1000-10000 greater than the expected rate of recovering true birth pairs in a disk. This assumes that clusters form with a typical mass of \(1\times 10^{4}\,\mathrm{M}_{\odot}\) - \(1\times 10^{6}\,\mathrm{M}_{\odot}\) (e.g., Bland-Hawthorn et al., 2010). ### Analysis Limitations and Looking Ahead to GALAH DR4 We emphasize that the doppelganger rate is influenced by several factors, some intrinsic to the Milky Way (e.g., mixing efficiency in the interstellar medium, variations in nucleosynthetic yields) and some caused by the available data. The results of this work are limited by the precision of derived elemental abundances. With increased abundance precision, we may find that the doppelganger rate decreases even further. However, the results of this work suggest that at the abundance precision of GALAH DR3 combined with _The Cannon_, we are able to harness the distinguishing power of neutron-capture elements. We note that if we repeat this experiment using the provided GALAH DR3 abundances of a high SNR (SNR > 100), high precision subsample of the catalog, we obtain the same qualitative results. This work is also limited by the number of elements that we consider in our analysis.
For the purpose of building a reasonably sizeable training set, we could not consider all \(\sim\)30 elements reported by GALAH, as it is rare for stars to have high-fidelity abundance measurements in all elements. As such, we only considered between 1 and 5 elements from each nucleosynthetic family. The upcoming fourth data release of GALAH is an ideal environment for expanding on this experiment. GALAH DR4 will have a larger sample size overall and enhanced sampling of open cluster stars. Enhancing the reference open cluster sample would allow for a more granular exploration of the doppelganger rate as a function of metallicity. In this work, we are limited by the small number of open cluster stars sampled by GALAH DR3, and separating the data into metallicity bins would make the number of reference open cluster stars per metallicity bin prohibitively small. In DR4, the number of sampled open cluster stars will be doubled, and the sampled open clusters will span -2 < [Fe/H] < 1, whereas DR3 only reasonably samples open clusters with -0.5 < [Fe/H] < 0.5. Furthermore, GALAH DR4 will have improved abundance precision, further minimizing its extrinsic effect on the measured doppelganger rate. Finally, GALAH DR4 will provide a larger set of stars sampled in a wider range of elements, enabling the consideration of a greater number of elements and thus allowing for a result that better reflects the intrinsic doppelganger rate of our Galaxy. ## 6 Conclusions In this work, we measure the doppelganger rate among red clump and red giant stars in GALAH DR3. The doppelganger rate measures the rate at which randomly drawn pairs of apparently unrelated field stars appear to be as chemically similar as stars born together. It probes the chemical diversity of Milky Way stars, the chemical dimensionality of Milky Way abundance space, and the complexity with which Galactic chemical evolution operates. After re-deriving stellar parameters and abundances with _The Cannon_, we measure the chemical doppelganger rate. We find that 0.9% of random pairs of field stars are doppelgangers in the light-, \(\alpha\)-, and iron-peak elements. This number increases to \(\sim\)2.2% when we restrict ourselves to stars in the -0.1 < [Fe/H] < 0.1 dex range. However, we find that including the neutron-capture elements Zr, Y, Ba, Ce, Nd, and Eu in our analysis decreases the doppelganger rate significantly. When considering our full sample, the neutron-capture elements reduce the doppelganger rate to 0.6%, and when restricting to the -0.1 < [Fe/H] < 0.1 dex range, the doppelganger rate drops to 0.4%, by nearly a factor of 6 relative to the rate measured considering only the lighter (Z < 30) elements. In other words, up to 85% of stars that are highly chemically similar in the lighter elements deviate in their neutron-capture element abundances. Chemical similarity strongly correlates with similarity in age or dynamics. However, we do not identify any clear signatures that complete doppelgangers, pairs of stars that are doppelganger in the light, \(\alpha\), iron-peak, _and_ neutron-capture elements, are more similar in age and dynamical characteristics than partial doppelgangers, those that are doppelganger in the lighter elements but show deviations in the neutron-capture elements. However, these results are not conclusive, so additional follow-up work should be done to further explore this.
Finally, our results suggest that the \(s\)-process elements may carry greater distinguishing power in our sample than the \(r\)-process elements, though we urge additional follow-up to confirm this. This work highlights that the neutron-capture elements carry unique information that is distinct from that found in the light-, \(\alpha\)-, and iron-peak elements and are thus important tools in the chemical characterization of Milky Way stars. Despite their enhanced distinguishing power, our final doppelganger rate of \(\sim\)0.4% suggests that neutron-capture elements measured at the precision of this work are likely not sufficient to satisfy the requirements for strong chemical tagging. However, our results illustrate that neutron-capture elements can distinguish between 85% of stars that appear chemically similar in the light, \(\alpha\), and iron-peak elements, suggesting that these heavy elements, particularly the \(s\)-process elements, are potentially important tools for the weak chemical tagging of stars to known clusters and stellar populations. Our results motivate the need for continued work to improve atomic data for the heavy elements and enhance our ability to extract precise and accurate neutron-capture elemental abundances from stellar spectra. ## Acknowledgements CM is supported through the University of Texas at Austin Graduate Continuing Fellowship. KH acknowledges support from the National Science Foundation grants AST-1907417 and AST-2108736 and from the Wootton Center for Astrophysical Plasma Properties funded under the United States Department of Energy collaborative agreement DE-NA0003843. This work was performed in part at the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1607611. This work was also performed in part at the Simons Foundation Flatiron Institute's Center for Computational Astrophysics during KH's tenure as an IDEA Fellow. This work was supported by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013. DBZ and SLM acknowledge the support of the Australian Research Council through Discovery Project grant DP220102254, and SLM acknowledges the support of the UNSW Scientia Fellowship Program. The following software and programming languages made this research possible: topcat; Python (version 3.9) and its packages astropy (version 2.0; Astropy Collaboration et al., 2013), scipy (Virtanen et al., 2020), matplotlib (Hunter, 2007), pandas (version 0.20.2; Reback et al., 2020) and NumPy (van der Walt et al., 2011). This research has made use of the VizieR catalog access tool, CDS, Strasbourg, France. The original description of the VizieR service was published in A&AS 143, 23. Colour maps used in figures were adopted from those created by Fabio Crameri ([http://doi.org/10.5281/zenodo.1243862](http://doi.org/10.5281/zenodo.1243862)). ## Data Availability This work has made use of data from the European Space Agency (ESA) mission Gaia ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the Gaia Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement.
This work has also made use of GALAH DR3, based on data acquired through the Australian Astronomical Observatory, under programmes: A/2013B/13 (The GALAH pilot survey); A/2014A/25, A/2015A/19, A/2017A/18 (The GALAH survey). We acknowledge the traditional owners of the land on which the AAT stands, the Gamilaraay people, and pay our respects to elders past and present. The GALAH DR3 data underlying this work are available in the Data Central at [https://cloud.datacentral.org.au/teamdata/GALAH/public/GALAH_DR3/](https://cloud.datacentral.org.au/teamdata/GALAH/public/GALAH_DR3/) and can be accessed with the unique identifier galah_dr3 for this release and sobject_id for each spectrum.
2303.02613
On Data-Driven Drawdown Control with Restart Mechanism in Trading
This paper extends the existing drawdown modulation control policy to include a novel restart mechanism for trading. It is known that the drawdown modulation policy guarantees the maximum percentage drawdown no larger than a prespecified drawdown limit for all time with probability one. However, when the prespecified limit is approaching in practice, such a modulation policy becomes a stop-loss order, which may miss profitable follow-up opportunities, if any. Motivated by this, we add a data-driven restart mechanism into the drawdown modulation trading system to auto-tune the performance. We find that with the restart mechanism, our policy may achieve a superior trading performance to that without the restart, even in a setting with nonzero transaction costs. To support our findings, some empirical studies using equity ETF and cryptocurrency with historical price data are provided.
Chung-Han Hsieh
2023-03-05T08:51:28Z
http://arxiv.org/abs/2303.02613v1
# On Data-Driven Drawdown Control with Restart Mechanism in Trading ###### Abstract This paper extends the existing drawdown modulation control policy to include a novel _restart_ mechanism for trading. It is known that the _drawdown modulation policy_ guarantees the maximum percentage drawdown no larger than a prespecified drawdown limit for all time with probability one. However, when the prespecified limit is approaching in practice, such a modulation policy becomes a stop-loss order, which may miss the profitable follow-up opportunities, if any. Motivated by this, we add a data-driven restart mechanism into the drawdown modulation trading system to auto-tune the performance. We find that with the restart mechanism, our policy may achieve a superior trading performance to that without the restart, even in a setting with nonzero transaction costs. To support our findings, some empirical studies using equity ETF and cryptocurrency with historical price data are provided. Footnote †: This paper was supported in part by Ministry of Science and Technology, R.O.C. Taiwan, under Grant: MOST 111–2221–E–007–124–. Keywords: Control applications, stochastic systems, algorithmic trading, financial engineering, drawdown control. ## 1 Introduction Starting from the pioneering work by Markowitz (1952, 1959), the portfolio optimization problem is often solved by the _mean-variance_ approach. That is, the trader seeks an optimal trade-off between payoff and risk measured by the portfolio returns' mean and variance. While the variance is widely used as a standard _risk_ metric in finance, it is closer to a measure of _dispersion risk_, which treats both positive and negative deviations from the mean as equally risky; see Fabozzi et al. (2007) for a good introduction to this topic. ### Downside Risks To remedy the equal riskiness of the dispersion risks, many surrogate risk measures have been proposed that pay attention to the _downside risks_. These include Value at Risk (VaR); see Jorion (2000), Conditional Value at Risk (CVaR); see Rockafellar and Uryasev (2000), absolute drawdown; see Magdon-Ismail and Atiya (2004); Hayes (2006), conditional expected drawdown (CED), the tail mean of maximum drawdown distributions; see Goldberg and Mahmoud (2017), and the more general _coherent risk_ measures, which axiomatize risk; see Luenberger (2013); Shapiro et al. (2021). See also Korn et al. (2022) for an empirical study comparing various drawdown-based risk metrics. In this paper, we focus on a more practical drawdown measure, the _maximum percentage drawdown_, the maximum percentage drop in wealth over time, as the risk measure. ### Control of Drawdown Different types of drawdown and methodologies are studied extensively in the existing literature. For example, optimal drawdown control problems in a continuous-time setting are studied in Grossman and Zhou (1993); Cvitanic and Karatzas (1994); Chekhlov et al. (2005); Malekpour and Barmish (2012, 2013). See also Boyd et al. (2017) for a study on multiperiod portfolio optimization problems involving drawdown as a constraint in a discrete-time setting. A recent study that uses deep reinforcement learning to address practical drawdown issues can be found in Wu et al. (2022). Among all the existing papers, the prior works in Hsieh and Barmish (2017a,b) are the most closely related papers, where the key result, the so-called _drawdown modulation lemma_, is proved.
Roughly speaking, it gives a necessary and sufficient condition on a broad class of control policies that guarantees an almost sure maximum percentage drawdown protection for all time. However, Hsieh and Barmish (2017a) indicate that, in practice, when the prespecified drawdown limit is approaching, the trading policy may behave like a stop-loss order, and the trading may be stopped; see also Hsieh (2022a) for a study on a class of affine policies with a stop-loss order. To this end, we extend the drawdown modulation policy with a novel restart mechanism to remedy the stop-loss phenomenon. ### Contributions of the Paper This paper extends the existing drawdown modulation policy with a novel restart mechanism. The preliminaries are provided in Section 2. We formulate an optimal drawdown control problem for a two-asset portfolio with one risky and one riskless asset. Then we extend the existing drawdown modulation theory so that the riskless asset is explicitly involved; see Lemma 3.1. A necessary and sufficient condition for a broad class of control policies, which we call drawdown modulation policies, is provided. Then, the modulation policy with a restart mechanism is discussed in Section 3. The idea of the restart is simple: When the percentage drawdown up-to-date is close to the prespecified drawdown limit, we restart the trades with an updated policy. We also provide numerical examples involving ETF and cryptocurrency historical price data to support our findings; see Section 4. ## 2 Preliminaries We now provide some useful preliminaries for the sections to follow. ### Asset Trading Formulation Fix an integer \(N>0\). For stage \(k=0,1,\ldots,N\), we let \(S(k)>0\) denote the price of the underlying financial asset at stage \(k\). The associated _per-period returns_ are given by \(X(k):=\frac{S(k+1)-S(k)}{S(k)}\) and the returns are assumed to be bounded, i.e., \(X_{\min}\leq X(k)\leq X_{\max}\) with \(X_{\min}\) and \(X_{\max}\) being points in the support, denoted by \(\mathcal{X}\), and satisfying \(-1<X_{\min}<0<X_{\max}\). For the money market asset, e.g., a bond or bank account, we use \(r_{f}(k)\) to denote the interest rate at stage \(k\). Remark 2.1: Note that the returns considered here are not necessarily independent and can have an arbitrary but bounded distribution with the bounds \(X_{\min}\) and \(X_{\max}\). ### Account Value Dynamics Beginning at some initial account value \(V(0)>0\), consider a portfolio consisting of two assets, with one being risky and the other being a riskless asset with interest rate \(r_{f}(k)\in[0,X_{\max}]\) for all \(k\) almost surely. For stage \(k=0,1,\ldots,\) we let \(V(k)\) denote the account value at stage \(k\). Then the evolution of the account value dynamics is described by the stochastic recursion \[V(k+1)=V(k)+u(k)X(k)+(V(k)-u(k))r_{f}(k).\] Given a _prespecified drawdown limit_\(d_{\max}\in(0,1)\), we focus on conditions on selecting a policy \(u(k)\) under which satisfaction of the constraint \(d(k)\leq d_{\max}\) is assured for all \(k\) with probability one, where \(d(k)\) is the percentage drawdown up to date \(k\), which is defined below. Definition 2.1 (Maximum Percentage Drawdown): For \(k=0,1,\ldots,N\), the _percentage drawdown_ up to date \(k\), denoted by \(d(k)\), is defined as \[d(k):=\frac{V_{\max}(k)-V(k)}{V_{\max}(k)}\] where \(V_{\max}(k):=\max_{0\leq i\leq k}V(i)\).
The _maximum percentage drawdown_, call it \(d^{*}\), is then defined as \[d^{*}:=\max_{0\leq k\leq N}d(k).\] Remark 2.2: It is readily seen that the percentage drawdown satisfies \(d(k)\in[0,1]\) for all \(k\) with probability one. ## 3 Drawdown Modulation with Restart Hsieh and Barmish (2017a) state a necessary and sufficient condition on any trading policy \(u(k)\) that guarantees that the percentage drawdown up to date, \(d(k)\), is no greater than a given level \(d_{\max}\) for all \(k\) with probability one. Below, we extend the result to include a riskless asset. Lemma 3.1 (Drawdown Modulation): _Let \(d_{\max}\in(0,1)\) be given. A trading policy \(u(\cdot)\) guarantees the prespecified drawdown limit, satisfying \(d(k)\leq d_{\max}\) for all \(k\) with probability one, if and only if for all \(k\), the condition_ \[-\frac{M(k)+r_{f}(k)}{X_{\max}-r_{f}(k)}V(k)\leq u(k)\leq\frac{M(k)+r_{f}(k)}{|X_{\min}|+r_{f}(k)}V(k)\] _is satisfied along all sample paths where_ \[M(k):=\frac{d_{\max}-d(k)}{1-d(k)}.\] Proof: The idea of the proof is similar to that of Hsieh and Barmish (2017a). However, for the sake of completeness, we provide a full proof here. To prove necessity, assuming that \(d(k)\leq d_{\max}\) for all \(k\) and all sequences of returns, we must show the condition on \(u(k)\) holds along all sequences of returns. Fix \(k\). Since both \(d(k)\leq d_{\max}\) and \(d(k+1)\leq d_{\max}\) for all sequences of returns, we claim this forces the required inequalities on \(u(k)\). Without loss of generality, we prove the right-hand inequality for the case \(u(k)\geq 0\) and note that an almost identical proof also works for \(u(k)<0\). To establish the condition on \(u(k)\) for all sequences of returns, it suffices to consider the path with the worst loss \(|X_{\min}|u(k)\). In this case, we have \(V_{\max}(k+1)=V_{\max}(k)\). Hence, \[d(k+1)\] \[=\frac{V_{\max}(k+1)-V(k+1)}{V_{\max}(k+1)}\] \[=\frac{V_{\max}(k)-V(k+1)}{V_{\max}(k)}\] \[=\frac{V_{\max}(k)-V(k)+u(k)|X_{\min}|-(V(k)-u(k))r_{f}(k)}{V_{\max}(k)}\] \[=d(k)+\frac{u(k)|X_{\min}|-(V(k)-u(k))r_{f}(k)}{V_{\max}(k)}\leq d_{\max}.\] We now substitute \(V_{\max}(k)=\frac{V(k)}{1-d(k)}>0\) into the inequality above and obtain \[|X_{\min}|u(k)-(V(k)-u(k))r_{f}(k)\leq M(k)V(k),\] where \(M(k)=\frac{d_{\max}-d(k)}{1-d(k)}\). This implies that \[(|X_{\min}|+r_{f}(k))u(k)\leq(M(k)+r_{f}(k))\,V(k).\] Or equivalently, \[u(k)\leq\frac{M(k)+r_{f}(k)}{|X_{\min}|+r_{f}(k)}V(k).\] To prove sufficiency, assuming that the stated bounds on \(u(k)\) hold along all sequences of returns, we must show \(d(k)\leq d_{\max}\) for all \(k\) and all sequences of returns. Proceeding by induction, for \(k=0\), we trivially have \(d(0)=0\leq d_{\max}\). To complete the inductive argument, we assume that \(d(k)\leq d_{\max}\) for all sequences of returns, and must show \(d(k+1)\leq d_{\max}\) for all sequences of returns. Without loss of generality, we again provide a proof for the case \(u(k)\geq 0\) and note that a nearly identical proof is used for \(u(k)<0\). Indeed, by noting that \[d(k+1)=1-\frac{V(k+1)}{V_{\max}(k+1)},\] and \(V_{\max}(k)\leq V_{\max}(k+1)\) for all sequences of returns, we split the argument into two cases: If \(V_{\max}(k)<V_{\max}(k+1)\), then \(V_{\max}(k+1)=V(k+1)\) and we have \(d(k+1)=0\leq d_{\max}\).
On the other hand, if \(V_{\max}(k)=V_{\max}(k+1)\), with the aid of the dynamics of account value, we have \[d(k+1) =1-\frac{V(k)+u(k)X(k)+(V(k)-u(k))r_{f}(k)}{V_{\max}(k)}\] \[\leq 1+\frac{-V(k)(1+r_{f}(k))+u(k)(|X_{\min}|+r_{f}(k))}{V_{\max}(k)}.\] Using the upper bound on \(u(k)\); i.e., \[u(k)\leq\frac{M(k)+r_{f}(k)}{|X_{\min}|+r_{f}(k)}V(k)\] and \(V_{\max}(k)=\frac{V(k)}{1-d(k)}\), we obtain \[d(k+1) \leq 1+(-1+M(k))(1-d(k))\] \[=d(k)+M(k)(1-d(k)).\] Using the definition of \(M(k)=\frac{d_{\max}-d(k)}{1-d(k)}\), it follows that \(d(k+1)\leq d_{\max}\), and the proof is complete. \(\Box\) ### Drawdown Modulation Policy Consistent with Hsieh and Barmish (2017a), fix the prespecified drawdown limit \(d_{\max}\in(0,1)\). With the aid of Lemma 3.1, one can readily obtain a class of policy functions \(u(k)\) expressed as a _linear time-varying feedback controller_ parameterized by a gain \(\gamma\), leading to the satisfaction of the drawdown specification. Specifically, we express \(u(k)\) in the feedback form \[u(k):=K(k)V(k) \tag{1}\] with \(K(k):=\gamma M(k)\) and \[\gamma\in\Gamma:=\left[\frac{-1}{X_{\max}-\max_{k}r_{f}(k)},\,\frac{1}{|X_{\min}|+\max_{k}r_{f}(k)}\right].\] Equation (1) is called the _drawdown modulation policy_, which is parameterized by the two parameters \((\gamma,d_{\max})\). **Remark 3.1**: \((i)\) _It is readily verified that the drawdown modulation policy (1) satisfies Lemma 3.1. \((ii)\) To link back to finance concepts, the special case of buy-and-hold is obtained when \(K(k)\equiv 1\). Note that \(u(k)<0\) stands for short selling.1 \((iii)\) Instead of using a fixed feasible set \(\Gamma\), it is also possible to allow a time-varying feasible set, say \(\Gamma_{k}\), to reflect the time dependency of the returns._ Footnote 1: Short selling a stock means that a trader borrows the stock from someone who owns it and sells it in the hope that its price will drop in the near future; see Luenberger (2013). **Corollary 3.1**: \((\)_Maximum Drawdown Protection_\()\)_. With the drawdown modulation policy (1), the maximum percentage drawdown satisfies \(d^{*}\leq d_{\max}\)._ Proof: Since the drawdown modulation policy satisfies Lemma 3.1, it assures \(d(k)\leq d_{\max}\) for all \(k\) with probability one. Therefore, it follows that \[d^{*}=\max_{0\leq k\leq N}d(k)\leq d_{\max}.\qquad\Box\] ### Optimal Drawdown Control Problem Having obtained the drawdown modulation policy (1), a natural question arises: how should one select an "optimal" \(\gamma\)? To this end, we define the _total return_ up to terminal stage \(N\) as a ratio \[R_{\gamma}(N):=\frac{V(N)}{V(0)}\] where the subscript of \(R_{\gamma}(\cdot)\) is used to emphasize the dependence on the gain \(\gamma\). Define \(J(\gamma):=\mathbb{E}[R_{\gamma}(N)]\). Then, we consider a multiperiod drawdown-based stochastic optimization problem \[J^{*}:=\max_{\gamma\in\Gamma}J(\gamma)\] subject to \[V(k+1) =V(k)+u(k)X(k)+(V(k)-u(k))r_{f}(k)\] \[=[1+r_{f}(k)+\gamma M(k)(X(k)-r_{f}(k))]V(k).\] It is readily verified that \[\frac{V(N)}{V(0)}=\prod_{k=0}^{N-1}[1+r_{f}(k)+\gamma M(k)(X(k)-r_{f}(k))].\] Therefore, we rewrite the problem in the following equivalent form \[\max_{\gamma\in\Gamma}\mathbb{E}[R_{\gamma}(N)]\] \[=\max_{\gamma\in\Gamma}\mathbb{E}\left[\prod_{k=0}^{N-1}[1+r_{f}(k)+\gamma M(k)(X(k)-r_{f}(k))]\right]. \tag{2}\] In the sequel, we shall use \(\gamma^{*}\) to denote a maximizer of the optimization problem above.
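Since Problem (2) is generally nonconvex in \(\gamma\) (see Remark 3.2 below), a simple way to approximate \(\gamma^{*}\) in practice is a Monte Carlo grid search over \(\Gamma\). The following is a minimal sketch under the dynamics above; the simulated return paths and the grid used here are hypothetical, not the data of Section 4.

```python
import numpy as np

def expected_total_return(gamma, X_paths, r_f, d_max):
    """Monte Carlo estimate of J(gamma) = E[V(N)/V(0)] under the
    drawdown modulation policy u(k) = gamma * M(k) * V(k).
    X_paths: array of shape (num_paths, N) of per-period returns."""
    totals = []
    for X in X_paths:
        V, V_max = 1.0, 1.0
        for x in X:
            d = (V_max - V) / V_max           # percentage drawdown d(k)
            M = (d_max - d) / (1.0 - d)       # modulation function M(k)
            V = V * (1.0 + r_f + gamma * M * (x - r_f))
            V_max = max(V_max, V)
        totals.append(V)
    return np.mean(totals)

# Sketch: grid search over the feasible set Gamma (grid values hypothetical)
X_paths = np.random.default_rng(1).normal(5e-4, 0.01, size=(1000, 250))
grid = np.linspace(-5.0, 7.0, 49)
J = [expected_total_return(g, X_paths, r_f=0.01 / 365, d_max=0.1) for g in grid]
gamma_star = grid[int(np.argmax(J))]
```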
In practice, if one views that the optimum \(\gamma^{*}\) obtained may be too aggressive, a practical but arguably suboptimal way is to introduce an additional _fraction_, call it \(\alpha\), that is used to shrink the investment size; see Maclean et al. (2010) for a similar idea. That is, instead of working with \(\gamma^{*}\), one may work with \(\alpha\gamma^{*}\) where \(\alpha\in(0,1]\). **Remark 3.2**: \((\)_Non-Convexity_\()\)_. It is important to note that solving Problem (2) is challenging since the modulation function \(M(k)\) depends on \(\gamma\) and the history of \(X(0),\ldots,X(k-1)\), which in general yields a nonconvex problem; e.g., see Figure 4 in Section 4 for an illustration of the non-convex nature. Therefore, Monte-Carlo simulations are often needed to obtain the optimum._ ### Modulation with Restart As mentioned in Section 1, while the derived drawdown modulation policy provides almost sure drawdown protection, it may incur a stop-loss behavior. To remedy this, we now introduce a restart mechanism into the modulation policy. Specifically, let \(\varepsilon\in(0,d_{\max})\) be a prespecified threshold parameter. Then we set the _threshold_ for restarting the trade by \[d(k)+\varepsilon>d_{\max}. \tag{3}\] If, at some stage \(k=k_{0}\), Inequality (3) is satisfied, the trading is restarted by re-initializing \(d(k_{0}):=0\) and resetting the time-varying gain function \(K(k)\) of the modulation policy \(u(k)=K(k)V(k)\) at that stage \(k=k_{0}\) as \[K(k_{0}):=\gamma^{*}\alpha e^{-k_{0}/N}M(k_{0}) \tag{4}\] where \(\alpha e^{-k_{0}/N}\) represents a _forgetting factor_ with the fraction \(\alpha\in(0,1]\) mentioned previously. Then we continue the trade until the next restart stage or the terminal stage \(N\). **Remark 3.3**: \((i)\) _The forgetting factor in Equation (4) reflects the idea that the trading size should be shrunk after the restart. Said another way, if the trades are approaching the prespecified drawdown limit \(d_{\max}\), the follow-up trades should be more conservative after the restart. \((ii)\) Note that after the restart, the control policy satisfies \(|u(k_{0})|\leq|\gamma^{*}|\alpha d_{\max}V(k_{0})\) since \(M(k)\leq d_{\max}\) for all \(k\) with probability one. \((iii)\) While it is not considered in this paper, we should note that the optimal \(\gamma^{*}\) in Equation (4) can also be re-calculated at each restart time by using the information from the previous \(k_{0}-m\) stages for some integer \(m>1\); see also Wang and Hsieh (2022) for a similar idea for obtaining a data-driven log-optimal portfolio via a sliding-window approach._ ## 4 Illustrative examples In this section, two trading examples are provided to support our theory. The first example trades an equity ETF and a riskless asset. The second example trades Bitcoin and a riskless asset. For the sake of simplicity, we take a constant daily interest rate \(r_{f}(k):=0.01/365\) for all \(k\), which corresponds to a 1% annual rate. While our theory allows _leveraging_, in the sequel, we impose an additional _cash-financing_ condition by requiring that \(|u(k)|\leq V(k)\) for all \(k\), which corresponds to \(|K^{*}(k)|\leq 1\).
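Before turning to the experiments, the following is a minimal sketch of one sample path of the modulation policy with restart, combining the account dynamics, Inequality (3), and Equation (4). It assumes, as one natural reading of the scheme, that the shrunk gain is kept until the next restart; the return path, \(\gamma^{*}\), and \(d_{\max}\) values used in the usage line are hypothetical.

```python
import numpy as np

def trade_with_restart(X, gamma_star, d_max, alpha=0.5, r_f=0.01 / 365):
    """One sample path of the drawdown modulation policy with restart.
    When d(k) + eps > d_max (Inequality (3)), re-initialize d(k0) := 0 and
    shrink the gain by the forgetting factor alpha * exp(-k0 / N), Eq. (4)."""
    N, eps = len(X), d_max / 10.0
    V, V_max, scale = 1.0, 1.0, 1.0
    for k, x in enumerate(X):
        d = (V_max - V) / V_max
        if d + eps > d_max:                  # restart threshold, Inequality (3)
            V_max, d = V, 0.0                # re-initialize d(k0) := 0
            scale = alpha * np.exp(-k / N)   # forgetting factor in Eq. (4)
        M = (d_max - d) / (1.0 - d)
        u = gamma_star * scale * M * V       # u(k) = K(k) V(k)
        V = V + u * x + (V - u) * r_f        # account value dynamics
        V_max = max(V_max, V)
    return V

X = np.random.default_rng(2).normal(5e-4, 0.02, 500)   # hypothetical returns
print(trade_with_restart(X, gamma_star=5.138, d_max=0.2))
```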
### Trading with ETF and Riskless Asset Consider the Vanguard Total World Stock Index Fund ETF (Ticker: VT)2 to be the risky asset, covering a one-year in-sample period for optimization from January 02, 2019 to January 02, 2020, and the out-of-sample period from January 02, 2020 to September 20, 2022, which contains a total of \(N=684\) trading days; see Figure 1, where the cyan colored trajectory is used for in-sample optimization and the blue colored trajectory is used for the out-of-sample trading test. It should be noted that, due to the COVID-19 pandemic, the considered prices cover the 2020 stock market crash from February 20, 2020 to April 7, 2020, and a recovery period after the crash. Thus, we view this dataset as an excellent back-testing case for our proposed drawdown modulation policy with the restart. Footnote 2: VT invests in both foreign and U.S. stocks; hence it can be viewed as a good representative of the global stock market. Without loss of generality, consider the initial account value to be \(V(0):=\$1\). To implement the drawdown modulation policy with restart, we set \(d_{\max}:=0.1\) and restart threshold \(\varepsilon:=d_{\max}/10\). With the data collected in the in-sample period, the corresponding feasible set is \(\Gamma\approx[-30.79,\ \ 34.9]\). Then we numerically solve the optimization problem (2) via Monte-Carlo simulations. It follows that any \(\gamma\in(8.5,34.9)\) shares an almost identical optimal value when the cash-financing condition is imposed. For the sake of risk-averseness, we pick the infimum of the candidates, i.e., \(\gamma^{*}:=8.5\). Using this \(\gamma\), we obtain the drawdown modulation policy \(u^{*}(k)=\gamma^{*}M(k)V(k)\). The corresponding trading performance is shown in Figure 2, where the green dots indicate that the trade is restarted. In the same figure, we also compare it with the standard buy-and-hold strategy3 and the modulation policy without restart. We see clearly that the modulation policy with restart leads to superior performance. Some key performance metrics, including the maximum drawdown, the cumulative rate of return, and the \(N\)-period Sharpe ratio,4 are reported in Table 1. Footnote 3: Here, we mean buy and hold on the risky asset VT with \(K(k)=1\). Footnote 4: The per-period Sharpe ratio is \(SR:=\frac{\mu-r_{f}}{\sigma}\) where \(\mu\) is the per-period sample mean return, \(\sigma\) is the per-period sample standard deviation, and \(r_{f}\) is the per-period riskless return. ### Trading with Cryptocurrency and Riskless Asset As a second example, we consider a portfolio consisting of the cryptocurrency BTC-USD and a riskless asset. The BTC-USD asset covers the same in-sample and out-of-sample periods described in Section 4.1. From January 02, 2020 to September 20, 2022, it has a total of \(N=993\) trading days5; see Figure 3. Figure 1: Stock Prices of VT. Figure 2: Drawdown Modulation with/without Restart (Green Dots Indicate a Restart). The corresponding feasible set for \(\gamma\) is \(\Gamma=[-5.761,\,7.083]\). Footnote 5: It is worth noting that cryptocurrency is typically traded 24 hours a day, seven days a week. Therefore, there are more trading days than for the trades with VT in Section 4.1. Take \(d_{\max}:=0.2\) and restart threshold \(\varepsilon:=d_{\max}/10\). By solving the optimal drawdown control problem (2), we obtain \(\gamma^{*}\approx 5.138\); see Figure 4 for \(J(\gamma)\) versus \(\gamma\in\Gamma\). Note that \(J(\gamma)\) is clearly not concave for \(\gamma\in\Gamma\).
To account for the volatile nature of cryptocurrency and unforeseen estimation errors, we consider using a fractional \(\gamma^{*}\), i.e., \(\alpha\gamma^{*}\) with \(\alpha=1/2\). That is, \(u^{*}(k)=K^{*}(k)V(k)\) with \(K^{*}(k)=\frac{\gamma^{*}}{2}M(k)\). The trading performances using the drawdown modulation policy with and without restart, and the buy-and-hold strategy, are shown in Figure 5, where the green dots indicate that the trades were restarted. Some performance metrics are summarized in Table 2, where we see that the modulation with restart provides the highest Sharpe ratio among all the other strategies. ## Acknowledgment The author thanks Chia-Yin Lee for coding and running some preliminary numerical examples on the early draft of this paper.
2310.05044
Quantum state preparation for bell-shaped probability distributions using deconvolution methods
Quantum systems are a natural choice for generating probability distributions due to the phenomena of quantum measurements. The data that we observe in nature from various physical phenomena can be modelled using quantum circuits. To load this data, which is mostly in the form of a probability distribution, we present a hybrid classical-quantum approach. The classical pre-processing step is based on the concept of deconvolution of discrete signals. We use the Jensen-Shannon distance as the cost function to quantify the closeness of the outcome from the classical step and the target distribution. The chosen cost function is symmetric and allows us to perform the deconvolution step using any appropriate optimization algorithm. The output from the deconvolution step is used to construct the quantum circuit required to load the given probability distribution, leading to an overall reduction in circuit depth. The deconvolution step splits a bell-shaped probability mass function into smaller probability mass functions, and this paves the way for parallel data processing in quantum hardware, which consists of a quantum adder circuit as the penultimate step before measurement. We tested the algorithm on IBM Quantum simulators and on the IBMQ Kolkata quantum computer, which has a 27-qubit quantum processor. We validated the hybrid classical-quantum algorithm by loading two different distributions of bell shape. Specifically, we loaded 7- and 15-element PMFs for (i) the Standard Normal distribution and (ii) the Laplace distribution.
Kiratholly Nandakumar Madhav Sharma, Camille de Valk, Ankur Raina, Julian van Velzen
2023-10-08T06:55:47Z
http://arxiv.org/abs/2310.05044v2
# Quantum state preparation for bell-shaped probability distributions using deconvolution methods ###### Abstract Quantum systems are a natural choice for generating probability distributions due to the phenomena of quantum measurements. The data that we observe in nature from various physical phenomena can be modelled using quantum circuits. We present a hybrid approach to loading probability distributions by performing deconvolution as a pre-processing step before the quantum circuit. To quantify the closeness of the distribution of outcomes from the hybrid classical-quantum block and the target distribution, we use the Jensen-Shannon distance as the cost function. The chosen cost function is symmetric and allows us to improve the deconvolution step before the use of quantum circuits, leading to an overall reduction of the circuit depth. The deconvolution step consists of splitting a bell-shaped probability mass function into smaller probability mass functions. The classical step paves the way for parallel data processing in the quantum hardware, which consists of a quantum adder circuit as the penultimate step before measurement. We test the algorithm on IBM Quantum simulators and IBMQ Kolkata, a 27-qubit quantum processor, and validate the hybrid classical-quantum algorithm by loading two different distributions of bell shape. We load 7- and 15-element PMFs of (i) the Standard Normal distribution and (ii) the Laplace distribution. ## 1 Introduction Traditional methods to model various real-world phenomena use random processes or random variables whose distribution is to be learnt. In classical computing machines or systems, the process of learning the distribution of a random source is termed stochastic modelling. Alternatively, the problem can be seen as generating a given probability distribution, namely the target distribution. In such cases, it becomes important to learn the parameters of that distribution to generate samples from it easily. Seen through a quantum computing lens, efficient data generation using quantum measurements on qubits can solve problems in many areas, particularly finance. For example, quantum computers can be used for derivative pricing, risk modelling, and portfolio optimization [1]. In the study of the efficiency of quantum computers in finance, there exists an important algorithm called Quantum amplitude estimation (QAE) [2]. QAE promises quadratic speed-up over the classical Monte Carlo algorithm, which has applications in finance to price an option [3] or to calculate risk metrics like Value at Risk (VaR) and Conditional Value at Risk (CVaR) [4]. Fig. 1 shows the block diagram of the steps taken in the calculation of VaR with amplitude estimation. First, we load the probability distribution of interest into the quantum computer and implement the objective function step. Then, the QAE block estimates the value of the objective function at each value of \(x\) (for this, we use bisection search). The second, third and fourth steps are iterative and stop when the desired value of \(x\) is reached, which is the required VaR value. As explained in Fig. 1, the QAE algorithm can be used to calculate the risk metric VaR, which can be extended to calculate CVaR [4]. Apart from finance and risk management, the QAE algorithm finds application in all other fields where the Monte Carlo algorithm is used for calculation.
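As a classical reading aid for the iterative loop in Fig. 1, the following sketch performs a bisection search for VaR, with the probability at each candidate \(x\) estimated from Monte Carlo samples (the step that QAE would replace on a quantum computer). The loss distribution and confidence level are hypothetical.

```python
import numpy as np

def value_at_risk(samples, alpha=0.95, tol=1e-4):
    """Bisection search for the smallest x with P(loss <= x) >= alpha,
    where the probability is estimated from Monte Carlo samples."""
    lo, hi = samples.min(), samples.max()
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if np.mean(samples <= mid) >= alpha:
            hi = mid          # x is large enough; search lower
        else:
            lo = mid          # x is too small; search higher
    return hi

losses = np.random.default_rng(0).normal(0.0, 1.0, 100_000)  # hypothetical
print(value_at_risk(losses, alpha=0.95))  # ~1.645 for a standard normal
```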
However, the success of the QAE algorithm depends on the fact that an algorithm with very low circuit depth can prepare a given quantum register in a given target probability distribution. The first step to using the QAE algorithm is to prepare a given quantum register in a probability distribution specific to the problem of interest, which we refer to as the loading problem. Suppose the probability distribution of interest comes under the family of log-concave probability distributions. We can then use the well-known Grover-Rudolph (GR) [5] state preparation method to load the probability distribution. However, the GR state preparation method uses many multi-controlled single-target Rotational \(Y\) (\(R_{y}\)) gates. Implementing these gates on hardware requires many small gates, which eventually increases the circuit depth. If the probability distribution does not belong to the family of log-concave probability distributions, we require an exponential number of gates to load the distribution into the quantum register [6, 7]. In this paper, we discuss a new approach inspired by the principle of deconvolution of a discrete-time sequence into two discrete-time sequences as part of the classical pre-processing step. Our work uses a classical pre-processing step of deconvolution of the Probability Mass Function (PMF), which decreases the circuit depth by using more qubits and is compatible with any state preparation method. This method is well suited to today's Noisy Intermediate-Scale Quantum (NISQ) computers, which are envisaged to have more qubits but limited operating times. The structure of this paper is as follows. We discuss the motivation for using deconvolution in Section 2. Section 3.1 discusses how we intend to deconvolve a given discrete probability mass function. Section 4 discusses the experiments and their results. Particularly, we look at the circuit depth required to load the Gaussian and Laplace distributions using our method compared to GR state preparation. This is followed by conclusions in Section 5. Figure 1: Flowchart representing all the steps involved in the calculation of VaR using a quantum computer. ### Notation 1. Random variables by \(\mathcal{X}\), \(\mathcal{Y}\), \(\mathcal{Z}\) 2. Pauli gates by \(X,Y,Z\) 3. Probability mass functions by \(\boldsymbol{P},\boldsymbol{R},\boldsymbol{q}\) 4. Number of qubits in a quantum register by \(\mathfrak{a}\), \(\mathfrak{b}\), \(\mathfrak{n}\). 5. \(\lceil\ \rceil\) represents the ceiling function. 6. \(\lfloor\ \rfloor\) represents the floor function. ## 2 Hybrid classical quantum algorithm Recently, there has been a lot of activity in the quantum computing community to solve the data loading problem using techniques from other fields like Tensor Networks [8]. A divide-and-conquer approach to the problem was proposed by Araujo _et al._, providing an exponential time advantage with a quantum circuit having poly-logarithmic circuit depth [9]. But the algorithm scales in space (no. of qubits) as \(O(N)\), where \(N\) is the dimension of the state vector in which the quantum state should be prepared. In this work, we try to solve this problem for bell-shaped distributions important to risk management using concepts like convolution from signal processing. An important algorithm that depends on the efficient loading of an arbitrary probability distribution is quantum amplitude estimation (QAE).
We have explained the QAE algorithm in more detail and stressed its importance in Section 1. The complexity of creating an arbitrary quantum state is exponential in the number of qubits. However, there exists a trade-off between the number of qubits required for loading and time in terms of circuit depth. QAE provides quadratic speed-up over classical systems that use Monte Carlo simulations [7]. This can lead to promising solutions when the loading problem is efficiently solved. Our approach uses a hybrid scheme consisting of a classical pre-processing step followed by a quantum circuit. In this way, we attempt to integrate classical and quantum computers. We use a classical constrained optimization algorithm, namely the trust region method, to deconvolve the PMF, and design a quantum circuit for loading the probability distribution that uses more qubits but fewer gates. The classical pre-processing step runs iteratively until the chosen cost function is minimized. This cost-function-dependent outcome works with the quantum circuit meant for implementing a quantum algorithm. Fig. 2 depicts the hybrid classical-quantum algorithm we propose in this article. The classical step consists of deconvolving the target PMF into smaller PMFs sharing the task of generating a bell-shaped probability distribution. Figure 2: Flowchart representing all the steps involved. Here \(\boldsymbol{P}\) (input) is the target probability distribution. The classical step is explained in detail in Section 3. The quantum step is presented next. ### Preparation of quantum registers To load a given distribution into a quantum register having \(\mathfrak{n}\) qubits, we have to discretize the probability distribution into \(2^{\mathfrak{n}}\) regions. This will produce a Probability Mass Function (PMF) having \(2^{\mathfrak{n}}\) entries, and in terms of quantum computing, it will correspond to a state vector of an \(\mathfrak{n}\)-qubit system. In this article, we will concentrate on log-concave probability distributions. The log-concave probability distributions form a family of distributions efficiently integrable using existing classical integration techniques, and they can be discretized efficiently using a classical computer. Once we have discretized the probability distribution and we have a \(2^{\mathfrak{n}}\)-length state vector \(\left|\psi\right\rangle\), the next step is to prepare a quantum circuit that will transform the state \(\left|0\right\rangle_{\mathfrak{n}}\) to \(\left|\psi\right\rangle_{\mathfrak{n}}\). In this work, we consider a circuit library consisting of one-qubit gates plus the Controlled-NOT gate (\(CX\) gate) to calculate the circuit depth. Common methods, like the GR method and those based on the GR method, use multi-controlled \(R_{y}\) gates one after another to load a given distribution into an \(\mathfrak{n}\)-qubit system. Since these multi-controlled \(R_{y}\) gates are applied sequentially, the decomposition of the multi-controlled gates is required for implementation at the hardware level, leading to an increase in the circuit depth. To demonstrate the scaling of circuit depth for different state preparation methods, in Fig. 3 we plot how the circuit depth increases as we increase the number of qubits being prepared.
The blue, green, and red plots represent the scaling of circuit depth with the number of qubits being prepared in a given target probability state using different state preparation methods: the Grover Rudolph (GR) state preparation method, Qiskit's built-in initialize function, and the Grover Rudolph method with VChain implementation, respectively. Figure 3: On the X-axis, we plot the number of qubits that we have to prepare in a given superposition state, and on the Y-axis, we plot the circuit depth required to prepare these qubits in a given superposition state. We use the Grover Rudolph state preparation method to prepare the state of the qubits in a given probability distribution. To implement the multi-controlled \(R_{y}\) gate, we use the VChain method depicted in Fig. 14 in Appendix A.3. The difference between the GR and GR with VChain implementations is how we decompose the multi-controlled \(R_{y}\) gate. In the normal GR implementation, we do not use ancilla qubits and decompose the multi-controlled \(R_{y}\) using single-qubit and CNOT gates. We can also use another approach in which we use ancilla qubits to decompose the multi-controlled \(R_{y}\), depicted in Fig. 14, called the VChain method, explained in Appendix A.3. In this approach, the Toffoli (\(CCX\)) gates are further decomposed into single-qubit and CNOT gates for hardware implementation. But still, as expected from the MCMT documentation available on the Qiskit website [11], we see a circuit depth reduction. To prepare the state of one qubit, we need only one \(R_{y}\) gate. However, to prepare a superposition state of 5 qubits, where we use all the \(2^{5}\) computational basis states available in the 5-qubit Hilbert space, we need a quantum circuit of depth \(O(2^{5})\). Similarly, for preparing a 13-qubit system in a given distribution using the GR method, we need \(O(2^{13})\) gates, which is huge for current NISQ computers. Using Qiskit's in-built function called "initialize" instead of the Grover Rudolph method results in a relatively reduced circuit depth. Qiskit's in-built function qiskit.extensions.Initialize() is based on the method described in [12]. However, it is important to note that as the number of qubits increases, the depth of the circuit prepared by the Qiskit initialize function also scales up rapidly. The complexities of the GR method and the Initialize function are both \(O(2^{\mathfrak{n}})\), differing only by a small constant factor. The current NISQ computers, based on superconducting qubit technology, are expected to scale up to 10k-100k qubits by 2026. At the same time, the decoherence time for the qubits is going to be limited [13]. This inspires us to develop methods that reduce circuit depth and utilize more qubits. Hence, in this work, we have devised a scheme to reduce the circuit depth requirement for bell-shaped distributions like the normal distribution by adding an extra classical step that deconvolves the PMF. To understand deconvolution, we first define convolution. Suppose we have two independent random variables \(\mathcal{X}\) and \(\mathcal{Y}\) and we define another random variable \(\mathcal{Z}\) as the sum of the random variables \(\mathcal{X}\) and \(\mathcal{Y}\). Then, the probability distribution of the random variable \(\mathcal{Z}\) is given by the convolution of the probability distributions of the random variables \(\mathcal{X}\) and \(\mathcal{Y}\), respectively [14]. Deconvolution is the exact opposite of convolution.
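As a quick sanity check of this definition, the following sketch convolves two small PMFs with numpy and verifies the result by sampling the sum of the two random variables; the PMF values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
q1 = np.array([0.2, 0.5, 0.3])      # PMF of X
q2 = np.array([0.6, 0.3, 0.1])      # PMF of Y
pmf_sum = np.convolve(q1, q2)       # PMF of Z = X + Y

# Empirical check by sampling the sum of the two random variables
x = rng.choice(3, size=200_000, p=q1)
y = rng.choice(3, size=200_000, p=q2)
empirical = np.bincount(x + y, minlength=5) / 200_000
print(np.round(pmf_sum, 3), np.round(empirical, 3))
```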
Given a probability distribution, can we split it into two probability distributions, each corresponding to an independent random variable? In Section 3.1, we discuss an optimization approach to deconvolve a given PMF of length \(N\) into two smaller PMFs of length \(\lfloor\frac{N+1}{2}\rfloor\) and \(\lceil\frac{N+1}{2}\rceil\). Loading these two smaller distributions requires fewer gates. In Section 4, where we discuss our results, we load PMFs of length 7 and 15. Since the lengths are odd, the value of \(\frac{N+1}{2}\) is an integer, and hence for the odd-length cases, we have \(\mathfrak{a}=\mathfrak{b}=\left\lceil\log_{2}\left(\frac{N+1}{2}\right)\right\rceil\). We load these two in parallel on two different quantum registers, then add the quantum data from these two registers and store the result of the adder in one of the registers using an additional qubit. We take an \(\mathfrak{a}\)-qubit quantum register and a \(\mathfrak{b}\)-qubit quantum register, where \(\mathfrak{a}=\left\lceil\log_{2}\left(\lfloor\frac{N+1}{2}\rfloor\right)\right\rceil\) and \(\mathfrak{b}=\left\lceil\log_{2}\left(\lceil\frac{N+1}{2}\rceil\right)\right\rceil\). Here, note that \(\mathfrak{b}\geq\mathfrak{a}\) since, in the formula for \(\mathfrak{b}\), \(\frac{N+1}{2}\) is rounded up by the ceiling function. Then we prepare one quantum register in the state \(\left|\phi_{1}\right\rangle_{\mathfrak{a}}\) and the other in the state \(\left|\phi_{2}\right\rangle_{\mathfrak{b}}\) where \[\left|\phi_{1}\right\rangle_{\mathfrak{a}} =\sum_{i=0}^{L(\boldsymbol{q_{1}})-1}\sqrt{q_{1_{i}}}\left|i\right\rangle_{\mathfrak{a}}=A\left|0\right\rangle_{\mathfrak{a}}, \tag{1}\] \[\left|\phi_{2}\right\rangle_{\mathfrak{b}} =\sum_{j=0}^{L(\boldsymbol{q_{2}})-1}\sqrt{q_{2_{j}}}\left|j\right\rangle_{\mathfrak{b}}=B\left|0\right\rangle_{\mathfrak{b}}, \tag{2}\] where \(L(\mathbf{q_{1}})\) and \(L(\mathbf{q_{2}})\) represent the lengths of the PMFs \(\mathbf{q_{1}}\) and \(\mathbf{q_{2}}\), respectively. In Equations 1 and 2 above, the vectors \(\mathbf{q_{1}}\) and \(\mathbf{q_{2}}\) are normalized. Using a quantum adder circuit represented by the unitary \(U_{A}\), we add the two quantum states defined in Equations 1 and 2; we then get a quantum state of length \(\mathfrak{n}+1\) as shown in Fig. 4. The construction of the quantum adder circuit \(U_{A}\) is discussed in detail in the next Section 2.2. Mathematically, we can write \[U_{A}\left(\sum_{i=0}^{L(\boldsymbol{q_{1}})-1}\sum_{j=0}^{L(\boldsymbol{q_{2}})-1}\sqrt{q_{1_{i}}q_{2_{j}}}\left|i\right\rangle_{\mathfrak{a}}\left|j\right\rangle_{\mathfrak{b}}\left|0\right\rangle_{\mathfrak{b}}\right) \tag{3}\] \[=\sum_{i=0}^{L(\boldsymbol{q_{1}})-1}\sum_{j=0}^{L(\boldsymbol{q_{2}})-1}\sqrt{q_{1_{i}}q_{2_{j}}}\left|i\right\rangle_{\mathfrak{a}}\left|i+j\right\rangle_{\mathfrak{b}+1}\left|\text{ancilla}\right\rangle_{\mathfrak{b}-1}. \tag{4}\] The quantum state obtained after the addition has, on the \((\mathfrak{b}+1)\)-qubit register, a probability distribution obtained by the convolution of the probability distributions \(\mathbf{q_{1}}\) and \(\mathbf{q_{2}}\), i.e., \(\mathbf{k}=\mathbf{q_{1}}*\mathbf{q_{2}}\), where the sum \(h=i+j\) is overwritten on the quantum register that held \(\left|j\right\rangle_{\mathfrak{b}}\), extended by one qubit, in Equation (4). Therefore, if we break down the probability distribution to be loaded into two smaller probability distributions, then all we need to do is load these two distributions using two separate GR state preparation circuits and add the two resulting states using the quantum adder circuit, as shown in Fig. 2. ### Quantum adder We present the design for the quantum adder circuit used in Figs.
2 and 4 for adding two quantum states, \[\ket{\phi_{1}}_{\mathfrak{a}}\ket{\phi_{2}}_{\mathfrak{b}}\ket{0}_{\mathfrak{b}}\xrightarrow{\text{Adder Circuit}}\ket{\phi_{1}}_{\mathfrak{a}}\ket{\phi_{1}\oplus\phi_{2}}_{\mathfrak{b}+1}\ket{\text{ancilla}}_{\mathfrak{b}-1}, \tag{5}\] where \(\phi_{1}\) and \(\phi_{2}\) refer to the computational basis states of the \(\mathfrak{a}\)-qubit and \(\mathfrak{b}\)-qubit registers, respectively. The "\(\bigoplus\)" in Equation 5 denotes the addition of the two basis-state values, recorded in \(\mathfrak{b}+1\) qubits. The design for the quantum adder circuit used in this work is inspired by the VBE adder circuit described in [15]; using this algorithm, we can add quantum registers of the same size. This implies we can add quantum registers having \(\mathfrak{a}=\mathfrak{b}\), and in Algorithm 1, we assume \(\mathfrak{a}=\mathfrak{b}=\mathfrak{n}\). Algorithm 1 shows how to construct an adder circuit, and Fig. 5 gives this circuit for the case \(\mathfrak{n}=2\). In this design, we note that we do not reset the ancilla qubits used by the adder circuit. Resetting the ancilla qubits requires extra gates. For example, for \(\mathfrak{a}=\mathfrak{b}=2\) in Algorithm 1, we need one extra CCX gate for resetting the ancilla qubit. Equation 5 gives the mathematical representation for the action of the quantum adder circuit: ```
Input:   \(\mathfrak{n}\)-qubit quantum states \(\left|\phi_{1}\right\rangle_{\mathfrak{n}}\) and \(\left|\phi_{2}\right\rangle_{\mathfrak{n}}\);
         1 extra qubit \(\left|0\right\rangle\), which is included with the ancilla qubits.
Require: \(\mathfrak{n}-1\) ancilla qubits.
\(i\gets 0\)
while \(i\neq\mathfrak{n}\) do
    CCXgate( Control qubits = \(\left(\left|\phi_{1}\right\rangle_{\mathfrak{n}}[i],\left|\phi_{2}\right\rangle_{\mathfrak{n}}[i]\right)\), Target = \(\left|\text{ancilla}\right\rangle_{\mathfrak{n}}[i]\) )
    \(i\gets i+1\)
endwhile
\(i\gets 0\)
while \(i\neq\mathfrak{n}\) do
    CCXgate( Control qubits = \(\left(\left|\phi_{2}\right\rangle_{\mathfrak{n}}[i],\left|\text{ancilla}\right\rangle_{\mathfrak{n}}[i-1]\right)\), Target = \(\left|\text{ancilla}\right\rangle_{\mathfrak{n}}[i]\) )
    \(i\gets i+1\)
endwhile
\(i\gets 0\)
while \(i\neq\mathfrak{n}\) do
    CXgate( Control qubit = \(\left|\text{ancilla}\right\rangle_{\mathfrak{n}}[i]\), Target = \(\left|\phi_{2}\right\rangle_{\mathfrak{n}}[i+1]\) )
    \(i\gets i+1\)
endwhile
``` **Algorithm 1** Algorithm for designing the Quantum adder Circuit The GR state preparation method scales as \(\mathcal{O}(2^{\mathfrak{n}})\), and the complexity of the adder circuit scales linearly with the number of qubits [15]. So, instead of using a big GR state preparation circuit to prepare a state in \(\left|\psi\right\rangle_{\mathfrak{n}+1}\), we make use of two smaller GR state preparation circuits and make two different quantum states \(\left|\phi_{1}\right\rangle_{\mathfrak{n}}\) and \(\left|\phi_{2}\right\rangle_{\mathfrak{n}}\). Then we pass these two states through the quantum adder circuit to achieve the state \(\left|\psi\right\rangle_{\mathfrak{n}+1}\).
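To make the load-and-add pipeline concrete, below is a minimal end-to-end sketch in Qiskit for the smallest case \(\mathfrak{a}=\mathfrak{b}=1\). It uses a textbook half adder (a \(CCX\) for the carry bit followed by a \(CX\) for the sum bit) in place of the general Algorithm 1, and single \(R_{y}\) rotations in place of GR state preparation; the PMFs are hypothetical. The measured distribution of the 2-qubit sum register is the convolution \(\mathbf{q_{1}}*\mathbf{q_{2}}\).

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

q1 = np.array([0.3, 0.7])                 # 2-element PMF on register A (qubit 0)
q2 = np.array([0.6, 0.4])                 # 2-element PMF on register B (qubit 1)

qc = QuantumCircuit(3)
qc.ry(2 * np.arccos(np.sqrt(q1[0])), 0)   # |phi_1> = sqrt(q1_0)|0> + sqrt(q1_1)|1>
qc.ry(2 * np.arccos(np.sqrt(q2[0])), 1)   # |phi_2> = sqrt(q2_0)|0> + sqrt(q2_1)|1>
qc.ccx(0, 1, 2)                           # carry bit: i AND j
qc.cx(0, 1)                               # sum bit:   i XOR j

# Qubits (1, 2) now encode i + j; their distribution is q1 * q2
probs = Statevector(qc).probabilities([1, 2])
print(np.round(probs, 3))                 # [0.18, 0.54, 0.28, 0.]
print(np.convolve(q1, q2))                # [0.18, 0.54, 0.28]
```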
We note, in the current context from [14], that addition in basis space is convolution in probability space, i.e., when we perform an addition operation on two quantum states, the probability amplitudes corresponding to the basis states of the two quantum states convolve with each other. Therefore, we need only \(\mathcal{O}(2^{\mathfrak{n}-1})\) gates for the GR part and \(\mathcal{O}(\mathfrak{n})\) gates for the adder circuit. In Algorithm 1, the notation \(\left|\phi_{1}\right\rangle_{\mathfrak{n}}[i]\) refers to the \(i^{\text{th}}\) qubit of the quantum register \(\left|\phi_{1}\right\rangle_{\mathfrak{n}}\). In the next section, we discuss two approaches to deconvolution: one in which we use an optimization technique called the trust region method, and a second in which we use polynomial factorization.

## 3 Deconvolution of Target PMF

We present the classical part of our hybrid algorithm, which employs classical techniques to accelerate the functioning of the quantum circuits. We present two approaches to deconvolve a target PMF that we envisage generating as the end goal on the quantum hardware.

### Deconvolution using the trust region method

In this section, we describe how to perform deconvolution of a given PMF using the trust region method [16]. This is a constrained optimization problem and hence can be mathematically formulated as

\[\min_{\mathbf{q_{1}},\mathbf{q_{2}}}f(\mathbf{q_{1}},\mathbf{q_{2}})=JS(\mathbf{P}||\mathbf{Q})\text{ where }\mathbf{Q}=\mathbf{q_{1}}*\mathbf{q_{2}}, \tag{6}\]

where \(\mathbf{q_{1}}\) and \(\mathbf{q_{2}}\) are normalized vectors and \(*\) denotes the convolution of \(\mathbf{q_{1}}\) and \(\mathbf{q_{2}}\)[17]. The vectors \(\mathbf{q_{1}}\) and \(\mathbf{q_{2}}\) in Equation 6 each correspond to a PMF, and hence \(0<q_{1_{i}}<1\) and \(0<q_{2_{j}}<1\). Here \(\mathbf{Q}\) is a PMF of length \(L(\mathbf{q_{1}})+L(\mathbf{q_{2}})-1\), and \(\mathbf{P}\) is the probability mass function that we wish to deconvolve, hence also a vector of length \(L(\mathbf{q_{1}})+L(\mathbf{q_{2}})-1\). The function \(JS(\mathbf{P}||\mathbf{Q})\) is the Jensen-Shannon distance, given (up to the conventional factor of \(\frac{1}{2}\), which does not affect the minimizer) by

\[JS(\mathbf{P}||\mathbf{Q})=DS(\mathbf{P}||\mathbf{R})+DS(\mathbf{Q}||\mathbf{R}), \tag{7}\]

where

\[\mathbf{R}=\frac{1}{2}\left(\mathbf{P}+\mathbf{Q}\right). \tag{8}\]

Further, for two PMFs the Kullback-Leibler term \(DS\) is defined as

\[DS(\mathbf{P}||\mathbf{Q})=\sum_{i=0}^{L(\mathbf{q_{1}})+L(\mathbf{q_{2}})-2}P_{i}\log\frac{P_{i}}{Q_{i}}, \tag{9}\]

where \(P_{i}\) refers to the \(i^{\text{th}}\) element of the PMF \(\mathbf{P}\) and \(Q_{i}\) refers to the \(i^{\text{th}}\) element of the PMF \(\mathbf{Q}\).

Figure 5: Design for the quantum adder circuit constructed using Algorithm 1 for \(\mathfrak{n}=2\).

We use the trust region method to find the optimal \(\mathbf{q_{1}}\) and \(\mathbf{q_{2}}\), i.e., those that minimize the Jensen-Shannon distance function \(f(\mathbf{q_{1}},\mathbf{q_{2}})\); a minimal numerical sketch is given below. We chose the Jensen-Shannon function as the cost function due to its symmetric nature.
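As a minimal numerical sketch of the optimization in Equation 6 (not the implementation used in the paper), the snippet below relies on scipy's built-in jensenshannon distance, which is a monotone transform of the cost above and therefore has the same minimizer, and on a softmax change of variables that enforces the PMF constraints automatically, in place of the explicit gradient and Hessian of Appendix A.4. All numerical values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import jensenshannon

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def deconvolve(P, L1, L2, seed=0):
    """Split a target PMF P (length L1 + L2 - 1) into PMFs q1, q2 with q1 * q2 ~ P."""
    rng = np.random.default_rng(seed)

    def cost(x):
        q1, q2 = softmax(x[:L1]), softmax(x[L1:])  # points on the probability simplex
        return jensenshannon(P, np.convolve(q1, q2))

    res = minimize(cost, rng.normal(size=L1 + L2), method="trust-constr")
    return softmax(res.x[:L1]), softmax(res.x[L1:])

# Example: deconvolve a 7-element discretized Gaussian into two 4-element PMFs.
P = np.exp(-0.5 * ((np.arange(7) - 3) / 1.5) ** 2)
P /= P.sum()
q1, q2 = deconvolve(P, 4, 4)
print(jensenshannon(P, np.convolve(q1, q2)))  # small residual distance
```

Because the softmax keeps \(\mathbf{q_{1}}\) and \(\mathbf{q_{2}}\) on the probability simplex by construction, the trust-region solver can run unconstrained.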
We calculate the gradient of the Jensen-Shannon distance with respect to both probability vectors \(\mathbf{q_{1}}\) and \(\mathbf{q_{2}}\), which have \(L(\mathbf{q_{1}})\) and \(L(\mathbf{q_{2}})\) variables, respectively, since each represents a PMF with that many elements. Since they are PMFs, they have to satisfy the constraints \(\sum_{i=0}^{L(\mathbf{q_{1}})-1}q_{1_{i}}=1\) and \(\sum_{i=0}^{L(\mathbf{q_{2}})-1}q_{2_{i}}=1\), which reduces the number of independent variables from \(L(\mathbf{q_{1}})\) to \(L(\mathbf{q_{1}})-1\) and from \(L(\mathbf{q_{2}})\) to \(L(\mathbf{q_{2}})-1\), respectively. We pass the Jensen-Shannon distance function, the gradient, the Hessian matrix, and the method we intend to use to the Python package scipy.optimize.minimize. We explain the calculation of the gradient and Hessian functions in detail in Appendix A.4. We use a standard trust region optimizer from scipy[18] to find the minimum of the Jensen-Shannon distance function. We explain the sequence in which we perform these steps in Algorithm 2.

```
Input: P
variable declare q1 = Random Guess.
variable declare q2 = Random Guess.
Calculate the gradient using Equation 26.
Calculate the Hessian matrix using Equations 28 to 31.
Pass the cost function, gradient vector, and Hessian matrix to the scipy.optimize package of Python.
Terminate after 1000 iterations.
```
**Algorithm 2** Algorithm for deconvolution of PMF

### Deconvolution Using Polynomial Factoring

We know that any given PMF can be represented using a probability-generating function (PGF) [14]. The PGF of a discrete distribution is a polynomial function. We can calculate the PGF \(f(x)\) corresponding to a discrete probability distribution with PMF \(\mathbf{P}\) as

\[f(x)=\sum_{i=0}^{L(\mathbf{P})-1}P_{i}x^{i}. \tag{10}\]

This implies that if we have a PMF of length \(2^{\mathfrak{n}}\), then the PGF of the given PMF is a polynomial of degree \(2^{\mathfrak{n}}-1\). It is also possible to recover the PMF from the PGF of a distribution as

\[P_{i}=\frac{1}{i!}\left.\frac{\partial^{i}f(x)}{\partial x^{i}}\right|_{x=0}. \tag{11}\]

In this context, we can reformulate the deconvolution of a probability distribution as follows. Let \(f(x)\) be the polynomial function representing the PGF of a given probability distribution. We need to factorize the function \(f(x)\) into polynomials with non-negative coefficients, i.e.,

\[f(x)=\prod_{i=1}^{K}\tilde{f}_{i}(x), \tag{12}\]

where \(\tilde{f}_{i}(x)\) is a polynomial of degree \(\deg(\tilde{f}_{i}(x))<2^{\mathfrak{n}}-1\) with non-negative coefficients, such that

\[\sum_{i=1}^{K}\deg\left(\tilde{f}_{i}(x)\right)=2^{\mathfrak{n}}-1. \tag{13}\]

Since the polynomial \(f(x)\) has all positive coefficients, then according to Theorem 3 in [19], \(f(x)\) has no root in \(\mathbb{R}^{+}\), including zero. This implies that the linear factors of the polynomial are of the form \((x+c)\), where \(c\) is a positive real constant, i.e., the polynomial \(f(x)\) has linear factors with positive coefficients alone. It may also have quadratic factors of the form \(x^{2}-bx+c\) and \(x^{2}+bx+c\). Our task is then to get rid of the quadratic polynomials of the form \(x^{2}-bx+c\) by multiplying each with an appropriate polynomial of the form \(x^{2}+bx+c\), thereby forming higher-order polynomials with positive coefficients. We have developed an algorithm based on computing all the roots of the polynomial; it takes into account that there can be multiple possible combinations of polynomials with all-positive coefficients whose product is the target polynomial. Therefore, the factorization of the target polynomial into polynomials of smaller degree with all-positive coefficients is not unique; a small worked example follows.
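As a small worked example of this PGF view (with made-up coefficients), the NumPy sketch below builds a length-5 target PMF as a product of two positive-coefficient PGFs, recovers the roots, regroups them into two baskets, and renormalizes the resulting factors into sub-PMFs whose convolution reproduces the target. Note that the regrouping chosen here differs from the factors used to construct the target, which illustrates the non-uniqueness just mentioned.

```python
import numpy as np
from numpy.polynomial import polynomial as npoly

# Build a target PMF as a product of two positive-coefficient PGFs (Eq. 12):
# f1(x) = (1 + x)(3 + x), f2(x) = (2 + x)(4 + x), coefficients low power first.
f1 = np.array([3.0, 4.0, 1.0])
f2 = np.array([8.0, 6.0, 1.0])
f = npoly.polymul(f1, f2)
P = f / f.sum()                                  # target PMF of length 5

# All roots of this PGF are negative reals, so any regrouping gives
# factors with positive coefficients.
roots = np.sort_complex(npoly.polyroots(P))      # approx. -4, -3, -2, -1

# Rebuild monic factors from two root baskets (the core step of Algorithm 3)
# and renormalize each positive-coefficient factor into a sub-PMF.
q1 = npoly.polyfromroots(roots[:2]).real
q2 = npoly.polyfromroots(roots[2:]).real
q1, q2 = q1 / q1.sum(), q2 / q2.sum()

# The two sub-PMFs convolve back to the target, even though the basket
# {-4, -3} differs from the original factor (1 + x)(3 + x).
assert np.allclose(np.convolve(q1, q2), P)
print(q1, q2)
```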
The algorithm for deconvolution using polynomials is given in Algorithm 3. The Python function FindRoots defined in Algorithm 3 returns a 2D array of roots, in which each conjugate pair of complex roots is stored as a 1D array. In Algorithm 3, basket\({}_{1}\) and basket\({}_{2}\) are also 2D arrays of roots, and we randomly shuffle the 1D sets of roots stored in basket\({}_{2}\), not the individual roots inside each 1D array. The numpy.polynomial.polynomial.polyroots Python function used in Algorithm 3 takes the coefficients of a polynomial as input and returns all the roots of the entered polynomial \(f(x)\)[20]. Similarly, the Python function numpy.polynomial.polynomial.polyfromroots takes roots as input and returns the coefficients of the monic polynomial with those roots [21]. In Appendix A.5, we explain this algorithm in detail using an example.

In Section 4, we discuss the results obtained by deconvolving the given PMF using the optimization technique of Section 3.1. However, the approach discussed in this section can be used to speed up the classical process and obtain more accurate results. Deconvolution performed by this approach is more accurate, since algorithms based on optimization only produce solutions up to an error bound. The optimization approach is efficient when there are few parameters to optimize and the cost function is smooth without local minima. When the parameter space is too big, the optimization-based algorithm terminates at the iteration limit we imposed and outputs only an approximate deconvolution.

## 4 Discussion of the experiments and results

An extra classical deconvolution layer can reduce the circuit depth at the cost of using more ancillary qubits. To demonstrate this, in Fig. 7 we plot circuit depth vs. the number of qubits prepared in a given target probability state, with and without the deconvolution layer. Fig. 7 shows that the state preparation method with the deconvolution layer performs better in terms of circuit depth than the one without it. Hence, if the deconvolution layer is implemented, the state preparation method scales as \(O(2^{\mathfrak{n}-1}+\mathfrak{n})\), which approximately halves the circuit depth. In Figs. 7a, 7b and 7c we plot the circuit depth scaling of the Qiskit Initialize function, the GR state preparation method, and the GR state preparation method with VChain implementation vs. the number of qubits whose state is being prepared in a target probability distribution, with and without the deconvolution layer. The difference between the GR state preparation method and the GR state preparation method with VChain implementation is in how we implement the multi-controlled gate.

```
1: Input: Target polynomial f(x).
2: procedure FindRoots(coefficients of the target polynomial f(x))
3:     Use the numpy.polynomial.polynomial.polyroots function to find all the roots of the polynomial f(x) given by the coefficients.
4:     return Roots
5: end procedure
6: procedure GroupRoots(Roots)
7:     Group the roots into three: complex roots with positive real part, complex roots with negative real part, and real roots.
8:     return Comp_root_real_pos, Comp_root_real_neg, real_root
9: end procedure
10: Arrange the complex roots with positive real part in ascending order of real part and call the list basket_1.
11: Merge the group of complex roots with negative real part and the group of real roots and call the result basket_2.
12: Randomly shuffle the elements of basket_2.
13: while length(basket_1) > 0 do
14:     temp_array = basket_1[0]
15:     temp_boolean = True
16:     while temp_boolean do
17:         random_index = generate a random integer between 0 and length(basket_2) - 1.
18:         temp_array = temp_array + basket_2[random_index]
19:         poly_coeff = numpy.polynomial.polynomial.polyfromroots(temp_array)
20:         Delete the element basket_2[random_index] from the list basket_2
21:         if any(c.real <= 0 for c in poly_coeff) then
22:             temp_boolean = True
23:         else
24:             temp_boolean = False
25:         end if
26:     end while
27:     Delete the first element from basket_1.
28:     basket_2.append(temp_array)
29: end while
30: Create an empty array variable and name it list_of_poly_factors = [ ]. This list holds the factors of the target polynomial.
31: for i in basket_2 do
32:     temp = numpy.polynomial.polynomial.polyfromroots(i)
33:     list_of_poly_factors.append(temp / sum(temp))
34: end for
```
**Algorithm 3** Algorithm for deconvolution using polynomial factorization

In the first case, we let Qiskit break down the big gates into smaller gates without using any ancilla qubits. In the next case, we explicitly use the multi-control multi-target V-Chain (MCMTVChain) feature of Qiskit to implement the multi-controlled \(R_{y}\)-gates, as shown in Fig. 14. Here we use ancilla qubits in a V-Chain structure to create a multi-controlled single-target \(R_{y}(\theta)\)-gate.

To test and compare the different state preparation methods with and without the deconvolution layer, we load Gaussian and Laplacian distributions on a QASM simulator and on real IBM quantum hardware. We also calculate a metric called the Quantum Circuit Volume (QCV) [22] for each implementation. The mathematical definition of QCV is given below:

\[\text{QCV}(C)=s(C)\times d(C). \tag{14}\]

In equation 14, \(C\) represents the quantum algorithm whose quantum circuit we are implementing, \(s(C)\) represents the number of qubits used for the implementation, and \(d(C)\) represents the depth of the quantum circuit implementation. QCV quantifies the amount of quantum resources used to implement an algorithm. To test the method's performance on quantum hardware, real and simulated, we measured the quantum circuit 2048 times (shots). Then, the JS distance between the empirical PMF and the input PMF is measured; here, the JS distance is used as a metric to quantify the difference between the measured and target PMFs. The circuits are run on IBMQ's noiseless QASM simulator and on _ibmq_kolkata_, one of the IBM Falcon processors. IBMQ Kolkata has 27 qubits and a measured Quantum Volume of \(128=2^{7}\). Quantum Volume is different from Quantum Circuit Volume: Quantum Volume is a metric used to compare different NISQ devices, defined as the largest random circuit of equal depth and width that can be successfully implemented on a quantum computer [23].

Figure 6: Flowchart for Algorithm 3.

Figure 7: Log of circuit depth vs. the number of qubits for different state preparation methods. In Fig. (a), we use the Qiskit Initialize function to prepare the qubits in a given state, in (b) we use the GR state preparation method with VChain implementation for state preparation, and in (c) we use the GR state preparation method.
In the above figures, the orange line represents state preparation without the deconvolution step, and the blue line represents state preparation with the extra classical deconvolution step.

Figure 8: Discretized Gaussian PMF on a quantum computer: Figs. 8a and 8c show results on a noiseless simulator. The PMF is very well approximated by all methods, as seen in Table 1. The novel deconvolution method produces the best results (lowest distance) with the fewest gates. The same holds for the results on IBMQ Kolkata (Figs. 8b and 8d), but it should be noted that noise limits the performance of the 15-element PMF.

Using the Qiskit Sampler primitive, we can optimise the circuit and make it more resilient to noise [24]. We used optimisation level 3, which means the circuit is transpiled with 1Q gate optimisation, dynamical decoupling, commutative cancellation, and 2-qubit KAK optimisation. We used resilience level 1, which means that readout errors are mitigated with matrix-free measurement mitigation (M3). For more information on these techniques, we refer to Qiskit Runtime [25].

The results for a Gaussian PMF are depicted in Fig. 8 and Table 1, and the results for the Laplacian PMF are shown in Fig. 9 and Table 2. It can be seen that the deconvolution method approximates the PMF well on the simulator, and it performs best on the quantum computer compared to the other methods. Due to noise and circuit depth (\(\mathcal{O}(10^{2})\) for the 15-element PMF), the JS distance is on the order of \(10^{-1}\) for all methods, but lowest for the deconvolution method. From Table 1 and Fig. 8, it is clear that the deconvolution method has a shallower circuit and the best outcome compared to the GR state preparation method.

We also test our method on the Laplace distribution; the probability density function corresponding to a random variable \(\mathcal{X}\) that follows the Laplace distribution is [26]:

\[f(x|\mu,\theta)=\frac{1}{2\theta}e^{-\frac{|x-\mu|}{\theta}}, \tag{15}\]

where the parameter \(\mu\) is the mean value of the random variable \(\mathcal{X}\) and \(\theta\) is called the scale parameter. \(\mu\in\mathbb{R}\) is also called the location parameter, whereas the scale parameter \(\theta\) can only take positive real values, i.e., \(\theta>0\). We choose the parameters \(\theta=2\) and \(\mu=0\) to validate our findings.

## 5 Conclusion and future work

In this article, we discuss a hybrid classical-quantum algorithm whose classical step is based on the principle of deconvolution from signal processing. This classical preprocessing step helps reduce the circuit depth required to load a given bell-shaped probability distribution. The deconvolution step breaks down the target PMF into smaller PMFs, which can be loaded in parallel into different quantum registers, reducing the circuit depth.

Table 2: Results of preparing Laplace PMFs on the quantum simulator and IBMQ Kolkata. The JS distance is measured between the original discretized PMF and the measurement results. The deconvolution method has the lowest JS distance and lowest circuit depth for all experiments.

Figure 9: Preparing a discretized Laplace (\(\theta=2\)) PMF on a quantum computer (analogous to Fig. 8, but with the other PMF). Figures 9a and 9c are the results on IBM's QASM simulator (noiseless). All methods approximate the PMFs very well.
When looking at Table 2, it is clear that the proposed deconvolution method results in the lowest distance with the lowest number of gates. When running on IBMQ Kolkata (Figs. 9b and 9d), the results are similar.

The quantum part consists of (controlled) rotation gates followed by a quantum adder circuit, leading to reduced circuit depth. The algorithm reduces the dependency of state preparation algorithms on the big multi-controlled single-target gates used in traditional approaches for loading bell-shaped distributions. We propose the deconvolution step discussed in this article as an additional classical step that can be merged with any state preparation algorithm for loading bell-shaped distributions, leading to a further reduction in circuit depth. Deconvolution is the inverse of convolution, a kind of inverse problem, which we translated into a constrained optimization problem. We defined the JS function as the cost function of the optimization problem and used the trust region method to find the optimal values of \(\mathbf{q_{1}}\) and \(\mathbf{q_{2}}\) that minimize the cost function. This deconvolution algorithm was developed as a proof of concept for our approach, and we also discuss a more efficient deconvolution algorithm based on polynomial factorization. To verify state preparation using the deconvolution approach, we load 7- and 15-element PMFs constructed by discretizing two different probability distributions: (i) the standard normal distribution and (ii) the Laplace distribution. The algorithm's results are positive on the QASM simulator and show a reduction in circuit depth; hence, the outcomes of the deconvolution method agree well with the expected outcome. On real hardware, however, still more circuit depth reduction is required, since noise takes over.

We believe this work is the first in which a theoretical concept from the signal processing field is used to solve state preparation problems in quantum computation. Due to a shared foundation in the mathematical framework of probability and statistics, signal processing and quantum computation have many overlapping and complementary topics. In the future, we would like to dig further into different concepts from signal processing that can be used in quantum computation, to design classical techniques that can efficiently load more complex problems into quantum processing units (QPUs) and provide more accurate results. We want to extend this method to the more complex problem of stochastic volatility modelling in finance, where the probability distribution is more complex. Calculations of some quantities of interest have no closed-form solutions, so they must be modelled using Monte Carlo simulation. Once we can load the required probability distribution into the QPU with minimum circuit depth, we can use the remaining available time to calculate complex financial quantities. Loading a required probability distribution into current NISQ QPUs with minimum circuit depth also finds application in other fields.
2305.01094
Performative Prediction with Bandit Feedback: Learning through Reparameterization
Performative prediction, as introduced by Perdomo et al., is a framework for studying social prediction in which the data distribution itself changes in response to the deployment of a model. Existing work in this field usually hinges on three assumptions that are easily violated in practice: that the performative risk is convex over the deployed model, that the mapping from the model to the data distribution is known to the model designer in advance, and that first-order information of the performative risk is available. In this paper, we initiate the study of performative prediction problems that do not require these assumptions. Specifically, we develop a reparameterization framework that reparameterizes the performative prediction objective as a function of the induced data distribution. We then develop a two-level zeroth-order optimization procedure, where the first level performs iterative optimization on the distribution parameter space, and the second level learns the model that induces a particular target distribution at each iteration. Under mild conditions, this reparameterization allows us to transform the non-convex objective into a convex one and achieve provable regret guarantees. In particular, we provide a regret bound that is sublinear in the total number of performative samples taken and is only polynomial in the dimension of the model parameter.
Yatong Chen, Wei Tang, Chien-Ju Ho, Yang Liu
2023-05-01T21:31:29Z
http://arxiv.org/abs/2305.01094v4
# Performative Prediction with Bandit Feedback: Learning through Reparameterization

###### Abstract

Performative prediction, as introduced by Perdomo et al. (2020), is a framework for studying social prediction in which the data distribution itself changes in response to the deployment of a model. Existing work on optimizing accuracy in this setting hinges on two assumptions that are easily violated in practice: that the performative risk is convex over the deployed model, and that the mapping from the model to the data distribution is known to the model designer in advance. In this paper, we initiate the study of tractable performative prediction problems that do not require these assumptions. To tackle this more challenging setting, we develop a two-level zeroth-order optimization algorithm, where one level aims to compute the distribution map, and the other level _reparameterizes_ the performative prediction objective as a function of the induced data distribution. Under mild conditions, this reparameterization allows us to transform the non-convex objective into a convex one and achieve provable regret guarantees. In particular, we provide a regret bound that is sublinear in the total number of performative samples taken and only polynomial in the dimension of the model parameter.

## 1 Introduction

Performative prediction, as introduced by Perdomo et al. (2020), provides a framework for studying prediction and risk minimization when the data distribution itself changes in response to the deployment of a model. Such distribution shifts are especially common in social prediction settings. For example, when a college admission process places heavy emphasis on standardized test scores, it encourages students to invest greater effort in test preparation, so that the decision maker ultimately encounters an applicant pool with higher test scores than if they had used different admission criteria. More precisely, consider a standard empirical risk minimization (ERM) problem defined by a loss function \(\ell\), model parameter space \(\Theta\subset\mathbb{R}^{d_{\Theta}}\), instance space \(Z=X\times Y\), and fixed data distribution \(\mathcal{D}\) over \(Z\). The task is to find a model that minimizes the empirical risk \(\mathbb{E}_{z\sim\mathcal{D}}[\ell(z;\theta)]\). Performative prediction extends this learning task by positing that \(\mathcal{D}\) is not fixed, but is instead a function of the model parameter vector \(\theta\in\Theta\). Here, we call \(\mathcal{D}(\cdot)\) a _distribution map_, and \(\mathcal{D}(\theta)\) the data distribution _induced_ by the model \(\theta\). The objective is then to minimize the _performative risk_, defined as

\[\mathsf{PR}(\theta):=\mathbb{E}_{z\sim\mathcal{D}(\theta)}[\ell(z;\theta)]\;.\]

A model \(\theta_{\mathsf{OPT}}\in\Theta\) is said to be _performatively optimal_ if \(\mathsf{PR}(\theta_{\mathsf{OPT}})=\min_{\theta\in\Theta}\mathsf{PR}(\theta)\). Optimizing the performative risk is challenging in general. In standard ERM, a convex loss function \(\ell\) implies a convex empirical risk. But as Perdomo et al. [14] already observed, the performative risk \(\mathsf{PR}\) may be non-convex even when the loss \(\ell\) is convex. For this reason, earlier studies [14, 12, 6, 2] focused instead on computing a _performatively stable_ solution. A performatively stable model is loss-minimizing _on the data distribution it induces_, though there may exist other models that incur smaller loss on their respective induced distributions.
However, as recent works [13, 9] point out, such stable solutions may be highly suboptimal and, worse yet, may not exist in certain settings. Hence recent work has begun to revisit performative optimality, namely, the model \(\theta_{\mathsf{OPT}}\). Minimizing the performative risk often assumes knowledge of the distribution map \(\mathcal{D}(\cdot)\)[13, 9]. In addition, to make performative risk minimization tractable, one must also impose structural assumptions on the distribution map. For example, [9] makes parametric assumptions on \(\mathcal{D}(\theta)\) and assumes that \(\mathcal{D}(\theta)\) has a continuously differentiable density \(p(z;\varphi(\theta))\), where \(\varphi(\cdot):\Theta\to\Phi\) represents the mapping from the model parameter space \(\Theta\) to the data distribution parameter space \(\Phi\). [14] assumes the convexity of \(\mathsf{PR}(\theta)\) over \(\theta\); with this assumption, one can use first-order gradient descent algorithms to find the optimal model \(\theta_{\mathsf{OPT}}\). Miller et al. [13], in contrast, impose a _mixture dominance_ assumption on the distribution map \(\mathcal{D}(\cdot)\), from which it follows that \(\mathsf{PR}(\theta)\) is convex; this again leads to a gradient-based optimization algorithm [12, 9, 6, 3].

In this work, we consider a more practical scenario where the distribution map \(\mathcal{D}(\cdot)\) is not known in advance. In order to learn the performatively optimal model, the learner needs to adaptively deploy models to infer the underlying distribution map. We also relax the assumption that \(\mathsf{PR}(\theta)\) is convex over the model \(\theta\), and aim to design an online algorithm that works for a generic class of non-convex \(\mathsf{PR}(\theta)\) with provable performance.

Technical Challenges. There are two outstanding challenges in characterizing the performatively optimal model \(\theta_{\mathsf{OPT}}\) in performative prediction. The first is whether the performative risk \(\mathsf{PR}(\theta)\) is convex over the model parameter. Prior works often assume the convexity of \(\mathsf{PR}(\theta)\) over the model parameter \(\theta\). In this paper, we introduce a different type of structure on \(\mathcal{D}(\cdot)\). Departing from previous work, we allow \(\mathsf{PR}\) to be non-convex in the model parameter \(\theta\), but suppose it is convex in the _data distribution_ parameter \(\phi\equiv\varphi(\theta)\).5 Leveraging this property and inspired by [16], we develop a new _reparameterization_ approach that handles the non-convexity of \(\mathsf{PR}\). Informally, under mild conditions, we show that the non-convex \(\mathsf{PR}(\theta)\) can be reparameterized as a new (convex) function \(\mathsf{PR}^{\dagger}(\phi)\) over the induced data distribution parameter \(\phi\).

Footnote 5: Later in Section 3 we argue that this is a weaker condition than those used in the previous literature.

The second challenge we face comes from the unknown distribution map \(\mathcal{D}(\cdot)\). In our problem, when deploying a model \(\theta\), the learner observes data samples that are i.i.d. draws from the induced data distribution. This observation allows us to develop a bandit algorithm that uses only bandit feedback from each deployed model. To this end, by leveraging our problem structure, we connect our setup to the zeroth-order convex optimization problem and perform gradient updates using only the bandit feedback received from the observed samples after each model deployment.
However, even with the reparameterized convex function \(\mathsf{PR}^{\dagger}(\phi)\), we remark that the unknown \(\mathcal{D}(\cdot)\) poses another significant challenge: the learner cannot directly evaluate the value of \(\mathsf{PR}^{\dagger}(\phi)\) for a particular distribution parameter \(\phi\), which makes the standard zeroth-order convex optimization technique inapplicable to our setting. Indeed, given a target data distribution parameter \(\phi\), we develop an inner algorithm to identify a model \(\theta\) whose corresponding \(\varphi(\theta)\) is "close" to \(\phi\).

Our Contribution. We study the performative prediction problem with a focus on finding the performatively optimal model. We consider the scenario where the distribution map is unknown in advance and allow the performative risk to be non-convex. Our main contribution is a two-level bandit convex optimization algorithm with a reparameterization approach to deal with the non-convexity of the performative risk. To this end, we provide a regret analysis w.r.t. the total number of samples observed throughout the process, rather than the number of time steps, which we believe is a more realistic measure in the performative prediction setting, especially in many social computing scenarios where the deployed models directly impact human welfare. Our informal result is stated as follows:

**Theorem 1** (Informal).: _There exists an algorithm that, under appropriate conditions, incurs regret \(\widetilde{O}((d_{\Theta}+d_{\Phi})\cdot N_{\mathsf{KL}}^{1/6}\cdot N^{5/6})\)6 after \(N\) performative samples7 with probability at least \(1-p\), where \(N_{\mathsf{KL}}\) depends on the sample efficiency of an off-the-shelf estimator for KL divergence, and \(d_{\Theta}\) and \(d_{\Phi}\) denote the dimensions of the model and distribution parameter spaces, respectively._

Footnote 6: \(\widetilde{O}(\cdot)\) suppresses polylogarithmic factors in \(N\) and the failure probability \(1/p\).

Footnote 7: Samples that the learner deploys along the way of finding the performatively optimal model.

The \(N_{\mathsf{KL}}\) term in our regret depends on the sample efficiency of the estimator for KL divergence; the discussion is detailed in Section 4. Compared to recent work [10] that proposes using a Lipschitz bandit approach to find the performatively optimal model without explicitly making the convexity assumption, our results differ from theirs in the following ways: first, our regret is defined w.r.t. the total number of performative samples rather than the total number of steps; second, by operating on the distribution parameter space, we show that the regret has polynomial dependency on the model parameter and distribution parameter dimensions.

### Related work

Performative prediction, first explored in [14], has recently received much follow-up research [13; 9; 6; 12; 2; 10; 5; 3; 11; 15]. The original work and the follow-ups study both performative stability and performative optimality, including proposing algorithmic procedures that converge to performatively stable or optimal points. Like other works [5; 10; 9; 13], our paper focuses on performative optimality. But different from the earlier works, we consider a more practical scenario where \(\mathcal{D}(\cdot)\) is not known in advance, and we also aim to design online algorithms that work for non-convex performative risks. Our algorithms and techniques are based on the line of work on zeroth-order (also known as bandit) convex optimization initiated by Flaxman et al.
[7], who showed how to optimize an unknown convex function \(f\) using only function-value query access to \(f\). [1; 18] later extended the technique to allow multi-point queries and showed that two points suffice to guarantee regret bounds that closely resemble the bounds for the full-information case. To deal with the non-convex performative risk, we use a reparameterization approach to transform the performative risk into a function over the induced data distribution parameter. The reparameterization approach mirrors the intuition behind the algorithms proposed for learning from revealed feedback (or preferences) [16; 19; 4], which consider a Stackelberg game involving a utility-maximizing learner and a strategic agent. Our work differs from theirs as we consider a different problem, performative prediction, in which the environment responding to the learner's model deployment is exogenously characterized by a distribution map \(\mathcal{D}(\cdot)\).

### Notations

In this paper, \(\|\cdot\|\) always denotes the \(\ell_{2}\) norm, and Lipschitz conditions are with respect to \(\ell_{2}\). Let \(d\in\mathbb{Z}_{>0}\) denote the dimension of the data; \(\mathbb{S}^{d}:=\{z\in\mathbb{R}^{d}\,|\,\|z\|=1\}\) and \(\mathbb{B}^{d}:=\{z\in\mathbb{R}^{d}\,|\,\|z\|\leq 1\}\) refer to the unit sphere and ball, respectively. Given a function \(f\), a constant \(\delta>0\), and \(v\) that is uniformly sampled from \(\mathbb{B}^{d}\), let \(\hat{f}(x):=\mathbb{E}_{v\sim\mathbb{B}^{d}}[f(x+\delta v)]\) refer to the value of \(f\) at \(x\) _smoothed over the \(\delta\)-ball_, and let \(x_{\delta}:=\Pi_{(1-\delta)X}(x)\) be the \(\ell_{2}\)-projection of \(x\) onto the subset \((1-\delta)X:=\{(1-\delta)x\,|\,x\in X\}\). Let \(d_{\Theta}\in\mathbb{Z}_{>0}\) denote the dimension of the model parameter \(\theta\), and let \(D_{\Theta}:=\sup\{\|\theta-\theta^{\prime}\|:\theta,\theta^{\prime}\in\Theta\}\) denote the diameter of the model parameter space \(\Theta\). The data distribution \(\mathcal{D}(\theta)\) has a parametric, continuously differentiable density \(p(z;\varphi(\theta))\), where \(\varphi(\theta)\) denotes the distribution parameter of \(\mathcal{D}(\theta)\). We use \(\varphi(\cdot)\) to denote the distribution parameter mapping and \(\phi\) to denote a given distribution parameter. Let \(d_{\Phi}\in\mathbb{Z}_{>0}\) denote the dimension of the distribution parameter \(\phi\), and let \(D_{\Phi}:=\sup\{\|\phi-\phi^{\prime}\|:\phi,\phi^{\prime}\in\Phi\}\) denote the diameter of the distribution parameter space \(\Phi\). When it is clear from the context, we use \(\varphi(\theta)\) to represent the distribution \(\mathcal{D}(\theta)\) that \(\theta\) induces. Let \(\vartheta^{*}(\phi)\) denote the optimal model parameter that induces a specific distribution parameter \(\phi\).

Structure of the paper. We structure the rest of the paper as follows: in Section 2, we introduce the problem formulation and provide a warm-up setting where \(\mathsf{PR}(\theta)\) is convex over the model parameter \(\theta\). Using this simple setting, we introduce the technique that will serve as the building block for solving the more complicated setting (i.e., when \(\mathsf{PR}(\theta)\) is _not_ convex over \(\theta\)).
In Section 3, to solve the setting where \(\mathsf{PR}(\theta)\) is not convex over the model parameter \(\theta\), we introduce a reparameterization approach, which transforms \(\mathsf{PR}(\theta)\) into an indirectly convex function over the distribution parameter \(\phi\), and describe a bandit optimization framework operating on the distribution parameter space. Section 4 describes another bandit optimization framework, used to solve a subproblem that Section 3 accesses through a blackbox oracle, and Section 5 contains the overall regret analysis.

## 2 Preliminaries

In this section, we formally state our problem and present preliminary results.

### Problem formulation

Restating from the introduction, we extend the traditional empirical risk minimization (ERM) problem defined by a loss function \(\ell\), model parameter space \(\Theta\subset\mathbb{R}^{d_{\Theta}}\), instance space \(Z=X\times Y\), and fixed data distribution \(\mathcal{D}\) over \(Z\). Our setting, performative prediction, extends this learning task by positing that the risk a machine learning model \(\theta\in\Theta\) actually incurs is evaluated over the distribution \(\mathcal{D}(\theta)\) it induces. In other words, the underlying data distribution is no longer fixed, but is instead a function of the model parameter \(\theta\). The objective in performative prediction is then to minimize the _performative risk_. A model \(\theta_{\mathsf{OPT}}\in\Theta\) is said to be _performatively optimal_ if \(\mathsf{PR}(\theta_{\mathsf{OPT}})=\min_{\theta\in\Theta}\mathsf{PR}(\theta)\). To find the performatively optimal model, one needs full knowledge of the underlying distribution map of the environment. In this work, we consider a more practical scenario where the distribution map \(\mathcal{D}(\cdot)\) is not known in advance, so that to learn the performatively optimal model, the learner has to adaptively deploy models while gradually learning the underlying distribution map. Formally, we consider the following repeated interaction between the learner and the environment. The interaction proceeds for \(T_{\mathsf{total}}\) time steps; at each time step \(t=1,\ldots,T_{\mathsf{total}}\): (1) the learner deploys a model \(\theta_{t}\in\Theta\); (2) the learner observes \(n_{t}\) data samples \(z_{t}^{(i)}\stackrel{{\mathrm{iid}}}{{\sim}}\mathcal{D}(\theta_{t})\); (3) the learner incurs empirical loss \(\ell(z_{t}^{(i)};\theta_{t})\) for each sample. The goal of the learner is to design an online model deployment policy \(\mathcal{A}\) that minimizes her cumulative empirical risk over all observed data samples,

\[\mathcal{R}_{N}(\mathcal{A},\mathsf{PR})=\sum_{t=1}^{T_{\mathsf{total}}}\sum_{i=1}^{n_{t}}\ell(z_{t}^{(i)};\theta_{t})-N\cdot\mathsf{PR}(\theta_{\mathsf{OPT}}) \tag{1}\]

where \(N:=\sum_{t=1}^{T_{\mathsf{total}}}n_{t}\) denotes the total number of observed data samples throughout the process. The reason we introduce \(T_{\mathsf{total}}\) instead of working with \(N\) directly is that different steps \(t\) of our algorithm perform different tasks, for which we impose different requirements on the number of samples to be collected. This shall become clear later when we present our solution.

### When \(\mathsf{PR}(\theta)\) is Convex in the Model \(\theta\)

In this section, we analyze a simple scenario in which the performative risk \(\mathsf{PR}(\theta)\) is convex over the model parameter \(\theta\).
The technique we use to solve this simple case will be the building block for solving the later, more challenging problem in which \(\mathsf{PR}(\theta)\) is _not_ convex over the model parameter \(\theta\). Recall that when the learner deploys a model \(\theta\), she observes a set of data samples drawn i.i.d. from the underlying data distribution \(\mathcal{D}(\theta)\). This enables us to compute an unbiased estimate \(\widetilde{\mathsf{PR}}(\theta)\) of the performative risk \(\mathsf{PR}(\theta)\) of the deployed model \(\theta\):

\[\widetilde{\mathsf{PR}}(\theta)=\frac{1}{n_{t}}\sum_{i=1}^{n_{t}}\ell(z_{t}^{(i)};\theta)\;\;\text{and}\;\;\mathbb{E}[\widetilde{\mathsf{PR}}(\theta)]=\mathsf{PR}(\theta),\qquad\forall\theta\in\Theta\]

where the expectation is over the randomness of the observed samples. Since \(\mathsf{PR}(\theta)\) is convex over the model parameter \(\theta\), one can use an off-the-shelf zeroth-order convex optimization technique [1] to solve the current problem.

**Lemma 1**.: _When \(\mathsf{PR}(\theta)\) is convex and \(L\)-Lipschitz w.r.t. the deployed model parameter \(\theta\), there exists an algorithm (Algorithm 3) achieving \(\mathcal{R}_{N}(\mathcal{A}_{3},\mathsf{PR})=O(\sqrt{d_{\Theta}N\log\frac{1}{p}})\) with probability at least \(1-p\)._

Algorithm 3 allows the learner to deploy two models at each time step; in doing so, one can show that the regret bounds closely resemble the bounds for the full-information case where the learner knows the distribution map \(\mathcal{D}(\cdot)\). The proof of the above result builds on the main result of [1] and also incorporates an improved analysis of the gradient estimate due to Shamir [18]. We defer the proof and the details of Algorithm 3 to Appendix B.

### Overview of Our Solutions

When \(\mathsf{PR}(\theta)\) is not convex over the model parameter \(\theta\), the zeroth-order convex optimization technique used in Section 2.2 is not applicable. Instead, we leverage the structure of \(\mathsf{PR}(\theta)\) and _reparameterize_ it as a function of the _induced_ data distribution \(\mathcal{D}(\theta)\). In particular, we assume the data distribution \(\mathcal{D}(\theta)\) has a parametric, continuously differentiable density \(p(z;\varphi(\theta))\). We also assume that the data distribution \(\mathcal{D}(\theta)\) falls in a known distribution family; thus the functional form \(p(z;\phi)\) is known to the learner, but the distribution parameter \(\phi\) remains unknown. Under mild conditions, we show that the performative risk \(\mathsf{PR}(\theta)\) can be expressed as a function of the _induced_ distribution parameter \(\phi\equiv\varphi(\theta)\), namely,

\[\mathsf{PR}(\theta)=\mathsf{PR}^{\dagger}(\varphi(\theta))\equiv\mathsf{PR}(\vartheta^{*}(\phi))\;, \tag{2}\]

and \(\mathsf{PR}^{\dagger}(\phi)\) is convex over the distribution parameter \(\phi\) (see more details in Section 3). With this reparameterization, one can operate on the space of distribution parameters and hopefully apply the zeroth-order convex optimization technique. However, one notable challenge is that in zeroth-order convex optimization the learner is usually assumed to have direct query access to the unknown convex function \(f\): when querying a point \(x\), the learner immediately learns the (noisy) value of \(f(x)\). In our setting, such direct access is not available, since the mapping \(\varphi(\cdot)\) is not known to the learner.
Indeed, the learner can only deploy a model \(\theta\) and observe the empirical performative risk \(\widetilde{\mathsf{PR}}(\theta)\), evaluated over the observed data samples drawn from the induced data distribution \(\mathcal{D}(\theta)\). Hence, to evaluate the value \(\mathsf{PR}^{\dagger}(\phi)\) at a target data distribution with parameter \(\phi\), we develop a new algorithm called LearnModel to find a model \(\bar{\theta}\) such that \(\varphi(\bar{\theta})\approx\phi\) (see Section 4). We summarize the idea behind our algorithm in Figure 1. All of the omitted proofs can be found in the Appendix.

Figure 1: An illustration of our Algorithm 1. Each big block (consisting of a pink and a yellow block) represents one step \(t\) of the outer algorithm.

## 3 The Outer Algorithm: A Reparameterization Approach

In this section, we study the scenario where \(\mathsf{PR}(\theta)\) is not convex over the model parameter. The high-level idea is that we can _reparameterize_ the performative risk \(\mathsf{PR}(\theta)\) as a function \(\mathsf{PR}^{\dagger}(\phi)\) over the data distribution parameter \(\phi\). We first reformulate the learner's loss function so that it can be expressed as a function _only_ of the induced data distribution. For each data distribution \(\phi\in\Phi\), the set of the learner's actions (deployed model parameters) that induce \(\phi\) is

\[\Theta^{*}(\phi)=\{\theta\in\Theta|\varphi(\theta)=\phi\}\]

Among all of the learner's actions that induce \(\phi\), the optimal one, which achieves the minimal \(\mathsf{PR}\) loss across the whole population, is

\[\vartheta^{*}(\phi)=\operatorname*{argmin}_{\theta\in\Theta^{*}(\phi)}\mathsf{PR}(\theta)\]

where ties are broken arbitrarily. Now we can rewrite the learner's objective function as a function of \(\phi\):

\[\mathsf{PR}^{\dagger}(\phi)=\mathsf{PR}(\vartheta^{*}(\phi)) \tag{3}\]

To make the problem tractable, we consider the following generic class of \(\mathsf{PR}^{\dagger}(\cdot)\) that is convex and Lipschitz continuous.

**Assumption 1**.: \(\mathsf{PR}^{\dagger}(\phi)\) _is convex and \(L^{\dagger}\)-Lipschitz over the data distribution parameter \(\phi\in\Phi\)._

Earlier work [13] posits the "mixture dominance assumption", under which the performative prediction risk turns out to be convex in \(\theta\). However, as we demonstrate in Example 1 in Appendix C, this condition may be violated by a simple family of examples. Having reparameterized \(\mathsf{PR}(\theta)\) as a function \(\mathsf{PR}^{\dagger}(\phi)\) over the induced data distribution parameter \(\phi\), we now wish to minimize a bounded, \(L^{\dagger}\)-Lipschitz function \(\mathsf{PR}^{\dagger}(\cdot):\Phi\to\mathbb{R}\), where \(\Phi\subset\mathbb{R}^{d_{\Phi}}\) has bounded diameter \(D_{\Phi}\), by operating on the distribution parameter space \(\Phi\). Instead of having the immediate query access assumed in zeroth-order convex optimization algorithms, in our setting we cannot directly evaluate the (noisy) value \(\mathsf{PR}^{\dagger}(\phi)\) for a particular data distribution parameter, but may query the following oracles:

* A noisy _function oracle_ \(\mathsf{EstimatePR}\), as defined in Section 2.2.
* A noisy _reparameterization oracle_ \(\mathsf{LearnModel}(\phi,\epsilon_{\mathsf{LM}},p_{\mathsf{LM}})\), which takes \(\phi\in\Phi\), \(\epsilon_{\mathsf{LM}}>0\), and \(p_{\mathsf{LM}}>0\) as input and returns \(\theta\in\Theta\) such that \(\Pr(\|\varphi(\theta)-\phi\|\geq\epsilon_{\mathsf{LM}})\leq p_{\mathsf{LM}}\).
We will specify \(\mathsf{LearnModel}\) in Section 4. The following algorithm performs this task; specifically, it returns both \(\bar{\theta}\in\Theta\) and \(\bar{\phi}\in\Phi\) such that with probability at least \(1-p\), \(|\mathsf{PR}(\bar{\theta})-\mathsf{PR}(\theta_{\mathsf{OPT}})|\leq\epsilon\) and \(|\mathsf{PR}^{\dagger}(\bar{\phi})-\mathsf{PR}(\theta_{\mathsf{OPT}})|\leq\epsilon\).

```
function EstimatePR(\(\theta\))    ▷ Unbiased estimate of \(\mathsf{PR}(\theta)\)
    Deploy \(\theta\), observe sample \(z\sim\mathcal{D}(\theta)\)
    return \(\ell(z;\theta)\)

function MinimizePR(\(\mathsf{LearnModel}:\Phi\to\Theta\); \(\epsilon,p,\epsilon_{\mathsf{LM}},p_{\mathsf{LM}}>0\))
    \(T\leftarrow\frac{d_{\Phi}}{(\epsilon-\sqrt{\epsilon_{\mathsf{LM}}d_{\Phi}})^{2}}\)
    \(\delta\leftarrow\sqrt{\epsilon_{\mathsf{LM}}d_{\Phi}}\)
    \(\eta\leftarrow 1/\sqrt{d_{\Phi}T}\)
    \(\phi_{1}\leftarrow 0\)
    for \(t\leftarrow 1,\ldots,T\) do
        \(u_{t}\leftarrow\) sample from \(\mathrm{Unif}(\mathbb{S})\)
        \(\phi_{t}^{+}\leftarrow\phi_{t}+\delta u_{t}\),  \(\phi_{t}^{-}\leftarrow\phi_{t}-\delta u_{t}\)
        \(\hat{\theta}_{t}^{+}\leftarrow\mathsf{LearnModel}(\phi_{t}^{+},\epsilon_{\mathsf{LM}},p_{\mathsf{LM}})\)
        \(\hat{\theta}_{t}^{-}\leftarrow\mathsf{LearnModel}(\phi_{t}^{-},\epsilon_{\mathsf{LM}},p_{\mathsf{LM}})\)    ▷ \(\hat{\theta}_{t}^{\pm}\) such that \(\mathsf{PR}(\hat{\theta}_{t}^{\pm})\approx\mathsf{PR}^{\dagger}(\phi_{t}^{\pm})\)
        \(\widetilde{\mathsf{PR}}(\hat{\theta}_{t}^{+})\leftarrow\mathsf{EstimatePR}(\hat{\theta}_{t}^{+})\)
        \(\widetilde{\mathsf{PR}}(\hat{\theta}_{t}^{-})\leftarrow\mathsf{EstimatePR}(\hat{\theta}_{t}^{-})\)    ▷ Approximations of \(\mathsf{PR}(\hat{\theta}_{t}^{+})\), \(\mathsf{PR}(\hat{\theta}_{t}^{-})\)
        \(\tilde{g}_{t}\leftarrow\frac{d_{\Phi}}{2\delta}\left(\widetilde{\mathsf{PR}}(\hat{\theta}_{t}^{+})-\widetilde{\mathsf{PR}}(\hat{\theta}_{t}^{-})\right)\cdot u_{t}\)    ▷ Approximation of \(\nabla_{\phi}\mathsf{PR}^{\dagger}(\phi_{t})\)
        \(\phi_{t+1}\leftarrow\Pi_{(1-\delta)\Phi}(\phi_{t}-\eta\tilde{g}_{t})\)    ▷ Take gradient step and project
    \(\bar{\phi}\leftarrow\frac{1}{T}\sum_{t=1}^{T}\phi_{t}\)
    \(\bar{\theta}\leftarrow\mathsf{LearnModel}(\bar{\phi},\epsilon_{\mathsf{LM}},p_{\mathsf{LM}})\)
    return \(\bar{\theta},\bar{\phi}\)
```
**Algorithm 1** Bandit algorithm for minimizing an indirectly convex function with noisy oracles

For analysis purposes, we also define the regret in \(T\), the total number of steps MinimizePR goes through in order to obtain an \(\epsilon\)-suboptimal model parameter w.r.t. the PR objective function:

\[\mathcal{R}_{T}(\mathsf{MinimizePR},\mathsf{PR})=\sum_{t=1}^{T}\Big{[}\mathsf{EstimatePR}(\hat{\theta}_{t}^{+})+\mathsf{EstimatePR}(\hat{\theta}_{t}^{-})-2\mathsf{PR}(\theta_{\mathsf{OPT}})\Big{]}\]

We demonstrate the following regret bound for this algorithm:

**Theorem 2** (High-probability regret bound for Algorithm 1 in \(T\)).: _When Algorithm 1 is called with arguments \(\epsilon_{\mathsf{LM}}\) and \(p_{\mathsf{LM}}\), we have for every \(p>0\) that_

\[\mathcal{R}_{T}(\mathsf{MinimizePR},\mathsf{PR})=O\left(\sqrt{d_{\Phi}T}+\sqrt{\epsilon_{\mathsf{LM}}d_{\Phi}}\cdot T+\sqrt{T\log\frac{1}{p}}\right)\]

_with probability at least \(1-p-2Tp_{\mathsf{LM}}\)._

The above Theorem 2 requires that the output of \(\mathsf{LearnModel}\) is \(\epsilon_{\mathsf{LM}}\)-close to the target distribution parameter \(\phi\) with probability at least \(1-p_{\mathsf{LM}}\).
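For concreteness, here is a minimal Python sketch of the two-point gradient estimate and projected step used by MinimizePR. It is only illustrative: the value_oracle argument stands in for the composition of LearnModel and EstimatePR, the feasible set is taken to be a Euclidean ball of a given radius so that the projection is a simple rescaling, and the horizon and step sizes are hypothetical rather than the theoretically tuned choices above.

```python
import numpy as np

def minimize_zeroth_order(value_oracle, dim, T, delta, eta, radius):
    """Two-point bandit descent: a sketch of MinimizePR's update rule.

    value_oracle(phi) plays the role of EstimatePR(LearnModel(phi)): a noisy
    evaluation of PR^dagger at the distribution parameter phi.
    """
    rng = np.random.default_rng(0)
    phi = np.zeros(dim)
    iterates = []
    for _ in range(T):
        u = rng.normal(size=dim)
        u /= np.linalg.norm(u)                      # u ~ Unif(unit sphere)
        g = (dim / (2 * delta)) * (
            value_oracle(phi + delta * u) - value_oracle(phi - delta * u)
        ) * u                                       # two-point gradient estimate
        phi = phi - eta * g
        nrm = np.linalg.norm(phi)                   # project onto (1 - delta) * Phi
        if nrm > (1 - delta) * radius:
            phi *= (1 - delta) * radius / nrm
        iterates.append(phi.copy())
    return np.mean(iterates, axis=0)                # averaged iterate phi_bar

# Toy check: a noisy quadratic standing in for PR^dagger over a ball of radius 2.
target = np.array([0.5, -0.3])
noise_rng = np.random.default_rng(1)
oracle = lambda p: np.sum((p - target) ** 2) + noise_rng.normal(0, 0.01)
print(minimize_zeroth_order(oracle, dim=2, T=2000, delta=0.05, eta=0.02, radius=2.0))
```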
In Section 4 below, we show how we achieve this closeness guarantee by developing a zeroth-order convex optimization algorithm whose objective is to minimize the KL divergence between two distributions.

## 4 Inner Algorithm: Inducing Target Distribution Using \(\mathsf{LearnModel}\)

### Objective Function for \(\mathsf{LearnModel}\) and Technical Assumptions

In this section, we show how to solve the subproblem \(\mathsf{LearnModel}\) mentioned in Algorithm 1: given a target distribution with parameter \(\phi\in\Phi\), find a model \(\theta\in\Theta\) whose corresponding distribution parameter \(\varphi(\theta)\) is close to \(\phi\). To this end, we consider minimizing the KL divergence between \(\phi\) and \(\varphi(\theta)\):8

Footnote 8: For notational simplicity, we use \(\mathsf{KL}(\phi_{1}||\phi_{2})\) to represent \(\mathsf{KL}(\mathcal{D}_{1}||\mathcal{D}_{2})\), where the data distributions \(\mathcal{D}_{1}\) and \(\mathcal{D}_{2}\) have parameters \(\phi_{1}\) and \(\phi_{2}\), respectively.

\[\mathsf{KL}(\phi||\varphi(\theta)):=\int_{z}p(z;\phi)\log\frac{p(z;\phi)}{p(z;\varphi(\theta))}dz \tag{4}\]

where \(p(z;\phi)\) denotes the pdf of the target distribution \(\phi\), and \(p(z;\varphi(\theta))\) denotes the pdf of the distribution induced by deploying \(\theta\). In general, \(\mathsf{KL}(\phi||\varphi(\theta))\) measures how far the distribution with parameter \(\varphi(\theta)\) is from the target distribution with parameter \(\phi\): if two distributions \(\phi_{1},\phi_{2}\in\Phi\) satisfy \(\phi_{1}=\phi_{2}\), then \(\mathsf{KL}(\phi_{1}||\phi_{2})=0\); otherwise \(\mathsf{KL}(\phi_{1}||\phi_{2})>0\). Intuitively, the lower the value \(\mathsf{KL}(\phi_{1}||\phi_{2})\), the better we have matched the target distribution with the approximate distribution induced by the chosen model. However, \(\mathsf{KL}(\phi||\cdot)\) is generally neither convex nor Lipschitz. Hence, to make the problem tractable, we make several assumptions. We view these assumptions as comparatively mild, and provide examples shortly after stating them.

**Assumption 2**.: _The function \(\mathsf{KL}(\phi||\varphi(\cdot))\), the data distribution \(\mathcal{D}(\theta)\), and its parameter mapping \(\varphi(\cdot)\) satisfy the following properties._

1. \(\mathsf{KL}(\phi||\varphi(\cdot))\) _is convex in the model parameter_ \(\theta\in\Theta\)_;_
2. _The data distribution_ \(\mathcal{D}(\theta)\) _with parameter_ \(\varphi(\theta)\) _is_ \((\ell_{2},K)\)_-Lipschitz continuous in the model parameter_ \(\theta\in\Theta\) _with constant_ \(K(z),\forall z\in Z\)__9_;_

Footnote 9: A distribution \(\mathcal{D}(\theta)\) with density function \(p(\cdot|\varphi(\theta))\) parameterized by \(\theta\in\Theta\) is called \((\ell_{2},K)\)-Lipschitz continuous [8] if for all \(z\) in the sample space, the log-likelihood \(f(\theta)=\log p(z|\varphi(\theta))\) is Lipschitz continuous with respect to the \(\ell_{2}\) norm of \(\theta\) with constant \(K(z)\).

3. _Let_ \(\mathcal{D}_{1},\mathcal{D}_{2}\) _be two data distributions with parameters_ \(\phi_{1},\phi_{2}\in\Phi\)_, and let_ \(d_{\mathsf{TV}}(\mathcal{D}_{1},\mathcal{D}_{2})\) _be their total variation distance. Then_ \(\|\phi_{1}-\phi_{2}\|\leq L_{\mathsf{TV}}\cdot d_{\mathsf{TV}}(\mathcal{D}_{1},\mathcal{D}_{2})\) _for some constant_ \(L_{\mathsf{TV}}>0\)_._

Here, we provide examples to demonstrate that the above assumptions are comparatively mild.
The following is an example showing the convexity of \(\mathsf{KL}(\phi||\varphi(\cdot))\).

**Example 1**.: _Consider the density function \(p(z;\varphi(\theta))\) of the data distribution \(\mathcal{D}(\theta)\) satisfying \(p(z;\varphi(\theta))=\mathrm{Unif}(\exp(c\varphi(\theta)))\) for some constant \(c>0\) and for any convex function \(\varphi(\theta)\); then \(\mathsf{KL}(\phi||\varphi(\cdot))\) is convex over \(\theta\)._

In Assumption 2b above, we assume a family of distributions that are \((\ell_{2},K)\)-Lipschitz continuous. This Lipschitz continuity over the parametrization of probability distributions yields the following Lipschitz condition on the function \(\mathsf{KL}(\phi||\varphi(\cdot))\) over the model parameter \(\theta\):

**Lemma 2** (Lipschitzness of \(\mathsf{KL}(\phi||\varphi(\theta))\) in \(\theta\)).: _Given two \((\ell_{2},K)\)-Lipschitz continuous distributions \(\mathcal{D}_{1}=p\left(\cdot\mid\varphi(\theta_{1})\right)\) and \(\mathcal{D}_{2}=p\left(\cdot\mid\varphi(\theta_{2})\right)\), and a target distribution parameter \(\phi\in\Phi\), we have \(|\mathsf{KL}\left(\phi||\varphi(\theta_{1})\right)-\mathsf{KL}\left(\phi||\varphi(\theta_{2})\right)|\leq L_{\mathsf{KL}}\left\|\theta_{1}-\theta_{2}\right\|\) for a constant \(L_{\mathsf{KL}}>0\)._

Assumption 2c above concerns continuity in the distribution parameter \(\phi\in\Phi\). Intuitively, this assumption ensures that if two distributions are close in total variation distance, then their parameters are close as well. With this assumption, we can show that the distance between two distribution parameters \(\left\|\phi_{1}-\phi_{2}\right\|\) can be bounded by the KL divergence between the corresponding data distributions.

**Lemma 3**.: _Under Assumption 2c, we have \(\left\|\phi_{1}-\phi_{2}\right\|\leq L_{\phi}\sqrt{\mathsf{KL}(\phi_{1}||\phi_{2})}\) for some constant \(L_{\phi}>0\)._

Intuitively, the above result ensures that, given a target distribution parameter \(\phi\), as long as a model \(\theta\) induces a data distribution close to the distribution with parameter \(\phi\) (i.e., \(\mathsf{KL}(\phi||\varphi(\theta))\) is small), then \(\varphi(\theta)\) is close to \(\phi\). We will use Lemma 3 in the proof of our main theorem in Section 5.

### Algorithm for \(\mathsf{LearnModel}\)

When \(\mathsf{KL}(\phi||\varphi(\cdot))\) is convex and Lipschitz over the model \(\theta\), its minimizer can be computed using algorithms similar to Algorithm 1. In our problem, given a target data distribution with parameter \(\phi\), we can use the observed data samples to approximately compute \(\mathsf{KL}(\phi||\varphi(\theta))\) when deploying a model \(\theta\). Indeed, we assume the existence of an oracle \(\mathsf{EstimateKL}(\phi,(z_{t}^{(i)})_{i\in[n_{t}]})\), which takes the observed samples \((z_{t}^{(i)})_{i\in[n_{t}]}\) realized from the induced data distribution \(\mathcal{D}(\theta)\) and the target data distribution parameter \(\phi\) as input and approximates the value \(\mathsf{KL}(\phi||\varphi(\theta))\). We remark that such an oracle has been widely used in the literature on KL divergence estimation; see Rubenstein et al. [17].
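As a concrete toy instantiation (our construction, not one taken from the paper or from [17]): if the parametric family is Gaussian with known variance and a scalar parameter, an EstimateKL oracle can simply fit the induced parameter by maximum likelihood and plug it into the closed-form Gaussian KL divergence.

```python
import numpy as np

def estimate_kl_gaussian(phi_target, samples, sigma=1.0):
    """Sketch of an EstimateKL oracle for the family N(phi, sigma^2) with
    known sigma and scalar parameter phi.

    Fits phi' by maximum likelihood from samples ~ N(phi', sigma^2), then
    returns the closed-form KL( N(phi_target, sigma^2) || N(phi_hat, sigma^2) ).
    """
    phi_hat = np.mean(samples)          # MLE of the induced parameter
    return (phi_target - phi_hat) ** 2 / (2 * sigma ** 2)

# With n samples, |phi_hat - phi'| = O(sigma / sqrt(n)) with high probability,
# so the estimate is within epsilon_KL of the true KL for n = N_KL(eps, p)
# large enough, matching the guarantee required of the oracle.
rng = np.random.default_rng(1)
print(estimate_kl_gaussian(0.0, rng.normal(0.4, 1.0, size=5000)))  # ~ 0.4^2 / 2 = 0.08
```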
**Definition 1** (Oracle \(\mathsf{EstimateKL}\)).: _There exists an oracle \(\mathsf{EstimateKL}\) that, given any target parameter \(\phi\in\Phi\), error tolerance \(\epsilon_{\mathsf{KL}}>0\), error probability \(p_{\mathsf{KL}}>0\), and \(N_{\mathsf{KL}}(\epsilon_{\mathsf{KL}},p_{\mathsf{KL}})\) samples \(z_{1},\ldots,z_{N_{\mathsf{KL}}(\epsilon_{\mathsf{KL}},p_{\mathsf{KL}})}\) from a distribution with parameter \(\phi^{\prime}\), returns an estimated KL divergence \(\widetilde{\mathsf{KL}}(\phi||\phi^{\prime})\) satisfying \(\left\|\widetilde{\mathsf{KL}}(\phi||\phi^{\prime})-\mathsf{KL}(\phi||\phi^{\prime})\right\|\leq\epsilon_{\mathsf{KL}}\) with probability at least \(1-p_{\mathsf{KL}}\)._

With the above oracle \(\mathsf{EstimateKL}\) for approximately computing the KL divergence, we are now ready to present our inner algorithm, which we term \(\mathsf{LearnModel}\). As before, for analysis purposes, we define the regret in \(S\), the total number of rounds \(\mathsf{LearnModel}\) goes through in order to output an \(\epsilon_{\mathsf{LM}}\)-suboptimal model parameter w.r.t. the KL objective function:

\[\mathcal{R}_{S}(\mathsf{LearnModel},\mathsf{KL})=\sum_{s=1}^{S}\left[\widetilde{\mathsf{KL}}(\phi||\varphi(\theta_{s}^{+}))+\widetilde{\mathsf{KL}}(\phi||\varphi(\theta_{s}^{-}))-2\mathsf{KL}(\phi||\vartheta^{*}(\phi))\right]\]

where \(\vartheta^{*}(\phi)\) is the model that induces the target distribution \(\phi\). Using arguments similar to those of Theorem 2, we first show the following regret guarantee for \(\mathsf{LearnModel}\):

**Theorem 3** (High-probability regret bound for Algorithm 2 with \(S\) rounds).: _When \(\mathsf{LearnModel}\) is run for \(S\) steps and invokes \(\mathsf{EstimateKL}\) with arguments \(\epsilon_{\mathsf{KL}}>0\) and \(p_{\mathsf{KL}}>0\), we have for every \(p>0\)_

\[\mathcal{R}_{S}(\mathsf{LearnModel},\mathsf{KL})=O\left(\sqrt{d_{\Phi}S}+\sqrt{\epsilon_{\mathsf{KL}}d_{\Phi}}\cdot S+\sqrt{S\log\frac{1}{p}}\right)\]

_with probability at least \(1-p-2Sp_{\mathsf{KL}}\)._

## 5 Putting Things Together

As shown in the previous sections, both the outer algorithm (\(\mathsf{MinimizePR}\), Section 3) and the inner algorithm (\(\mathsf{LearnModel}\), Section 4) achieve sublinear regret w.r.t. the total number of steps (\(T\) and \(S\)) when outputting \(\epsilon\)-optimal solutions. In this section, we combine the results of Section 3 and Section 4 to conclude the analysis of \(\mathsf{MinimizePR}\) (Algorithm 1) for convex \(\mathsf{PR}^{\dagger}(\phi)\). The main result of this section is summarized as follows:

**Theorem 4** (Regret of \(\mathsf{MinimizePR}\) in \(N\)).: _Under Assumption 2, and given access to an oracle \(\mathsf{EstimateKL}\), there exists a choice of \(\epsilon_{\mathsf{KL}},p_{\mathsf{KL}}>0\) in Algorithm 2 such that for every \(p>0\),_

\[\mathcal{R}_{N}(\mathsf{MinimizePR},\mathsf{PR})=\widetilde{O}\left((d_{\Theta}+d_{\Phi})N_{\mathsf{KL}}(\epsilon_{\mathsf{KL}},p_{\mathsf{KL}})^{1/6}N^{5/6}\sqrt{\log\frac{1}{p}}\right)\]

_with probability at least \(1-p\)._

Proof Sketch of Theorem 4.: Let \(T\) be the number of steps executed by the outer algorithm \(\mathsf{MinimizePR}\), and \(S\) the number of steps in \(\mathsf{LearnModel}\). Let \(N_{\mathsf{KL}}(\epsilon_{\mathsf{KL}},p_{\mathsf{KL}})\) (or \(N_{\mathsf{KL}}\) for short) denote the number of samples used by \(\mathsf{EstimateKL}\).
Since \(\mathsf{MinimizePR}\) calls \(\mathsf{EstimatePR}\) and \(\mathsf{LearnModel}\) \(2T\) times, and \(\mathsf{LearnModel}\) calls \(\mathsf{EstimateKL}\) \(2S\) times, the overall number of samples involved in the whole process is \(N=2(2N_{\mathsf{KL}}S+1)T\). Following the regret definition, we can break down the regret into the regret from calling \(\mathsf{EstimatePR}\) in the outer algorithm and the regret from calling \(\mathsf{EstimateKL}\) in \(\mathsf{LearnModel}\). Using the fact that \(\mathsf{PR}^{\dagger}\) is Lipschitz in the distribution parameter \(\phi\) and that the distance between any two distribution parameters can be bounded by the KL divergence between the corresponding data distributions (Lemma 3), we show that the total regret in \(N\) can be expressed as:

\[\mathcal{R}_{N}(\mathsf{MinimizePR},\mathsf{PR})=O\left(\sqrt{N}+N_{\mathsf{KL}}T\cdot\sqrt{S\cdot\mathcal{R}_{S}(\mathsf{LearnModel},\mathsf{KL})}\right.\]
\[\left.+(N_{\mathsf{KL}}S+1)\cdot\mathcal{R}_{T}(\mathsf{MinimizePR},\mathsf{PR})\right)\]

where \(\mathcal{R}_{T}(\mathsf{MinimizePR},\mathsf{PR})\) and \(\mathcal{R}_{S}(\mathsf{LearnModel},\mathsf{KL})\) are obtained from Theorem 2 and Theorem 3 as functions of \(\epsilon_{\mathsf{LM}},\epsilon_{\mathsf{KL}},S,T\), \(d_{\Theta}\), and \(d_{\Phi}\). Then, by balancing the terms and setting \(\epsilon_{\mathsf{LM}}\) and \(\epsilon_{\mathsf{KL}}\) according to the convergence analysis for both \(\mathsf{MinimizePR}\) and \(\mathsf{LearnModel}\) (Claim 9 and Claim 10), we obtain an expression for the total regret. The complete proof can be found in Appendix E.
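To make the sample accounting in the proof sketch concrete, here is a minimal runnable skeleton (ours, not the paper's Algorithms 1 and 2): the objectives are toy surrogates, the two-point zeroth-order update merely mirrors the \(\theta_{s}^{\pm}\) notation, and the names `minimize_pr`, `learn_model`, `estimate_pr`, and `estimate_kl` are illustrative stand-ins. Its only grounded claim is the bookkeeping identity \(N=2(2N_{\mathsf{KL}}S+1)T\), which the sample counter verifies.

```python
import numpy as np

rng = np.random.default_rng(1)
samples_used = 0  # global counter for all data samples drawn

def estimate_kl(phi, theta, n_kl):
    """Stand-in for the EstimateKL oracle: draws n_kl samples from D(theta)."""
    global samples_used
    samples_used += n_kl
    return float(np.sum((phi - theta) ** 2))  # toy surrogate for KL(phi || varphi(theta))

def estimate_pr(phi):
    """Stand-in for EstimatePR: one sample per call in this accounting."""
    global samples_used
    samples_used += 1
    return float(np.sum(phi ** 2))  # toy surrogate for PR^dagger(phi)

def learn_model(phi, S, n_kl, delta=0.1, lr=0.05, d=2):
    """Two-point zeroth-order descent on theta -> KL(phi || varphi(theta))."""
    theta = np.zeros(d)
    for _ in range(S):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)
        kl_plus = estimate_kl(phi, theta + delta * u, n_kl)   # theta_s^+
        kl_minus = estimate_kl(phi, theta - delta * u, n_kl)  # theta_s^-
        theta -= lr * (d / (2 * delta)) * (kl_plus - kl_minus) * u
    return theta

def minimize_pr(T, S, n_kl, d=2):
    phi = np.ones(d)
    for _ in range(T):
        for sign in (+1.0, -1.0):  # two evaluations per outer round
            learn_model(phi + 0.01 * sign, S, n_kl)
            estimate_pr(phi + 0.01 * sign)
    return phi

T, S, n_kl = 5, 10, 100
minimize_pr(T, S, n_kl)
print(samples_used, 2 * (2 * n_kl * S + 1) * T)  # both print 20010
```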
2301.13167
A complete classification of categoricity spectra of accessible categories with directed colimits
We provide a complete classification of all the possible categoricity spectra, in terms of internal size, that can appear in a large accessible category with directed colimits, assuming the Singular Cardinal Hypothesis ($SCH$), and providing as well explicit threshold cardinals for eventual categoricity. This includes as a particular case the first complete classification of categoricity spectra of abstract elementary classes (AEC's) entirely in $ZFC$. More specifically, we have the following theorem: Let $\mathcal{K}$ be a large $\kappa$-accessible category with directed colimits. Assume the Singular Cardinal Hypothesis $SCH$ (only if the restriction to monomorphisms is not an AEC). Then the categoricity spectrum $\mathcal{C}at(\mathcal{K})=\{\lambda\geq \kappa: \mathcal{K} \text{ is $\lambda$-categorical}\}$ is one of the following: 1) $\mathcal{C}at(\mathcal{K})=\emptyset$. 2) $\mathcal{C}at(\mathcal{K})=[\alpha, \beta]$ for some $\alpha, \beta \in [\kappa, \beth_{\omega}(\kappa))$. 3) $\mathcal{C}at(\mathcal{K})=[\chi, \infty)$ for some $\chi \in [\kappa, \beth_{(2^{\kappa})^+})$. This solves in particular Shelah categoricity conjecture for AEC's. There are examples of each of the three cases of the classification, showing that they indeed occur.
Christian Espindola
2023-01-30T18:37:51Z
http://arxiv.org/abs/2301.13167v1
# A complete classification of categoricity spectra of accessible categories with directed colimits

###### Abstract

We provide a complete classification of all the possible categoricity spectra, in terms of internal size, that can appear in a large accessible category with directed colimits, assuming the Singular Cardinal Hypothesis (\(SCH\)), and providing as well explicit threshold cardinals for eventual categoricity. This includes as a particular case the first complete classification of categoricity spectra of abstract elementary classes (AEC's) entirely in \(ZFC\). More specifically, we have:

**Theorem**.: _Let \(\mathcal{K}\) be a large \(\kappa\)-accessible category with directed colimits. Assume the Singular Cardinal Hypothesis \(SCH\) (only if the restriction to monomorphisms is not an AEC). Then the categoricity spectrum \(\mathcal{C}\!at(\mathcal{K})=\{\lambda\geq\kappa:\mathcal{K}\text{ is }\lambda\text{-categorical}\}\) is one of the following:_

1. \(\mathcal{C}\!at(\mathcal{K})=\emptyset\)_._
2. \(\mathcal{C}\!at(\mathcal{K})=[\alpha,\beta]\) _for some_ \(\alpha,\beta\in[\kappa,\beth_{\omega}(\kappa))\)_._
3. \(\mathcal{C}\!at(\mathcal{K})=[\chi,\infty)\) _for some_ \(\chi\in[\kappa,\beth_{(2^{\kappa})^{+}})\)_._

This solves in particular Shelah categoricity conjecture for AEC's. There are examples of each of the three cases of the classification, showing that they indeed occur.

## 1 Introduction

This short paper is a sequel to the work of the author in [1] in which a generalization of Shelah's eventual categoricity conjecture (Conjecture 4.2 in the introduction of [1]) is proven in the more general context of accessible categories with directed colimits. When all morphisms are monomorphisms in such categories of models, an analogous form of Shelah's presentation theorem exhibits them as a projective class of an infinite quantifier logic, for which even the Hanf number for model existence has no known explicit bound in \(ZFC\) (the only known bound is a strongly compact cardinal, and in fact in some models of \(ZFC\) the Hanf number for \(\mathcal{L}_{\omega_{1},\omega_{1}}\) exceeds the first measurable cardinal). In the special case of those accessible categories which have directed colimits, however, we will prove that one can find \(ZFC\) bounds for the Hanf number of model existence. Grossberg has emphasized the importance of also having explicit threshold cardinals for the eventual categoricity phenomenon, and we now intend to use the same setup and results of [1], together with Morley's method, to provide the provably best possible explicit thresholds. Shelah categoricity conjecture asks to prove, in \(ZFC\), that the threshold for eventual categoricity in an AEC \(\mathcal{K}\) is \(\beth_{(2^{LS(\mathcal{K})})^{+}}\) (see Conjecture 4.3 b) in the introduction of [1]). We will prove this conjecture here. An example from Shelah mentioned in [21] shows that this threshold is best possible. Assuming \(SCH\), we are also going to provide a proof of a direct generalization of Shelah categoricity conjecture to the more general context of accessible categories with directed colimits.
If \(\mathcal{K}\) is such a category, we show that if \(\mathcal{K}\) is categorical in some \(\lambda\geq\beth_{(2^{LS(\mathcal{K})})^{+}}\) (i.e., it has only one object of some high enough internal size up to isomorphism), then \(\mathcal{K}\) is \(\lambda^{\prime}\)-categorical for every \(\lambda^{\prime}\geq\beth_{(2^{LS(\mathcal{K})})^{+}}\). When considering cardinalities of models of infinitary theories \(\mathbb{T}\) of \(\mathcal{L}_{\kappa,\theta}\) that axiomatize \(\mathcal{K}\), the result implies, under \(SCH\), the following infinitary version of Morley's categoricity theorem:

**Theorem 1.1**.: _(Morley's categoricity theorem for infinitary theories) Let \(\phi\) be a \(\mathcal{L}_{\kappa,\theta}\) sentence whose category of models and \(\mathcal{L}_{\kappa,\theta}\)-elementary embeddings has directed colimits. Let \(S\) be the class of cardinals \(\lambda\) which are of cofinality at least \(\theta\) but are not successors of cardinals of cofinality less than \(\theta\). Assume the weakening of the Singular Cardinal Hypothesis \(SCH_{\theta,\geq 2^{<\theta}}\). Then, if \(\phi\) is \(\lambda\)-categorical for some \(\lambda\geq\beth_{(2^{\kappa})^{+}}\) in \(S\), then \(\phi\) is \(\lambda^{\prime}\)-categorical for every \(\lambda^{\prime}\geq\beth_{(2^{\kappa})^{+}}\) in \(S\). Moreover:_

1. _if the directed colimits are concrete, we can spare the assumption_ \(SCH_{\theta,\geq 2^{<\theta}}\) _and take_ \(S\) _as the class of all cardinals._
2. _if_ \(\phi\) _is compact and the morphisms of our category are_ \(\mathcal{L}_{\omega,\omega}\)_-elementary embeddings, we can replace_ \(\beth_{(2^{\kappa})^{+}}\) _with_ \(\kappa\)_._

Here \(SCH_{\theta,\geq 2^{<\theta}}\) is defined as "for all \(\mu\geq 2^{<\theta}\) there is a set of cardinals \(\lambda_{i}\leq\mu\) unbounded below \(\mu\) such that, for each \(i\), \(\nu^{<\theta}\leq\lambda_{i}\) for all \(\nu<\lambda_{i}\)"; see Remark 2.3 of [14]. Also, we know from [13] that there are examples showing that the exceptions in the class \(S\) are needed. The case \(\theta=\omega\) in Theorem 1.1 is Shelah categoricity conjecture for \(\mathcal{L}_{\kappa,\omega}\), since in this case \(SCH_{\omega,\geq 2^{<\omega}}\) is provable in \(ZFC\). When the directed colimits are concrete, since we restrict to monomorphisms, the result is Shelah categoricity conjecture for AEC's, as \(SCH_{\theta,\geq 2^{<\theta}}\) can also be removed by the methods of [13]. The case when \(\phi\) is compact (i.e., when it has the property that \(\phi\) is consistent with an arbitrary set of first-order finitary formulas if and only if it is consistent with each of its finite subsets, see [15]) is precisely a proper generalization of Morley's categoricity theorem (for countable first-order theories) and of Shelah's categoricity theorem (for uncountable first-order theories). Compact sentences of infinitary logic form a much wider class than these two particular cases, since they include (but are not limited to) all conjunctive sentences (i.e., sentences in which only conjunctions are infinitary while disjunctions are finitary). Thus, Theorem 1.1 is a vast generalization of those conjectures and results to the realm of infinite quantifier theories and provides new proofs of the known theorems for finitary first-order theories. As it turns out, the existence of directed colimits is what allows for a smooth classification theory.
The main tool for this will be Theorem 3.2, which at the same time extends work of Shelah for \(\mathcal{L}_{\omega_{1},\omega}\) showing, under the Weak Generalized Continuum Hypothesis (\(WGCH\)), that categoricity in the first \(\omega\) cardinals implies categoricity everywhere (see [13]). We remove here the set-theoretic hypothesis and generalize this result to AEC's, at the price of asking for categoricity in the first \(\beth_{\omega}\) cardinals. By the example of Shelah and Villaveces in [12], this seems to be close to optimal, since they showed that categoricity can fail above \(\beth_{n}(\lambda)\) for any \(n\in\omega\) while holding at the first \(n\) cardinals above the Lowenheim-Skolem number \(\lambda\) (though it is open whether categoricity holds up to \(\beth_{n}(\lambda)\) or the gap could be reduced further). Theorem 3.2 also uses higher dimensional amalgamation properties, which are shown to be a consequence of categoricity by means of a simple categorical proof, thereby simplifying the methods of [10]. Finally, we state the classification of categoricity spectra in AEC's, in \(ZFC\), and assuming \(SCH\) also in accessible categories with directed colimits (the set-theoretic assumption is needed to guarantee that the existence spectrum contains an end tail of cardinals). This uses Lemma 2.1, some weaker versions of which in the context of AEC's have appeared in the literature. We give here a categorical proof based, among other things, on a form of Lawvere's duality for algebraic theories. The proof extends the result to \(\mu\)-AEC's with directed colimits, and greatly simplifies the arguments given for AEC's, in such a way that it can be applied to deduce eventual categoricity without needing to use amalgamation, using instead an observation on the double negation topology. As a word of warning, we emphasize that all the methods, results and notation from the author's previous paper [10] are assumed here throughout, so the reader is advised to go through that paper first before continuing with this sequel.

## 2 Saturation and stability

We start by stating the following lemma of independent interest:

**Lemma 2.1**.: _Let \(\mathcal{K}\) be a \(\mu\)-AEC with directed colimits and amalgamation that is \(\rho\)-stable for each \(\rho<\kappa\). Then \(\kappa\)-saturated models are closed under directed colimits._

Proof.: Consider the topos \(\mathbf{Set}[\mathbb{T}_{\kappa}^{B}]_{\lambda}/[M,-]\cong\mathbf{Set}^{\mathcal{K}_{\geq\kappa,<\lambda}^{B}}/[M,-]\cong\mathbf{Set}^{M/\mathcal{K}_{\geq\kappa,<\lambda}^{B}}\) (where \(\mathcal{K}_{\geq\kappa,<\lambda}^{B}\) consists of the models in \(\mathcal{K}_{\geq\kappa,<\lambda}\) and all its \(\kappa\)-Boolean homomorphisms, and where \(\mathbb{T}_{\kappa}^{B}\) is \(\mathbb{T}_{\kappa}\) plus all those instances of excluded middle for \(\kappa\)-coherent formulas). We have a stable surjection \(\mathbf{Set}^{M/\mathcal{K}_{\geq\kappa,<\lambda}^{B}}\twoheadrightarrow\mathbf{Set}^{M/\mathcal{K}_{\geq\kappa,<\lambda}}\); this can be seen by considering first the stable surjection \(\mathbf{Set}^{\mathcal{K}_{\geq\kappa,<\lambda}^{B}}\cong\mathbf{Set}[\mathbb{T}_{\kappa}^{B}]_{\lambda}\twoheadrightarrow\mathbf{Set}[\mathbb{T}_{\kappa}]_{\lambda}\cong\mathbf{Set}^{\mathcal{K}_{\geq\kappa,<\lambda}}\).
Then we consider the pullback functor to the slice \(\mathbf{Set}^{\mathcal{K}_{\geq\kappa,<\lambda}}\twoheadrightarrow\mathbf{Set}[\mathbb{T}_{\kappa}]_{\lambda}/[M,-]\cong\mathbf{Set}^{M/\mathcal{K}_{\geq\kappa,<\lambda}}\), which is a geometric morphism along whose direct image we take a (pseudo-)pullback. This (pseudo-)pullback is precisely \(\mathbf{Set}[\mathbb{T}_{\kappa}^{B}]_{\lambda}/[M,-]\cong\mathbf{Set}^{M/\mathcal{K}_{\geq\kappa,<\lambda}^{B}}\), as can be verified using the universal property of the slice. More generally, if we have now a sequence of embeddings \(M_{0}\to M_{1}\to\cdots\) with directed colimit \(M\), then \(\mathbf{Set}^{M/\mathcal{K}_{\geq\kappa,<\lambda}}\) will be the limit of the chain formed by the \(\mathbf{Set}^{M_{i}/\mathcal{K}_{\geq\kappa,<\lambda}}\) and induced by those embeddings. Since pullbacks preserve limits, this implies that \(\mathbf{Set}^{M/\mathcal{K}_{\geq\kappa,<\lambda}^{B}}\) will be the limit of the chain formed by the \(\mathbf{Set}^{M_{i}/\mathcal{K}_{\geq\kappa,<\lambda}^{B}}\); in particular (considering functors from the presheaves to \(\mathbf{Set}\) preserving limits and colimits), the Cauchy completion of the slice \(M/\mathcal{K}_{\geq\kappa,<\lambda}^{B}\) is the (pseudo-)limit in \(\mathcal{C}at\) of the Cauchy completions of the slices \(M_{i}/\mathcal{K}_{\geq\kappa,<\lambda}^{B}\) (note, in turn, that the Cauchy completions of the slices are equivalent to the slices of the Cauchy completion \(\overline{\mathcal{K}_{\geq\kappa,<\lambda}^{B}}\)). Note that for any \(\kappa\)-small model \(P\) and \(\kappa\)-geometric theory \(\mathbb{S}_{\kappa}\) of models of size at least \(\kappa\) containing \(P\), the pullback, in the \(2\)-category of \(\kappa^{+}\)-toposes and \(\kappa^{+}\)-geometric morphisms, of the double negation subtopos \(\mathcal{S}\) of \(\mathbf{Set}[\mathbb{S}_{\kappa}]_{\kappa^{+}}\) along the surjection \(s:\mathbf{Set}[\mathbb{S}_{\kappa}^{B}]_{\kappa^{+}}\twoheadrightarrow\mathbf{Set}[\mathbb{S}_{\kappa}]_{\kappa^{+}}\) must be \(\mathcal{S}\) itself, so \(\mathcal{S}\) embeds into \(\mathbf{Set}[\mathbb{S}_{\kappa}^{B}]_{\kappa^{+}}\). Moreover, the embedding is dense: take a nonzero subterminal object \(A\) in \(\mathbf{Set}[\mathbb{S}_{\kappa}^{B}]_{\kappa}\); then it is nonzero in some model \(M\) which, by density, must embed into the model of size \(\kappa\). Then, if \(\mathcal{M}\) is the category of models of \(\mathbb{S}\), the \(\kappa^{+}\)-coherent sentence \(\psi=\lim ev_{\phi_{i}}\) in \(\mathbf{Set}^{\mathcal{M}_{\kappa}^{B}}\), where \(\lim ev_{\phi_{i}}\cong[M,-]\) in \(\mathbf{Set}^{\mathcal{M}_{\kappa}}\), is nonzero in \(\mathbf{Set}^{\mathcal{M}_{\kappa}^{sat}}\) and it implies \(A\), which is thus also nonzero (and thus nonzero in \(\mathcal{S}\), as we claimed). Note also that, as a side consequence of this proof, double negation commutes with \(\kappa^{+}\)-small conjunctions in \(\mathbf{Set}^{\mathcal{M}_{\kappa}^{B}}\). Assume now that all \(M_{i}\) are \(\kappa\)-saturated. Without loss of generality we can also assume that \(\kappa=\delta^{+}\) is a successor, since for limit \(\kappa\) the saturated model is a directed colimit of smaller saturated models. Let us now prove that \(M\) must be \(\kappa\)-closed (whence also \(\kappa\)-saturated).
So consider an embedding \(f:M\hookrightarrow N\); since \(\mathcal{K}\) is \(\rho\)-stable with respect to Galois types over some \(\delta\)-saturated submodel \(P\), it is \(\rho\)-stable with respect to \(\kappa\)-Boolean types of the same kind, so that an application of the omitting types theorem from [10] to the \(\kappa\)-Boolean theory of models of size at least \(\kappa\) containing \(P\), \(\mathbb{S}^{B}_{\kappa}\), shows that all subobject lattices of \((\mathbf{x},\top)\) in \(\mathbf{Set}[\mathbb{S}^{B}_{\kappa}]_{\kappa}\), for \(\mathbf{x}\) a nonempty finite tuple, are atomic and thus Boolean. On the other hand, note that \(\mathbf{Set}[\mathbb{S}_{\kappa}]_{\kappa}\) is two-valued since it is a subtopos of \(\mathbf{Set}[\mathbb{S}_{\delta}]_{\kappa}\) and that this latter is equivalent to the slice \(\mathbf{Set}^{\mathcal{K}_{\delta}}/\phi_{P}\), where \(\phi_{P}\) is the \(\kappa\)-geometric existential sentence corresponding to the diagram of \(P\), which is two-valued since \(\phi_{P}\) is an atom, as every model from \(\mathcal{K}_{\delta}\) embeds in \(P\). Therefore, the mentioned subobject lattices in \(\mathbf{Set}[\mathbb{S}^{B}_{\kappa}]_{\kappa}\) coincide with those in \(\mathbf{Set}[\mathbb{S}_{\kappa}]_{\kappa}\), and this entails, in particular, that \(\mathbf{Set}[\mathbb{S}^{B}_{\kappa}]_{\kappa}\) is two-valued. This readily implies that the colimit coprojections as well as their composition with \(f\) are \(\kappa\)-Boolean, and so we have \(\kappa\)-Boolean embeddings \(M_{i}\hookrightarrow N\), which induce a cone between the slices \(N/\overline{\mathcal{K}^{B}_{\geq\kappa,<\lambda}}\) and \(M_{i}/\overline{\mathcal{K}^{B}_{\geq\kappa,<\lambda}}\). By the universal property of the limit, there is an induced functor \(N/\overline{\mathcal{K}^{B}_{\geq\kappa,<\lambda}}\to M/\overline{\mathcal{K}^{B}_{\geq\kappa,<\lambda}}\), which provides a natural transformation \([N,-]\to[M,-]\). By Yoneda, this must correspond to a morphism \(M\to N\) in \(\mathcal{K}^{B}_{\geq\kappa,<\lambda}\), and since this must be \(f\), it follows that \(f\) is \(\kappa\)-closed, as we wanted to show.

## 3 Categoricity and tameness

We start by showing that in any \(\mu\)-AEC with amalgamation and no maximal models, categoricity in a high enough cardinal implies eventual tameness. We consider the same setup of section 8 in [10], which we reproduce for the sake of convenience. Given a \(\mu\)-AEC \(\mathcal{K}\) with Lowenheim-Skolem number \(\kappa\) and \(\mu\leq\kappa^{+}\), following Baldwin-Boney-Vasey, we add a \(\kappa^{+}\)-small arity predicate \(P\) whose interpretation in a model \(M\) consists of the image of the underlying structure of a model \(N\) of size \(\kappa\) embedded in \(M\) through a morphism in the \(\mu\)-AEC. This particular expansion, which gives rise to an isomorphic AEC, has the property that morphisms coincide with substructure embeddings.
Moreover, its models of size at least \(\kappa\) can be axiomatized as follows, extending further the language with the symbol \(\subseteq\):

\[\top\vdash_{\mathbf{x}}\exists\mathbf{y}\left(\bigvee_{M_{0}\in S}\psi_{M_{0}}(\mathbf{y})\wedge\mathbf{x}\subseteq\mathbf{y}\wedge P(\mathbf{y})\right)\]

Here \(S\) is a skeleton of the subcategory of models of size \(\kappa\), \(T\) is the set of pairs \((M_{0},M_{1})\) with a morphism in the \(\mu\)-AEC and \(M_{0},M_{1}\in S\), while \(\psi_{M_{0}},\psi_{M_{0},M_{1}}\) are conjunctions of atomic and negated atomic formulas of the extended language such that \(\psi_{M_{0}}(\mathbf{z})\) holds if and only if \(\mathbf{z}\) is isomorphic to \(M_{0}\), and \(\psi_{M_{0},M_{1}}(\mathbf{z},\mathbf{w})\) holds if and only if \((\mathbf{z},\mathbf{w})\) is isomorphic to \((M_{0},M_{1})\). Assuming now categoricity at \(\kappa\), we can get an axiomatization of an isomorphic \(\mu\)-AEC which can be entirely rewritten through sequents in the \((2^{\kappa})^{+}\)-\(Reg_{\neg}\) fragment. This is an intuitionistic fragment of first-order logic which contains no disjunctions, obtained from the \((2^{\kappa})^{+}\)-regular fragment by adding \(\bot\), together with the axioms \(\bot\vdash_{\mathbf{x}}\phi\) and the axioms for \(\neg\) that make it into a negation operator. Indeed, in the first sequent above the disjunction reduces to a single disjunct since we have categoricity at \(\kappa\), while the last three sequents above have the general form of universal sentences \(\forall\mathbf{z}\bigvee_{i\in I}\bigwedge_{j\in J}\psi_{ij}\), and each such sentence is equivalent to the set of sequents \(\{\exists\mathbf{z}\bigwedge_{i\in I}\neg\psi_{if(i)}\vdash\bot\}_{f\in J^{I}}\) (since, by distributivity, \(\neg\bigvee_{i\in I}\bigwedge_{j\in J}\psi_{ij}\) is equivalent to \(\bigvee_{f\in J^{I}}\bigwedge_{i\in I}\neg\psi_{if(i)}\)). The \((2^{\kappa})^{+}\)-\(Reg_{\neg}\) fragment contains the \((2^{\kappa})^{+}\)-\(Reg_{\bot}\) subfragment, not containing the symbol \(\neg\). The syntactic category \(\mathcal{C}\) of any \((2^{\kappa})^{+}\)-\(Reg_{\neg}\) theory can be studied through the category \(\mathcal{K}_{\geq(2^{\kappa})^{+}}^{\tau}\) of its \((2^{\kappa})^{+}\)-\(Reg_{\bot}\) models (models of the \((2^{\kappa})^{+}\)-\(Reg_{\bot}\) internal theory of \(\mathcal{C}\), also known as the \((2^{\kappa})^{+}\)-\(Reg_{\bot}\) Morleyization of the \((2^{\kappa})^{+}\)-\(Reg_{\neg}\) theory). These latter are in particular \((2^{\kappa})^{+}\)-regular models for the extended signature in which there is an extra propositional symbol \(\bot\) and one predicate symbol \(S\) for each negated atomic formula \(\neg R\) and where the axioms of the theory contain all axioms obtained from formally replacing \(\neg R\) by \(S\) in each \((2^{\kappa})^{+}\)-\(Reg_{\neg}\) axiom and, additionally, all those axioms of the form \(\bot\vdash_{\mathbf{x}}\phi\) and \(R\wedge S\vdash_{\mathbf{x}}\bot\). If \((\mathcal{C})_{\lambda^{+}}^{\tau}\) is the syntactic category of the \(\lambda^{+}\)-\(Reg_{\bot}\) theory with the same axioms as the \((2^{\kappa})^{+}\)-\(Reg_{\bot}\) theory of \(\mathcal{C}\), then its \(\lambda^{+}\)-classifying topos \(\mathcal{S}h((\mathcal{C})_{\lambda^{+}}^{\tau},\tau)\) (where \(\tau\) is the \(\lambda^{+}\)-\(Reg_{\bot}\) coverage) will be precisely equivalent to the presheaf topos \(\mathbf{Set}^{\mathcal{K}_{\geq(2^{\kappa})^{+},\leq\lambda}^{\tau}}\), as can be seen as a special case of Theorem 4.1 from [11] when \(\lambda\) is big enough.
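As a sanity check of the sequent translation used above (our illustration, not from the paper), consider the smallest nontrivial instance \(I=J=\{1,2\}\). The universal sentence \(\forall\mathbf{z}\,[(\psi_{11}\wedge\psi_{12})\vee(\psi_{21}\wedge\psi_{22})]\) fails exactly when its negation \(\exists\mathbf{z}\,[(\neg\psi_{11}\vee\neg\psi_{12})\wedge(\neg\psi_{21}\vee\neg\psi_{22})]\) holds, and distributing the conjunction over the two disjunctions gives

\[\exists\mathbf{z}\bigvee_{f\in J^{I}}\bigl(\neg\psi_{1f(1)}\wedge\neg\psi_{2f(2)}\bigr),\]

so the sentence is equivalent to the four sequents

\[\bigl\{\exists\mathbf{z}\,(\neg\psi_{1f(1)}\wedge\neg\psi_{2f(2)})\vdash\bot\bigr\}_{f\in J^{I}},\]

one for each of the \(|J|^{|I|}=4\) choice functions \(f\).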
In particular, the embedding \((\mathcal{C})_{\lambda^{+}}^{\tau}\hookrightarrow\mathbf{Set}^{\mathcal{K}_{\geq(2^{\kappa})^{+},\leq\lambda}^{\tau}}\) will preserve \(\neg\) since it can be identified with Yoneda embedding, which preserves any right adjoint to pullback functors that might exist (see [1]). Using the compactness of \((2^{\kappa})^{+}\)-\(Reg_{\bot}\) logic, it is also easy to verify that the canonical functor \(F:\mathcal{C}\to(\mathcal{C})_{\lambda^{+}}^{\tau}\) also preserves \(\neg\). For if given a \(\lambda^{+}\)-regular sentence \(\exists\mathbf{x}\bigwedge_{i<\lambda}\phi_{i}\) we have \(\exists\mathbf{y}\bigwedge_{i<\lambda}\phi_{i}\wedge R\vdash_{\mathbf{x}}\bot\) in \(\lambda^{+}\)-\(Reg_{\bot}\) logic, there must be a \((2^{\kappa})^{+}\)-regular sentence \(\exists_{i\in T}y_{i}\bigwedge_{i\in T}\phi_{i}\), for some subset \(T\subset\lambda\) of size at most \(2^{\kappa}\), such that \(\exists_{i\in T}y_{i}\bigwedge_{i\in T}\phi_{i}\wedge R\vdash_{\mathbf{x}}\bot\) in \((2^{\kappa})^{+}\)-\(Reg_{\bot}\) logic, from which our result follows. It follows, in fact, that the evaluation functor, the composite of Yoneda embedding with \(F\), preserves \(\neg\),1 which in particular means that the interpretation of \(S\) in the presheaf topos will be precisely that of \(\neg R\). Note that, if we add to the \((2^{\kappa})^{+}\)-\(Reg_{\neg}\) axiomatization above all instances of excluded middle for atomic formulas, we get an axiomatization of (an isomorphic copy of) the \(\mu\)-AEC, a fact which we will use in the following:

Footnote 1: It is also possible to give a direct proof of this fact, using the compactness of \((2^{\kappa})^{+}\)-\(Reg_{\bot}\) logic, with the same arguments as in the proof of Joyal's theorem, according to which \(ev:\mathcal{C}\to\mathbf{Set}^{Mod(\mathcal{C})}\) preserves universal quantification when \(Mod(\mathcal{C})\) is the category of coherent models of the Heyting category \(\mathcal{C}\). This is worked out in the author's PhD thesis for the more general disjunction-free fragment.

**Theorem 3.1**.: _Let \(\mathcal{K}\) be a \(\mu\)-AEC with directed colimits which is categorical in \(\kappa\) and in \(\lambda>2^{\kappa}\). Then \(\mathcal{K}\) is \((2^{\kappa},<\lambda)\)-tame._

Proof.: Note first that we can take the model \(M\) of size \(\lambda\) as a monster model for \(\mathcal{K}_{\geq(2^{\kappa})^{+},<\lambda}\), since by the arguments of [11], we have amalgamation there. Now Galois types in \(M\) correspond to \(\lambda^{+}\)-geometric syntactic types, as shown in [11] (indeed, Galois types over \(M_{0}\) correspond to syntactic types containing the complete formula that realizes the type of the tuple given by the underlying set of \(M_{0}\)). Thus, it is enough to show that a \(\lambda\)-coherent existential sentence of the form \(\exists\mathbf{x}\phi(\mathbf{x},\mathbf{d},\mathbf{c})\), with constants \(\mathbf{c}\) from the submodel \(M_{0}\), the set of parameters of the type, where \(\mathbf{d}\) is a finite tuple and where \(\phi\) is a conjunction of atomic formulas, holds in \(M\) if (and only if) every \((2^{\kappa})^{+}\)-small approximation \(\exists\mathbf{x}^{\prime}\psi(\mathbf{x}^{\prime},\mathbf{d},\mathbf{c}^{\prime})\) holds there. So suppose this latter condition holds.
Let \(N\) be a \((2^{\kappa})^{+}\)-pure submodel containing \(\mathbf{c}\) and \(\mathbf{d}\), and consider the following theory: to the \((2^{\kappa})^{+}\)-\(Reg_{\bot}\) Morleyization of the sequents in \(Reg_{\neg}\) logic that axiomatize \(\mathcal{K}\), add the diagram of \(N\), sequents expressing those negated existential sentences with constants from \(N\) holding there, and sequents expressing that the \((2^{\kappa})^{+}\)-small approximations \(\exists\mathbf{x}^{\prime}\psi(\mathbf{x}^{\prime},\mathbf{d},\mathbf{c}^{\prime})\) hold. Clearly, every \((2^{\kappa})^{+}\)-small subset has a model (the obvious expansion of the monster model) and so the whole theory has a \((2^{\kappa})^{+}\)-\(Reg_{\bot}\) model. This means that there is a \((2^{\kappa})^{+}\)-pure embedding of \(N\) into a \((2^{\kappa})^{+}\)-\(Reg_{\bot}\) model of \(\exists\mathbf{x}\phi(\mathbf{x},\mathbf{d},\mathbf{c})\). By a similar proof to that of Grossberg conjecture in [11], we can see that \(M\) is injective with respect to \((2^{\kappa})^{+}\)-pure embeddings in \(\mathcal{K}_{\geq(2^{\kappa})^{+},<\lambda}^{r}\), and thus we get that \(\exists\mathbf{x}\phi(\mathbf{x},\mathbf{d},\mathbf{c})\) must hold there, as we wanted to prove.

Assuming categoricity in a sufficiently large initial segment, we can also derive tameness:

**Theorem 3.2**.: _Let \(\mathcal{K}\) be a \(\mu\)-AEC with directed colimits which is \(\mu\)-categorical for \(\kappa\leq\mu<\beth_{\omega}(\kappa)\). Then \(\mathcal{K}\) is \(2^{\kappa}\)-tame. In particular, categoricity in \([\kappa,\beth_{\omega}(\kappa))\) implies categoricity everywhere above \(\kappa\)._

Proof.: Note first that our whole analysis before Theorem 3.1 could be upgraded to the case in which we know that we have categoricity in \(\mu\) for \(\kappa\leq\mu<\chi:=\beth_{\omega}(\kappa)\). In this case, it is possible to have an axiomatization in \(\chi^{+}\)-\(Reg_{\neg}\) logic by adding a \(\mu^{+}\)-arity predicate \(P_{\mu}\) for each \(\mu<\chi\) and proceeding similarly to the axiomatization above. Let \(\mathbb{T}^{r}\) be the \(\chi^{+}\)-\(Reg_{\bot}\) Morleyization of the following theory in the disjunction-free fragment: to the \(\chi^{+}\)-\(Reg_{\neg}\) axiomatization of the models of size at least \(\chi\), we add all instances of the axioms \(\bigwedge_{i<\chi}\neg\phi_{i}\vdash_{\mathbf{x}}\neg\neg\bigwedge_{i<\chi}\phi_{i}\), where the \(\phi_{i}\) are \(<\chi\)-\(Reg_{\bot}\) formulas. Let also \(\mathcal{K}^{B,r}_{\geq\chi,<\lambda}\) be its category of \(\chi^{+}\)-\(Reg_{\bot}\) models of size at least \(\chi\) and less than \(\lambda\) with \(\chi\)-Boolean homomorphisms. It is enough to prove that the subtopos \(\mathbf{Set}^{\mathcal{K}_{\geq\chi,<\lambda}}\hookrightarrow\mathbf{Set}^{\mathcal{K}^{B,r}_{\geq\chi,<\lambda}}\) is dense. As in the proof of Grossberg conjecture from [10], we know similarly that this subtopos is obtained by adding the \(\lambda\)-topology generated by instances of excluded middle, so we just need to prove that the sequents \(\bigwedge_{i<\lambda}\neg\phi_{i}\vdash_{\mathbf{x}}\neg\neg\bigwedge_{i<\lambda}\phi_{i}\) hold in \(\mathbf{Set}^{\mathcal{K}^{B,r}_{\geq\chi,<\lambda}}\), for which it is in turn enough to prove that \(\mathcal{K}^{B,r}_{\geq\chi,<\lambda}\) has directed bounds. Since it has bounds of chains of cofinality at least \(\chi^{+}\), it is enough to consider chains of cofinality less than \(\chi^{+}\).
So let \(\{M_{i}\}_{i<\alpha}\) be such a chain; we need to prove that there is a model containing the union of the diagrams in the chain. By \(\chi^{+}\)-\(Reg_{\bot}\) compactness, we can assume without loss of generality that all models in the chain have size at most \(\chi\), in which case it is enough to take a \(\chi^{+}\)-saturated model as a directed bound.2 To show such a model exists, we will prove in the next paragraph that \(\mathcal{K}^{B,r}_{\chi}\) has amalgamation at \(\chi\). On the other hand, note that the double negation subtopos of \(\mathbf{Set}^{\mathcal{K}^{B,r}_{\chi}}\) satisfies all sequents in \(\mathbb{T}^{r}\) due to the axioms added to the axiomatization, and that it has a \(\chi^{+}\)-point since it \(\chi^{+}\)-classifies a \(\chi^{+}\)-\(Reg_{\bot}\) theory: it is obtained as the quotient of \(\mathbb{T}^{r}\) by all sequents expressing that morphisms \([M,-]\to[M^{\prime},-]\) are epimorphisms. Thus, if \(M\) is a \(\chi^{+}\)-point of the double negation subtopos, i.e., a \(\chi^{+}\)-saturated model of \(\mathbb{T}^{r}\), then \(M\) is clearly a directed bound for the given chain, as we wanted.

Footnote 2: If the chain has cofinality bigger than \(\omega\), at limit levels we use the existence of weakly initial models of the \(\chi^{+}\)-\(Reg_{\bot}\) theory of the union of models below that level.

It remains to prove our claim. This can be seen through an argument using 3-amalgamation of models of \(\mathbb{T}^{r}_{n}:=\mathbb{T}^{r}\cap\mathcal{L}_{\beth_{n}(\kappa)^{+}}\) of size \(\beth_{n}(\kappa)^{+}<\chi\) and \(\beth_{n}(\kappa)^{+}\)-pure morphisms. First, 3-amalgamation follows by taking as the amalgam at each level \(\beth_{n}(\kappa)^{+}\) a weakly initial model of the pushout, in the doctrine of \(\beth_{n}(\kappa)^{+}\)-coherent categories, of the \(\beth_{n}(\kappa)^{+}\)-theories of the models that constitute the amalgamation diagram, which must be consistent since amalgamation holds. Since the \(\beth_{n}(\kappa)^{+}\)-theory of each pushout is axiomatized by \(\beth_{n}(\kappa)^{+}\)-\(Reg_{\bot}\) axioms, they have weakly initial models \(M_{n}\). Let us see that we can choose the homomorphisms between them to form a directed diagram whose colimit diagram \(D\) is consistent with \(\mathbb{T}^{r}\). Indeed, the diagram of the smallest weakly initial model is consistent with \(\mathbb{T}^{r}\) and so it has a model \(M\); it suffices then to add to each \(\beth_{n}(\kappa)^{+}\)-theory of the pushouts also the diagrams of the weakly initial models of size \(\beth_{n-1}(\kappa)^{+}\) that it contains: this provides a canonical homomorphism and we can use again weak initiality of these and successively embed them into \(M\). Whence \(M\) is the desired amalgam. This concludes the proof.

## 4 Shelah categoricity conjecture for AEC's

We now get to the following:

**Theorem 4.1**.: _(Shelah categoricity conjecture for AEC's). Let \(\mathcal{K}\) be an AEC. If \(\mathcal{K}\) is categorical in some \(\lambda\geq\beth_{(2^{LS(\mathcal{K})})^{+}}\), then \(\mathcal{K}\) is \(\lambda^{\prime}\)-categorical for every \(\lambda^{\prime}\geq\beth_{(2^{LS(\mathcal{K})})^{+}}\)._

Proof.: Assume that the AEC is \(\lambda\)-categorical for some \(\lambda\geq\beth_{(2^{LS(\mathcal{K})})^{+}}\). Consider the double negation subtopos of \(\mathbf{Set}^{\mathcal{K}_{\kappa}}\), corresponding to the dense topology, when \(\kappa=LS(\mathcal{K})\).
The first observation is that the dense topology is generated by those covers \(\{M\to N\}\) [...] to the same thing, a compatible family \([A,\pi(M)]\to G(\pi(M))\) indexed by \(\mathcal{K}^{\prime}\)-models \(M\). But by naturality this latter family completely determines a compatible family \([A,M]\to G(\pi(M))\) (given that \(\pi(\pi(M))=\pi(M)\)), which is the same as an element in \([F,\pi^{*}(G)]\). Note as well that \(\pi^{*}\) must be an embedding which is clearly dense, and so both presheaf toposes will have the same \(\lambda\)-saturated models. Consider now the \(\kappa^{+}\)-AEC \(\mathcal{K}^{\prime\prime}\) consisting of \(\kappa^{+}\)-saturated models. By Lemma 2.1, we can see that \(\mathcal{K}^{\prime\prime}\) inherits the concrete directed colimits of \(\mathcal{K}^{\prime}\). Also, since Galois types over submodels of size \(\mu\) correspond to \(\mu^{+}\)-geometric types containing the complete formula satisfied by the underlying set of the submodel, we can use the same proof idea of Proposition 3.5 from [10] to derive stability from categoricity of \(\mathcal{K}\) in \(\lambda\), for which it is enough to note that the usual Ehrenfeucht-Mostowski model \(M\) with \(\lambda\) indiscernibles, which is a directed colimit (in \(\mathcal{K}\)) of those with finitely many indiscernibles (which are as well models of \(\mathcal{K}\)), omits a type over some submodel if and only if \(\pi(M)\) omits such a type. Indeed, the directed colimit (in \(\mathcal{K}^{\prime\prime}\)) must embed into each one of the models which is a Skolem hull of the directed union of models, and thus a realization of the type there would entail a realization of the type in all possible Skolem hulls, whence by the transfinite transitivity rule up to double negation the type would be already realized in the directed union (i.e. in \(M\)).5 Finally, by Theorem 4.1 applied to the AEC \(\mathcal{K}^{\prime\prime}\) we thus get categoricity in every \(\lambda\geq\kappa^{+}\), and since by Morley's omitting types theorem applied to \(\mathcal{K}^{\prime}\) we have that all models of size at least \(\nu\) for some \(\nu<\beth_{(2^{\kappa})^{+}}\) belong to \(\mathcal{K}^{\prime\prime}\), we get in particular that \(\mathcal{K}\) is categorical in those cardinals in \(S\) above \(\beth_{(2^{\kappa})^{+}}\), as we wanted.

Footnote 5: Since our theories and types are \(\lambda\)-coherent, we can eliminate the double negation by conservativity of the classical fragment over the coherent fragment.

Finally, we consider the case in which the sentence defining the category of models is a compact sentence, and we consider those embeddings that are first-order elementary. This is a \(\mu\)-AEC with directed colimits, amalgamation and no maximal models. The corresponding finitely accessible category \(\mathcal{K}^{\prime}\) will also have, then, amalgamation and no maximal models. As we have proven above, \(\mathcal{K}^{\prime}\) will be eventually categorical.
Since by the considerations in the proof of Lemma 2.1 we have a stable surjection \(\mathbf{Set}^{M/\mathcal{K}^{B}_{\geq\kappa,<\lambda}}\twoheadrightarrow\mathbf{Set}^{M/\mathcal{K}^{\prime}_{\geq\kappa,<\lambda}}\), each model \(M\) of \(\mathcal{K}^{\prime}\) has a proper \(\omega\)-pure (in fact, \(\omega\)-Boolean) extension, as otherwise \(\mathbf{Set}^{M/\mathcal{K}^{B}_{\geq\kappa,<\lambda}}\) would be two-valued and Boolean, forcing \(\mathbf{Set}^{M/\mathcal{K}_{\geq\kappa,<\lambda}}\) to be two-valued and Boolean, which is impossible since \(M\) is not maximal. It follows that each model has an \(\omega\)-pure morphism into the model of size \(\lambda\), and thus every such model is \(\omega\)-saturated; in particular, \(\omega\)-coherent formulas are either \(0\) or \(1\). By the arguments in section 9 of [11], any sentence of the form \(\forall\mathbf{x}(\vartheta\to\eta)\), where \(\vartheta\) and \(\eta\) are \(\omega\)-coherent, which is valid in the \(LS(\mathcal{K})^{+}\)-saturated model is provable in the \(LS(\mathcal{K})^{+}\)-classifying topos of models of size at least \(LS(\mathcal{K})^{+}\) (note that \(\forall\mathbf{x}(\vartheta\to\eta)\) will also be equivalent to an \(\omega\)-coherent formula, as can be seen by compactness arguments using the definition of universal quantification in the syntactic category). This readily implies, as can be seen by the proof in [13] for conjunctive formulas in saturated models, that any \(\omega_{1}\)-coherent formula is equivalent to the conjunction of its approximations (in particular, \(\omega_{1}\)-coherent sentences are either \(0\) or \(1\)). This in turn allows us to prove that sentences of the form \(\forall\mathbf{x}(\vartheta\to\eta)\), where \(\vartheta\) and \(\eta\) are \(\omega_{1}\)-coherent, which are valid in the \(LS(\mathcal{K})^{+}\)-saturated model are provable in the \(LS(\mathcal{K})^{+}\)-classifying topos. Continuing with this process, we finally get that all \(LS(\mathcal{K})^{+}\)-coherent sentences are either \(0\) or \(1\), which is enough to get categoricity at \(LS(\mathcal{K})^{+}\) and hence everywhere above \(LS(\mathcal{K})^{+}\). This finishes the proof.

## 6 Classification of categoricity spectra

We end with:

**Theorem 6.1**.: _Let \(\mathcal{K}\) be a large \(\kappa\)-accessible category with directed colimits. Assume the Singular Cardinal Hypothesis \(SCH\) (only if the restriction to monomorphisms is not an AEC). Then the categoricity spectrum \(\mathcal{C}at(\mathcal{K})=\{\lambda\geq\kappa:\mathcal{K}\) is \(\lambda\)-categorical\(\}\) is one of the following:_

1. \(\mathcal{C}at(\mathcal{K})=\emptyset\)_._
2. \(\mathcal{C}at(\mathcal{K})=[\alpha,\beta]\) _for some_ \(\alpha,\beta\in[\kappa,\beth_{\omega}(\kappa))\)_._
3. \(\mathcal{C}at(\mathcal{K})=[\chi,\infty)\) _for some_ \(\chi\in[\kappa,\beth_{(2^{\kappa})^{+}})\)_._

Proof.: Our proof idea shares the same guidelines as the one in [20], except that the amalgamation hypothesis is not used and \(WGCH\) is replaced with \(SCH\) or, in AEC's, eliminated completely (all this at the price of replacing \(\kappa^{+\omega}\) with \(\beth_{\omega}(\kappa)\)). If case 1 does not occur, suppose first there is some categoricity cardinal \(\lambda\geq\beth_{\omega}(\kappa)\), and proceed as in the proof of Theorem 4.1 to define a \(\kappa^{+}\)-AEC with directed colimits categorical in \(\kappa^{+}\) and \(\lambda\).
By Theorems 3.1 and 3.2, it must be \(2^{\kappa^{+}}\)-tame and so must be the original category of models of \(\phi\), which allows us to conclude that case 3 occurs. If categoricity occurs only below \(\beth_{\omega}(\kappa)\), it must be a segment, as proven in [10]. Indeed, we do not need amalgamation by using the same argument as in the proof of Theorem 4.1. On the other hand, the assumption of maximal models in the proof of Theorem 9.1 in [10] can be dropped by building the tree of theories so that those theories \(\Gamma\) that admit a maximal model are not extended to \(\Gamma^{\prime}\) by adding a new constant, and so they skip this modification of the construction; in the end, in the tree of theories, branches reach the level \(\lambda\) or they reach a node in which the theory is classical (corresponding to a maximal model). In either case, instances of excluded middle can be shown to hold at the topos \(\mathbf{Set}[\mathbb{T}_{\kappa^{+}}]_{\kappa^{+}}\), making it Boolean. To sum up, categoricity cannot alternate, which leaves us in case 2 and thus finishes the proof.

Examples culled from the literature on AEC's show that each of the three cases in the classification can indeed occur (see e.g. the examples of [20]). The non-trivial case is 2, which occurs in the Shelah-Villaveces example in [21].
2304.07914
Reading multiplicity in unfoldings from epsilon-neighborhoods of orbits
We consider generic 1-parameter unfoldings of parabolic vector fields. It is known that the box dimension of orbits of their time-one maps is discontinuous at the bifurcation value. Here, we expand asymptotically the Lebesgue measure of the epsilon-neighborhoods of orbits of the time-one maps in a Chebyshev scale, uniformly with respect to the bifurcation parameter. We use the so-called Ecalle-Roussarie-type compensators. We read from the expansion the number of hyperbolic points born in the unfolding of the parabolic point (i.e. the codimension of the bifurcation).
Renato Huzak, Pavao Mardešić, Maja Resman, Vesna Županović
2023-04-16T23:14:15Z
http://arxiv.org/abs/2304.07914v1
# Reading multiplicity in unfoldings from epsilon-neighborhoods of orbits

###### Abstract.

We consider generic \(1\)-parameter unfoldings of parabolic vector fields. It is known that the box dimension of orbits of their time-one maps is discontinuous at the bifurcation value. Here, we expand asymptotically the Lebesgue measure of the \(\varepsilon\)-neighborhoods of orbits of the time-one maps in a Chebyshev scale, _uniformly with respect to the bifurcation parameter_. We use the so-called _Ecalle-Roussarie-type compensators_. We read from the expansion the number of hyperbolic points born in the unfolding of the parabolic point (i.e. the _codimension_ of the bifurcation).

Key words and phrases: unfoldings, epsilon-neighborhoods, compensators, Chebyshev scale

2020 Mathematics Subject Classification: 37G10, 34C23, 28A80, 37C45, 37M20

All authors are supported by the Croatian Science Foundation grant PZS-2019-02-3055 and the bilateral Hubert-Curien Cogito grant 2021-22. Maja Resman is supported by the Croatian Science Foundation grant no. UIP-2017-05-1020. Pavao Mardesic and Maja Resman also express their gratitude to the Fields Institute for supporting their stay in the scope of the _Thematic Program on Tame Geometry, Transseries and Applications to Analysis and Geometry 2022_. Pavao Mardesic was partially supported by EIPHI Graduate School (contract ANR-17-EURE-0002).

## 1. Introduction

We consider generic \(1\)-parameter unfoldings of parabolic vector fields \(X_{\nu}\), whose flow is given by

\[\frac{dx}{dt}=F(x,\nu),\ \nu\in\mathbb{R},\ \nu\to 0.\]

Here, two hyperbolic singular points are born from a parabolic singular point at \(\nu=0\). By \(f_{\nu}\), we denote the time-one map of \(X_{\nu}\) locally at a singular point. It was noted in [1, 4, 9] that the asymptotic expansion at \(\varepsilon=0\) of the Lebesgue measure \(\ell(\mathcal{O}_{f}(x_{0})_{\varepsilon})\) of the \(\varepsilon\)-neighborhood of an orbit of the Poincare map \(f\) of a vector field with the initial point \(x_{0}\) reveals some intrinsic properties of the field: in particular, the cyclicity in the Hopf bifurcation [1], or the cyclicity of a generic bifurcation of the saddle loop [9]. The same applies to a time-one map \(f\). The first term of the expansion of \(\ell(\mathcal{O}_{f}(x_{0})_{\varepsilon})\) and its coefficient are closely related to the notions of _box dimension_ and _Minkowski content_ of the orbit (see e.g. [7] for more details). In these papers a jump in the box dimension was observed at the bifurcation value. More precisely, the box dimension is equal to \(1/2\) for the parabolic orbit and \(0\) for the hyperbolic orbits. The underlying reason is that the asymptotic expansions of the Lebesgue measure \(\ell(\mathcal{O}_{f_{\nu}}(x_{0})_{\varepsilon})\) in \(\varepsilon\) are _not uniform_ with respect to the bifurcation parameter \(\nu\), and the limit \(\lim_{\nu\to 0}\) does not commute with the asymptotic expansion of \(\varepsilon\mapsto\ell(\mathcal{O}_{f_{\nu}}(x_{0})_{\varepsilon})\), as \(\varepsilon\to 0\). In particular, the Minkowski content tends to infinity as \(\nu\to 0\). The \(\varepsilon\)-neighborhood \(\mathcal{O}_{f_{\nu}}(x_{0})_{\varepsilon}\) consists of two parts: the tail and the nucleus. Here, for simplicity, we consider only the _tail_. Note that the Lebesgue measures of \(\varepsilon\)-neighborhoods of an orbit or of its tail carry the same information for real orbits; see [17] and [19].
In this paper, we study the simplest bifurcation of continuous systems on the real line, the so-called saddle-node bifurcation (see e.g. [5]). We provide an asymptotic expansion, _uniform_ in the parameter \(\nu\), of the Lebesgue measure of the \(\varepsilon\)-neighborhoods of attracting orbits \(\ell(\mathcal{O}_{f_{\nu}}(x_{0})_{\varepsilon})\) in appropriate _Chebyshev scales_. The _Chebyshev_ scales are generalizations of the Taylor scale, admitting a differentiation-division algorithm. For more details, see e.g. [8]. Our scale uses an _Ecalle-Roussarie-type compensator_ in the variable \(\varepsilon\) and the parameter \(\nu\); see [6]. In Theorems B.1 and B.2 we give uniform asymptotic expansions of \(\ell(\mathcal{O}_{f_{\nu}}(x_{0})_{\varepsilon})\) in an appropriate Chebyshev scale. In the main Theorem A (Section 1.1), we show a \(1-1\) correspondence between the asymptotic expansions of the time-one map \(f_{\nu}\) in the phase space and an asymptotic expansion of the Lebesgue measure of the tail of its orbit in the \(\varepsilon\)-space. This shows that we can read the codimension of the bifurcation (here \(2\)) from the uniform asymptotic expansion of the Lebesgue measure of the tail of an orbit of the time-one map, that is, by determining how many of the first terms of the scale _vanish_ at the moment of bifurcation. In Theorem C the expansions from Theorems B.1 and B.2 are regrouped so that terms from the same group give birth to the same term at the bifurcation value \(\nu=0\). The idea of uniform asymptotic expansions in Chebyshev scales using compensators was introduced in [6], where the Chebyshev scale for the first return map \(P_{\lambda}\), \(\lambda\in[0,\delta)\), in a generic unfolding of the saddle loop was given. The _codimension_ of the loop is brought into \(1-1\) correspondence with cyclicity. We treat here only \(1\)-parameter bifurcations, since a greater number of independent parameters is expected to generate the difficult problem of _independent compensators_. The same problem arises for polycycles with more resonant saddles of different hyperbolicity ratios, as compared to the saddle loops. See e.g. [15].

### The main results

We consider an analytic system

\[\frac{dx}{dt}=F(x,\nu), \tag{1.1}\]

with \(F\) real, with a non-hyperbolic singular point \(x=0\) at the bifurcation value \(\nu=0\) (i.e. \(F(0,0)=0\), \(F_{x}(0,0)=0\)), satisfying the generic assumptions:

\[F_{\nu}(0,0)\neq 0,\ F_{xx}(0,0)\neq 0. \tag{1.2}\]

Under these assumptions, the parabolic point at \(x=0\) bifurcates at \(\nu=0\) into two hyperbolic points on the real axis: one attracting and one repelling, for \(\nu\in(0,\delta)\) or \(\nu\in(-\delta,0)\), depending on the sign of \(F_{\nu}\) and \(F_{xx}\) in (1.2). For details, see Section 2.1. Take \(x_{0}\) in the attracting basin of the parabolic point, sufficiently close to \(0\). Consider the time-one map \(f_{\nu}\) of (1.1) and its orbit with initial point \(x_{0}\):

\[\mathcal{O}_{f_{\nu}}(x_{0}):=\{f_{\nu}^{n}(x_{0}):\ n\in\mathbb{N}_{0}\}.\]

By \(\mathcal{O}_{f_{\nu}}(x_{0})_{\varepsilon}\), we denote its \(\varepsilon\)-neighborhood, \(\varepsilon>0\). By [16],

\[\mathcal{O}_{f_{\nu}}(x_{0})_{\varepsilon}=N_{\varepsilon,\nu}\cup T_{\varepsilon,\nu},\]

where the so-called _tail_ \(T_{\varepsilon,\nu}\) is the union of finitely many disjoint intervals, and the so-called _nucleus_ \(N_{\varepsilon,\nu}\) the union of intervals that overlap. By \(\ell\) we denote the Lebesgue measure.
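As a concrete check of the generic assumptions (1.2) (our own worked example, using the model unfolding (2.1) introduced in Section 2.1 below), take \(F(x,\nu)=-x^{2}+\nu\). Then

\[F(0,0)=0,\qquad F_{x}(0,0)=-2x\big|_{(0,0)}=0,\qquad F_{\nu}(0,0)=1\neq 0,\qquad F_{xx}(0,0)=-2\neq 0,\]

so (1.2) holds, and for \(\nu>0\) the two singular points \(x=\pm\sqrt{\nu}\) are hyperbolic, since

\[F_{x}(\pm\sqrt{\nu},\nu)=\mp 2\sqrt{\nu}\neq 0.\]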
In [9, 18], we studied the length \(\ell(\mathcal{O}_{f_{\nu}}(x_{0})_{\varepsilon})=\ell(N_{\varepsilon,\nu})+\ell(T_{\varepsilon,\nu})\). However, the essential information is carried already by \(\ell(T_{\varepsilon,\nu})\) [19, 17]. Hence, we investigate here only this term. We now state the main theorem.

**Theorem A**.: _Let (1.1) be the generic \(1\)-parameter unfolding of a parabolic fixed point, satisfying the generic assumptions (1.2), and let \(f_{\nu}\) be its time-one map. Let \(\mathcal{O}_{f_{\nu}}(x_{0})\) be its attractive orbit._

_There exists a compensator variable \(\eta(\varepsilon,\nu)\) and an asymptotic expansion of the length of the tail \(\ell(T_{\nu,\varepsilon})\) in a Chebyshev scale, uniform in \(\nu\in[0,d)\), as \(\eta\to 0\)._

_There is a \(1-1\) correspondence between the expansion of the length \(\ell(T_{\nu,\varepsilon})\) in the \(\eta\) variable and the Taylor expansion of the displacement function \(g_{\nu}:=\mathrm{id}-f_{\nu}\) in the phase \(x\)-variable, in the following sense. For every value of the parameter \(\nu\), the number of vanishing terms of the expansions of \(\ell(T_{\nu,\varepsilon})\) and \(g_{\nu}\), in the corresponding scales, is the same._

Recall that the number of terms of a Chebyshev expansion for an unfolding that vanish at a bifurcation value gives the multiplicity of the zero point in the unfolding. Zero points of the displacement function \(g_{\nu}\) correspond to the fixed points of the first return map, that is, to the singularities of the vector field. As a direct consequence of Theorem A, the codimension of the bifurcation (equal to \(2\)) can be read from the expansion of \(\ell(T_{\varepsilon,\nu})\). To be more precise, the first two terms of the expansion vanish at the bifurcation value \(\nu=0\). The precise form of the expansion is given in Theorem B.1 for the model vector field case, and in Theorem B.2 for general vector fields of the form (1.1) under generic assumptions. Note that, for computational reasons, the expansions are given first for the continuous counterpart \(\ell^{c}(T_{\varepsilon,\nu})\) of the length and then, in Corollary 3.9, for the standard length \(\ell(T_{\varepsilon,\nu})\). Here, some additional oscillatory terms appear in the expansion due to the step-function nature of the discrete critical time separating the tail and the nucleus; see Subsection 2.2. Finally, in Theorem C in Section 4, the terms of the expansion are regrouped so that all terms that merge to the same term of the expansion at the bifurcation value \(\nu=0\) are put in one block. As a consequence, we get the asymptotic expansions, as \(\varepsilon\to 0\), of \(\ell(T_{\varepsilon,\nu})\) in the cases \(\nu>0\) and \(\nu=0\). The expansions have qualitatively different terms, which, in particular, results in a jump in the box dimension at the moment of bifurcation.

## 2. Notation and main objects

### Normal forms for the saddle-node bifurcation

The saddle-node bifurcation is a generic \(1\)-parameter bifurcation of \(1\)-dimensional vector fields. Indeed, by [2, Theorem 3.2], any real smooth system \(\frac{dx}{dt}=F(x,\nu)\) having at \(\nu=0\) a _non-hyperbolic_1 singular point \(x=0\) (i.e. undergoing a bifurcation of the singular point at \(\nu=0\)) and which satisfies the generic assumptions \(F_{xx}(0,0)\neq 0\) and \(F_{\nu}(0,0)\neq 0\) is locally _topologically_ equivalent to the system:

Footnote 1: _Non-hyperbolic_ means that \(F_{x}(0,0)=0\).

\[\frac{dx}{dt}=-x^{2}+\nu,\ \nu\in(-\delta,\delta). \tag{2.1}\]
Here, the singularity is translated to \(x=0\) for the bifurcation value of the parameter \(\nu=0\). Therefore, (2.1) is a good qualitative model for the dynamics of the original system. For \(\nu<0\) there are no real singular points. For \(\nu>0\), there are two singular hyperbolic points: the attracting \(\sqrt{\nu}\) and the repelling \(-\sqrt{\nu}\). For \(\nu=0\), the point \(x=0\) is a parabolic singular point, attractive from the right and repulsive from the left. Choose an initial point \(x_{0}\neq 0\). For small values of \(\nu\), \(x_{0}\) lies outside \([-\sqrt{\nu},\sqrt{\nu}]\). Note also that, for \(\nu>0\), by the translation \(y=x+\alpha\), where \(\alpha:=\sqrt{\nu}>0\), we get the _transcritical bifurcation_ \(\dot{y}=-y^{2}+2\alpha y\). We suppose \(x_{0}>0\), so that (for \(\nu\) sufficiently small and positive) it lies in the attractive basin for the bifurcation. Otherwise, if we choose \(x_{0}<0\), we consider the inverse field \(\dot{x}=x^{2}-\nu\), so that \(x_{0}\) lies again in the basin of attraction. However, by [3], the so-called _formal model by weak formal equivalence_ for a \(1\)-parameter bifurcation \(\frac{dx}{dt}=F(x,\nu)\) under the above stated generic conditions is a bit more complicated:

\[\frac{dx}{dt}=F_{mod}(x,\nu),\ F_{mod}(x,\nu):=\frac{-x^{2}+\nu}{1+\rho(\nu)x},\ \nu\in(-\delta,\delta). \tag{2.2}\]

Here, for the parabolic value of the parameter \(\nu=0\), the value \(\rho(0)\) is the _residual formal invariant_. Another formal invariant is the multiplicity \(k=2\). Weak formal equivalence of the field \(x^{\prime}=F(x,\nu)\) to the model \(x^{\prime}=F_{mod}(x,\nu)\) means that there exists a formal change of variables, fibered in \(\nu\) [3],

\[\hat{\Phi}(x,\nu)=(h(\nu),\hat{\varphi}_{\nu}(x)), \tag{2.3}\]

where \(h\) is an analytic diffeomorphism such that \(h(0)=0\) and \(\hat{\varphi}_{\nu}\in\mathbb{R}[[x]]\), \(\nu\in(-\delta,\delta)\), is a formal diffeomorphism up to a translation (i.e. has non-zero linear term), conjugating one to the other. For the corresponding time-one maps it holds that:

\[f^{mod}_{h(\nu)}=\hat{\varphi}_{\nu}\circ f_{\nu}\circ\hat{\varphi}_{\nu}^{-1}.\]

Note that the notion _weakly_ refers to the possible bijective change \(h(\nu)\) of the parameter \(\nu\). Indeed, by the Malgrange preparation theorem, any analytic \(F(x,\nu)\) such that \(F(0,0)=0,\ \partial_{x}F(0,0)=0\), \(F_{\nu}(0,0)\neq 0\) and \(\partial_{xx}F(0,0)\neq 0\) can, by translation and homothety in \(x\) (both with coefficients analytic in \(\nu\)), be written as a product \((-x^{2}+a(\nu))U(x,\nu)\), where \(U\) is a unity2 [3] and \(a\) an analytic diffeomorphism at \(0\) such that \(a(0)=0\), \(a^{\prime}(0)\neq 0\), or further as \(-x^{2}+h(\nu)+O_{\nu}(x^{3})\), where \(O_{\nu}(x^{3})\) is analytic in \(x,\nu\) and \(h\) is an analytic diffeomorphism in \(\nu\). All terms, except for the residual term, can further be eliminated by a formal reduction \(\hat{\varphi}_{\nu}(x)\), \(\nu\in(-\delta,\delta)\).

Footnote 2: non-vanishing in a small disc around \((0,0)\)

### The continuous-time length of the tail of orbits

In this subsection we show how to calculate the length of the tail of the orbit \(\mathcal{O}_{f_{\nu}}(x_{0})\), \(\ell(T_{\varepsilon,\nu})\).
Recall that, by [16],

\[\ell(T_{\varepsilon,\nu})=n_{\varepsilon}^{\nu}\cdot 2\varepsilon,\]

where \(n_{\varepsilon}^{\nu}\in\mathbb{N}\) is the _discrete critical time_ separating the tail \(T_{\varepsilon,\nu}\) and the nucleus \(N_{\varepsilon,\nu}\), determined by the inequalities:

\[f_{\nu}^{n_{\varepsilon}^{\nu}}(x_{0})-f_{\nu}^{n_{\varepsilon}^{\nu}+1}(x_{0})\leq 2\varepsilon,\ \ f_{\nu}^{n_{\varepsilon}^{\nu}-1}(x_{0})-f_{\nu}^{n_{\varepsilon}^{\nu}}(x_{0})>2\varepsilon.\]

Since the critical time \(\varepsilon\mapsto n_{\varepsilon}^{\nu}\) is a step function, for a fixed \(\nu\), the function \(\varepsilon\mapsto\ell(T_{\varepsilon,\nu})\) does not have a full asymptotic expansion as \(\varepsilon\to 0\). Therefore, we replace \(n_{\varepsilon}^{\nu}\) by the so-called _continuous critical time_ \(\tau_{\varepsilon}^{\nu}\in\mathbb{R}\) satisfying:

\[f_{\nu}^{\tau_{\varepsilon}^{\nu}}(x_{0})-f_{\nu}^{\tau_{\varepsilon}^{\nu}+1}(x_{0})=2\varepsilon.\]

Here, \(\{f_{\nu}^{t}:t\in\mathbb{R}\}\) is the _flow_ of the field \(X_{\nu}=F(x,\nu)\frac{d}{dx}\) given by (2.1). Note that \(f_{\nu}:=f_{\nu}^{1}\) is the time-one map of \(X_{\nu}\). The continuous critical time \(\tau_{\varepsilon}^{\nu}\) can be understood as the time needed to move along the field from the initial point \(x_{0}\) to the point \(x\) whose displacement function value \(g_{\nu}(x)\) is exactly equal to \(2\varepsilon\), for every \(\varepsilon>0\). We now define the _continuous-time length of \(T_{\varepsilon,\nu}\)_ (see [12]) by:

\[\ell^{c}(T_{\varepsilon,\nu}):=\tau_{\varepsilon}^{\nu}\cdot 2\varepsilon. \tag{2.4}\]

Equivalently,

\[\ell^{c}(T_{\varepsilon,\nu})=(\Psi_{\nu}(g_{\nu}^{-1}(2\varepsilon))-\Psi_{\nu}(x_{0}))\cdot 2\varepsilon, \tag{2.5}\]

where the Fatou coordinate \(\Psi_{\nu}\), defined up to an additive constant, is the (sectorial) trivialization coordinate for the flow of the field \(X_{\nu}=F(x,\nu)\frac{d}{dx}\) from (2.1), satisfying

\[\Psi_{\nu}(f_{\nu}^{t}(x_{0}))-\Psi_{\nu}(x_{0})=t,\ t\in\mathbb{R},\]

or, equivalently,

\[\Psi_{\nu}^{\prime}(x)=\frac{1}{F(x,\nu)}.\]

For details of the definition and the relation between the two definitions, see [12]. We consider only the case when \(\nu\geq 0\). In the case that \(\nu<0\), the fixed points lie on the imaginary axis, and are not hyperbolic but indifferent, that is, their linear part is a rotation. On the real line the vector field passes from \(-\infty\) to \(+\infty\) in a finite time, and there are no singular points on the real line. This case will be a subject of further research; see Section 5. All our theoretical results in the following sections will first be given for the continuous length \(\ell^{c}(T_{\varepsilon,\nu})\). However, from the orbit it is natural to _read_ the standard length \(\ell(T_{\varepsilon,\nu})\), as the sum of the lengths of the \(2\varepsilon\)-intervals centered at points of the orbit of the time-one map \(f_{\nu}\) before they start overlapping. By [10, 14], for a fixed \(\nu\), this function does not admit a full asymptotic expansion as \(\varepsilon\to 0\), due to oscillatory terms. Nevertheless, by Corollary 3.9, the Chebyshev scale given in Theorems B.1 and B.2 can easily be adapted for the standard length \(\ell(T_{\varepsilon,\nu})\).
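To make these definitions concrete, here is a minimal numerical sketch (ours, not from the paper) for the topological model (2.1): it computes the time-one map \(f_{\nu}\) in closed form and then finds the discrete critical time \(n_{\varepsilon}^{\nu}\) from the inequalities above, together with the tail length \(2\varepsilon\,n_{\varepsilon}^{\nu}\). The function names are illustrative.

```python
import numpy as np

def time_one_map(x, nu):
    """Time-one map f_nu of the model dx/dt = -x^2 + nu, for x > sqrt(nu) >= 0.

    For nu = 0 the flow is x(t) = x0/(1 + x0*t); for nu > 0 the substitution
    E = (x + s)/(x - s), s = sqrt(nu), evolves linearly: E(t+1) = E(t)*e^{2s}.
    """
    if nu == 0.0:
        return x / (1.0 + x)
    s = np.sqrt(nu)
    E = (x + s) / (x - s) * np.exp(2.0 * s)
    return s * (E + 1.0) / (E - 1.0)

def tail_length(x0, nu, eps):
    """Discrete critical time n_eps^nu and the tail length 2*eps*n_eps^nu."""
    n, x, fx = 0, x0, time_one_map(x0, nu)
    while x - fx > 2.0 * eps:   # consecutive orbit points more than 2*eps apart
        n += 1
        x, fx = fx, time_one_map(fx, nu)
    return n, 2.0 * eps * n

# The critical time (hence the tail) behaves very differently at nu = 0
# (parabolic orbit) and for small nu > 0 (hyperbolic orbit).
for nu in (0.0, 1e-4):
    n, length = tail_length(x0=0.5, nu=nu, eps=1e-4)
    print(f"nu = {nu:g}: n_eps = {n}, tail length = {length:.5f}")
```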
## 3. Uniform asymptotic expansions of the length functions \(\ell(T_{\varepsilon,\nu})\) and \(\ell^{c}(T_{\varepsilon,\nu})\)

Subsections 3.1 (Theorem B.1) and 3.2 (Theorem B.2 and Corollary 3.9) give the Chebyshev scales for \(\ell(T_{\varepsilon,\nu})\) and \(\ell^{c}(T_{\varepsilon,\nu})\) for the formal model and for the generic non-model saddle-node families, respectively. The model case is needed for the proof in the general, non-model case, simply by formal changes of variables.

### Formal model families

Consider the model family (2.2), for \(\nu\in[0,\delta)\). Let \(f_{\nu}^{mod}\) be its time-one map and \(g_{\nu}^{mod}:=\mathrm{id}-f_{\nu}^{mod}\) its displacement function. In the sequel we first define three compensators that we use in the uniform expansions in Theorem B.1. We use the name _compensator_ for the elementary expressions in the variable \(x\) and the parameter \(\nu\), i.e. expressions that cannot be further asymptotically expanded uniformly for the whole unfolding. **Definition 3.1**.: [6] Let \(\nu\) and \(x\) be small. The function \[\omega(x,\nu):=\frac{x^{-\nu}-1}{\nu}\] is called the _Ecalle-Roussarie compensator_. Note that, pointwise, \(\omega(x,\nu)\to-\log x\), as \(\nu\to 0\). The convergence becomes uniform in \(x\) if we multiply by \(x^{\delta}\), \(\delta>0\). **Definition 3.2** (The inverse compensator).: For \(x>0\) and \(\nu\in(-\delta,\delta)\), we call the function \[\alpha(x,\nu):=\frac{1}{\nu}\log\left(1+\frac{\nu}{x}\right)\] the _inverse compensator_. The name comes from Definition 3.1 and the fact that \[\alpha(x,\nu)=-\log\circ\omega^{-1}\left(\frac{1}{x},\nu\right),\] where \(\omega^{-1}\) is the inverse of \(\omega\) with respect to the variable \(x\). Pointwise, \(\alpha(x,\nu)\to\frac{1}{x}\), as \(\nu\to 0\). For every \(\delta>0\), we get \(x^{1+\delta}\alpha(x,\nu)\to x^{\delta}\), as \(\nu\to 0\), uniformly in \(x>0\). The asymptotic behavior, as \(x\to 0\), is qualitatively different in the cases \(\nu=0\) and \(\nu\neq 0\): \[\alpha(x,\nu)=\begin{cases}\frac{1}{x},&\nu=0,\\ \frac{1}{\nu}(-\log x)+\frac{\log\nu}{\nu}+\mathbb{R}_{\nu}[[x]],&\nu\neq 0.\end{cases} \tag{3.1}\] **Definition 3.3** (The square root compensator).: For \(\nu\) small by absolute value and \(x>0\), we define \[\tilde{\eta}(x,\nu):=\sqrt{x+\nu}-\sqrt{\nu},\] and call \(\tilde{\eta}\) the _square root-type compensator_. The asymptotic expansion of \(\tilde{\eta}\), as \(x\to 0\), changes qualitatively as \(\nu\) changes from zero: \[\tilde{\eta}(x,\nu)=\begin{cases}\sqrt{x},&\nu=0,\\ \frac{x}{\sqrt{\nu}}+\sqrt{\nu}\,\frac{x^{2}}{\nu^{2}}\,\mathbb{R}\{\frac{x}{\nu}\},&\nu>0,\ x\to 0.\end{cases} \tag{3.2}\] Note that \(\tilde{\eta}\) is _small_, for small \(x\). Moreover, it can easily be checked that \(\tilde{\eta}(x,\nu)\to\sqrt{x}\) uniformly in \(x\), as \(\nu\to 0+\). Let \[a(\nu):=\frac{1-e^{-\nu}}{\nu},\ |\nu|<\delta.\] Note that \(a\in\mathbb{R}\{\nu\}\) and \(a(0)=1\). Therefore \(a\) is bounded. Let \(\ell^{c}(T_{\varepsilon,\nu})\), \(\nu\in[0,\delta)\), be the continuous lengths of the tails for the orbits of the time-one maps \(f^{mod}_{\nu}\) of the unfolding (2.2), with initial condition \(x_{0}>0\), as defined in Subsection 2.2, and let \(\theta_{a}(x)=x+a\) denote the translation by \(a\in\mathbb{R}\). **Theorem B.1** (Chebyshev scale for the formal model case).: _Let_ \[\eta(2\varepsilon,\nu):=\theta_{-\sqrt{\nu}}\circ(g^{mod}_{\nu})^{-1}(2\varepsilon),\ \varepsilon\geq 0.
\tag{3.3}\] _In the compensator variable \(\eta\geq 0\), the continuous lengths \(\eta\mapsto\ell^{c}(T_{\varepsilon,\nu})\) admit an asymptotic expansion, uniform in the parameter \(\nu\in[0,\delta)\), in the following Chebyshev scale:_ \[\left\{I(\nu,\eta)\eta,\ I(\nu,\eta)\eta^{2},\ I(\nu,\eta)\eta^{3},\dots\right\},\] _as \(\eta\to 0\), where_ \[I(\nu,\eta):=\alpha(\eta,2\sqrt{\nu})+\frac{\rho(\nu)}{2}\log\left(\eta^{2}+2\sqrt{\nu}\cdot\eta\right)-\Psi^{mod}_{\nu}(x_{0}). \tag{3.4}\] _Furthermore, there exist \(\delta,\,d>0\) such that \(I(\nu,\eta)\) is non-zero in the uniform neighborhood \(\eta\in[0,d)\) for \(\nu\in[0,\delta)\). More precisely, the expansion is given by:_ \[\ell^{c}(T_{\varepsilon,\nu})=I(\nu,\eta)g^{mod}_{\nu}(\eta+\sqrt{\nu})\sim\left(1-e^{-\frac{2\sqrt{\nu}}{1-\rho(\nu)\sqrt{\nu}}}\right)\cdot I(\nu,\eta)\eta+e^{-\frac{2\sqrt{\nu}}{1-\rho(\nu)\sqrt{\nu}}}\,a\big{(}\frac{2\sqrt{\nu}}{1-\rho(\nu)\sqrt{\nu}}\big{)}\,\frac{1+\rho(\nu)\sqrt{\nu}}{(1-\rho(\nu)\sqrt{\nu})^{2}}\cdot I(\nu,\eta)\eta^{2}+o_{\nu}(I(\nu,\eta)\eta^{2}),\ \eta\to 0+. \tag{3.5}\] **Remark 3.4**.: Note that a similar expansion is obtained if we choose the initial point \(x_{0}<0\), and the inverse orbit converging to the other (repelling) fixed point \(-\sqrt{\nu}\). Then \(g^{mod}_{\nu}=\mathrm{id}-(f^{mod}_{\nu})^{-1}\), and \(\eta=\theta_{\sqrt{\nu}}\circ(g^{mod}_{\nu})^{-1}(2\varepsilon)\) is negative. At the bifurcation value \(\nu=0\), the first non-zero coefficient of the expansion (3.5) is the third one. This means, by the theory of Chebyshev scales, that at most two zero points bifurcate in \(\eta\mapsto\ell^{c}(T_{\varepsilon,\nu})\) from the zero point \(\eta=0\) at \(\nu=0\). That is, the zero point \(\eta=0\) of \(\ell^{c}(T_{\varepsilon,\nu})\) is of multiplicity at most \(2\) in this bifurcation. The variable \(\eta\) is a _small_ variable (in the sense that it tends to \(0\) as \((\varepsilon,\nu)\to(0,0)\)) and behaves asymptotically as the square root compensator \(\tilde{\eta}\) from Definition 3.3. Indeed, by Lemma 4.1, it follows that \[\lim_{(\varepsilon,\nu)\to(0,0)}\frac{\eta(2\varepsilon,\nu)}{\tilde{\eta}(2\varepsilon,\nu)}=\lim_{(\varepsilon,\nu)\to(0,0)}\frac{\eta(2\varepsilon,\nu)}{\tilde{\eta}(2\varepsilon,c^{2}(\nu))}\cdot\frac{\tilde{\eta}(2\varepsilon,c^{2}(\nu))}{\tilde{\eta}(2\varepsilon,\nu)}=1, \tag{3.6}\] since \(\lim_{\nu\to 0}\frac{\nu}{c^{2}(\nu)}=1\). The proof of Theorem B.1 is given at the end of the subsection. For the proof, we need Lemmas 3.5 and 3.7. **Lemma 3.5** (The Fatou coordinate).: The Fatou coordinate for family (2.2) is (up to an additive constant) equal to: \[\Psi^{mod}_{\nu}(x)=\alpha(x-\sqrt{\nu},2\sqrt{\nu})+\frac{\rho(\nu)}{2}\log(x^{2}-\nu)=\left(\alpha(x,2\sqrt{\nu})+\frac{\rho(\nu)}{2}\cdot\log(2\sqrt{\nu}\cdot x+x^{2})\right)\circ\theta_{-\sqrt{\nu}}(x). \tag{3.7}\] Moreover, \[\Psi^{mod}_{\nu}(x)\sim_{x\to\sqrt{\nu}}\begin{cases}\frac{1}{x},&\nu=0,\\ \left(\frac{\rho(\nu)}{2}-\frac{1}{2\sqrt{\nu}}\right)\log(x-\sqrt{\nu}),&\nu\neq 0.\end{cases} \tag{3.8}\] Proof.: The Fatou coordinate \(\Psi^{mod}_{\nu}\) is computed as the antiderivative in the variable \(x\) (determined up to an additive constant) of \(\frac{1}{F_{mod}(\nu,x)}\). We get: \[\Psi^{mod}_{\nu}(x)=\frac{1}{2\sqrt{\nu}}\ln\frac{x+\sqrt{\nu}}{x-\sqrt{\nu}}+\frac{\rho(\nu)}{2}\log(x^{2}-\nu)=\frac{1}{2\sqrt{\nu}}\ln\left(1+\frac{2\sqrt{\nu}}{x-\sqrt{\nu}}\right)+\frac{\rho(\nu)}{2}\log(x^{2}-\nu), \tag{3.9}\] and substitute \(\alpha\) from Definition 3.2.
In the case \(\nu\neq 0\), as \(x\to\sqrt{\nu}\), we have: \[\log(x^{2}-\nu)=\log(x-\sqrt{\nu})+\log(2\sqrt{\nu})+\log\left(1+\frac{x-\sqrt{\nu}}{2\sqrt{\nu}}\right)=\log(x-\sqrt{\nu})+O(1). \tag{3.10}\] Therefore, by (3.9) and (3.10), we deduce (3.8). **Remark 3.6**.: Note that the above formula (3.9) is valid for all values of \(\nu\in(-\delta,\delta)\). In the case \(\nu<0\), \(\sqrt{\nu}=\sqrt{|\nu|}i\) is imaginary. The formula can be rewritten as: \[\Psi^{mod}_{\nu}(x)=\begin{cases}\frac{1}{x}+\rho(0)\cdot\log x,&\nu=0,\\ \frac{1}{2\sqrt{\nu}}\ln\left(1+\frac{2\sqrt{\nu}}{x-\sqrt{\nu}}\right)+\frac{\rho(\nu)}{2}\log(x^{2}-\nu),&\nu>0,\\ -\frac{1}{\sqrt{|\nu|}}\arctan\frac{x}{\sqrt{|\nu|}}+\frac{\rho(\nu)}{2}\log(x^{2}-\nu),&\nu<0.\end{cases}\] **Lemma 3.7** (The time-one map and the displacement function).: For family (2.2), the following Taylor expansions at the fixed point3 \(x=\sqrt{\nu}\) hold: Footnote 3: A similar expansion can be written around the other, symmetric fixed point \(x=-\sqrt{\nu}\).

1. For the time-one map \(f_{\nu}^{mod}\in\operatorname{Diff}(\mathbb{R},0)\): \[f_{\nu}^{mod}(x)=\sqrt{\nu}+e^{-\frac{2\sqrt{\nu}}{1-\rho(\nu)\sqrt{\nu}}}\cdot(x-\sqrt{\nu})-e^{-\frac{2\sqrt{\nu}}{1-\rho(\nu)\sqrt{\nu}}}\cdot a\big{(}\frac{2\sqrt{\nu}}{1-\rho(\nu)\sqrt{\nu}}\big{)}\cdot\frac{1+\rho(\nu)\sqrt{\nu}}{(1-\rho(\nu)\sqrt{\nu})^{2}}\cdot(x-\sqrt{\nu})^{2}+(x-\sqrt{\nu})^{3}\cdot\mathbb{R}_{\nu}\{(x-\sqrt{\nu})\},\quad\nu\in[0,\delta),\ x\to\sqrt{\nu}, \tag{3.11}\]
2. For the displacement function \(g_{\nu}^{mod}:=\operatorname{id}-f_{\nu}^{mod}\in\operatorname{Diff}(\mathbb{R},0)\): \[g_{\nu}^{mod}(x)=\left(1-e^{-\frac{2\sqrt{\nu}}{1-\rho(\nu)\sqrt{\nu}}}\right)\cdot(x-\sqrt{\nu})+e^{-\frac{2\sqrt{\nu}}{1-\rho(\nu)\sqrt{\nu}}}\cdot a\big{(}\frac{2\sqrt{\nu}}{1-\rho(\nu)\sqrt{\nu}}\big{)}\cdot\frac{1+\rho(\nu)\sqrt{\nu}}{(1-\rho(\nu)\sqrt{\nu})^{2}}\cdot(x-\sqrt{\nu})^{2}+(x-\sqrt{\nu})^{3}\cdot\mathbb{R}_{\nu}\{(x-\sqrt{\nu})\},\qquad\nu\in[0,\delta),\ x\to\sqrt{\nu}. \tag{3.12}\]

The Taylor coefficients of \(f_{\nu}^{mod}\) and \(g_{\nu}^{mod}\) belong to \(\mathbb{R}\{\sqrt{\nu}\}\) (are analytic at \(0\) in \(\sqrt{\nu}\)). Note that, for \(\nu=0\), \(f_{\nu}^{mod}\) has a parabolic double fixed point \(x=0\), and, for \(\nu>0\), two simple hyperbolic fixed points at \(\pm\sqrt{\nu}\). Proof.: The time-one maps \(f_{\nu}^{mod}=\operatorname{Exp}\big{(}F^{mod}(x,\nu)\frac{d}{dx}\big{)}.\mathrm{id}\) of system (2.2) are parabolic (\(\nu=0\)) or hyperbolic (\(\nu>0\)) analytic germs at \(x=0\). Moreover, \(f_{\nu}^{mod}\to f_{0}^{mod}\) uniformly on some interval \(x\in(-d,d)\), \(d>0\), as \(\nu\to 0+\). It can be checked by the operator exponential formula above that the coefficients \(a_{k}(\nu)\) of the monomials \(x^{k}\), \(k\geq 0\), in the Taylor expansion of \(f_{\nu}^{mod}(x)=\sum_{k=0}^{\infty}a_{k}(\nu)x^{k}\) converge towards the coefficients \(a_{k}(0)\) of \(f_{0}^{mod}(x)=\sum_{k=0}^{\infty}a_{k}(0)x^{k}\), as \(\nu\to 0\). Therefore, to get (3.11), it suffices to get the Taylor expansion of \(f_{\nu}^{mod}\) at \(\sqrt{\nu}\) in the case when \(\nu>0\). Suppose therefore \(\nu>0\). The time-one map \(f_{\nu}\) is obtained from the Fatou coordinate by the Abel equation \(f_{\nu}=(\Psi_{\nu}^{mod})^{-1}(\Psi_{\nu}^{mod}+1)\). First we compute the first few terms of the inverse \((\Psi_{\nu}^{mod})^{-1}\).
When \(\nu\neq 0\) and \(x\to\sqrt{\nu}\), by (3.9) \(\Psi_{\nu}^{mod}\) admits the following expansion, as \(x\to\sqrt{\nu}\): \[\Psi_{\nu}^{mod}(x)=\left(\frac{\rho(\nu)}{2}-\frac{1}{2\sqrt{\nu}}\right)\log(x-\sqrt{\nu})+\left(\frac{\rho(\nu)}{2}+\frac{1}{2\sqrt{\nu}}\right)\log(2\sqrt{\nu})+\left(\frac{\rho(\nu)}{2}+\frac{1}{2\sqrt{\nu}}\right)\log\left(1+\frac{x-\sqrt{\nu}}{2\sqrt{\nu}}\right)=\left(\frac{\rho(\nu)}{2}-\frac{1}{2\sqrt{\nu}}\right)\log(x-\sqrt{\nu})+\left(\frac{\rho(\nu)}{2}+\frac{1}{2\sqrt{\nu}}\right)\log(2\sqrt{\nu})+\mathbb{R}_{\nu}[[x-\sqrt{\nu}]].\] Denote the above coefficients by: \(K_{\pm}(\nu):=\frac{\rho(\nu)}{2}\pm\frac{1}{2\sqrt{\nu}}\), \(K(\nu):=K_{+}(\nu)\log(2\sqrt{\nu})\). Then, for the expansion of the inverse (where \(\frac{y}{K_{-}(\nu)}\to-\infty\)), we get: \[(\Psi_{\nu}^{mod})^{-1}(y)\sim\sqrt{\nu}+e^{\frac{y-K(\nu)}{K_{-}(\nu)}}-\frac{K_{+}(\nu)}{K_{-}(\nu)\cdot 2\sqrt{\nu}}e^{2\frac{y-K(\nu)}{K_{-}(\nu)}}+e^{3\frac{y-K(\nu)}{K_{-}(\nu)}}\mathbb{R}_{\nu}\big{[}\big{[}e^{\frac{y-K(\nu)}{K_{-}(\nu)}}\big{]}\big{]}.\] Therefore, \[f_{\nu}^{mod}(x)=(\Psi_{\nu}^{mod})^{-1}(1+\Psi_{\nu}^{mod}(x))\sim\sqrt{\nu}+e^{-\frac{2\sqrt{\nu}}{1-\rho(\nu)\sqrt{\nu}}}\cdot(x-\sqrt{\nu})-e^{-\frac{2\sqrt{\nu}}{1-\rho(\nu)\sqrt{\nu}}}\cdot a\big{(}\frac{2\sqrt{\nu}}{1-\rho(\nu)\sqrt{\nu}}\big{)}\cdot\frac{1+\rho(\nu)\sqrt{\nu}}{(1-\rho(\nu)\sqrt{\nu})^{2}}\cdot(x-\sqrt{\nu})^{2}+(x-\sqrt{\nu})^{3}\cdot\mathbb{R}_{\nu}\{(x-\sqrt{\nu})\},\ x\to\sqrt{\nu}.\] Note that \(\rho(\nu)\) is a bounded function on \(\nu\in[0,\delta)\), so \(\rho(\nu)\sqrt{\nu}\to 0\), as \(\nu\to 0+\). Proof of Theorem B.1.: We use the formula for the continuous tail (2.5), and change the variable from \(\varepsilon\) to \(\eta=\theta_{-\sqrt{\nu}}\circ(g_{\nu}^{mod})^{-1}(2\varepsilon)\): \[\ell^{c}(T_{\varepsilon,\nu})=\left(\Psi_{\nu}^{mod}((g_{\nu}^{mod})^{-1}(2\varepsilon))-\Psi_{\nu}^{mod}(x_{0})\right)\cdot 2\varepsilon.\] We denote \(I(\eta,\nu):=\Psi_{\nu}^{mod}\left((g_{\nu}^{mod})^{-1}(2\varepsilon)\right)-\Psi_{\nu}^{mod}(x_{0})\). Inserting \(\eta=\theta_{-\sqrt{\nu}}\circ(g_{\nu}^{mod})^{-1}(2\varepsilon)\) in \(\Psi_{\nu}^{mod}\) given in (3.7) in Lemma 3.5, we get \(I(\eta,\nu)\), as in (3.4). On the other hand, we have that \(2\varepsilon=g_{\nu}^{mod}\circ\theta_{\sqrt{\nu}}(\eta)=g_{\nu}^{mod}(\eta+\sqrt{\nu})\), and the expansion (3.5) follows from (3.12) of Lemma 3.7. Note that there exist \(d,\,\delta>0\) such that \(\Psi_{\nu}^{mod}(y)\) is injective on \(y\in[0,d)\), \(d>0\) small, for all \(\nu\in[0,\delta)\). Indeed, \((\Psi_{\nu}^{mod})^{\prime}(y)=\frac{\rho(\nu)(y+\sqrt{\nu})-1}{y^{2}+2\sqrt{\nu}y}\) is strictly negative, for \(\nu\geq 0\), \(y>0\) sufficiently small. Choosing \(x_{0}\) inside this interval \([0,d)\), \(\Psi_{\nu}^{mod}(\theta_{\sqrt{\nu}}\circ g_{\nu}^{-1}(2\varepsilon))=\Psi_{\nu}^{mod}(x_{0})\) if and only if \[\eta=\theta_{-\sqrt{\nu}}\circ(g_{\nu}^{mod})^{-1}(2\varepsilon)=x_{0}.\] Therefore, for \(\eta\in[0,x_{0})\) and \(\nu\in[0,\delta)\), the function \(I(\nu,\eta)\) has no zeros. The result now follows.

### Generic saddle-node families

Let (1.1), \(\nu\in[0,\delta)\), be a generic saddle-node family, satisfying the generic conditions (1.2). Let \(f_{\nu}\) denote its time-one maps, and let \(f_{\nu}^{\rm mod}\) denote the time-one maps of its formal model (2.2).
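Before developing the generic case, note that the coefficients in Lemma 3.7 are easy to check numerically for the model. A quick sketch (ours, again taking \(\rho\equiv 0\), so that the linear coefficient in (3.11) reduces to \(e^{-2\sqrt{\nu}}\)): differentiate the closed-form time-one map of \(\dot{x}=-x^{2}+\nu\) at the fixed point \(\sqrt{\nu}\).

```python
# Numerical check (ours) of the linear coefficient in Lemma 3.7 for rho = 0:
# the multiplier of the time-one map of x' = -x^2 + nu at sqrt(nu) is exp(-2*sqrt(nu)).
import numpy as np

def f_time_one(x, nu):
    s = np.sqrt(nu)
    c0 = 0.5 * np.log((x + s) / (x - s))   # arccoth(x/s), valid for x > s
    return s / np.tanh(s + c0)             # flow of the field evaluated at time 1

nu, h = 1e-2, 1e-7
s = np.sqrt(nu)
multiplier = (f_time_one(s + 2 * h, nu) - f_time_one(s + h, nu)) / h
print(multiplier, np.exp(-2 * s))          # both approximately 0.8187
```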
Let \(\hat{\Phi}(\nu,x):=(h(\nu),\hat{\varphi}_{\nu}(x))\) be the formal change of variables conjugating (1.1) to its model (2.2), where \(h(0)=0\) and \(h\) is an analytic diffeomorphism at \(0\), and \(\hat{\varphi}_{\nu}(x)=a_{0}(\nu)+a_{1}(\nu)x+o_{\nu}(x^{2})\) a formal diffeomorphism at \(x=0\), with \(a_{1}(\nu)\neq 0\), \(\nu\in[0,\delta)\). Then, by (2.3), the initial time-one map \(f_{\nu}\) verifies: \[f_{\nu}=\hat{\varphi}_{\nu}^{-1}\circ f_{h(\nu)}^{\rm mod}\circ\hat{\varphi}_{\nu}.\] Let \(x_{1,2}^{\nu}\) denote the fixed points of \(f_{\nu}\). It holds that \(x_{1,2}^{\nu}\to 0\), as \(\nu\to 0\). There exist two analytic diffeomorphisms defined in sectorial domains \(V_{2,1}^{\nu}\subseteq\mathbb{C}\) centered respectively at \(x_{1,2}^{\nu}\) and bisected respectively by \((\mathbb{R}_{\mp},0)\), with asymptotic expansion \(\hat{\varphi}_{\nu}\) [3]. Without loss of generality, we assume that the fixed point \(x_{1}^{\nu}\) is positive and attractive. By \(\varphi_{\nu}\) we denote the sectorial diffeomorphism on the sector \(V_{1}^{\nu}\) centered at \(x_{2}^{\nu}<0\). As a consequence, for the respective sectorial Fatou coordinate on \(V_{1}^{\nu}\), it holds: \[\Psi_{\nu}=\Psi_{h(\nu)}^{\rm mod}\circ\varphi_{\nu}. \tag{3.13}\] As in the introduction, without loss of generality, we assume that \(F_{\nu}(0,0)>0\), so that \(h(\nu)>0\), for \(\nu>0\). Note that \(\varphi_{\nu}(x_{1}^{\nu})=\sqrt{h(\nu)}\) or \(-\sqrt{h(\nu)}\), and we suppose that \(\varphi_{\nu}(x_{1}^{\nu})=\sqrt{h(\nu)}\). Let \(\ell^{c}(T_{\varepsilon,\nu})\), \(\nu\in[0,\delta)\), be the continuous lengths of the tails of the \(\varepsilon\)-neighborhoods of the orbits \(\mathcal{O}_{f_{\nu}}(x_{0})\), for \(x_{0}>0\) sufficiently small. Then there exists \(\delta>0\) such that \(x_{0}\in V_{1}^{\nu}\), \(\nu\in[0,\delta)\). Analogously, we could have considered an initial point \(x_{0}<0\) in \(V_{2}^{\nu}\), \(\nu\in[0,\delta)\), and the orbit of the inverse \(\mathcal{O}_{f_{\nu}^{-1}}(x_{0})\). Let \(\hat{\Phi}=(h,\hat{\varphi}_{\nu})\), \(\nu\in[0,\delta)\), be the formal normalizing change of variables reducing a given saddle-node field (1.1) to its model (2.2), and let \(C(\nu)\neq 0\) be the coefficient of the linear term of the formal series \(\hat{\varphi}_{\nu}\) at the point \(x_{1}^{\nu}\). Let \(\varphi_{\nu}\) be the sectorially analytic realization of \(\hat{\varphi}_{\nu}\) on \(V_{1}^{\nu}\). Let \[k_{\nu}:=\theta_{-\sqrt{h(\nu)}}\circ\varphi_{\nu}\circ\theta_{x_{1}^{\nu}}, \tag{3.14}\] and analogously for the formal expansion \(\hat{k}_{\nu}\). Then \(k_{\nu}(x)=C(\nu)x+o_{\nu}(x),\ x\in V_{1}^{\nu},\ x\to 0\); in particular, for \(x\in(\mathbb{R}_{>0},0)\), \(\nu\in[0,\delta)\). **Theorem B.2** (Chebyshev scale for the generic case).: _Let_ \[\eta(2\varepsilon,\nu):=\theta_{-x_{1}^{\nu}}\circ g_{\nu}^{-1}(2\varepsilon),\ \varepsilon\geq 0, \tag{3.15}\] _where \(g_{\nu}:=\mathrm{id}-f_{\nu}\). In the variable \(\eta\geq 0\), the continuous length \(\eta\mapsto\ell^{c}(T_{\varepsilon,\nu})\) admits a uniform asymptotic expansion in the Chebyshev scale:_ \[\left\{I(h(\nu),k_{\nu}(\eta))\eta,\ I(h(\nu),k_{\nu}(\eta))\eta^{2},\ I(h(\nu),k_{\nu}(\eta))\eta^{3},\ldots\right\}, \tag{3.16}\] _as \(\eta\to 0\), where \(I(\nu,\eta)\) is as given in (3.4).
There exist \(\delta,\ d>0\) such that the common term \(I(h(\nu),k_{\nu}(\eta))\) is non-zero, for \(\eta\in[0,d)\) and \(\nu\in[0,\delta)\)._ _More precisely, the expansion is given by:_ \[\ell^{c}(T_{\varepsilon,\nu})\sim I(h(\nu),k_{\nu}(\eta))\cdot g _{\nu}(\eta+x_{1}^{\nu})=\] \[= \left(1-e^{-\frac{2\sqrt{h(\nu)}}{1-\rho(h(\nu))\sqrt{h(\nu)}}} \right)\cdot I(h(\nu),k_{\nu}(\eta))\eta+\] \[\qquad\qquad+c_{2}(\nu)\cdot I(h(\nu),k_{\nu}(\eta))\eta^{2}+o_{ \nu}(I(h(\nu),k_{\nu}(\eta))\eta^{2}),\ \eta\to 0. \tag{3.17}\] _Here, \(c_{2}(0)\neq 0\)._ The following lemma is used in the proof of Theorem B.2. **Lemma 3.8**.: The coefficient of the linear term in the expansion of \(g_{\nu}\) around \(x_{1}^{\nu}\) is the same as the coefficient of the linear term in the expansion of \(g_{h(\nu)}^{\mathrm{mod}}\) around \(\sqrt{h(\nu)}\). There are no free coefficients. Moreover, the coefficient of the quadratic term in the expansion of \(g_{\nu}\) around \(x_{1}^{\nu}\) at the bifurcation value \(\nu=0\) is nonzero. Proof.: Since \(\varphi_{0}^{\prime}(0)\neq 0\), due to the continuity of \((x,\nu)\mapsto\varphi_{\nu}^{\prime}(x)\), it follows that \(\varphi_{\nu}^{\prime}(x_{1}^{\nu})\neq 0\), for \(\nu\) sufficiently small. For \(\nu=0\), when \(\varphi_{0}\) is only sectorial at the point \(x_{1}^{0}=0\), the values \(\varphi_{0}(0)\) and \(\varphi_{0}^{\prime}(0)\) are taken in the sense of one-sided limit, that is, in the sense of appropriate coefficients of the formal expansion \(\hat{\varphi}_{0}\). Therefore, the following expansion holds: \[\varphi_{\nu}(x)=\sqrt{h(\nu)}+C(\nu)(x-x_{1}^{\nu})+o_{\nu}(x-x_{1}^{\nu}),\ C(\nu)\neq 0. \tag{3.18}\] That is, \(\varphi^{\prime}_{\nu}(x_{1}^{\nu})\neq 0\). It follows by (2.3) and (3.18) that \[f^{\prime}_{\nu}(x_{1}^{\nu})=(\varphi_{\nu}^{-1})^{\prime}(f_{h( \nu)}^{\rm mod}\circ\varphi_{\nu})(x_{1}^{\nu})\cdot(f_{h(\nu)}^{\rm mod})^{ \prime}(\varphi_{\nu}(x_{1}^{\nu}))\cdot\varphi^{\prime}_{\nu}(x_{1}^{\nu})=\] \[=\frac{1}{\varphi^{\prime}_{\nu}(x_{1}^{\nu})}\cdot(f_{h(\nu)}^{ \rm mod})^{\prime}(\sqrt{h(\nu)})\cdot\varphi^{\prime}_{\nu}(x_{1}^{\nu})=(f_ {h(\nu)}^{\rm mod})^{\prime}(\sqrt{h(\nu)}),\] \[f^{\prime\prime}_{0}(0)=\frac{\varphi^{\prime\prime}_{0}(0)}{C( 0)}\left((f_{0}^{\rm mod})^{\prime}(0)-(f_{0}^{\rm mod})^{\prime}(0)^{2}\right) +C(0)(f_{0}^{\rm mod})^{\prime\prime}(0)= \tag{3.19}\] \[=C(0)(f_{0}^{\rm mod})^{\prime\prime}(0)=2C(0)\neq 0.\] The last line follows since \((f_{0}^{\rm mod})^{\prime}(0)=1\) (tangent to the identity). Proof of Theorem b.2.: We have: \[\ell^{c}(T_{\varepsilon,\nu})=(\Psi_{\nu}(g_{\nu}^{-1}(2\varepsilon))-\Psi_{ \nu}(x_{0}))\cdot 2\varepsilon. \tag{3.20}\] Put \(\eta:=\theta_{-x_{1}^{\nu}}\circ g_{\nu}^{-1}(2\varepsilon)\) as in (3.15). Therefore, \(g_{\nu}^{-1}(2\varepsilon)=\theta_{x_{1}^{\nu}}\circ\eta\). By (3.13), we get \[\Psi_{\nu}(g_{\nu}^{-1}(2\varepsilon))=\Psi_{h(\nu)}^{\rm mod}\circ\varphi_{ \nu}(\theta_{x_{1}^{\nu}}\circ\eta). \tag{3.21}\] Let \(k_{\nu}=C(\nu)y+o_{\nu}(y)\) be as defined in (3.14). We then have, by (3.4) and (3.7): \[\Psi_{\nu}(g_{\nu}^{-1}(2\varepsilon))=\left(\Psi_{h(\nu)}^{\rm mod}\circ \theta_{\sqrt{h(\nu)}}\right)(k_{\nu}(\eta))=I(h(\nu),k_{\nu}(\eta))+\Psi_{h( \nu)}^{mod}(x_{0}). \tag{3.22}\] On the other hand, \(2\varepsilon=g_{\nu}(\eta+x_{1}^{\nu})\). Using Lemma 3.8 to get the first terms of the expansion of \(g_{\nu}\) and inserting it together with (3.22) in (3.20), we get expansion (3.17). 
Finally, since \(h(0)=0\), \(k_{\nu}(\eta)=O(\eta)\), \(h\) and \(k_{\nu}\) are diffeomorphisms of some positive open neighborhoods of \(0\), \(k_{\nu}\) depends continuously on \(\nu\) and \(I(\nu,\eta)\) is non-zero, for \(\eta\in[0,d)\) and \(\nu\in[0,\delta)\), the same holds for \((\nu,\eta)\mapsto I(h(\nu),k_{\nu}(\eta))\), possibly in smaller neighborhoods. Note that it is more convenient in applications to consider the standard length \(\ell(T_{\varepsilon,\nu})\) instead of the continuous \(\ell^{c}(T_{\varepsilon,\nu})\). Let \(G:[0,+\infty)\to[0,+\infty)\) be the periodic function of period \(1\) given on \([0,1)\) by \(G(s)=1-s,\ s\in(0,1)\), and \(G(0)=0\). We have the following corollary: **Corollary 3.9** (Expansion of the standard length \(\ell(T_{\varepsilon,\nu})\)).: Under the assumptions of Theorem B.2, the length \(\eta\mapsto\ell(T_{\varepsilon,\nu})\) admits a uniform asymptotic expansion in the Chebyshev scale (3.16), but with \(I(h(\nu),k_{\nu}(\eta))\) replaced by \[\left({\rm id}+G\right)\left(I(h(\nu),k_{\nu}(\eta))\right),\] which also does not have zero points for any \(\nu\) on some uniform interval \(\eta\in[0,d)\), \(d>0\). The asymptotic expansion is given by (3.17), up to the same modification. Proof.: This follows directly using the relation between the continuous critical time \(\tau_{\varepsilon}^{\nu}\) and the discrete critical time \(n_{\varepsilon}^{\nu}\), which is obtained from it by rounding up to an integer: \[n_{\varepsilon}^{\nu}-\tau_{\varepsilon}^{\nu}=G(\tau_{\varepsilon}^{\nu}),\ \nu\in[0,\delta).\] For details, see Subsection 2.2 and [10]. Therefore, \(\ell(T_{\varepsilon,\nu})-\ell^{c}(T_{\varepsilon,\nu})=G(\tau_{\varepsilon}^{\nu})\cdot 2\varepsilon\), and the claim follows from (3.16) and (3.17) in Theorem B.2. Moreover, since \(I(h(\nu),k_{\nu}(\eta))\) is large (uniformly in \(\nu\)), for small \(\eta\in[0,d)\), and \(G\) is bounded inside \([0,1]\), it follows that \((\mathrm{id}+G)\big{(}I(h(\nu),k_{\nu}(\eta))\big{)}\) is nonzero in some small neighborhood \(\eta\in[0,d)\), uniform in \(\nu\). Proof of Theorem A.: After the change of variables \(\eta:=\theta_{-x_{1}^{\nu}}\circ g_{\nu}^{-1}(2\varepsilon)\), we have \[\ell^{c}(T_{\varepsilon,\nu})(\eta)=(\Psi_{\nu}(\eta+x_{1}^{\nu})-\Psi_{\nu}(x_{0}))\cdot g_{\nu}(\eta+x_{1}^{\nu}).\] Since the first factor in brackets, for any \(\nu\in[0,d)\), does not have zero points on some uniform positive interval around \(\eta=0\), the multiplicity (from the right) of the zero point \(0\) of \(\ell^{c}(T_{\varepsilon,0})\) in the variable \(\eta\) in the unfolding is the same as the multiplicity (from the right) of the zero point \(0\) of the displacement function \(g_{0}(\eta)\) in the unfolding, which corresponds to the multiplicity (from the right) of the singular point \(0\) of the parabolic vector field in the unfolding (1.1).

## 4. Precise forms of the expansions of the length function \(\ell^{c}(T_{\varepsilon,\nu})\) for all parameter values

The expansion (3.17) in Theorem B.2 is valid for the whole bifurcation. The variable \(\eta(2\varepsilon,\nu)\) is a _compensator variable_ that behaves asymptotically qualitatively differently, as \(\varepsilon\to 0\), depending on the case \(\nu=0\) or \(\nu>0\), see (3.2) and (3.6). By (3.6), the variable \(\eta\) given in (3.15) behaves essentially as the simpler compensator \(\tilde{\eta}\) defined in Definition 3.3.
In Lemma 4.1, we expand \(\eta\) from (3.15) in a simpler square root compensator variable \(\tilde{\eta}\) and expand \(\ell^{c}(T_{\varepsilon,\nu})\) in a Chebyshev scale in this simpler compensator variable \(\tilde{\eta}\) instead of \(\eta\). In Theorem C we then re-group the terms of this new expansion so that, for \(\nu>0\), all terms in the same block merge to the same term of the asymptotic expansion in \(\varepsilon\) at the bifurcation value \(\nu=0\). Hence, we show that confluence of singularities leads to confluence of asymptotic terms in the expansion of \(\varepsilon\mapsto\ell^{c}(T_{\varepsilon,\nu})\), as \(\nu\to 0\). In particular, from Theorem C, in Subsection 4.1, we deduce the expansions of the continuous length \(\ell^{c}(T_{\varepsilon,\nu})\) in \(\varepsilon\), as \(\varepsilon\to 0\), for each of the qualitatively different cases, \(\nu>0\) and \(\nu=0\). In Theorem C, we use another compensator variable: \[\kappa(x,\nu):=\frac{1}{x+\nu}. \tag{4.1}\] It is related to the compensator \(\alpha\) from Definition 3.2 by the following formula: \[\frac{d}{dx}\alpha(x,\nu)=-\frac{1}{x}\kappa(x,\nu).\] Evidently, \(\kappa(x,\nu)\to\frac{1}{x}\), as \(\nu\to 0\); moreover, for every \(\delta>0\), \(x^{\delta}\kappa(x,\nu)\to x^{-1+\delta}\) uniformly as \(\nu\to 0\). Let \(\nu\mapsto h(\nu)\), \(\nu\mapsto C(\nu)\), \(\nu\mapsto c_{2}(\nu)\), \(\nu\in[0,\delta)\), be as in Theorem B.2, and let \[r(\nu):=\frac{1-e^{-\frac{2\sqrt{\nu}}{1-\rho(\nu)\sqrt{\nu}}}}{2C(0)},\ \nu\in[0,\delta).\] **Theorem C**.: _The expansion of the continuous length \(\ell^{c}(T_{\varepsilon,\nu})\) in the variable \(\tilde{\eta}:=\tilde{\eta}\left(\frac{2\varepsilon}{C(0)},r(h(\nu))\right)\), as \(\tilde{\eta}\to 0\), can be written in the form:_ \[\ell^{c}(T_{\varepsilon,\nu})\sim\left(1-e^{-\frac{2\sqrt{h(\nu)}}{1-\rho(h(\nu))\sqrt{h(\nu)}}}\right)\cdot\Bigg{\{}[\alpha(C(\nu)\tilde{\eta},2\sqrt{h(\nu)})\cdot\tilde{\eta}]+\frac{\rho(h(\nu))}{2}\sum_{k=0}^{\infty}a_{k}(\nu)\Big{[}\log\tilde{\eta}\cdot\tilde{\eta}^{k+1}+\log\big{(}\tilde{\eta}+\frac{2\sqrt{h(\nu)}}{C(\nu)}\big{)}\cdot\tilde{\eta}^{k+1}\Big{]}+\sum_{k=1}^{\infty}\Big{[}a_{k}(\nu)\cdot\alpha\big{(}C(\nu)\tilde{\eta},2\sqrt{h(\nu)}\big{)}\cdot\tilde{\eta}^{k+1}+N_{k}^{\nu}\big{(}\tilde{\eta},\kappa\big{(}C(\nu)\tilde{\eta},2\sqrt{h(\nu)}\big{)}\big{)}\Big{]}\Bigg{\}}+c_{2}(\nu)\cdot\Bigg{\{}[\alpha(C(\nu)\tilde{\eta},2\sqrt{h(\nu)})\cdot\tilde{\eta}^{2}]+\frac{\rho(h(\nu))}{2}\sum_{k=0}^{\infty}b_{k}(\nu)\Big{[}\log\tilde{\eta}\cdot\tilde{\eta}^{k+2}+\log\big{(}\tilde{\eta}+\frac{2\sqrt{h(\nu)}}{C(\nu)}\big{)}\cdot\tilde{\eta}^{k+2}\Big{]}+\sum_{k=1}^{\infty}\Big{[}b_{k}(\nu)\cdot\alpha\big{(}C(\nu)\tilde{\eta},2\sqrt{h(\nu)}\big{)}\cdot\tilde{\eta}^{k+2}+M_{k+1}^{\nu}\big{(}\tilde{\eta},\kappa\big{(}C(\nu)\tilde{\eta},2\sqrt{h(\nu)}\big{)}\big{)}\Big{]}\Bigg{\}}. \tag{4.2}\] _Here, \(c_{2}(0)\neq 0\), \(a_{0}(\nu)=1\) and \(b_{0}(\nu)=1\), for all \(\nu\in[0,\delta)\), and \(N_{k}^{\nu},\,M_{k}^{\nu}\) are homogenous polynomials of degree \(k\) whose coefficients depend on \(\nu\)._ The expansion is written in such a form that the terms that give the same power-logarithmic asymptotic term in \(\tilde{\eta}\) for \(\nu=0\), with their respective coefficients in \(\nu\), are grouped together as a block inside square brackets.
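The merging of the blocks can also be observed numerically. A small sketch (ours): since \(\alpha(x,\mu)\to 1/x\) as \(\mu\to 0\) (Definition 3.2), the first block \([\alpha(C(\nu)\tilde{\eta},2\sqrt{h(\nu)})\cdot\tilde{\eta}]\) tends, at a fixed \(\tilde{\eta}\), to the single term \(1/C(0)\); below we take \(C\equiv 1\).

```python
# Numerical illustration (ours): alpha(x, mu) -> 1/x as mu -> 0, so the block
# [alpha(C*eta, 2*sqrt(h)) * eta] in (4.2) tends to eta * 1/(C*eta) = 1/C.
import numpy as np

def alpha(x, mu):                        # inverse compensator, Definition 3.2
    return np.log1p(mu / x) / mu if mu != 0 else 1.0 / x

eta = 0.01
for mu in (1e-1, 1e-2, 1e-4, 1e-8, 0):
    print(f"mu={mu:g}:  alpha*eta = {alpha(eta, mu) * eta:.6f}")   # tends to 1.0
```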
Note that, for a fixed \(\nu>0\), each block is possibly _infinite_ in the sense that it can be further expanded asymptotically in a convergent power-logarithmic series in \(\tilde{\eta}\), as \(\tilde{\eta}\to 0\). For simplicity, in Theorem C each block is written in a closed form, as a true function of \(\tilde{\eta}\). Moreover, by (3.2), \(\tilde{\eta}=\sqrt{\frac{2}{C(0)}}\varepsilon^{1/2}\) for \(\nu=0\), and \(\tilde{\eta}\) expands as an integer power series in \(\varepsilon\), for \(\nu>0\), so the complete expansions in the original variable \(\varepsilon\to 0\), for \(\nu=0\) and \(\nu>0\), are given in Subsection 4.1. In the proof of Theorem C we need the following lemmas: **Lemma 4.1** (The compensator variable \(\eta\) expressed by \(\tilde{\eta}\)).: Let \(\eta\) be as defined in (3.15) for the field (1.1) and let \(\tilde{\eta}\) be as in Definition 3.3. Then: \[\eta(2\varepsilon,\nu)=\chi_{\nu}\Big{(}\tilde{\eta}\big{(}\frac{2\varepsilon}{C(0)},r(h(\nu))\big{)}\Big{)},\ \nu\in[0,\delta),\] where \(r(\nu):=\frac{1-e^{-\frac{2\sqrt{\nu}}{1-\rho(\nu)\sqrt{\nu}}}}{2C(0)}\), and \(\chi_{\nu}\) is a germ of a real diffeomorphism tangent to the identity. Consequently, \(\eta\) possesses a Taylor expansion in the variable \(\tilde{\eta}\big{(}\frac{2\varepsilon}{C(0)},r(h(\nu))\big{)}\), and \[\lim_{(\varepsilon,\nu)\to(0,0)}\frac{\eta(2\varepsilon,\nu)}{\tilde{\eta}(\frac{2\varepsilon}{C(0)},r(h(\nu)))}=1.\] Proof.: From (3.12) and Lemma 3.8, we write \(g_{\nu}\) as: \[g_{\nu}(x)=\left(\left(1-e^{-\frac{2\sqrt{h(\nu)}}{1-\rho(h(\nu))\sqrt{h(\nu)}}}\right)\cdot(x-x_{1}^{\nu})+C(0)(x-x_{1}^{\nu})^{2}\right)\circ\left((x-x_{1}^{\nu})+\sum_{i=2}^{\infty}c_{i}(\nu)(x-x_{1}^{\nu})^{i}\right)=P_{\nu}\circ\psi_{\nu}\circ\theta_{-x_{1}^{\nu}}(x), \tag{4.3}\] where \(P_{\nu}(x)=\left(1-e^{-\frac{2\sqrt{h(\nu)}}{1-\rho(h(\nu))\sqrt{h(\nu)}}}\right)\cdot x+C(0)x^{2}\), and \(\psi_{\nu}:=\mathrm{id}+\sum_{i=2}^{\infty}c_{i}(\nu)x^{i}\) is a germ of a real diffeomorphism, tangent to the identity at \(0\). Note that \(1-e^{-\frac{2\sqrt{h(\nu)}}{1-\rho(h(\nu))\sqrt{h(\nu)}}}\) is the linear coefficient of the expansion of \(g_{\nu}\) around its zero point \(x_{1}^{\nu}\), and \(C(0)\) the quadratic coefficient of the expansion of \(g_{0}\) around its zero point \(0\). The coefficients \(c_{i}(\nu)\) are explicitly determined by the above equality and the coefficients of the expansion of \(g_{\nu}\). Inverting explicitly, we get \[P_{\nu}^{-1}(2\varepsilon)=\sqrt{r^{2}(h(\nu))+\frac{2\varepsilon}{C(0)}}-r(h(\nu))=\tilde{\eta}\left(\frac{2\varepsilon}{C(0)},r(h(\nu))\right),\] where \(\tilde{\eta}\) is as defined before in Definition 3.3, and \(r(\nu):=\frac{1-e^{-\frac{2\sqrt{\nu}}{1-\rho(\nu)\sqrt{\nu}}}}{2C(0)}.\) By (4.3), \[\eta(2\varepsilon,\nu)=\theta_{-x_{1}^{\nu}}\circ g_{\nu}^{-1}(2\varepsilon)=\chi_{\nu}\Big{(}\tilde{\eta}\big{(}\frac{2\varepsilon}{C(0)},r(h(\nu))\big{)}\Big{)}, \tag{4.4}\] where \(\chi_{\nu}:=\psi_{\nu}^{-1}\in\mathrm{Diff}_{\mathrm{id}}(\mathbb{R},0)\) is tangent to the identity. Note that \(\psi_{\nu}\) is analytic, since the above equality (4.3) can equivalently be written as: \[P_{\nu}^{-1}\circ g_{\nu}\circ\theta_{x_{1}^{\nu}}=\psi_{\nu}.\] Now, \(P_{\nu}^{-1}\circ g_{\nu}\circ\theta_{x_{1}^{\nu}}\) is an analytic germ at \(0\) tangent to the identity for every \(\nu\in[0,\delta)\). Indeed, for \(\nu=0\), \(P_{0}^{-1}(x)=\sqrt{x/C(0)}\), and it follows by the binomial expansion, since \(g_{0}\) is an analytic germ of multiplicity \(2\).
For \(\nu>0\), \(g_{\nu}\) is an analytic germ tangent to the identity at \(x_{1}^{\nu}\), and \(P_{\nu}\) is an analytic diffeomorphism tangent to the identity at \(0\), so the composition \(P_{\nu}^{-1}\circ g_{\nu}\circ\theta_{x_{1}^{\nu}}\) is an analytic diffeomorphism at \(0\) tangent to the identity. **Lemma 4.2** (Properties of the compensator \(\kappa\)).: The following properties of the compensator \(\kappa\) defined in (4.1) hold: \((i)\) \[\frac{d}{dx}\kappa^{k}=-k\kappa^{k+1},\ k\geq 1,\] \((ii)\) \[\frac{d^{k}}{dx^{k}}\alpha(x,\nu)=P_{k+1}\left(\frac{1}{x},\kappa(x,\nu)\right),\ k\in\mathbb{N}_{\geq 1},\] where \(P_{k}\) is a homogenous polynomial in two variables of degree \(k\), with coefficients independent of \(\nu\), and \((iii)\) \[\frac{d}{dx}\log(x+\nu)=\kappa(x,\nu),\ \frac{d^{k+1}}{dx^{k+1}}\log(x+\nu)=(-1)^{k}k!\cdot\kappa(x,\nu)^{k+1},\ k\geq 1.\] Note that, for every homogenous polynomial \(Q_{k}\) of degree \(k\in\mathbb{N}_{\geq 1}\), it holds that \(Q_{k}\left(\frac{1}{x},\kappa(x,\nu)\right)\to c_{k}\frac{1}{x^{k}}\) pointwise, as \(\nu\to 0\), where \(c_{k}\in\mathbb{R}\). **Lemma 4.3**.: Let \(I(\nu,\eta)\) be as in (3.4) and let \(k_{\nu}\), \(C(\nu)\), \(h(\nu)\) be as defined in Theorem B.2. Let \(\tilde{\eta}\) be as in Theorem C. The following expansion in \(\tilde{\eta}\) holds: \[I(h(\nu),k_{\nu}(\eta))=\alpha(C(\nu)\tilde{\eta},2\sqrt{h(\nu)})+\frac{\rho(h(\nu))}{2}\Big{(}\log\tilde{\eta}+\log\Big{(}C(\nu)\tilde{\eta}+2\sqrt{h(\nu)}\Big{)}\Big{)}+\sum_{k=0}^{\infty}N_{k}^{\nu}\left(\tilde{\eta},\kappa\left(C(\nu)\tilde{\eta},2\sqrt{h(\nu)}\right)\right).\] Here, \(N_{k}^{\nu}\) are homogenous polynomials of degree \(k\) whose coefficients depend on \(\nu\), \(k\geq 0\). Proof.: By (3.4), \[I(h(\nu),k_{\nu}(\eta))=\alpha(k_{\nu}(\eta),2\sqrt{h(\nu)})+\frac{\rho(h(\nu))}{2}\log\big{(}k_{\nu}^{2}(\eta)+2\sqrt{h(\nu)}\cdot k_{\nu}(\eta)\big{)}-\Psi_{h(\nu)}^{mod}(x_{0}).\] Recall from Theorem B.2 that \(k_{\nu}(\eta)=C(\nu)\eta+o_{\nu}(\eta)\) on \(V_{1}^{\nu}\) is a sectorial diffeomorphism, with asymptotic expansion as \(\tilde{\eta}\to 0+\) in \(\mathbb{R}[[\tilde{\eta}]]\), for every \(\nu\in[0,\delta)\). On the other hand, by Lemma 4.1, \(\eta=\chi_{\nu}(\tilde{\eta})\), where \(\chi_{\nu}\) is a diffeomorphism tangent to the identity. Therefore, putting \(K_{\nu}:=k_{\nu}\circ\chi_{\nu}\), we get \[k_{\nu}(\eta)=K_{\nu}(\tilde{\eta}),\ K_{\nu}=C(\nu)\cdot\mathrm{id}+h.o.t.,\] where \(K_{\nu}\) is a sectorial diffeomorphism, with asymptotic expansion as \(\tilde{\eta}\to 0+\) in \(\mathbb{R}[[\tilde{\eta}]]\), for every \(\nu\in[0,\delta)\).
We expand, using Lemma 4.2 and denoting by \(\partial_{1}\) the partial derivative with respect to the first variable: \[\alpha(k_{\nu}(\eta),2\sqrt{h(\nu)})=\alpha(K_{\nu}(\tilde{\eta}),2\sqrt{h(\nu)})=\alpha(C(\nu)\tilde{\eta},2\sqrt{h(\nu)})+\partial_{1}\alpha(C(\nu)\tilde{\eta},2\sqrt{h(\nu)})\cdot(K_{\nu}(\tilde{\eta})-C(\nu)\tilde{\eta})+\frac{1}{2}\partial_{1}^{2}\alpha(C(\nu)\tilde{\eta},2\sqrt{h(\nu)})\cdot(K_{\nu}(\tilde{\eta})-C(\nu)\tilde{\eta})^{2}+o_{\nu}((K_{\nu}(\tilde{\eta})-C(\nu)\tilde{\eta})^{2})=\alpha(C(\nu)\tilde{\eta},2\sqrt{h(\nu)})+\sum_{k=1}^{\infty}P_{k+1}\left(\frac{1}{C(\nu)\tilde{\eta}},\kappa\left(C(\nu)\tilde{\eta},2\sqrt{h(\nu)}\right)\right)\cdot(K_{\nu}(\tilde{\eta})-C(\nu)\tilde{\eta})^{k}=\alpha(C(\nu)\tilde{\eta},2\sqrt{h(\nu)})+\sum_{k=0}^{\infty}H_{k}^{\nu}\left(\tilde{\eta},\kappa\left(C(\nu)\tilde{\eta},2\sqrt{h(\nu)}\right)\right).\] Here, the coefficients of \(P_{k}\) do not depend on \(\nu\). The last line is obtained by re-grouping the terms triangularly, where \(H_{k}^{\nu}\) are homogenous polynomials of degree \(k\) whose coefficients depend on \(\nu\), \(k\geq 0\). Furthermore, by the Taylor expansion of the logarithmic term and Lemma 4.2, we get: \[\log\left(k_{\nu}^{2}(\eta)+2\sqrt{h(\nu)}\cdot k_{\nu}(\eta)\right)=\log\left(K_{\nu}^{2}(\tilde{\eta})+2\sqrt{h(\nu)}\cdot K_{\nu}(\tilde{\eta})\right)=\log\left(K_{\nu}(\tilde{\eta})\right)+\log(K_{\nu}(\tilde{\eta})+2\sqrt{h(\nu)})=\log(\tilde{\eta})+r_{\nu}(\tilde{\eta})+\log\left(C(\nu)\tilde{\eta}+2\sqrt{h(\nu)}\right)+\kappa\left(C(\nu)\tilde{\eta},2\sqrt{h(\nu)}\right)\left(K_{\nu}(\tilde{\eta})-C(\nu)\tilde{\eta}\right)+\sum_{k=2}^{\infty}(-1)^{k-1}(k-1)!\cdot\kappa^{k}\left(C(\nu)\tilde{\eta},2\sqrt{h(\nu)}\right)\cdot\left(K_{\nu}(\tilde{\eta})-C(\nu)\tilde{\eta}\right)^{k}=\log\tilde{\eta}+r_{\nu}(\tilde{\eta})+\log\left(C(\nu)\tilde{\eta}+2\sqrt{h(\nu)}\right)+\sum_{k=3}^{\infty}M_{k}^{\nu}\left(\tilde{\eta},\kappa\big{(}C(\nu)\tilde{\eta},2\sqrt{h(\nu)}\big{)}\right).\] Here, \(r_{\nu}\) is a sectorially analytic diffeomorphism, with asymptotic expansion as \(\tilde{\eta}\to 0+\) in \(\mathbb{R}[[\tilde{\eta}]]\), for every \(\nu\in[0,\delta)\). Also, \(M_{k}^{\nu}\) are homogenous two-variable polynomials of degree \(k\) with coefficients depending on \(\nu\). Proof of Theorem C.: We use the expansion (3.17) from Theorem B.2, the fact that \(\eta=\tilde{\eta}+O_{\nu}(\tilde{\eta}^{2})\), which follows by Lemma 4.1, and Lemma 4.3. The expansion follows after regrouping in the same block the terms (with their respective coefficients in \(\nu\)) that merge to the same asymptotic term in \(\tilde{\eta}\) for \(\nu=0\). Note that \(c_{2}(0)\neq 0\), so \(c_{2}(\nu)\neq 0\), for \(\nu\in[0,\delta)\), by continuity. Therefore, all terms in the expansion (3.17) after \(c_{2}(\nu)\cdot I(h(\nu),k_{\nu}(\eta))\eta^{2}\) can be factored through \(c_{2}(\nu)\).

### Expansions in the cases \(\nu=0\) and \(\nu>0\)

We now use the expansion (4.2) from Theorem C to get expansions in \(\varepsilon\), as \(\varepsilon\to 0\).
In the case \(\nu=0\), \(\tilde{\eta}=\sqrt{\frac{2\varepsilon}{C(0)}}\), and (4.2) immediately becomes: \[\ell^{c}(T_{\varepsilon,0})\sim\frac{c_{2}(0)}{C(0)}\tilde{\eta}+\rho(0)\sum_{k=0}^{\infty}b_{k}(0)\tilde{\eta}^{k+2}\log\tilde{\eta}+\sum_{k=1}^{\infty}c_{k}\tilde{\eta}^{k+2}=\frac{c_{2}(0)\sqrt{2}}{C(0)^{3/2}}\varepsilon^{\frac{1}{2}}+\rho(0)\sum_{k=0}^{\infty}c_{k}\varepsilon^{\frac{k+2}{2}}\log\varepsilon+\sum_{k=1}^{\infty}d_{k}\varepsilon^{\frac{k+2}{2}},\ \varepsilon\to 0,\ c_{k},\,d_{k}\in\mathbb{R}.\] Note that logarithmic terms appear only thanks to the nontrivial residual invariant \(\rho(0)\) of the parabolic time-one map for \(\nu=0\). In the case \(\nu>0\), under the notation of Theorem C, by (3.2), \[\tilde{\eta}=\frac{2\varepsilon}{C(0)\sqrt{r(h(\nu))}}+o(\varepsilon)\in\mathbb{R}_{\nu}\{\varepsilon\}.\] Furthermore, \[\alpha(C(\nu)\tilde{\eta},2\sqrt{h(\nu)})\sim-\frac{1}{2\sqrt{h(\nu)}}\log\tilde{\eta}+\frac{1}{2\sqrt{h(\nu)}}\log\frac{2\sqrt{h(\nu)}}{C(\nu)}+\tilde{\eta}\mathbb{R}_{\nu}\{\tilde{\eta}\}\sim-\frac{1}{2\sqrt{h(\nu)}}\log\varepsilon+\frac{1}{2\sqrt{h(\nu)}}\log\frac{C(0)\sqrt{r(h(\nu))h(\nu)}}{C(\nu)}+\varepsilon\mathbb{R}_{\nu}\{\varepsilon\},\] \[\log\left(\tilde{\eta}+\frac{2\sqrt{h(\nu)}}{C(\nu)}\right)\sim\log\frac{2\sqrt{h(\nu)}}{C(\nu)}+\frac{C(\nu)}{2\sqrt{h(\nu)}}\tilde{\eta}+\tilde{\eta}^{2}\mathbb{R}_{\nu}\{\tilde{\eta}\}\sim\log\frac{2\sqrt{h(\nu)}}{C(\nu)}+\frac{C(\nu)}{C(0)\sqrt{r(h(\nu))h(\nu)}}\varepsilon+\varepsilon^{2}\mathbb{R}_{\nu}\{\varepsilon\},\] \[\kappa(C(\nu)\tilde{\eta},2\sqrt{h(\nu)})\sim\frac{1}{2\sqrt{h(\nu)}}-\frac{C(\nu)}{4h(\nu)}\tilde{\eta}+\tilde{\eta}^{2}\mathbb{R}_{\nu}\{\tilde{\eta}\}\sim\frac{1}{2\sqrt{h(\nu)}}-\frac{C(\nu)}{2C(0)h(\nu)\sqrt{r(h(\nu))}}\varepsilon+\varepsilon^{2}\mathbb{R}_{\nu}\{\varepsilon\},\ \varepsilon\to 0. \tag{4.5}\] Inserting (4.5) in (4.2), we see that there are no noninteger powers of \(\varepsilon\) in the expansion, but there are additional logarithmic terms that are not related to a non-zero residual invariant, coming from the compensators. The monomials in the expansion are \(\varepsilon^{k}\), \(k\in\mathbb{N}_{0}\), and \(\varepsilon^{k}\log\varepsilon\), \(k\geq 1\). More precisely: \[\ell^{c}(T_{\varepsilon,\nu})\sim A_{0}(\nu)\,\varepsilon\log\varepsilon+A_{1}(\nu)\,\varepsilon+o_{\nu}(\varepsilon),\ \ \ \ \varepsilon\to 0.\] Note that the first term \(\varepsilon\log\varepsilon\) in the case \(\nu>0\) exists even if the residual invariant \(\rho(\nu)\) is zero. It is related to the hyperbolic nature of the orbit, as compared with the parabolic nature in the case \(\nu=0\), where logarithms appear only thanks to the nonzero residual term.

## 5. Concluding remark: the case \(\nu<0\)

Throughout this paper, we restrict the study of the unfolding (1.1) to parameter values \(\nu\in[0,\delta)\). We explain here the reasons for this restriction. For \(\nu>0\), there are two real singular points. The orbit of \(x_{0}>0\) is _blocked_ at one of these singular points. If \(\nu<0\), there are no _real_ singular points and the real orbit passes near zero and goes through to \(-\infty\) in a finite time. For \(\nu\geq 0\), we calculate the critical time using the Fatou coordinate. In order to have a uniform expansion, we need some continuity of the Fatou coordinates with respect to \(\nu\) (continuity of the domain of definition and of the function).
For \(\nu=0\), the Fatou coordinates are defined on Ecalle-Voronin (petal-type) domains for the parabolic singular point. For \(\nu>0\), we use the Fatou coordinate defined in a full neighborhood of one (right-most) hyperbolic singular point, which extends until the left singular point, where it ramifies. Its domain and the Fatou coordinate itself converge to the Ecalle-Voronin Fatou coordinate, as \(\nu\to 0\) (see also Glutsyuk [11]). For \(\nu<0\), the _real_ orbit of \(x_{0}>0\) passes between the two (complex) singular points, and we must use a different domain of the Fatou coordinate, passing between the two singular points, which must be continuous up to \(\nu=0\). Here, the natural domain of definition is a _crescent-like_ domain with the two tips at the two complex critical points. This approach was studied by Lavaurs in [13] and taken up again in [3]. In [3], the authors make precise the difference between the two charts, which they call the Lavaurs and Glutsyuk charts. In this case, there is no natural real point at which the expansion is done. One rather obtains one _right_ Fatou coordinate tending to the right Ecalle-Voronin Fatou coordinate, for \(\nu\to 0\), and similarly on the left-hand side. Moreover, the opening of the passage between the two singular points for \(\nu<0\) gives a mapping between the two domains: as each crescent-like fundamental domain, for \(\nu<0\), corresponds holomorphically (by passing to the quotient) to a Riemann sphere with two marked points, a global mapping between these crescents corresponds to a global mapping of \(\mathbb{CP}^{1}\) preserving two points (\(0\) and \(\infty\)), hence to a linear mapping determined by one number, which is called the _Lavaurs period_. Note that the Lavaurs period does not have a limit as \(\nu\) tends to zero. The study of the critical time for \(\nu<0\) will certainly involve all these different concepts. We plan to pursue this complex approach for \(\nu<0\) in a forthcoming investigation.
2302.13437
Suitability of Quantized DEVS LIM Methods for Simulation of Power Systems
The suitability of the QDL method for analyzing the performance of ac power systems has been evaluated by application to a microgrid. The QDL method is based on a combination of Quantized State Systems (QSS) methods and the Latency Insertion Method (LIM). The accuracy and computational intensity of QDL simulations were evaluated relative to an industry-standard reference method. The key advantages expected of the QDL approach include high computational efficiency when the system is operating in steady-state and, when not in steady-state, the need to update only those states that have been affected by quantum-level changes of connected states. The expected advantages were largely realized, but with some complications that remain unresolved and require further research, such as limit cycle oscillations that emerge after disturbances in some states that should have returned to stationarity. Also, the method in its current state may not be feasible for fault analysis because of the low computational efficiency that results when large disturbances (e.g. faults) create large excursions simultaneously in many system states. The relative strengths and weaknesses of the method are discussed, and some improvements to the method are proposed to overcome the weaknesses. The revelation of observed problems is intended to inspire additional research to overcome those problems.
Navid Gholizadeh, Joseph M. Hood, Roger A. Dougal
2023-02-26T23:30:16Z
http://arxiv.org/abs/2302.13437v1
# Suitability of Quantized DEVS-LIM Methods for Simulation of Power Systems

###### Abstract

The suitability of the QDL method for analyzing the performance of ac power systems has been evaluated by application to a microgrid. The QDL method is based on a combination of Quantized State Systems (QSS) methods and the Latency Insertion Method (LIM). The accuracy and computational intensity of QDL simulations were evaluated relative to an industry-standard reference method. The key advantages expected of the QDL approach include high computational efficiency when the system is operating in steady-state and, when not in steady-state, the need to update only those states that have been affected by quantum-level changes of connected states. The expected advantages were largely realized, but with some complications that remain unresolved and require further research, such as limit cycle oscillations that emerge after disturbances in some states that should have returned to stationarity. Also, the method in its current state may not be feasible for fault analysis because of the low computational efficiency that results when large disturbances (e.g. faults) create large excursions simultaneously in many system states. The relative strengths and weaknesses of the method are discussed, and some improvements to the method are proposed to overcome the weaknesses. The revelation of observed problems is intended to inspire additional research to overcome those problems.

Simulation, Power Systems, Stiff systems, Quantized State Systems

**Acronyms**

**QDEVS:** Quantized Discrete Event System Specification

**QSS:** Quantized State System

**LIQSS:** Linear Implicit QSS

**LIM:** Latency Insertion Method

**QDL:** Quantized DEVS-LIM Method

**Definitions**

**Computational Intensity**: the number of updates of a QDL atom per unit of simulated time.

**Normalized RMS Deviation:** For the \(i\)th state variable, the RMS difference between values computed by the reference method and the QDL method, with the QDL output resampled at the time step of the reference simulation, and normalized to the dynamic range of the reference simulation over the \(N\) samples: \[D_{\textit{rms, normalized}}=\frac{\sqrt{\sum_{j=1}^{N}\frac{\left(y_{ij}-q_{ij}\right)^{2}}{N}}}{\max(y_{i})-\min(y_{i})}\] where \(j\) is the index of the time series.

**Maximum Absolute Percentage Deviation:** For the \(i\)th state variable, the largest absolute percentage deviation between the value computed by the reference method and the QDL method over the time series of \(N\) data points: \[D_{max}=\max_{j}\frac{\lvert y_{ij}-q_{ij}\rvert}{\lvert y_{ij}\rvert}\times 100\%\]

**Reference solution**: the time-domain solution of a system model (described by a set of DAEs) obtained by use of a 5th order implicit Runge-Kutta method of the Radau IIA family as described in [10] and commonly used for power system solutions.

**Quantization step size**: A constant that defines the set of discrete values the output of a state variable may assume. The output value of a state variable is \(\boldsymbol{\Delta Q}\cdot\boldsymbol{k}\), where \(\boldsymbol{\Delta Q}\) is the quantization step size, and \(\boldsymbol{k}\) is an integer (\(\boldsymbol{k}\in\mathbb{Z}\)).

**QDEVS**: Quantized Discrete Event System Specification

**LIQSS**: Linear Implicit QSS. Uses a linear approximation of the future state derivatives to predict the time to reach the next quantized state.
**QDL Atom**: A computational unit (or programming object) that stores and updates the continuous value of a single state variable, and its quantized output value.

**QDL solution**: The time-domain solution of a system model obtained by applying the QDL method.

## 1 Introduction

For decades, power system simulation has been accomplished using well-established modeling and simulation methods such as State Space, Modified Nodal Analysis (MNA), and time-slicing numerical integration algorithms. Recent developments in Quantized Discrete Event System (QDEVS) methods, however, appear to present opportunities to innovate power system simulation. In [1] the authors reported how Quantized State System (QSS) methods [2] could be combined with the Latency Insertion Method [3] to solve otherwise intractable problems in electric circuits. The purpose of the research reported here is to evaluate the suitability of those combined methods, abbreviated as QDL (Quantized-DEVS with Latency Insertion), for analyzing the performance of electric power systems, with emphasis on accuracy and efficiency when simulating both slow and fast transient phenomena following common system perturbations such as changes in load level or control setpoints. The use of the quantized state methods for simulating power systems is motivated by these facts:

1. Some types of power systems often run for long times in near steady state conditions. In these steady states, quantized states should require few or zero updates, so a QDL simulation of the system should be able to advance rapidly through time in between disturbances.
2. If some parts of a system do not remain steady, only the states that move enough to change their quantized outputs will cause changes to propagate to other parts of the system. Stationary states whose inputs do not change should remain relatively stationary.
3. Choice of quantizer size may permit trade-offs between simulation speed and accuracy.

This study itself was further motivated by a desire to discover the types of problems that might be encountered when applying the method so that the community of interested researchers can offer solutions to the problems and ideas for further improvement. The reference system for this study was a small three-phase ac electric power network having the following components:

1. A generator set, comprising a turbine engine with a speed governor driving a synchronous machine having a controlled exciter,
2. A set of three loads, including an induction motor, a constant impedance ac load, and an ac/dc converter supplying a resistive load, and
3. Several power cables that connect the three buses of the electrical network.

The reference power system exhibits a range of slow and fast dynamics that effectively exercise the QDL method. The slowest dynamics are associated with the mechanical states, the fast dynamics are associated with electrical states, and the speeds of electro-magnetic and magneto-mechanical dynamics fall in between. In order to realize the benefits of the QDL method, the sinusoidally-varying physical system variables were transformed into a rotating reference frame (e.g. Park transformation [21]) so that the system's state derivatives can be zero in steady-state. The set of system equations is stiff, which poses challenges for conventional time-slice simulation methods with respect to efficiency and numerical stability but presents an opportunity for the strengths of the QDL method to be demonstrated.
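Before reviewing the background methods, the two deviation metrics defined earlier can be stated concretely. A minimal sketch (ours, in Python/NumPy; the synthetic signals are only for illustration): given a reference trajectory \(y_i\) and a QDL trajectory \(q_i\) resampled onto the reference time grid, the metrics are computed as follows.

```python
import numpy as np

def normalized_rms_deviation(y, q):
    """RMS of (y - q), normalized to the dynamic range of the reference y."""
    return np.sqrt(np.mean((y - q) ** 2)) / (np.max(y) - np.min(y))

def max_abs_percentage_deviation(y, q):
    """Largest |y - q| / |y| over the time series, in percent."""
    return np.max(np.abs(y - q) / np.abs(y)) * 100.0

# Example with a synthetic reference and a coarsely quantized copy of it:
t = np.linspace(0.0, 1.0, 1000)
y = np.sin(2 * np.pi * t) + 2.0          # offset keeps y away from zero
q = np.round(y / 0.05) * 0.05            # quantum size dQ = 0.05
print(normalized_rms_deviation(y, q), max_abs_percentage_deviation(y, q))
```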
## 2 Background

Key concepts of the QDL method are repeated here for completeness, and because [1], a conference publication, may not be widely available. QDL is a novel method for time-domain modeling and simulation that combines the principles of the Quantized State System (QSS) family of integration methods, the Quantized Discrete Event System (QDEVS) specification, and the Latency Insertion Method (LIM) modeling approach. LIM allows one to cast a power system model into a QDEVS form. Once a system is described in terms of the QDEVS specification, it can be solved using various QSS integration methods.

### Quantized State System Methods

Quantized states are important to achieving our simulation objectives. Any variable whose movement is smaller than its output quantization size will not induce new updates in states connected to it. This contributes to low computing load and fast advancing of simulation time. Discrete event methods are likewise important. When new events are sparse, simulation time can rapidly advance. The well-known Quantized Discrete Event System Specification (QDEVS) [4] provides a useful framework for our research. Here we reiterate a few of the principles of QDEVS to put our work into perspective. The Quantized State System (QSS) methods are a series of integration methods based on the QDEVS specification, and described in [4]. These QSS methods provide a QDEVS-compliant way to simulate continuous systems. The QSS approach assumes that a generic continuous state equation system \[\dot{\mathbf{x}}=\mathbf{f}(\mathbf{x}(t),\mathbf{u}(t))\] can be approximated by a Quantized State System (QSS) in the form \[\dot{\mathbf{x}}=\mathbf{f}(\mathbf{q}(t),\mathbf{u}(t))\] where \(\mathbf{q}\) is a quantized state vector that follows piecewise constant trajectories and is related to the state vector \(\mathbf{x}\) by the quantum size \(\Delta Q\). References [13], [14] define the structure and implementation of atomic DEVS models and general-purpose simulators for QSS systems. The QSS approach guarantees a bounded error [5], so analytically stable systems cannot become numerically unstable when being simulated by a fully-coupled QSS algorithm [14]. The family of QSS integration methods [4] is extensive. The simplest formulation, QSS1, developed in [4], [5], relies on explicit integration and uses first-order estimates of state derivatives to predict the time at which the continuous state \(x_{i}(t)\) will increase or decrease by the amount \(\Delta\)Q (quantization step size) from the current quantized value \(q_{i}(t)\) to the next higher or lower quantized value. Although QSS1 has some advantages, such as being easy to implement, its disadvantage is that it uses only a first order approximation of the state trajectory to calculate the time to the next event; to get accurate results, \(\Delta\)Q has to be relatively small, which produces a large number of steps. QSS2 [6] and QSS3 [7] use second and third order approximations to more accurately estimate the next event time; however, the computational cost grows with the square root and cubic root (respectively) of the desired accuracy. Furthermore, with stiff systems, these explicit integration methods create fictitious high frequency oscillations [15] that generate large numbers of steps that are costly in computational time and memory size, even when a system is nominally in steady state.
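To make the mechanics concrete, the following toy sketch (ours, not from [4]) runs a first-order QSS-style update loop on the single test state \(\dot{x}=-x\): each internal event advances time by exactly the interval needed for the state to traverse one quantum, and the loop terminates when the quantized derivative reaches zero, i.e. in steady state no further events are generated. Hysteresis and reinitialization details of the full QSS1 algorithm are omitted.

```python
# Toy first-order quantized-state integration (ours) of x' = -x with quantum dQ.
def qss1_decay(x0=1.0, dQ=0.05, t_end=5.0):
    t = 0.0
    k = round(x0 / dQ)               # quantized level index, so q = k * dQ
    events = [(t, k * dQ)]
    while t < t_end:
        dx = -(k * dQ)               # derivative evaluated at the quantized state
        if dx == 0.0:
            break                    # steady state: no further internal events
        dt = dQ / abs(dx)            # time to traverse one quantum
        t += dt
        k += 1 if dx > 0 else -1     # step to the next quantized level
        events.append((t, k * dQ))
    return events

for t, q in qss1_decay()[:6]:
    print(f"t={t:.3f}  q={q:.2f}")
```

Note how the inter-event interval grows as the state approaches equilibrium, which is exactly the behavior that lets a quantized simulation advance rapidly through quiescent periods.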
Because we are interested in simulating stiff power systems that include both fast electrical dynamics and slow mechanical dynamics, we chose instead to use the Linear Implicit Quantized State System (LIQSS) methods, which were specifically developed to address the concurrent existence of slow and fast dynamics that are inherent to stiff systems [15]. LIQSS methods implement classic implicit integration techniques into the QSS methods. Similar to the way that several variations of QSS methods were developed, so also were variations of LIQSS such as LIQSS1, LIQSS2 and LIQSS3, which perform first, second and third order approximations respectively [15]. The LIQSS2 [15], [16] and mLIQSS2 [15] methods all offer improvements in performance and stability over the original LIQSS1. Despite the benefits of LIQSS2 or 3 or mLIQSS, two reasons compelled us to use the simpler LIQSS1 [15], [16] in this study. First, it was easier to implement the necessary models using LIQSS1, and second, we anticipated that using the first-order method would make it easier to distinguish latency effects from integration effects. If latency methods usefully improve simulator performance for first-order methods, then extensions can later be made to higher-order variations of LIQSS, perhaps with additional gains in performance and stability.

### Latency Insertion Method

QSS methods do not intrinsically handle the algebraic constraints that are required to enforce energy conservation in electric circuits. For that purpose, the originators of the QDL method [1] relied on the Latency Insertion Method (LIM) [3]. The Latency Insertion Method replaces algebraic coupling in the system equations with time latency, while still enabling the benefits of modularization and automatic enforcement of energy conservation constraints that are essential for electric circuits. Traditional power system modeling and simulation tools like general state space models [12] or nodal system model representations, such as the popular Modified Nodal Analysis (MNA) method, are well-suited for power system modeling and simulation because they allow one to modularize the software and they intrinsically enforce the conservation of energy as expressed by Kirchhoff's voltage and current circuit equations. However, these cannot be directly used with QSS integration methods because they require linear algebra solutions. LIM is especially valuable for systems containing nonlinear elements, which are ubiquitous in power systems. When the number of nonlinear elements becomes large, traditional matrix-based solution methods become inefficient. The latencies required by LIM can be realized in either of two ways: by inserting small, fictitious latency at nodes and branches where negligible physical latency exists, or by exploiting latency (like capacitance or inductance) that naturally exists in system components, but which might otherwise have been eliminated in efforts to simplify a model. LIM permits full node-level system partitioning. To achieve generality, LIM assumes that Thevenin or Norton transformations can convert any branch or node of a system to this topology. Figures 1 and 2 respectively show how a generic node and a generic branch are represented in the LIM approach. The generic LIM node model shown in Figure 1 includes a voltage-controlled current source (VCCS) and a current-controlled current source (CCCS). These dependent sources provide straightforward means for modeling energy-conversion coupling circuits, which are common in power systems.
\(C_{i}\) is the capacitance at node \(i\) and is the element that provides the node voltage latency, \(v_{i}\) is the node voltage, \(H_{i}\) is the current source at node \(i\), \(R_{i}\) is the parallel resistance (with conductance \(G_{i}\)) at node \(i\), \(B_{ik}\) is the coefficient of the VCCS at node \(i\) controlled by the voltage at node \(k\), and \(S_{ip}\) is the coefficient of the CCCS feeding node \(i\) and controlled by the current in branch \(p\). The KCL equation for the \(i^{\text{th}}\) node is:

\[C_{i}\,\frac{d}{dt}\,v_{i}(t)+G_{i}\,v_{i}(t)-H_{i}(t)-B_{ik}\,v_{k}(t)-S_{ip}\,i_{p}(t)=\sum_{k=1}^{M_{i}}i_{ik}(t)\]

where \(M_{i}\) is the number of branches connected to node \(i\).

Figure 1: Generic LIM node with dependent sources.

The generic LIM branch model with dependent sources is shown in Figure 2, with physical interpretations analogous to those of the LIM node model. The KVL equation from the \(i^{th}\) node to the \(j^{th}\) node is:

\[L_{ij}\,\frac{d}{dt}\,i_{ij}(t)+R_{ij}\,i_{ij}(t)-E_{ij}(t)-T_{ijk}\,v_{k}(t)-Z_{ijpq}\,i_{pq}(t)=v_{i}(t)-v_{j}(t)\]

Figure 2: Generic LIM branch with dependent sources.
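As a minimal sketch of how these two update equations look in code (the names and signatures are our own illustrative assumptions, not the API of [1] or [3]), each state derivative can be evaluated locally from the neighboring quantized values:

```python
def lim_node_dvdt(C, G, H, B, S, v_i, v_k, i_p, i_branch_sum):
    """dv_i/dt from the node KCL equation above: solve for the
    capacitor current, then divide by the capacitance C."""
    return (H + B * v_k + S * i_p - G * v_i + i_branch_sum) / C

def lim_branch_didt(L, R, E, T, Z, i_ij, v_i, v_j, v_k, i_pq):
    """di_ij/dt from the branch KVL equation above: solve for the
    inductor voltage, then divide by the inductance L."""
    return (v_i - v_j - R * i_ij + E + T * v_k + Z * i_pq) / L
```

Because each derivative depends only on locally available quantized quantities, a QDL atom can schedule its own next event without any global matrix solution, which is the essential property that makes LIM compatible with QSS integration.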
## 3 Reference System Model

The simulation model of the reference system was formulated as follows. The dynamics of the prime mover and the speed governor were combined into a single turbo-governor model. The power converter was described by a non-linear model time-averaged over the switching period. The cables were represented using standard \(\pi\) lumped circuit models. All device equations were written in the direct and quadrature coordinates of a rotating reference frame according to the Park transformation, which effectively transforms sinusoidal quantities into their phasor equivalents. The system model has 32 state variables, which makes it simple enough to implement using the novel simulation method, yet large and non-linear enough to demonstrate the practical utility of the method for analyzing realistic power systems.

Figure 3: Reference power system schematic

The equations of the system model are computed by a set of atoms, where an atom is a computational unit (or programming object) that uses a QDL integration method to update, store, and broadcast a single state variable's continuous internal value and its quantized output value. A model of a single power system component will comprise as many atoms as it has state variables. For each component of the reference system, a model was created compliant with the QDL approach. The simple model of a power cable is shown next to illustrate the approach. QDL representations of the other system components are described in the appendix.

### QDL Formulation of Cable Model

The QDL model of a power cable is derived by starting from the traditional lumped \(\pi\) circuit model, which introduces the preferred voltage latency at the end points (the capacitances C/2) and current latency from end to end (the inductance L). Figure 4 shows the circuit equations in terms of the rotating dq coordinates according to the method described in [18], with separate equations for the direct and quadrature components of the currents and voltages.
This yields the following two equations for the q-axis state variables:

\[\frac{d}{dt}i_{q}(t)=\frac{-(R+\omega L)\,i_{q}(t)+\left(v_{q1}(t)-v_{q2}(t)\right)}{L}\]

\[\frac{d}{dt}v_{q1}(t)=\frac{-\left(\frac{G+\omega C}{2}\right)v_{q1}(t)+\sum i_{branch}}{C/2}\]

where the cable parameters correspond to those shown in Figure 4, and \(\sum i_{branch}\) is the sum of the currents entering node \(q_{1}\) from all external branches connected to the node. Similar equations describe the other states \(v_{q2}\), \(v_{d1}\), \(v_{d2}\) and \(i_{d}\) (these derivative equations are sketched in code further below). Models for all other system components were similarly developed, with some details presented in the appendix, then assembled into the system model used in the experiments.

Figure 4: The LIM branch model for a lumped \(\pi\) transmission line in the dq frame using a dynamic phasor representation.

## 4 Simulation Experiments

The electrical connections between the components in the system model were managed programmatically in the simulation framework by mapping QDL node ports to the connected QDL branch ports and summing together the series LIM model contributions connected to a particular system node.

The accuracy and performance of the QDL method were assessed by comparison to a benchmark solution. Conveniently, the QDL formulation of the power system can be used to provide the full state-space description of the system, including the system Jacobian matrix. Therefore, it was possible to run a benchmark simulation by integrating the system equations using a standard time-slicing integration method, specifically a 5th-order implicit Runge-Kutta method of the Radau IIA family described in [10], a method well-suited for very stiff, non-linear systems. A fixed time step was chosen, with a size appropriate to easily accommodate the fastest eigenvalues of the system.

Accuracy and performance of the QDL method were assessed by exercising the reference power system model through scenarios involving two different types of disturbances. The first disturbance was a load increase, implemented as a 20% step increase of the active power consumed by the RL load on bus 3. The second disturbance was a change of control input, implemented as a 2% increase of the voltage control setpoint of the exciter, from 1.00 per unit to 1.02 per unit. Simulation accuracy was assessed by comparing the deviation of the QDL solution from the reference solution, and simulation performance was assessed by comparing the number of state updates required for the QDL solution to the number of timesteps in the reference solution. The performance assessment was necessarily a coarse measure; it compared the performance of new, far-from-optimized code to mature solver code, and there was not a one-to-one alignment of the metrics, but at least the results shed some light on the computational performance that is likely to be achievable with the QDL method.

In general, the state trajectories computed by the QDL method were found to agree quite well with those computed by the reference simulation. The QDL method was expected to generate more frequent computing events immediately following a perturbation, and then less frequent events as the system approached a new steady state. This was sometimes found to be true. The QDL method was expected to update some variables at rates much different than for other variables, and this was also found to be true. But troublingly, the values and update rates of some variables were sometimes inconsistent with the reference simulation and inconsistent with our performance expectations. The expected and the unexpected behaviors will be explained in detail next.
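Before turning to the results, and returning to the cable equations of the previous section, the following is a minimal sketch of how a cable atom might evaluate its q-axis derivatives; the function names and argument lists are our illustrative assumptions, not the actual QDL framework API.

```python
def cable_diq_dt(R, L, w, i_q, v_q1, v_q2):
    """Rate of change of the q-axis branch current, per the first
    cable equation (w is the synchronous frequency in rad/s)."""
    return (-(R + w * L) * i_q + (v_q1 - v_q2)) / L

def cable_dvq1_dt(G, C, w, v_q1, i_branch_sum):
    """Rate of change of the q-axis sending-end voltage, per the second
    cable equation; i_branch_sum collects the external branch currents."""
    return (-((G + w * C) / 2.0) * v_q1 + i_branch_sum) / (C / 2.0)
```

In a QDL simulation, each of these derivative evaluations would be wrapped in a LIQSS1 atom that quantizes its output and schedules its own next event.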
7) For many of the atoms, the number of update events was significantly less than the number of time steps in the reference solution (labeled as ODE time steps), Notably, for the induction machine speed, the number of updates was smaller by more than two orders of magnitude. 8) For most other atoms, the number of updates was approximately one order of magnitude smaller than the number of ODE time steps. 9) For the fastest-moving variables such as the quadrature axis component of current on cable 2-3 the number of updates was slightly more than the number of ODE time steps but note that the computational cost of updating these individual atoms was very small. 10) If one approximates the computational cost of the QDL simulation by the sum of the update events for all atoms, and if most atoms have update rates one order of magnitude smaller than the ODE solution, and with the total number of system states being of order 10, then the computational costs of the two methods are similar. This is noteworthy given that the QDL algorithms are far from optimized while the reference algorithm is extremely mature. Figure 5: From top to bottom, rotational speed of the induction machine, direct component of the current of the induction machine, direct component of the voltage of the induction machine respectively, prior to and after the increase of the active power consumed by the RL load. In the second scenario the system was perturbed by increasing the voltage reference of the voltage regulator on the generator (\(V_{ref}\)) by 2% from 1 per unit to 1.02 per unit at t = 1 second. This perturbation of the setpoint created a smaller and less-abrupt dynamic event in the power system compared to the step increase of load that was invoked in the first scenario. Figure 7 shows the same system variables as were plotted for the first experiment. The following additional observations can be drawn: 1. The machine speed exhibited a damped oscillation at the same 16.5 Hz system frequency as was seen in Figure 5, but with much lower amplitude - less than 0.01 rad/s (note the finer resolution on the vertical axis of Figure 7 compared to Figure 5). The amplitude computed by the QDL method was slightly larger than that computed by the reference method. A lower frequency mechanical mode at 0.4 Hz is also evident, about half-a-period of which can be seen in Figure 7, accurately tracked the reference solution with an amplitude of about 0.01 rad/sec. (This 0.4 Hz mode was not evident in Figure 5 due to the coarser scale of the speed axis. More details of this mode will be evident in subsequent figures.) These features were generally as expected, but the reader should pay attention to the slightly higher amplitude of the 16.5 Hz mode that was computed by the QDL method, as this mode will later in the paper be associated with unexpected behavior. Figure 6: Cumulative QDL atom updates vs. simulation time, induction machine states 2. The time evolutions of the three plotted system states were consistent with those of the reference solutions. The maximum deviation over the simulation duration was again less than 0.001%. This is as expected. 3. The cumulative update counts of the current and voltage variables at the end of two seconds were roughly the same as in the first experiment, which is consistent with the movements of those state variables through roughly the same range. 
4. The cumulative update count of the synchronous machine rotor speed \(\omega_{r}\) was only 1/10 of that in the first scenario, which is also consistent with the smaller range of the speed response in the second scenario.

Despite the excellent performance demonstrated thus far, the experiments also revealed at least three interesting problems. A first problem was that some system variables never returned to a quiescent condition when the system should have reached a new steady state. Instead, persistent low-amplitude narrow-band oscillations developed, which caused persistent state update events that should not have been necessary. This diminished the expected high computational efficiency in what should have become a new steady state. A second problem was that seemingly random noise (narrowband but random in amplitude) occurred in some system variables after the initial perturbation, even though the underlying (longer-term) behavior was correctly computed. A third problem was that the noisy behavior of some state variables obscured the underlying behavior, even when the underlying behavior was correct and could be recovered by low-pass filtering. Each of these three unexpected problems will be explained in more detail next.

Figure 7: Induction machine speed, direct component of the current and direct component of the voltage, respectively, after the AVR voltage setpoint was increased by 2% at 1.0 second (zoom to 0.8-2.0 seconds).

## 6 Interesting Problems

Figure 8 shows the speed of the induction machine (\(\omega_{r}\)) after the step increase of the RL load, but at three different levels of time and amplitude resolution, in order to make clear some of the coarse and fine details that were not apparent in Figure 5. In Figure 8, one can see that the initial speed oscillation at 16.5 Hz was followed by a lower-amplitude damped speed oscillation at 0.4 Hz. The 0.4 Hz component of the speed oscillation computed by the QDL method followed the reference solution rather well in amplitude and phase, but it seemed to be contaminated with a noise signal that had a center frequency curiously close to 16.5 Hz, which, recalling Figure 5, was the natural frequency of the (correctly computed) prompt speed oscillation.

In the upper graph of Figure 8, near time t = 1 second, it is clear that the 16.5 Hz oscillation is rapidly damped, and that the general character of the speed response computed by QDL after damping agrees well with that computed by the reference simulation. This confirms that the correct prediction of machine speed shown in Figure 5 continues not just for two seconds, but at least out to 10 seconds. The middle graph of Figure 8 expands the speed scale to show that the correct long-term behavior of the 0.4 Hz mode oscillation is somewhat contaminated by noise having a frequency around 16 Hz. Comparing the noise amplitude between 2 and 4 seconds to that between 14 and 20 seconds (the bottom graph of Figure 8), the noise amplitude appears to increase with time. As an aside, the upper graph of Figure 8 also shows, with the higher-resolution scale of the cumulative updates, that the update rate of \(\omega_{r}\) was high (about 2000 updates during a fraction of a second immediately after the disturbance) during the transient response, and much lower after the prompt transient died away, but it never returned to zero. Similar characteristics are evident in Figure 9, which shows the responses after increasing the AVR setpoint.
The average of the speed trajectory computed by the QDL method accurately shows the expected amplitude, phase, and damping rate of the 0.4 Hz response, but it is contaminated by the same type of higher-frequency noise that was seen in Figure 8, and the amplitude of this noise element seems to grow slowly with time as the system approaches the new steady state. Between t = 18 and 20 seconds, the amplitude is the largest. Separate tests (not shown here) revealed that the noise did not grow further at later times but instead reached a limiting amplitude comparable to the amplitude that was seen around t = 20 s.

Figure 8: QDL solution of the speed of the induction machine after the load step at t = 1 s. Upper graph: the first 10 seconds. Middle graph: an expanded vertical scale to show details of the high-frequency noise during the first 10 seconds. Lower graph: the same vertical scale as the middle graph, but showing the QDL solution at later times, from 10 to 20 seconds.

Looking at the data presented in Figure 9 and observing that the average of the QDL simulation result appears to coincide with the reference simulation result, one might infer that post-processing of the QDL data with a low-pass filter would yield a result that better tracks the reference simulation. To test this, the QDL data was filtered with a 6th-order discrete Butterworth low-pass filter having a cutoff frequency of 100 Hz, applied in both the forward and backward directions to cancel the phase shift, in order to preserve the transient response as much as possible.

Figure 9: QDL solution of the induction machine speed after the AVR voltage setpoint was increased by 2 percent at 1 second. a): the update count rises steadily after the perturbation. b): the QDL solution plotted from 10 to 20 seconds; the low-amplitude QDL oscillations cause the slope of the cumulative update line to increase steadily.

Figure 10 shows a filtered version of the quadrature component of the synchronous machine current (\(i_{qs}\)) following the perturbation of the first scenario (step increase of RL load), over a sixty-second interval, much longer than that shown in the prior figures. The higher-amplitude oscillation at 16.5 Hz that occurred immediately at the time of the disturbance is not quite discernible on this time scale, but it was present, and it was followed by the expected lower-amplitude damped oscillatory response at 0.4 Hz. The QDL solution accurately followed the reference solution for about the first ten seconds. After that, one can see that the solution computed by the reference method became fully damped to a new steady state, whereas the QDL simulation (low-pass filtered) continued to oscillate indefinitely at 0.4 Hz. The QDL simulation method apparently introduced a pumping effect at the 0.4 Hz frequency characteristic of the mechanical mode. The curve of the filtered QDL solution can be approximated by the analytic function:

\[x(t)\cong A\cdot e^{-\lambda t}\left(\cos(\omega t+\varphi)+\sin(\omega t+\varphi)\right)+x_{dc}+B\cdot\cos(\omega t+\varphi)\]

with estimated parameters \(A=72\), \(\lambda=0.4\ s^{-1}\), \(\omega=2.5\) rad/s, \(\varphi=3.14\), \(x_{dc}=6677\) and \(B=5\).

Figure 10: Quadrature component of the synchronous machine stator current (filtered, fc = 100 Hz).
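As a worked check of the fitted expression, using the parameter values quoted above, the function is trivial to evaluate numerically (a sketch; the variable names are ours):

```python
import numpy as np

# Parameter values estimated in the text for the filtered i_qs trace
A, lam, w, phi, x_dc, B = 72.0, 0.4, 2.5, 3.14, 6677.0, 5.0

def x_fit(t):
    """Analytic approximation of the filtered QDL i_qs solution."""
    decay = A * np.exp(-lam * t) * (np.cos(w * t + phi) + np.sin(w * t + phi))
    return decay + x_dc + B * np.cos(w * t + phi)

t = np.linspace(0.0, 60.0, 6001)
x = x_fit(t)
# After ~10 s the decaying term is negligible, leaving only the persistent
# 0.4 Hz ripple of amplitude B around the nominal value x_dc.
```

Note that \(\omega=2.5\) rad/s corresponds to \(2.5/2\pi\approx 0.40\) Hz, consistent with the mechanical mode discussed above.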
The QDL simulation correctly computed the nominal current of 6677 A, the amplitude and frequency of the decaying oscillations (30 A and 0.4 Hz respectively) just after the disturbance, and the correct initial damping rate of 0.16 \(s^{-1}\), but the oscillations ultimately failed to die away; \(i_{qs}\) exhibited persistent oscillation with an amplitude of about 9.96 A at the characteristic frequency of 0.4 Hz. Reference [13] revealed that oscillations of this type seem to have a complex relationship with the quantization step size, the initial conditions, and the nature (amplitude and rate) of the applied perturbation. The simulation of stiff systems using traditional QSS methods is known to result in high-frequency oscillations of quantized state quantities that should be nominally in a steady-state condition [2][10][11]. The natural frequency is prominent in both the QDL and reference simulations immediately after the perturbation, but only in the reference case do the oscillations die out. In the QDL method, steady-state oscillations persist indefinitely. The more advanced QSS methods [14][15] should be investigated.

Although this post-processing approach (low-pass filtering of the noisy result) might be viable, it is clearly undesirable, as it requires the person running the simulation to know which frequencies to keep and which to eliminate, which may not be possible unless one has also run a reference simulation. Obviously, if one already had the reference simulation, it would seem to obviate the need for the new simulation method.

While the machine speed computed by the QDL method does correctly track the reference solution in spite of some amount of noise, some other states exhibited such noisy behaviors that the noise obfuscated the underlying average behavior. The worst cases for noisy states were associated with cables, with the biggest deviation being found in the quadrature-axis current of the cable connecting bus 2 to bus 3. We suspect that the cables exhibited the largest deviations because they also had the highest natural frequencies. Figure 11 shows the RMS deviation of the system variables, each normalized to its own range of motion (a very stringent criterion for accuracy compared to normalizing to the nominal value), sorted from largest to smallest, and averaged over the entire 60-second duration of the simulation. The normalized RMS deviation (defined by eq. 1) was largest in the cables and the AVR system, while the deviations in other state variables were below 4%.

Applying the same low-pass filter, but with a cutoff frequency of 50 Hz instead of 100 Hz, produced the data shown in Figure 12, Figure 13 and Figure 14, each on a different time scale. The whole 60-second behavior of the filtering is shown in Figure 12. The filtered version of the QDL solution (Figure 13) tracks the benchmark simulation very well through the initial transient response (although a small filtering artifact, not resolvable here, does appear at the instant of the perturbation). Figure 14 expands the last 5 seconds of the simulation, showing the oscillations in the filtered signal. The filtering process reduced the normalized RMS deviation of the QDL solution from 9.30% to 1.02%. It is expected that filtering can be applied to the other states with similar results.
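A sketch of the post-filtering and deviation metric described above is given below, assuming the event-based QDL output has first been resampled onto a uniform time grid; the scipy function names are real, while everything else is our illustrative choice.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass_zero_phase(x, fs, fc=50.0, order=6):
    """Discrete Butterworth low-pass applied forward and backward
    (zero phase), as used to post-process the noisy QDL traces."""
    b, a = butter(order, fc / (fs / 2.0))  # cutoff normalized to Nyquist
    return filtfilt(b, a, x)

def normalized_rms_deviation(x_qdl, x_ref):
    """RMS deviation normalized to the signal's own range of motion,
    the stringent accuracy measure plotted in Figure 11."""
    x_qdl, x_ref = np.asarray(x_qdl), np.asarray(x_ref)
    rms = np.sqrt(np.mean((x_qdl - x_ref) ** 2))
    return rms / (x_ref.max() - x_ref.min())
```

The forward-backward application in `filtfilt` is what cancels the filter's phase shift, preserving the timing of the transient response.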
It is recognized that this post-filtering approach is not desirable (it would be better to discover and resolve the underlying cause of the noisy behavior), but it appears to be effective.

Figure 11: Normalized RMS deviation of each state variable from the reference solution (normalized with respect to the dynamic range of motion of that state variable during the simulation).

Figure 12: Q-axis current for the cable connecting buses 2 and 3, with the QDL solution unfiltered and filtered, and the reference solution, over the entire 60 s duration of the experiment.

Figure 13: Q-axis current for the cable connecting buses 2 and 3, with the QDL solution unfiltered and filtered, and the reference solution, expanding detail in the period from 1 to 3 s.

Figure 14: Q-axis current for the cable connecting buses 2 and 3, with the QDL solution unfiltered and filtered, and the reference solution, expanding detail in the final 5 s of the 60 s experiment.

Finally, we note a general weakness of the QDL method that is not presented here with data: the speed of computing is poor under large-signal perturbations such as would be induced by a fault analysis. The large excursion of all state variables away from their nominal operating points creates event storms as each variable moves through a very large number of quantized levels. As a result, it was found to be not possible to complete the simulation of a short-circuit fault.

For the QDL method to become fully successful, strategies must be developed to eliminate or compensate for these various shortcomings. It was not possible to resolve all of them within the initial scope of this study, but we felt it important to present the problems so that we and others can eventually resolve them. Some problems will require more research than others. For example, it is not expected that even the latest LIQSS methods (such as the modified, 2nd-order mLIQSS2) will yield better performance when analyzing power system fault scenarios in which event storms are caused by large excursions of many system states. This type of problem will likely require the development of new methods such as state jumping. Perhaps the QDL method will eventually prove ineffective for fault analysis simply because its most promising virtue (high efficiency in steady state) is inherently its biggest liability in the opposite conditions (far from steady state).

## 7 Conclusion

The QDL method using a LIQSS1 integrator and quantizer was able to track the moving equilibrium of a relatively complex (32-state) non-linear power system. With the quantization step sizes used in this experiment, some of the state variables were computed to within 0.01% of the values computed by the reference simulation, and most of the state variables were computed to within an RMS deviation, normalized to the range of motion rather than to the nominal value (a much more stringent measure), of 1% to 3% compared to the reference simulation. Proxy metrics for computational intensity (the number of QDL atom state updates, and the number of timesteps for the reference solution) imply that the computational cost of the QDL method is comparable to that of the mature reference method, even though the QDL implementation is far from optimized.
# Bitcoin: A life in crises

Jevgeni Tarassov, Nicolas Houlié

arXiv:2304.09939v1 (2023-04-06), http://arxiv.org/abs/2304.09939v1
**Abstract:** In this study, we investigate the BTC price time-series (17 August 2010-27 June 2021) and show that the 2017 pricing episode is not unique. We describe at least ten new events, which occurred since 2010-2011 and span more than five orders of price magnitude ($US 1-$US 60k). We find that those events have a similar duration of approx. 50-100 days. Although we are not able to predict the times of a price peak, we nevertheless succeed in approximating the BTC price evolution using a function that is similar to a Fibonacci sequence. Finally, we complete a comparison with other types of financial instruments (equities, currencies, gold) which suggests that BTC may be classified as an illiquid asset.

**Citation:** Tarassov J, Houlié N (2022) Bitcoin: A life in crises. PLoS ONE 17(9): e0274165. https://doi.org/10.1371/journal.pone.0274165

**Editor:** Baogui Xin, Shandong University of Science and Technology, CHINA

**Received:** June 3, 2021. **Accepted:** August 22, 2022

**Peer Review History:** PLOS recognizes the benefits of transparency in the peer review process; therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. The editorial history of this article is available here: https://doi.org/10.1371/journal.pone.0274165

**Copyright:** © 2022 Tarassov, Houlié. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

**Data Availability Statement:** The data used in this study can be found at the following URL: https://finance.yahoo.com/quote/BTC-USD/

**Funding:** The author(s) received no specific funding for this work.

**Competing interests:** The authors have declared that no competing interests exist.

## 1 Introduction

As of today, the combined market value of the 5 most popular cryptocurrencies (CCs) is >$1500 bn (>$950 bn for BTC alone); a number similar to the market cap of Amazon, and larger than those of Tesla or Facebook. In light of its high historical volatility (90d historical volatility ~100%), its irregular market trading volumes, and because its main underlying is yet unknown, classifying BTC (and some of the other CCs) for risk assessment remains necessary. Although it was intensely scrutinized, it is yet unclear whether BTC should be treated as a commodity (volatile and liquid), a currency (stable and liquid), an equity (variably liquid and variably volatile), or whether it should receive a singular definition for each investment context [1, 2, 3, 4, 5, 6, 7]. Our main objective is to contribute to this debate by focusing on the study of time-series.

With the increasing speed and improving reliability of financial app-based services, and with a strong editorial presence in economic arenas, cryptocurrencies are poised to gain an ever-greater base of users and services [8, 9]. In the wider context of blockchain development, CCs may help in trading goods and services across political boundaries, avoiding fees imposed by financial intermediaries, and in hedging uncertainties on financial markets [10]. Even though some progress has been made through the understanding of the bitcoin (BTC) price structure [2, 11], the business model remains largely opaque.
The overall opacity surrounding CC trades quickly triggered informal criticisms and, soon after, many warnings issued by a wide range of actors, from intelligence services [12] to market regulators [13]. In November 2017, the price of BTC unexpectedly rose for a month by an average of 1.8% daily, leading to the biggest and most widely publicized exponential price rise in its history.

Despite the fact that holding BTC carries some volatility risk (implied volatility >100% [14]), large cohorts of market participants seek to invest in BTC and crypto-currencies. This has left banks and other financial services firms scrambling to deploy financial services (custodianship, cold storage, BTC payment services) and investment products (like funds, derivatives, and complex structured notes) into the market to capitalize on this trend, which in turn exposed them to non-trivial challenges, both regulatory and economic. Most difficulties were due to the nature of the BTC price process, which did not lend itself to straightforward Black-Scholes-Merton modelling. This further encourages us to try and understand 1) the price dynamics of BTC and 2) under which assumption(s) the risk of holding BTC exposure should be modelled.

In this study, we use numerical methods such as time-series analysis, which have proved their efficiency in many other scientific fields such as finance forensics or geophysics. Time-series analysis allows the uncovering of the intrinsic parameters (and their dynamics) of a time-evolving phenomenon, and we treat the BTC price time-series as we would any seismological or meteorological record, further comparing it against a model we deem appropriate. In order to get the most out of this approach, we carried out our analysis in both the time and frequency domains. In the time domain, trends, amplitudes and scattering can be quantified, while in the frequency domain, hidden discontinuities and periodicities can be explored. By combining both approaches, our aim has been to detect market price discontinuities, irregularities of prices, large changes of market capital (cash influx), large buys/sales and crowd effects, in the context of a market that is not immune to other financial information available to the greater public. Finally, we fit BTC prices with a so-called _Hockey Stick Function_ (HSF) and suggest that one, same, recurring dynamic fuels all BTC price surges. We hope our findings will help to characterize the nature of BTC for risk mitigation purposes.

## 2 Data

In this study, we use daily ('Open') price data freely available on the Yahoo Finance® website, in order to determine the long-term dynamics of the BTC market value. The dataset used in this study starts on 17 Aug 2010 and ends on 27 June 2021. We compared these prices with other BTC price feeds, such as those provided by Bloomberg® and Market Map® (Morningstar FOREX prices), and found some differences at given times, as observed before [15]. For instance, inter-exchange Bitcoin price differences did not exceed 500 USD during fall 2017-winter 2018, when the price passed USD 10'000 for the first time (Fig 1). As we focus on large changes of prices over long time spans (>2 days), we have made sure that using the other sources for prices would not have led us to different conclusions.
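For readers who wish to reproduce the dataset, a minimal sketch using the third-party yfinance package is shown below; the package choice and ticker symbol are our assumptions (any Yahoo Finance client would do), and the earliest available dates may differ from those used in the study.

```python
import yfinance as yf  # third-party Yahoo Finance client (our choice)

# Daily BTC prices over the study window; the paper uses the 'Open' column
btc = yf.download("BTC-USD", start="2010-08-17", end="2021-06-28")
prices = btc["Open"].dropna()
print(prices.describe())
```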
## 3 Results

### First level analysis

As the analysis in the frequency domain does not require pre-processing steps (detrending, etc.), we first computed periodograms for both the complete dataset and some subsets of it. For this, we used openly distributed \(R\) packages [16, 17, 18]. The frequency-domain approach allows the detection of discontinuities and periodic signals in a given time-series (for this reason, frequency analysis may be used to detect, e.g., fraud such as price manipulations). Here, we use this tool in order to detect price peaks which may be hidden either in high-amplitude noise or at low amplitudes. For example, in Fig 2B, we identify the events 1, 2 and 3, highlighting the correspondence between the time and frequency analyses. Event 2 corresponds to the period during which BTC passed US$150 (those events are also highlighted in Fig 1).

Fig 2 shows that there is no periodicity in the BTC prices (i.e. we cannot see continuous horizontal lines; please refer to S1-S9 Figs for a purely periodic time-series), indicating that the determination of the asset price does not include a cyclical component. The discontinuities visible in Fig 2A can be linked to price peaks of various amplitudes, therefore suggesting that the price peak of fall 2017-winter 2018 is not unique. Finally, Fig 2, by zooming in on specific periods, also suggests that the peak prices are in fact composed of collections of BTC price discontinuities (green vertical lines or red areas in Fig 2). All those observations suggest that BTC may suddenly become less liquid although the demand remains high, leading to a price increase through a gain of interest from investors. As those price changes are mostly positive, we can exclude the hypothesis that price variations were due to the discovery of large numbers of BTC (through mining). In order to provide the reader with a point of reference regarding this technique, we computed periodograms (S1-S9 Figs) for synthetic time-series and popular stocks (ABB, Tesla-TSLA, Gamestop-GME).

Secondly, we focus on the statistical characteristics of the price time-series. We first test whether BTC prices follow Benford's law, using the "_BenfordTests_" R library [19]. This method is commonly used for forensic analysis of price structures and income tax data [20], and in the case of BTC it may highlight variations of liquidity. As BTC prices range over 4 orders of magnitude, they qualify for this kind of analysis. A \(\chi^{2}\) test shows that the prices of BTC do not comply with Benford's law (\(\chi^{2}=357\) for n = 8), as the digit "6" occurs far too often in the first-digit position (approx. +50% excess; Fig 3). Further analyses would be necessary in order to identify the cause(s) of this observation.

Figure 1: Price of BTC between 2010-08-17 and 2021-07-31 (a) for various periods (b-d). Many periods show episodes of price increase (and decrease) of similar shape. For reference, the same events are highlighted in Fig 2 using the same labels.

Figure 2: **Frequency analysis of the BTC price history.** a) Whole BTC price time-series and associated periodogram for the whole dataset, and b) for the first 1000 days of the dataset (i.e. 2010 until 2014). High amplitude levels (yellow to red) highlight discontinuities of price and changes of frequency content.

Figure 3: **Comparison between the Benford law distribution (orange) and BTC prices (blue) since 17.08.2010.**

### The fall 2017 price peak

During the fall of 2017, the BTC price tripled, its 30-day volatility was up to 180% (Fig 1), and daily returns sometimes exceeded 10%.
(Continuing the caption of Fig 2: one can note the high number of small events, green to yellow within blue areas, occurring on many occasions since 2012. Two classes of events can be distinguished: those which involve long-period signals, as was seen for earthquake propagation [53], and those which are only small-scale discontinuities, e.g. at approx. 700 days on panel b.)

Such price changes were faster than a base-e exponential, soon looking like a Hockey Stick Function (HSF). The HSF is a function related to the Pascal triangle [21] and to Fibonacci series; it can be understood as an exponential function of increasing base. Interestingly, and despite the many opportunities, this function is seldom used to describe natural phenomena. The concentration of CO\({}_{2}\) in the Earth's atmosphere [22], global temperatures [23], but also financial transfers in the context of migrations of populations [24] or pandemic developments [25], are good candidates to be modelled by such a function. As the HSF seems appropriate for variables resulting from crowds increasing in size, we choose to test it on the BTC time-series. We start by modelling the price peak of the fall of 2017 as follows:

\[P_{i}=\sum_{k=i-2}^{i-1}P_{k}\qquad(i>2) \tag{1}\]

where the \(i^{\text{th}}\) price is determined from the two previous ones (\(i-1\) and \(i-2\)). This formulation is flexible because it can be tuned to fit the amplitudes of the signal independently of the time interval used. Eq (1) is of a form similar to Fibonacci series, and like them it is bounded neither in time nor in amplitude. Therefore, the use of this function does not enable us to predict the maximum price of a BTC, nor the time of such a peak. Finally, we must select the periods of interest within the complete dataset to select the peak price of each data subset. In order to calibrate an HSF profile to our dataset and include information on time and amplitude, we have had to find two input parameters (\(P_{1}\) and \(P_{2}\)) which describe the evolution of the prices, as well as a sampling interval (\(dT\)) which helps define the speed of the price increase. To reach a robust solution, we calibrate the data against Eq (1), using a brute-force scaling approach, to match both the price amplitudes and the duration of development of each event. For instance, one can satisfactorily approximate the pricing episode of 2017 with \(P_{1}=500\) and \(P_{2}=503\) (this initial value for \(P_{1}\) fits well with the average value of BTC for the year 2016: approx. $550 +/- 150 USD) when using daily prices. We show the results of our analysis in Fig 5.

### Data fit of secondary episodes (2010-2021)

While it is well known that BTC was volatile during the fall of 2017, one rarely considers that BTC had already experienced similar pricing episodes composed of 1) a sharp rise of the price (~50-70 days), followed by 2) a short stagnation and 3) a readjustment over several weeks (>100 days). Such a price increase may seem similar to the development of a Minsky-Kindleberger bubble [26, 27, 28], except that the final price level is not reduced to the pre-surge price. Three of those events have been documented so far: one in 2012 and two in 2013 [29, 30]. Using the approach described above, we found at least 10 additional events which occurred between 2010 and 2021. We have compared those events together by normalizing both their amplitudes and time frames.
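Before examining the individual episodes, the recurrence of Eq (1) and the brute-force calibration described above can be made concrete with a minimal sketch; the function names and the grid-search details are our illustrative assumptions.

```python
import numpy as np

def hsf(p1, p2, n):
    """Hockey Stick Function of Eq (1): each sample is the sum of the
    two previous ones, a Fibonacci-type recurrence seeded by P1 and P2."""
    p = [float(p1), float(p2)]
    while len(p) < n:
        p.append(p[-1] + p[-2])
    return np.array(p)

def fit_hsf(data, p1_grid, p2_grid):
    """Brute-force search for the (P1, P2) pair minimizing the RMS misfit
    to a price segment already subsampled at the chosen interval dT."""
    data = np.asarray(data, dtype=float)
    best_rms, best_params = np.inf, None
    for p1 in p1_grid:
        for p2 in p2_grid:
            model = hsf(p1, p2, len(data))
            rms = np.sqrt(np.mean((model - data) ** 2))
            if rms < best_rms:
                best_rms, best_params = rms, (p1, p2)
    return best_params, best_rms
```

Seeding with P1 = 500 and P2 = 503, for example, corresponds to the 2017 calibration quoted above.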
We show here that regardless of the maximum price, the relationship between duration and price change is strongly consistent, suggesting some self-similarity in the BTC time-series (Fig 4). This observation allows us to apply our approach to all the new events found so far.

Figure 4: **Time- and price-normalized data segments preceding price peaks.** The average price history is plotted using a red line. From a price of less than USD 1 to >USD 55k, the events show self-similarity. All price increases are contained within a time-frame of 50-70 days (normalized time ~0.7).

We then approximate each event using an HSF by determining the values of \(P_{1}\), \(P_{2}\) and the sampling rates. We display the results of all data fits in Fig 5. The quality of each data fit was assessed using its Root Mean Square (RMS) value, i.e. the distance between each price \(x_{i}\) at time \(i\) and the best-fitted model price \(m_{i}\):

\[RMS\left(USD\right)=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(x_{i}-m_{i}\right)^{2}} \tag{2}\]

Fig 5: **BTC pricing events presented in this study.** Observed prices are plotted using grey diamonds, modelled prices using red lines. Preceding events are shown using green lines. Residuals are shown using blue diamonds. All curve and residual parameters are shown in Table 1. Panel m) shows that more research needs to be done to understand the mechanism of each event: while the data of panel m) can be well modelled, the next peak is more difficult to fit (red line; Table 1). Here we have a unique opportunity to see two simultaneous events, which may need to be studied in more detail.

Table 1 shows the price changes and _RMS_ for each event shown in Fig 5. In most cases, we can successfully explain approx. 75% of the data signal (Table 1) or more. The price change over each period is >77% (median ~ +80%), with a minimal value of 48% for event 4. After each peak, BTC depreciated, but its value was never lower than the values preceding the peak. We can compare this remarkable behaviour to data histories (time-dynamics) observed in some natural phenomena. Similar to seismicity rates (see [31, 32, 33] for visual comparison) or volcano magma chamber inflation [34], BTC prices hold on to about 30-40% of the peak price after the pricing episode is over. The price decrease being only approx. 1/3 of the asset's volatility suggests that the confidence placed in the asset is never completely dissipated. The driving cause of the price peaks remains however unclear, and will require more research.

### Similarities with assets under speculative pressure

On the financial market, rapid changes of asset prices may be explained by changes of confidence in the asset (e.g. bad and good results, scandal), by a temporary change in the liquidity of the asset, and by a variety of price manipulation schemes (e.g. pump and dump, insider trading, rumour propagation).
In order to explore whether the BTC price is driven by a fundamental cause or by external perception, we searched within financial market data whether the constant time development of 50-70 days could be observed for other assets of various liquidity. Here, we state that an asset is liquid when any amount of the asset can be traded in a cash market without materially affecting its price. We also assume an orderly transaction in the sense of fair value measurement as defined by IFRS 13, i.e. a transaction is not forced and the agent making the transaction is able to conduct usual marketing activities (such as gathering a sufficient number of competitive bids). In that context, liquidity can be interpreted as a measure of confidence, seeing how the seller, confident that the price of the same assets will change only marginally in the near future, is not afraid of losing wealth by selling their assets as they are. And of course, one should keep in mind that the level of investors' confidence may be impacted at any time by a changing market context (central bank interest rate raises, stock market volatility, etc.).

Because of their known broad liquidity regimes and/or high demand and/or speculation histories, we selected analogues of the BTC time-series such as gold (XAU), currency pairs (TRY and EUR), equities (ENRON, EXCITE), a CC (ETHEREUM), bond yields (Greece 10-YR yields) and Tulip bulbs in the 17th century (Figs 6 and 7). Those assets of various classes span the complete range of liquidity in the market sense.

Table 1: Characteristics of the time-series and statistics of the fit curves (Fig 5). The Root Mean Square (RMS) difference between the modelled curve and the original time-series never exceeds ~26%. The average length of each sub-dataset is approx. 190 days (median ~180).

| Event | T_1 | T_2 | P_min | P_max | Price change ($US) | Date 1 | Date 2 | RMS | RMS red. (%) | Panel in Fig 5 |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 89 | 186 | 0 | 1 | 1 | 13 Nov. 2010 | 18 Feb. 2011 | 0.08 | 85.07 | a) |
| 2 | 199 | 261 | 1 | 4 | 3 | 03 Mar. 2011 | 04 May 2011 | 0.2 | 87.32 | b) |
| 3 | 199 | 299 | 1 | 35 | 34 | 03 Mar. 2011 | 11 June 2011 | 2 | 76.34 | c) |
| 4 | 517 | 699 | 4 | 13 | 8 | 15 Jan. 2012 | 15 Jul. 2012 | 0.9 | 86.02 | d) |
| 5 | 727 | 937 | 10 | 229 | 219 | 12 Aug. 2012 | 10 Mar. 2013 | 6 | 85.30 | e) |
| 6 | 1017 | 1174 | 66 | 1132 | 1065 | 29 May 2013 | 02 Nov. 2013 | 28 | 91.19 | f) |
| 7 | 1917 | 2099 | 365 | 705 | 339 | 15 Nov. 2015 | 15 May 2016 | 21 | 95.17 | g) |
| 8 | 2191 | 2467 | 596 | 2953 | 2357 | 15 Aug. 2016 | 18 May 2017 | 165 | 86.78 | h) |
| 9 | 2454 | 2524 | 1933 | 4066 | 2133 | 05 May 2017 | 14 Jul. 2017 | 421 | 84.60 | i) |
| 10 | 2191 | 2647 | 596 | 17803 | 17206 | 15 Aug. 2016 | 14 Nov. 2017 | 1033 | 74.51 | j) |
| 11 | 2999 | 3204 | 3236 | 11007 | 7770 | 01 Nov. 2018 | 25 May 2019 | 1044 | 80.83 | k) |
| 12 | 3561 | 3728 | 9048 | 19104 | 10055 | 16 May 2020 | 30 Oct. 2020 | 1312 | 89.10 | l) |
| 13 | 3561 | 3773 | 9048 | 40789 | 31740 | 16 May 2020 | 14 Dec. 2020 | 2503 | 84.70 | m) |
| 14 | 3561 | 3815 | 9048 | 57533 | 48484 | 16 May 2020 | 25 Jan. 2021 | 5621 | 75.25 | n) |

https://doi.org/10.1371/journal.pone.0274165.001
Some are considered highly liquid (gold-XAU), others experienced prominent periods of illiquidity (the Turkish lira and Greek debt), and others yet have reached terminal illiquidity (Enron and Excite). At last, we chose Ethereum as a reference because of its different governance mechanism and because of the high correlation between ETH and BTC prices (r² = 0.91, N = 180) over the last six months. Price variations of those asset values have of course different causes (purely speculative versus business plan revaluation), but their consequences are very similar: a quick change of price followed by a stagnation, and then a sudden price readjustment after more information becomes available to a wider audience. For the periods following each peak, one can distinguish two classes of assets: those which deflate completely (or return to their normal value, as the Greece 10-YR bond yield) and those which hold their value for various periods of time (XAU, TRY, BTC). In all cases, confidence plays a strong role in the price fixed by the market, and a given level of illiquidity is reached close to the price peak time. Amongst this group of assets, we distinguish between those which experienced illiquidity due to high demand (the energy trader ENRON, the web search engine EXCITE) and those with extremely low demand (TRY, Greek bonds). Meanwhile, some assets whose declines were due to confidence loss did not reach their lowest level (TRY, XAU, Greek bonds), while others, suffering fundamental business issues (Excite) or even accounting fraud (Enron), did, and were in the end revealed as valueless.

Figure 6: Time-series of gold (XAU), currencies (TRY and EUR) and 10-YR Greek bond yields (weekly data). Currencies and bond yields show very diverse patterns due to different economic, market and political contexts. Only the 10-yr Greek Gov. bond yield time-series shows similarities with the BTC pricing episodes until the peak is reached. https://doi.org/10.1371/journal.pone.0274165.006

Our comparison shows that BTC is not fundamentally different from assets under speculative pressure. What is specific to BTC is that, as for gold (XAU), the stabilization of prices following a price peak suggests that the investors' confidence level remains strong, even though the intrinsic parameters indicate that the risk of holding BTC is high for the investor.

### Correlation with other CC

For other types of assets, it has been observed that a sudden price rise for a given product may spill over to others in the same sector, an effect amplified by the level of media attention [35, 36, 37, 38]. Capital spilling suggests that investors aim to invest in alternative assets which are either cheaper or more liquid. This is also true for the most capitalized virtual currencies. News stories about the BTC performance increased the visibility of other crypto-currencies, and encouraged risk-seeking investors to make a compromise between the funds allocated and the reputation of the CC, in order to buy assets as cheaply as possible while benefiting from the herd effect. If media attention and social network activity may impact the price of all CCs, it seems that differences in design (i.e. centralization, number of coins available) might not have an influence on their price dynamics. Like in the past, contagion was made easier by the availability of common trading tools for a wide variety of financial products.
The high correlation between virtual assets in general, and their correlated returns following a media event (e.g., NBC Saturday Night Live), confirm that the level of confidence of investors plays a large role in the pricing of virtual assets.

## 4 Discussion

In this study we examined episodes of BTC price surges. We found that, in the past, BTC's sudden price increases have lasted less than 100 days and were not followed by a full depreciation, BTC staying instead at a level close to 30-50% of its peak value, and that the successive BTC price changes were of a quasi-exponential nature. We now hope that, when more data become available, analyses similar to those already applied to other kinds of financial returns [39, 40] shall be carried out for BTC and other CCs.

Figure 7: **Time-shifted (x-axis) and normalized (y-axis) time-series corresponding to the data shown in Fig 1. We compare the BTC price to BTC pricing periods, the Excite stock price, the Tulip bulb crisis, and ETH/MONERO prices observed in 2017-2018.**

Put simply, BTC has a lot in common with hard-to-borrow assets, mostly because market liquidity is limited during periods of price inflation. The valuation of BTC, however, remains a complex endeavor, and one should expect it to behave like any other investment instrument under the scrutiny of a wide population of investors. The overall consistency of each BTC pricing event might be what makes BTC unusual compared to other financial pricing events; few stocks have experienced consecutive crises of increasing amplitude without disappearing, or suffering so much that their values never recovered. Finally, we have shown that observations made on the BTC price history can be extended to other types of assets (crypto or not), despite fundamental differences in governance models and price structures (centralization, emission strategies, price models, underlying businesses). As BTC also behaved like other equities under speculative pressure, BTC should not be seen purely as a virtual currency. This observation is also supported by the numerous uses of BTC by various owners (savings, speculation, purchase of services, political aims, taste for technologies, hedging, diversification, long-term investment, etc.) as published in the recent literature.

But BTC prices are not only driven by pulses of trading. The BTC price dynamics are also sensitive to causes outside its own pricing mechanisms (mining, validation, number of coins on the market). Those causes are regulatory framework(s), media attention, social media activity, and market conditions, and this open list may be expanded in the future. The relative contribution of all effects is likely sensitive to international market conditions and the influence of the technological sector. All along the BTC price history, prices and traded volumes were very much related to the opening/closing of trading platforms following the implementation of national regulations. For instance, when traded volumes were the largest, during the fall of 2017, new platforms based in China, with more relaxed rules regarding fund origins and trader identification, were very active [15]. From January 2018, trading volumes were dramatically reduced, from millions of coins traded daily to thousands, suggesting that while the price stabilized in the range of $8'000-10'000, the price was driven neither by the amount of BTC traded nor by the media coverage (see Google Trends time-series in the Supplementary Materials).
Finally, new regulations focusing on publicizing the identities of traders and owners, or the provenance of funds, with the aim of preventing the illegal use of coins, could obviously change the dynamics of CC in the near future, as suggested before [41]. Regarding media (social or not), the trading volume increases observed in 2017-2018 were comparable to those observed in the 1980's in other contexts [42, 43]. This research described sudden price increases followed by warnings from market makers and regulators, resulting in the fall of the stock price of interest. Such a loss of enthusiasm in financial assets has been observed in the past, for instance during the bursting of the dot-com bubble, or following press releases on company performance. The fact that BTC survived various episodes of confidence loss during the last decade demonstrates that it is not purely speculative. Rather, its behavior results from a combination of owners' trust in the future of BTC [44, 45], the safety of the transaction system (blockchain), and public interest in the asset [46, 47, 48, 49, 50, 11]. Whilst our observations are supported by more than 10 events over a decade and more than five orders of price magnitude, some questions remain. We are not able to predict the time and amplitude of the next price peak. Also, we found that in some cases it is difficult to discern the starts and ends of peaks when they are close to each other (Fig 5 m/n). A more sophisticated analysis may be helpful in finding the origin of those "split peaks", and also in linking trading volumes, platform activities and prices on platforms. Further research may help identify potential bottlenecks (trading delays, wrong prices, etc.) between banks involved in the emission of derivative products and the crypto-platforms trading coins which are used as hedges by those banks. During our research, we faced some difficulties in explaining our results from an economic perspective, because of the lack of research in some domains. First, further research should be carried out on the role of platforms within the trading environment (banks, exchanges, retail investors, institutional investors), including in Over-The-Counter (OTC) trades, as initiated by [51]. As an extension, it would be useful to determine the floating quantity and to track coins in order to constrain which portion of the asset is considered as reserve or long-term investment. In the financial domain, it would be necessary to establish clearly whether BTC prices (and CC prices generally) correlate with other asset class prices, and over what time-scale.

Figure 8: **Preliminary search for price decreases for the period 01 Jan. 2021–14 Jun. 2022. We show that the hockey stick function could explain price decreases as well; although the time of development is shorter (days) and likely rooted in the intra-day trading activity.**

Regarding exchange efficiency, we could not explore intra-day price variations because we were not able to access the necessary data so far. Those data are of particular importance to document price decrease episodes, which usually span less than 5 days (see Fig 8 for episodes between 01 Jan. 2021 and 15 Jun. 2022). Finally, it would be highly useful to continue studying the sociological profile of the crypto investor (e.g., age, date of entry into the crypto market, wealth level, country, trade volumes), as such information may help banks define the risk appetite of investors, provide better services, and guarantee the stability of the trading environment [52].
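The exact hockey-stick function used throughout the paper is defined in the methods above; as a rough illustration of how such a fit works on a decrease episode like those in Fig 8, the sketch below assumes a simple piecewise-linear ramp in log price. The functional form, the onset parameter `t0` and the synthetic data are all assumptions of this sketch, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def hockey_stick(t, t0, base, slope):
    """Piecewise-linear 'hockey stick': flat at `base` until t0, then a
    linear ramp with `slope` (applied here to log price; slope < 0
    models a price-decrease episode)."""
    return base + slope * np.clip(t - t0, 0.0, None)

# Synthetic log-price series standing in for one short decrease episode.
t = np.arange(60.0)  # days
rng = np.random.default_rng(1)
logp = hockey_stick(t, 40.0, np.log(40_000.0), -0.05) + rng.normal(0, 0.01, t.size)

(t0, base, slope), _ = curve_fit(hockey_stick, t, logp,
                                 p0=(30.0, logp[:20].mean(), -0.01))
print(f"onset ~ day {t0:.0f}, halving time ~ {np.log(2)/abs(slope):.0f} days")
```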
## Supporting information

**S1 Fig. Periodogram for the sine function \(f\) = sin(t/30).** (PDF)

**S2 Fig. Periodogram for the sine function \(f\) = sin(t/30) plus a step (\(dz\) = 0.1) at \(t\) > 1000.** (PDF)

**S3 Fig. Periodogram for the sine function \(f\) = sin(t/30) plus a step (\(dz\) = 1.5) at \(t\) > 1000.** (PDF)

**S4 Fig. Periodogram for the sine function [\(f\) = sin(t/30)] plus an additional sine [\(f\) = sin((\(t\)-60)/5)] at \(t\) > 1000.** (PDF)

**S5 Fig. Periodogram for the equity stock Wirecard (WDI).** (TIFF)

**S6 Fig. Periodogram for the equity stock ABB (ABB).** (TIFF)

**S7 Fig. Periodogram for the equity stock Tesla (TSLA).** (TIFF)

**S8 Fig. Periodogram for the equity stock GameStop (GME).** (TIFF)

**S9 Fig. Periodogram (zoom) for the equity stock GameStop (GME).** (PDF)

**S1 Dataset. Google trend map by city (search = "bitcoin price").** (CSV)

**S2 Dataset. Google trend map by country (search = "bitcoin price").** (CSV)

**S3 Dataset. Google trend time series (search = "bitcoin price").** (CSV)

**S4 Dataset. Google trends related searches to search = "bitcoin price".** (CSV)

## Acknowledgments

N.H. thanks Pr. Dr. A. Berentsen (Uni. Basel), Dr. Christine Lang (FINMA) and 6 anonymous reviewers for their useful comments.

## Author Contributions

**Conceptualization:** Nicolas Houlie.
**Data curation:** Nicolas Houlie.
**Formal analysis:** Jevgeni Tarassov, Nicolas Houlie.
**Methodology:** Jevgeni Tarassov.
2307.10938
Single-Component Superconductivity in UTe$_2$ at Ambient Pressure
The microscopic mechanism of Cooper pairing in a superconductor leaves its fingerprint on the symmetry of the order parameter. UTe$_2$ has been inferred to have a multi-component order parameter that entails exotic effects like time reversal symmetry breaking. However, recent experimental observations in newer-generation samples have raised questions about this interpretation, pointing to the need for a direct experimental probe of the order parameter symmetry. Here, we use pulse-echo ultrasound to measure the elastic moduli of UTe$_2$ in samples that exhibit both one and two superconducting transitions. We demonstrate the absence of thermodynamic discontinuities in the shear elastic moduli of both single- and double-transition samples, providing direct evidence that UTe$_2$ has a single-component superconducting order parameter. We further show that superconductivity is highly sensitive to compression strain along the $a$ and $c$ axes, but insensitive to strain along the $b$ axis. This leads us to suggest a single-component, odd-parity order parameter -- specifically the B$_{2u}$ order parameter -- as most compatible with our data.
Florian Theuss, Avi Shragai, Gael Grissonnanche, Ian M Hayes, Shanta R Saha, Yun Suk Eo, Alonso Suarez, Tatsuya Shishidou, Nicholas P Butch, Johnpierre Paglione, B. J. Ramshaw
2023-07-20T15:09:26Z
http://arxiv.org/abs/2307.10938v2
# Single-Component Superconductivity in UTe\({}_{2}\) at Ambient Pressure

###### Abstract

**The microscopic mechanism of Cooper pairing in a superconductor leaves its fingerprint on the symmetry of the order parameter. UTe\({}_{2}\) has been inferred to have a multi-component order parameter that entails exotic effects like time reversal symmetry breaking. However, recent experimental observations in newer-generation samples have raised questions about this interpretation, pointing to the need for a direct experimental probe of the order parameter symmetry. Here, we use pulse-echo ultrasound to measure the elastic moduli of samples of UTe\({}_{2}\) that exhibit both one and two superconducting transitions. We demonstrate the absence of thermodynamic discontinuities in the shear elastic moduli of both single- and double-transition samples, providing direct evidence that UTe\({}_{2}\) has a single-component superconducting order parameter. We further show that the superconductivity is highly sensitive to compression strain along the \(a\) and \(c\) axes, but insensitive to strain along the \(b\) axis. This leads us to suggest a single-component, odd-parity order parameter--specifically the B\({}_{2u}\) order parameter--as the most likely order parameter in UTe\({}_{2}\).**

## Introduction

Definitive determinations of the superconducting pairing symmetry have been accomplished for only a handful of materials, among them the \(s\)-wave BCS superconductors and the \(d\)-wave cuprates [1]. In some superconductors, such as Sr\({}_{2}\)RuO\({}_{4}\), debate over the pairing symmetry has persisted for decades despite ultra-pure samples and an arsenal of experimental techniques [2; 3; 4]. This is more than an issue of taxonomy: the pairing symmetry places strong constraints on the microscopic mechanism of Cooper pairing, and some pairing symmetries can lead to topological superconducting states [5]. The question of pairing symmetry is nowhere more relevant than in UTe\({}_{2}\) where, in addition to power laws in thermodynamic quantities [6; 7; 8; 9], the most striking evidence for unconventional superconductivity is an extremely high upper critical field \(H_{\rm c2}\) compared to the relatively low critical temperature [7; 10]. Remarkably, for some field orientations, the superconductivity re-emerges from a resistive state above \(\sim\)40 tesla and persists up to at least 60 tesla [11]. This high \(H_{\rm c2}\) constrains the spin component of the Cooper pair to be spin-triplet, which in turn constrains the orbital state of the Cooper pair to be odd under inversion (i.e. odd parity, such as a \(p\) or \(f\)-wave state). However, there are many possible odd-parity order parameters, and which one manifests in UTe\({}_{2}\) is unknown. The primary question we address here concerns the nature of the orbital part of the superconducting order parameter. In addition to even (\(s\) or \(d\)-wave) and odd (\(p\) and \(f\)-wave) designations, order parameters can have multiple components: both conventional \(s\)-wave and high-\(T_{\rm c}\)\(d_{x^{2}-y^{2}}\)-wave order parameters are described by a single complex number, whereas the topological \(p_{x}+ip_{y}\) state has two components, namely \(p_{x}\) and \(p_{y}\). Evidence for a two-component order parameter in UTe\({}_{2}\) first stemmed from the presence of two distinct superconducting transitions in some samples, as well as from the onset of time-reversal symmetry breaking at \(T_{\rm c}\)[12; 13].
Combined with the evidence for spin-triplet pairing, these observations have led to several proposed exotic, multi-component order parameters for UTe\({}_{2}\) (see Table 1). These multi-component states can have a topological structure that could explain other experimental observations, such as the chiral surface states seen in STM [14], or the anomalous normal component of the conductivity observed in microwave impedance measurements [15]. Claims of a multi-component order parameter are not without controversy, however. As the purity of the samples has increased, \(T_{\rm c}\) has shifted to higher values and the second transition has disappeared at ambient pressure [16]. Previous work has suggested that two transitions arise due to inhomogeneity [17], but the application of hydrostatic pressure splits single-\(T_{\rm c}\) samples into two-\(T_{\rm c}\) samples [18; 19], suggesting that two superconducting order parameters are, at the very least, nearly degenerate with one another. The natural way to distinguish between single-component and two-component superconducting order parameters is to apply strain. Single-component superconductors have a single degree of freedom that couples to compression strains--the superfluid density--producing a discontinuity in the compressional elastic moduli at \(T_{\rm c}\) (see Figure 1). They, however, have no such discontinuity in their shear moduli because shear strains preserve volume and thus do not couple to the superfluid density. Multi-component superconductors, on the other hand, have additional degrees of freedom: the relative orientation of the two order parameters, as well as their relative phase difference. These additional degrees of freedom couple to shear strains, producing discontinuities in the shear moduli at \(T_{\rm c}\). By identifying which elastic moduli have discontinuities at \(T_{\rm c}\), one can determine whether a superconductor is multi-component without any microscopic knowledge of the Fermi surface or the pairing mechanism.

## Results

We use a traditional phase-comparison pulse-echo ultrasound technique to measure the temperature dependence of six elastic moduli in three different samples of UTe\({}_{2}\) over a temperature range from about 1.3 K to 1.9 K. In particular, we measure all three compressional (i.e. \(c_{11}\), \(c_{22}\), and \(c_{33}\)) and shear (i.e. \(c_{44}\), \(c_{55}\), and \(c_{66}\)) moduli in one sample with two superconducting transitions (S3: \(T_{c,1}\approx 1.64\) K, \(T_{c,2}\approx 1.60\) K) and in two samples with a single \(T_{\rm c}\) (S1: \(T_{c}\approx 1.63\) K and S2: \(T_{c}\approx 1.70\) K). Details of the sample growth and preparation, as well as of the experiment, are given in the Methods. Figure 2 shows the relative changes in four elastic moduli across \(T_{\rm c}\) for the single-transition samples S1 and S2. We observe a single, sharp (\(\approx 85\) mK wide) discontinuity in the \(c_{33}\) compression modulus, as expected for all superconducting transitions. We observe no discontinuities in any of the shear elastic moduli to within our experimental resolution (approximately 1 part in \(10^{7}\)). Figure 3 shows the relative changes in the elastic moduli for sample S3 with a double superconducting transition (the single-\(T_{\rm c}\) data is reproduced here for comparison).
We observe two distinct discontinuities in \(c_{33}\) separated by approximately 40 mK. Subsequent specific heat measurements on the same sample show a similar "double peak" feature identified in other double-\(T_{\mathrm{c}}\) samples [12] (specific heat data is shown in the S.I.). Notably, we find the sum of the discontinuities in the double-\(T_{\mathrm{c}}\) sample to be of a similar size as the discontinuity in the single-\(T_{\mathrm{c}}\) sample. Additionally, the behaviour of the shear elastic moduli is nearly identical to that of the single-\(T_{\mathrm{c}}\) samples, again with no discontinuities at \(T_{\mathrm{c}}\). We also measure the two other compressional moduli--\(c_{11}\) and \(c_{22}\)--and show them along with \(c_{33}\) in Figure 4. \(c_{11}\) has a discontinuity of approximately 20 parts per million--roughly a factor of 2 smaller than the discontinuity in \(c_{33}\). In contrast, \(c_{22}\) has a discontinuity of at most 1 part per million--significantly smaller than the other two compressional moduli. Discontinuities in all three compression moduli are allowed by symmetry for any superconducting order parameter (see Ghosh _et al._[2] and the S.I.).

Figure 1: **The influence of strain on one and two component superconductors.** Panel (a) illustrates how two representative order parameters—single-component \(s\)-wave and two-component \(p_{x}+ip_{y}\)—respond to both compression and shear strain. Both gaps respond under compression (whether increasing or decreasing in magnitude depends on microscopic details). Only the two-component gap, however, couples to shear strain—here we illustrate the “phase” mode (see Ghosh _et al._[2] for more details). Panel (b) shows the expected changes in elastic moduli across \(T_{\rm c}\) for one and two component order parameters. All superconductors have a discontinuity in their compressional moduli across \(T_{\rm c}\), but only two-component superconductors have discontinuities in their shear moduli.

We first analyze the data using only the presence or absence of discontinuities in the elastic moduli. This analysis is based on symmetry arguments alone and is independent of the size of the discontinuities. We then perform a quantitative analysis of the discontinuities using Ehrenfest relations. Finally, we combine all of our observations to speculate on which particular superconducting order parameter is most consistent with our data.

**Symmetry of the superconducting order parameter.** The presence or absence of a discontinuity in each elastic modulus at \(T_{\mathrm{c}}\) constrains the symmetry of the superconducting order parameter. Roughly speaking, only strains that couple to a degree of freedom associated with the superconducting order parameter show discontinuities at \(T_{\mathrm{c}}\). We illustrate this with a couple of examples; a more complete and rigorous derivation is given in the S.I. Discontinuities in elastic moduli arise when there is a coupling between strain and superconductivity that is linear in strain and quadratic in the order parameter. For a single-component superconducting order parameter, this only occurs for compression strains [37]. A single-component order parameter can be written as \(\eta=\eta_{0}e^{i\phi}\), where \(\eta_{0}\) is the magnitude of the gap (which may depend on momentum) and \(\phi\) is the superconducting phase. The lowest-order coupling to a strain \(\epsilon_{ij}\) is \(\epsilon_{ij}\eta^{\star}\eta=\epsilon_{ij}\eta_{0}^{2}\), where (\({}^{\star}\)) denotes complex conjugation.
This coupling is allowed only if \(\epsilon_{ij}\) preserves the symmetry of the lattice, i.e. it is only allowed for compression strains and not for shear strains (which break the lattice symmetry). Since \(\eta_{0}^{2}\) is proportional to the superfluid density, the physical interpretation of the resulting discontinuity at \(T_{\rm c}\) is that compression strain couples to the superfluid density, which turns on at \(T_{\rm c}\) and provides a new degree of freedom that softens the lattice.

Figure 2: **Relative change in elastic moduli through \(T_{c}\) for single-\(T_{c}\) UTe\({}_{2}\).** A compression elastic modulus, \(c_{33}\), shows a sharp discontinuity of approximately 40 parts per million at \(T_{c}\), as expected for all superconductors. In contrast, the shear elastic moduli—\(c_{44}\), \(c_{55}\), and \(c_{66}\)—show only changes in slope at \(T_{c}\), consistent with a single-component superconducting order parameter.

Figure 3: **Relative change in elastic moduli through \(T_{c}\) for double-\(T_{c}\) UTe\({}_{2}\).** The compression elastic modulus \(c_{33}\) shows two distinct discontinuities at \(T_{c}\), consistent with the two peaks we find in the specific heat of the same sample. The shear moduli, on the other hand, show no discontinuities and behave nearly identically to the shear moduli of the single-\(T_{c}\) sample. Single (double) transition samples are shown with empty (filled) symbols.

Figure 4: **Relative change in compression elastic moduli through \(T_{c}\).** The compression elastic moduli as a function of temperature through \(T_{c}\). \(c_{33}\) and \(c_{11}\) were measured on a single-\(T_{c}\) sample and \(c_{22}\) was measured on the double-\(T_{c}\) sample. Both \(c_{11}\) and \(c_{33}\) have clearly resolvable discontinuities at \(T_{c}\), whereas \(c_{22}\) has a barely-resolvable discontinuity.

In contrast with single-component order parameters, multi-component order parameters can have discontinuities in shear elastic moduli. This is because there are more degrees of freedom associated with a multi-component order parameter than with a single-component order parameter. Writing a two-component order parameter as \(\vec{\eta}=\left\{\eta_{0,i}e^{i\phi_{i}},\eta_{0,j}e^{i\phi_{j}}\right\}\), there are now several possible couplings at lowest order. Taking the well-known \(p\)-wave state in tetragonal crystals as an example, one possible coupling is \(\epsilon_{xy}\eta_{0,p_{x}}\eta_{0,p_{y}}\cos\left(\phi_{x}-\phi_{y}\right)\)[38]. This is the so-called "phase mode" of the order parameter, as it couples shear \(xy\) strain to the relative phase of the two components (see Figure 1). This produces a discontinuity in the associated elastic modulus \(c_{66}\). The relative phase is a new degree of freedom that is only present in a multi-component order parameter, as strain cannot couple to the absolute phase of a single-component order parameter (such a term would break gauge symmetry). Similar expressions exist for orthorhombic crystals (see S.I. for details), but the main conclusion is independent of the crystal structure: shear elastic moduli _only_ exhibit discontinuities at \(T_{\rm c}\) for multi-component superconducting order parameters; a minimal symbolic check of this logic is sketched below.

Table 1: **Proposed order parameters for UTe\({}_{2}\).** Proposed odd-parity order parameters for UTe\({}_{2}\), sorted by the number of components (dimensionality), their irreducible representation, and whether the proposed order parameter is based on an experimental observation or a theoretical proposal. Scenarios listed without a specific representation are compatible with any type of one- or two-component order parameter. Based on symmetry alone, our work strongly constrains the order parameter to be of the one-component type. Using more quantitative arguments, we suggest a \(B_{2u}\) order parameter.

| Dimensionality | Representation | Shear discontinuity? | Reference (E: experiment; T: theory) |
| --- | --- | --- | --- |
| One-component | \(A_{u}\) | No | E: NMR [20]; E: scanning SQUID [21] |
| One-component | \(B_{2u}\) | No | E: ultrasound (this work) |
| One-component | \(B_{3u}\) | No | T: DFT [23]; E: NMR [24; 25]; E: scanning SQUID [21] |
| One-component | (unspecified) | No | T: Hund's-Kondo model [22]; E: specific heat [16; 17]; T: pair-Kondo effect [26]; E: uniaxial stress [27] |
| Two-component | \(\{B_{1u},A_{u}\}\) | \(c_{66}\) | E: microwave surface impedance [15] |
| Two-component | (unspecified) |  | E: specific heat, Kerr effect [12]; E: penetration depth [8]; E: NMR [28]; T: phenomenology analogous to \({}^{3}\)He [29; 30]; E: specific heat [9]; T: phenomenology + DFT [31]; T: DFT [32; 33; 34]; E: specific heat, Kerr effect [12; 13]; T: emergent \(D_{4h}\) symmetry under RG flow [35]; E: STM [14]; T: MFT of Kondo lattice [36] |

The absence of a discontinuity in any shear elastic modulus in the single-transition samples (S1 and S2) rules out all single-parity [39], two-component order parameters in UTe\({}_{2}\). While there are no natural two-component order parameters in UTe\({}_{2}\) because the crystal structure is orthorhombic, many nearly or accidentally degenerate order parameters have been proposed to explain the presence of the two nearly-degenerate \(T_{\rm c}\)'s, time reversal symmetry breaking, and chiral surface states (see Table 1). One proposal is the onset first of a B\({}_{2u}\) state, followed by a B\({}_{3u}\) state at the second, lower \(T_{\rm c}\)[12; 13; 33; 34; 35]. This proposal predicts the usual discontinuities in the compression moduli at the first (higher-temperature) \(T_{\rm c}\), followed by a discontinuity in the compression moduli _and_ the \(c_{66}\) shear modulus at the lower \(T_{\rm c}\). In fact, the product of any two odd-parity (i.e. \(p\) or \(f\)-wave) states or any two even-parity states (i.e. \(s\) or \(d\)-wave) in D\({}_{2h}\) predicts a discontinuity in either \(c_{44}\), \(c_{55}\), or \(c_{66}\), none of which we observe. This strongly constrains the superconducting order parameter of UTe\({}_{2}\) to be of the single-component type. Finally, we note that our data is fully consistent with _any_ single-component order parameter, including even-parity states like \(s\)-wave and \(d\)-wave. The similar absence of discontinuities in the shear elastic moduli of the two-transition sample (S3) rules out the multi-component explanation for the second superconducting transition.
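The symmetry logic above can be made concrete with a textbook Ginzburg-Landau free energy. The following symbolic sketch is not taken from the paper (the coefficients `a`, `b`, `g` and `c0` are generic Landau parameters), but it reproduces the behavior sketched in Figure 1(b): a symmetry-allowed bilinear coupling \(g\,\epsilon\,|\eta|^{2}\) produces a step \(-g^{2}/2b\) in the corresponding modulus at \(T_{\rm c}\).

```python
import sympy as sp

a, b, g, c0, eps = sp.symbols("a b g c0 epsilon", real=True)
eta2 = sp.symbols("eta2", nonnegative=True)  # |eta|^2

# Landau free energy with the symmetry-allowed compression coupling g*eps*|eta|^2
F = a * eta2 + b * eta2**2 + g * eps * eta2 + sp.Rational(1, 2) * c0 * eps**2

# Below Tc (a < 0), minimize over |eta|^2:  |eta|^2 = -(a + g*eps) / (2b)
eta2_min = sp.solve(sp.diff(F, eta2), eta2)[0]
F_sc = F.subs(eta2, eta2_min)

c_normal = sp.diff(F.subs(eta2, 0), eps, 2)  # above Tc: bare modulus c0
c_sc = sp.simplify(sp.diff(F_sc, eps, 2))    # below Tc: c0 - g**2/(2*b)
print("modulus step at Tc:", sp.simplify(c_sc - c_normal))  # -> -g**2/(2*b)
```

For a shear strain, the term \(g\,\epsilon\,|\eta|^{2}\) is forbidden by symmetry for a single-component order parameter, so effectively \(g=0\) and the step vanishes; this is exactly the diagnostic used in Figures 2 and 3.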
We find that the single discontinuity in \(c_{33}\) in single-\(T_{\rm c}\) samples is approximately the same size as the sum of the two discontinuities found in double-\(T_{\rm c}\) samples. This suggests that, below the second transition, all electrons in UTe\({}_{2}\) are in the same thermodynamic state, rather than double-\(T_{\rm c}\) samples having two separate superconducting mechanisms. This points to a common origin for the two superconducting transitions, perhaps split by local strains [17] or magnetic impurities [40]. Why this usually manifests as only two sharp \(T_{\rm c}\)'s (as we also observe in our data), rather than multiple \(T_{\rm c}\)'s or a broad transition, remains an open question. It also leaves unresolved the issue of why even single-\(T_{\rm c}\) samples become double-\(T_{\rm c}\) samples under hydrostatic pressure, perhaps leaving open the possibility of a multi-component order parameter under pressure.

**Ehrenfest analysis and the coupling of compression strains to superconductivity.** The smallness of the discontinuity in \(c_{22}\) compared to the other two compression moduli indicates that the superconductivity in UTe\({}_{2}\) is insensitive to strain along the \(b\) axis (\(\epsilon_{yy}\)). This observation is made quantitative through the so-called Ehrenfest relations, which relate discontinuities in the elastic moduli, \(\Delta c_{ij}\), to the discontinuity in the specific heat, \(\Delta C\). The Ehrenfest relations are

\[\Delta c_{ij}=-\frac{\Delta C}{T}\left(\frac{dT_{\rm c}}{d\epsilon_{ij}}\right)^{2}, \tag{1}\]

where \(\frac{dT_{\rm c}}{d\epsilon_{ij}}\) is the derivative taken at zero applied stress. Using the specific heat measured on sample S3 (see S.I.) and the data shown in Figure 4, we calculate \(\frac{dT_{\rm c}}{d\epsilon_{xx}}=0.23\pm 0.01\) K/(% strain), \(\frac{dT_{\rm c}}{d\epsilon_{yy}}=0.07\pm 0.03\) K/(% strain), and \(\frac{dT_{\rm c}}{d\epsilon_{zz}}=0.34\pm 0.02\) K/(% strain). These values are consistent with those measured in uniaxial strain experiments [27] (see S.I. for details). These Ehrenfest relations indicate that the superconductivity of UTe\({}_{2}\) is significantly more sensitive to strains along the \(a\) and \(c\) axes than it is to strain along the \(b\) axis. This observation is perhaps surprising given the relatively quasi-two-dimensional nature of the Fermi surface measured by quantum oscillations in UTe\({}_{2}\)[41; 42]. The Fermi surface consists of two sets of quasi-one-dimensional sheets running along the \(a\) and \(b\) axes that hybridize to form one electron and one hole pocket (see Figure 5). Thus, if any direction were to be weakly coupled to superconductivity, one might expect it to be the \(c\) axis. Looking at the crystal structure in Figure 5, however, it is clear that the \(a\) and \(b\) axes are highly asymmetric: chains of \(c\)-axis-coupled uranium dimers run along the \(a\) axis, whereas chains of tellurium run along the \(b\) axis (the other tellurium site, Te(1), participates much less in the Fermi surface than the Te(2) chains: see S.I.). Thus \(\epsilon_{xx}\) and \(\epsilon_{zz}\) modulate the inter- and intra-dimer coupling of the uranium dimers, respectively, whereas \(\epsilon_{yy}\) only modulates the weak inter-chain coupling of the uranium chains. \(\epsilon_{yy}\) does, however, modulate the inter-tellurium spacing along the \(b\) axis.
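Returning to the Ehrenfest analysis, Eq. (1) is easily inverted to extract \(dT_{\rm c}/d\epsilon\) from the measured modulus jumps. The sketch below does this for the \(\approx 40\) ppm jump in \(c_{33}\); the absolute modulus and the volumetric specific-heat jump \(\Delta C/T\) are illustrative placeholder values only (the measured \(\Delta C\) is reported in the S.I.), chosen so the output can be compared against the \(\sim 0.34\) K/(% strain) quoted above.

```python
import numpy as np

def dTc_deps(delta_c_over_c, modulus_Pa, deltaC_over_T):
    """Invert Eq. (1): |dTc/d(eps)| = sqrt(|Delta c| / (Delta C / T)).
    `deltaC_over_T` is the volumetric specific-heat jump in J K^-2 m^-3,
    so the result comes out in kelvin per unit strain."""
    delta_c = delta_c_over_c * modulus_Pa  # absolute jump, Pa = J/m^3
    return np.sqrt(delta_c / deltaC_over_T)

# Placeholder inputs (illustrative, not the paper's measured values):
dc_over_c33 = 40e-6   # ~40 ppm jump in c33 (Fig. 2)
c33 = 90e9            # assumed magnitude of c33, Pa
dC_over_T = 3.3e3     # assumed Delta C / T, J K^-2 m^-3

per_percent = dTc_deps(dc_over_c33, c33, dC_over_T) / 100
print(f"|dTc/d(eps_zz)| ~ {per_percent:.2f} K / (% strain)")  # ~0.33
```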
Our observation of the relative insensitivity of \(T_{\mathrm{c}}\) to \(\epsilon_{yy}\) therefore suggests that the superconducting pairing is more sensitive to the uranium-uranium distances than to the tellurium-tellurium distances.

**Proposed single-component superconducting order parameter.** Thermal transport [6], specific heat [7; 9], and penetration depth [8] measurements all indicate the presence of point nodes in the superconducting gap of UTe\({}_{2}\). \(B_{1u}\), \(B_{2u}\), and \(B_{3u}\) order parameters all have point nodes in their superconducting gaps, but these nodes lie along different directions in momentum space and thus intersect different portions of the Fermi surface (or may not intersect the Fermi surface at all if it is quasi-2D). We use our observation of the relatively weak coupling between \(\epsilon_{yy}\) and \(T_{\mathrm{c}}\) to motivate a particular orientation of the point nodes in UTe\({}_{2}\) and to suggest one particular single-component order parameter. Figure 5 shows a tight-binding model of the Fermi surface of UTe\({}_{2}\) as determined by quantum oscillations, color-coded by the relative uranium \(6d\) and tellurium \(5p\) content (both bands have significant uranium \(5f\) character that contributes to their heavy masses, but not to their geometry). Our results suggest that the superconducting gap is either weak or absent on the tellurium-dominant electron Fermi surface. Only the \(B_{2u}\) order parameter has nodes that lie along the \(k_{y}\) direction, producing a node in the gap on the tellurium-dominant surface and a gap maximum on the uranium-dominant surface. We note that a reported small, isotropic pocket with a light mass does not qualitatively affect this argument, as it will contribute little to the density of states compared to the Fermi surfaces shown in Figure 5.

Figure 5: **Influence of compressional strains on the crystal structure and Fermi surface of UTe\({}_{2}\).** Panel (a) shows the crystal structure of UTe\({}_{2}\). Highlighted are tellurium chains along the \(b\) axis, and chains that run along the \(a\) axis consisting of \(c\) axis-oriented uranium dimers. These chains dominate the geometry of the Fermi surface shown in panel (b), modeled after quantum oscillation measurements [41]. The Fermi surface is colored according to its uranium (yellow) and tellurium (gray) content. The superconducting gaps for three possible odd-parity order parameters are shown at \(k_{z}=0\) as blue lines in panel (c).

## Discussion

Our primary result is that the superconducting transitions in both single- and double-transition samples of UTe\({}_{2}\) exhibit no thermodynamic discontinuities in any of the shear elastic moduli at \(T_{\rm c}\). The strictest interpretation of this result is that it rules out all multi-component order parameters that have a bilinear coupling to strain. For UTe\({}_{2}\), this rules out all multi-component order parameter scenarios except for mixed-parity order parameters like \(s\) + \(p\) wave. Looking beyond our own experiment, there is strong evidence that UTe\({}_{2}\) has an odd-parity order parameter. There is also strong evidence for nodes in the superconducting gap. Combined with our result, this leaves the three \(B_{iu}\) representations as the only possibilities. We argue that the observed lack of sensitivity of \(T_{\rm c}\) to \(\epsilon_{yy}\) suggests a \(B_{2u}\) order parameter. A single-component order parameter places constraints on possible explanations for other experiments.
First, a single-component order parameter cannot break time reversal symmetry. This suggests that the interpretation of time reversal symmetry breaking at \(T_{\rm c}\), as seen in polar Kerr effect measurements [12; 13], along with the chiral surface states seen in STM [14] and in microwave surface impedance measurements [15], may need to be revisited. The search for multi-component superconductors continues: they are of both fundamental and practical interest, since a multi-component order parameter is a straightforward route to topological superconductivity. We find that, while UTe\({}_{2}\) may have an odd-parity, spin-triplet order parameter, the most likely order parameter to condense at \(T_{\rm c}\) is of the single-component B\({}_{2u}\) representation--either \(p_{y}\) or \(f_{yz^{2}}\)-wave superconductivity. Definitive determination of the orientation of the nodes in the superconducting gap would confirm this scenario.

## Methods

### Sample Growth and Preparation

Single crystals of UTe\({}_{2}\) were grown by the chemical vapor transport (CVT) method as described in Ran _et al._[43, 7]. Samples with one \(T_{\mathrm{c}}\) (two \(T_{\mathrm{c}}\)'s) were grown in a two-zone tube furnace with temperatures of 950\({}^{\circ}\)C and 860\({}^{\circ}\)C (1060\({}^{\circ}\)C and 1000\({}^{\circ}\)C) at the hot and cold ends, respectively. Specimens were aligned to better than 1\({}^{\circ}\) using magnetic anisotropy measurements (performed in a Quantum Design MPMS) and X-ray diffraction (performed in a Laue backscattering system). Samples were then polished to produce two parallel faces normal to the (100), (010), and (001) directions, depending on the mode geometry (see Table 2). Thin-film ZnO piezoelectric transducers were sputtered from a ZnO target in an atmosphere of oxygen and argon. Both shear and longitudinal responses are present in each transducer--the shear axis was aligned with either (100), (010), or (001), again depending on the particular mode geometry. Three crystals were measured in total; see Table 2 for details.

### Pulse-Echo Measurements

Measurements were performed in an Oxford Instruments Heliox \({}^{3}\)He refrigerator. We used a traditional phase-comparison pulse-echo ultrasound method to measure the elastic moduli. Short bursts (typically \(\sim 50\) ns) of radiofrequency signals, with the carrier frequency between 500 MHz and 2.5 GHz, were generated with a Tektronix TSG 4106A RF generator modulated by a Tektronix AFG 31052 arbitrary function generator, amplified by a Mini-Circuits ZHL-42W+ power amplifier, and transmitted to the transducer. The signal was detected with the same transducer, amplified with a Mini-Circuits ZX60-3018G-S+ amplifier, and recorded on a Tektronix MSO64 oscilloscope. The detection amplifier was isolated from the power amplifier using Mini-Circuits ZFSWA2-63DR+ switches, timed with the same Tektronix AFG 31052 arbitrary function generator. Both shear and compression sound are generated by our transducers--these signals are separated in the time domain due to their different speeds of propagation and identified as shear or compression using the known elastic moduli of UTe\({}_{2}\)[44]. Figure 6 shows a raw pulse-echo signal from a transducer sputtered on sample S3, with sound propagating along the [010] direction and a shear polarization axis along [100], thus measuring \(c_{22}\) and \(c_{66}\) simultaneously.
Echoes corresponding to the different elastic modes can be clearly identified as shear (red vertical dashed lines) and compression (blue vertical dashed lines). The phase of each echo was analyzed using a software lock-in, and the relative change in phase between two echoes was converted to the relative change in speed of sound as a function of temperature.

Figure 6: **Raw Pulse Echo Signal.** The raw signal from a sputtered ZnO shear transducer on sample S3 with sound propagation along the [010] and polarization along the [100] directions. The transducer exhibits both a compressional (blue lines) and a shear (red dashed lines) response. These correspond to sound modes determined by the elastic moduli \(c_{22}\) and \(c_{66}\), respectively.

In Figure 7 we compare the temperature dependence of \(c_{33}\) of samples S1 and S3 obtained with different transducers.

Figure 7: **Transducer Comparison.** Shown are \(\Delta c_{33}/c_{33}\) for single \(T_{c}\) (S1, left) and two \(T_{c}\) (S3, right) samples. For each sample we compare the relative change in elastic modulus between measurements obtained with two different transducers. Both transducers excited sound along the [001] direction. However, for the data in red, the shear component of the transducer was polarized along [100] (additionally measuring \(c_{55}\)), whereas for the data in blue, the shear component of the transducer was polarized along [010] (additionally measuring \(c_{44}\)).

Table 2: **Sample configuration.** Listed are the transducer configurations for all the measurements in this manuscript. Samples are sorted by the number of superconducting phase transitions (first column). Additional information given is the propagation direction \(\vec{k}\) and the polarization \(\vec{u}\) of the sound pulse excited in the sample, as well as the measured elastic modulus. Also shown is the frequency at which each measurement is performed.

| # \(T_c\) | Sample | \(\vec{k}\) | \(\vec{u}\) | \(c_{ij}\) | \(f\) (MHz) |
| --- | --- | --- | --- | --- | --- |
| 1 | S1 | [001] | [100] | \(c_{55}\) | 1261 |
| 1 | S1 | [001] | [010] | \(c_{44}\) | 1434 |
| 1 | S1 | [001] | [001] | \(c_{33}\) | 2260 |
| 1 | S2 | [100] | [100] | \(c_{11}\) | 823 |
| 1 | S2 | [100] | [010] | \(c_{66}\) | 1250 |
| 2 | S3 | [001] | [100] | \(c_{55}\) | 1348 |
| 2 | S3 | [001] | [010] | \(c_{44}\) | 1352 |
| 2 | S3 | [001] | [001] | \(c_{33}\) | 1348 |
| 2 | S3 | [010] | [100] | \(c_{66}\) | 1362 |
| 2 | S3 | [010] | [010] | \(c_{22}\) | 1362 |

###### Acknowledgements.

We acknowledge helpful discussions with D. Agterberg and P. Brydon. B.J.R. and F.T. acknowledge funding from the Office of Basic Energy Sciences of the United States Department of Energy under award no. DE-SC0020143 for preparing the samples and transducers, performing the measurements, analyzing the data, and writing the manuscript. Research at the University of Maryland was supported by the Department of Energy award number DE-SC-0019154 (sample characterization), the Gordon and Betty Moore Foundation's EPiQS Initiative through grant number GBMF9071 (materials synthesis), the National Science Foundation under grant number DMR-2105191 (sample preparation), the Maryland Quantum Materials Center and the National Institute of Standards and Technology.
A part of this work was performed at the Cornell Center for Materials Research Shared Facilities which are supported through the NSF MRSEC program (DMR-1719875).
2310.13626
Nonreciprocal Coulomb Drag between Quantum Wires in the quasi-1D regime
Coulomb drag experiments have been an essential tool to study strongly interacting low-dimensional systems. Historically, this effect has been explained in terms of momentum transfer between electrons in the active and the passive layer. Here, we report Coulomb drag measurements between laterally coupled GaAs/AlGaAs quantum wires in the multiple 1D sub-band regime that break Onsager's reciprocity upon both layer and current direction reversal, in contrast to prior 1D Coulomb drag results. The drag signal shows nonlinear I-V characteristics, which are well characterized by a third-order polynomial fit. These findings are qualitatively consistent with a rectified drag signal induced by charge fluctuations. However, the nonmonotonic temperature dependence of this drag signal suggests that strong electron-electron interactions, expected within the Tomonaga-Luttinger liquid framework, remain important and standard interaction models are insufficient to capture the qualitative nature of rectified 1D Coulomb drag.
R. Makaju, H. Kassar, S. M. Daloglu, A. Huynh, A. Levchenko, S. J. Addamane, D. Laroche
2023-10-20T16:22:20Z
http://arxiv.org/abs/2310.13626v2
# Nonreciprocal Coulomb Drag between Quantum Wires in the quasi-1D regime

###### Abstract

Coulomb drag experiments have been an essential tool to study strongly interacting low-dimensional systems. Historically, this effect has been explained in terms of momentum transfer between electrons in the active and the passive layer. Here, we report Coulomb drag measurements between laterally coupled GaAs/AlGaAs quantum wires in the multiple 1D sub-band regime that break Onsager's reciprocity upon both layer and current direction reversal, in contrast to prior 1D Coulomb drag results. The drag signal shows nonlinear I-V characteristics, which are well characterized by a third-order polynomial fit. These findings are qualitatively consistent with a rectified drag signal induced by charge fluctuations. However, the nonmonotonic temperature dependence of this drag signal suggests that strong electron-electron interactions, expected within the Tomonaga-Luttinger liquid framework, remain important and that standard interaction models are insufficient to capture the qualitative nature of rectified 1D Coulomb drag.

## I Introduction

Since their first experimental realization nearly four decades ago [1; 2], one-dimensional systems have been extensively studied, both to deepen our understanding of strongly correlated systems and for novel quantum applications such as charge sensing [3], proximity-induced superconductivity [4], and qubit engineering [5; 6; 7]. In one dimension, the strong confinement leads to reduced screening and increased electron-electron (e-e) interactions [8], giving rise to unique transport phenomena such as interaction-dependent universal scaling [9], spin-charge separation [10; 11], and charge fractionalization [12]. These seminal experimental results are well understood within the Tomonaga-Luttinger liquid theory [13; 14], where the low-energy excitations of one-dimensional systems are best described by collective spin and charge modes. While transport in single quantum wires has been heavily studied experimentally, these experiments did little to deepen our understanding of 1D electron interactions: the simple conductance measurement in clean systems is expected to yield the noninteracting quantized value [15], obscuring potential signatures of non-Fermi-liquid physics. Instead, experiments between coupled 1D systems have yielded the bulk of the experimental observations of Luttinger liquid physics in 1D systems [16; 10]. Owing to its sensitivity to both inter- and intrawire e-e interactions, Coulomb drag (CD) [17] is one of the prime experimental techniques to study these strongly interacting systems. In a typical CD experiment, a current (\(I_{drive}\)), sourced in one wire called the drive wire, induces a voltage (\(V_{drag}\)) in the adjacent drag wire through e-e interactions, provided that no current is flowing in said drag wire. Historically, most CD measurements have been interpreted in terms of momentum transfer, owing to their compliance with Onsager's reciprocity relations [18], as demonstrated in both 2D systems [19; 20; 21; 22; 23; 24; 25] and closely separated 1D systems [26; 27; 16; 28]. However, subsequent theoretical advances [29; 30; 31; 32; 33] have highlighted that, in mesoscopic structures, alternate drag-inducing mechanisms involving the rectification of charge fluctuations could explicitly break Onsager's relations.
Recent observations consistent with these novel theories have been reported in quantum dots [34], in nanowires coupled to graphene [35], as well as in superconducting [36; 37] and topological wires [38]. Understanding the material and parametric considerations behind the onset of this alternate drag-inducing mechanism is crucial for future developments in the field of coupled 1D systems. In this letter, we report CD between laterally coupled quantum wires. In contrast with past studies focusing on the single 1D subband regime and understood within the conventional momentum-transfer framework [26; 16] [see upper panel of Fig. 1(a)], we explore the multiple subband regime at large (\(d\gtrsim 150\) nm) interwire separation, where charge rectification has been found to play a predominant role. The reported drag signal shows a clear departure from Onsager's relation and exhibits nonlinear current-voltage characteristics. However, its nonmonotonic temperature dependence departs from the expected quadratic dependence predicted in mesoscopic systems with negligible e-e interactions [29], highlighting the likely role that interactions still play within the rectification framework. In the rectification model, depicted in the bottom panel of Fig. 1(a), the violation of the Onsager relations can be understood as the drive layer creating energy excitations that induce bidirectional momentum transfer in the adjacent layer. However, the wire's energy-dependent electron-hole (e-h) asymmetry, intrinsic to mesoscopic devices, results in a drag voltage that is primarily generated in a specific direction, independently of the sign of the drive current. Characterizing this novel drag-inducing mechanism might prove crucial for the development of quantum devices harnessing e-e interactions, particularly in the fields of thermoelectricity [39; 40] and quantum computing [41].

## II Device fabrication

The coupled quantum wires are fabricated from a GaAs/AlGaAs heterostructure with a quantum well buried \(\sim 80\,nm\) below the surface. The quantum wires are laterally coupled over a length \(l=5\,\mu m\) and are separated by an electrostatic barrier of width \(d\sim 150\,nm\). A scanning electron microscope image of a typical device is shown in Fig. 1(b). The wires are engineered using standard nanofabrication procedures, consisting of both electron-beam and photolithography, and are contacted with evaporated Ge-Au-Ni-Au ohmic contacts. Additional details concerning the fabrication can be found in the Supplemental Material. The coupled quantum wires are defined by three gates: a top gate (\(V_{T}\)), a middle gate (\(V_{M}\)) and a bottom gate (\(V_{B}\)) [see Fig. 1(b)]. Unless otherwise specified, standard low-frequency lock-in techniques, at a frequency of either \(9\,Hz\) or \(37.3\,Hz\), are used for the CD measurements; additional standard DC measurements have also been performed. Measurements were carried out in a Bluefors dilution refrigerator with a base lattice temperature of \(\sim 10\,mK\). A circuit diagram of the CD measurement scheme, utilizing a virtual ground on the drag side, is presented in Fig. 1(c), where \(I_{drive}\) is applied to the drive wire (green) and \(V_{drag}\) is measured in the drag wire (blue). A typical CD measurement, over a wide range of subband occupancy in both wires, is shown in Fig. 2(a), while the conductance of both the top and the bottom wire is shown in Fig. 2(b) and Fig. 2(c) respectively, along with a linecut of the drag voltages.
The plateaus observed in the conductance of both wires do not lie at the integer values of \(2e^{2}/h\), even after accounting for series resistance in the setup, indicating the non-ballistic nature of the wires. The drag signal shows pronounced oscillations over both positive and negative polarities of the drag voltage for a given drive current direction, and the oscillations are generally concomitant with openings of 1D subbands in either the drag or the drive wire. As seen from the comparison of the drag peaks in Fig. 2(b) and 2(c), the modulation from the bottom (drive) wire is notably weaker than that of the top (drag) wire, especially away from the single 1D subband regime. All drag measurements were performed with \(V_{M}=0.15\) V, yielding a tunneling resistance larger than \(30\,M\Omega\). The drag signal is also frequency independent between 9 and \(85\,Hz\) (see Fig. S4).

Figure 1: Schematics and circuit diagram of the laterally coupled quantum wires. (a) Schematic representation of the drag-inducing mechanisms due to momentum transfer (top) and energy rectification (bottom). The top wire (blue) is the drag wire and the bottom wire (green) is the drive wire. (b) Scanning electron microscope image of the laterally coupled quantum wires, constituted of a top (\(V_{T}\)), a middle (\(V_{M}\)) and a bottom (\(V_{B}\)) gate. (c) Circuit diagram for Coulomb drag measurements. A drive current is supplied to the green section of the device and the drag voltage is measured in the adjacent wire. The drive current is sourced using a \(R_{s}=10\,M\Omega\) resistor in series with the drive wire. A virtual ground setup is used on the drag side of the experiment.

## III Nonreciprocal Coulomb drag

To further investigate the discrepancy in the modulation of the drag signal between the top and bottom wires, we measured CD upon layer reversal. Fig. 3(a) shows the CD signal with the bottom wire as the drive wire and Fig. 3(b) with the top wire as the drive wire. The oscillations observed in the drag signal are primarily correlated with the drag wire gate and not strongly correlated with the drive wire gate, as seen from the presence of the horizontal stripes in Fig. 3(a) and vertical stripes in Fig. 3(b). This is a clear violation of Onsager's reciprocity [18], which is expected to be satisfied within the conventional momentum transfer approach to CD. A similar violation occurs upon current direction reversal, as shown in Fig. 3(c) and 3(d). As the current direction is inverted without exchanging the drag voltage probes, Onsager's reciprocity would result in a sign reversal of the drag signal, whereas our measured signal showed minimal changes. These changes, observed when extracting the symmetric and anti-symmetric contributions to the drag signal (see Fig. S5), are less than \(\sim 20\%\) of the symmetric signal, and exhibit reduced modulation with 1D subband occupancy. These results strongly suggest that conventional momentum-transfer models for 1D Coulomb drag are inadequate to explain our data. An alternate drag mechanism explaining the violation of Onsager's relations in mesoscopic systems is rectification [29] [Fig. 1(a), bottom panel]. This model predicts that strong asymmetry, either in the e-h transmission probability or in the circuit itself, could induce a rectified CD signal that is independent of the drive current direction.
A model for rectified Coulomb drag in coupled quantum point contacts (QPCs) predicts two dominant contributions to drag: a linear contribution from the rectification of near-equilibrium thermal noise, enabled by e-h asymmetry, and a nonlinear contribution, dominating at larger drive currents, from the rectification of quantum shot noise, which is sensitive to the intrinsic asymmetry of the circuit. Both terms are predicted [29] to provide the following contributions to the drag signal:

\[I_{D}^{th}=V\frac{R_{Q}^{2}}{4\pi}\int d\omega\frac{Z_{+}(\omega)}{\omega^{2}}\frac{\partial}{\partial\omega}\left[\coth\frac{\omega}{2T}\right]\Gamma_{1}(\omega)\Gamma_{2}(\omega),\]

\[I_{D}^{shot}=\frac{eV^{2}}{\Delta_{2}R_{Q}}Z_{-}(0)\sum_{n}|\mathbf{t}_{n}|^{2}[1-|\mathbf{t}_{n}|^{2}].\]

Here, \(R_{Q}=\frac{2\pi\hbar}{e^{2}}\) is the resistance quantum, \(\omega\) is the frequency of the rectified noise from the drive circuit, \(\Delta_{2}\) is the energy scale of the confinement potential of the drag wire, \(Z_{\pm}(\omega)\) is a dimensionless trans-impedance kernel that captures the circuitry of interwire interactions, and \(\Gamma_{1,2}\) are the rectification coefficients, given by:

\[\Gamma=\frac{2e}{R_{Q}}\sum_{n}\!\int\!d\epsilon[f(\epsilon_{-})-f(\epsilon_{+})][|\mathbf{t}_{n}(\epsilon_{+})|^{2}-|\mathbf{t}_{n}(\epsilon_{-})|^{2}]\]

where \(\epsilon_{-}\) (\(\epsilon_{+}\)) is the energy of the electrons (holes), with the corresponding occupations \(f(\epsilon_{\pm})\), and \(\mathbf{t}_{n}\) is the transmission probability across the \(n^{th}\) channel of the wire. Higher-order effects can also contribute additional nonlinear terms to the drag signal [42]. We note that, in the linear regime, an e-h asymmetry is essential for the onset of a drag signal, and its sign will depend on whether the carriers' transmission probability is locally increasing or decreasing with energy. Within this framework, the left-right Onsager relation is explicitly broken through the current rectification. In addition, owing to the finite bias across the drive wire, its chemical potential is \(\sim 200\)\(\mu\)eV larger than that of the drag wire. The lack of layer inversion symmetry implies that \(\Gamma_{1}(\omega,\epsilon+200\mu eV)\Gamma_{2}(\omega,\epsilon)\neq\Gamma_{2}(\omega,\epsilon+200\mu eV)\Gamma_{1}(\omega,\epsilon)\), _i.e._ that the wires' rectification coefficients are not identical.

Figure 2: Characterization of the quantum wires. (a) Drag voltage as a function of top (drive) and bottom (drag) gate voltages. The vertical (red) and horizontal (black) dashed lines represent the line cuts used for panels b) and c). (b,c) Drag voltage along their respective line-cut in the top and bottom wires. The conductance plateaus are not quantized at integer values of \(2e^{2}/h\) as the wires are non-ballistic.

Studying the DC response of the drag signal simplifies the measurement by fixing the electrons' chemical potential to a single value in the drive wire. As presented in Fig. 4, we measured the DC 1D drag with the top wire as the drive wire and the bottom wire as the drag wire, in both current directions and with both positive [Fig. 4(a) and (b)] and negative current sources [Fig. 4(c) and (d)]. As in the AC drag, the DC drag violates Onsager's relation upon reversal of the current direction. However, the signal changes both in magnitude and sign by going from positive to negative voltages.
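The role of e-h asymmetry in the rectification coefficient defined above can be illustrated numerically. The sketch below works in reduced units with all prefactors dropped, assumes \(\epsilon_{\pm}=\epsilon\pm\omega/2\), and uses a smooth, QPC-like step for \(|\mathbf{t}(\epsilon)|^{2}\); none of these choices come from the paper. It shows that \(\Gamma\) vanishes for an energy-independent transmission and is finite, with a sign set by whether the transmission rises or falls with energy, otherwise.

```python
import numpy as np

def fermi(E, T):
    return 1.0 / (np.exp(E / T) + 1.0)

def gamma(omega, mu, T, trans):
    """Rectification coefficient, prefactors dropped:
    Gamma ~ int dE [f(E - w/2) - f(E + w/2)] [|t(E + w/2)|^2 - |t(E - w/2)|^2],
    assuming the electron/hole energies sit at E -/+ omega/2 about E."""
    E = np.linspace(mu - 30.0, mu + 30.0, 20001)
    Ep, Em = E + omega / 2.0, E - omega / 2.0
    integrand = (fermi(Em - mu, T) - fermi(Ep - mu, T)) * (trans(Ep) - trans(Em))
    return np.sum(integrand) * (E[1] - E[0])

# Smooth step standing in for a QPC-like transmission near a subband edge:
step_up = lambda E: 1.0 / (1.0 + np.exp(-E / 2.0))
flat = lambda E: 0.5 + 0.0 * E

print(gamma(omega=1.0, mu=0.0, T=0.5, trans=step_up))  # finite: e-h asymmetric
print(gamma(omega=1.0, mu=0.0, T=0.5, trans=flat))     # ~0: no asymmetry
```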
This voltage-sign dependence further corroborates the rectified Coulomb drag model: only the chemical potential of the drive wire, and not the direction of the current flow, has an incidence on the drag signal. As expected for rectified drag dominated by the linear component, the sign of the drag voltage also inverts when the sign of the drive voltage is inverted. To quantify the nonlinearity of the drag signal, we present in Fig. 4(e) the I-V relation for the DC drag voltage. The drag voltage is well described by \(V_{drag}=-5.5I+0.058I^{2}-3.4\times 10^{-4}I^{3}\), with the current given in nA and the voltage in \(\mu\)V. A cubic fit was selected, as neither a linear fit nor a quadratic fit provided a good match to the data. A similar nonlinearity of the drag I-V relation is observed in the AC regime, as shown in Figs. 5(a) and (b), where the data can also be well fitted by a cubic polynomial. Consistent with the microscopic model for rectified drag in mesoscopic circuits, the quantitative details of the drag nonlinearity strongly vary with gate voltage. Over all gate voltages analysed, the linear coefficients (in \(\mu V/nA\)) are between one and two orders of magnitude stronger than the quadratic terms (in \(\mu V/nA^{2}\)), which are themselves between one and two orders of magnitude larger than the cubic terms (in \(\mu V/nA^{3}\)) (see Tables S3 and S4 for parameter details). As such, the predominant contribution to the drag signal appears to be the rectification of near-equilibrium thermal noise, but quantum shot-noise rectification is still significant for certain gate voltage configurations. The discrepancy in the fitting parameters between the DC and the AC measurements can be explained by microscopic changes in the wires' potential landscape between different cooldowns.

Figure 3: Onsager relations of the drag signal. The drag voltage is plotted as a function of both top and bottom gate voltages for various measurement configurations. (a) The top wire is used as the drag wire while the bottom wire is used as the drive wire. (b) The top wire is used as the drive wire while the bottom wire is used as the drag wire. Onsager’s relation is not obeyed when the drag and the drive wires are exchanged as the signals are not identical. (c) Same setup as a), but over a different cooldown. (d) The position of current injection in the wire is reversed. Onsager’s relation is broken yet again as the signal’s polarity remains virtually unchanged when the current direction is reversed.

Figure 4: Current dependence of the DC drag signal with the top wire as the drive wire and the bottom wire as the drag wire. (a) DC drag as a function of top and bottom gate voltages with right-flowing \(I_{drive}\sim 10\) nA. (b) Same measurement but with the current source position reversed. (c) DC drag as a function of top and bottom gate voltages for right-flowing current \(I_{drive}\sim-10\) nA. (d) Same measurement as in c), but with left-flowing current. (e) I-V relation of the DC drag for positive and negative drive currents. The solid line represents a cubic fit to the data.

In the mesoscopic regime, one would naturally expect the size of CD fluctuations to be of the order of the Thouless energy, \(E_{\rm Th}=\frac{\hbar D}{L^{2}}=\frac{\hbar v_{F}l}{2L^{2}}\)[43; 44]. Here, \(D\) is the diffusion coefficient of the mesoscopic system, \(L\) is the wire's length, \(l\) is the electron mean free path, \(v_{F}=\frac{\pi\hbar n_{1D}}{2m^{*}}\) is the Fermi velocity, \(n_{1D}\) is the electronic density and \(m^{*}\) is the electron effective mass.
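The order-of-magnitude estimate carried out in the next paragraph can be reproduced in a few lines, using the densities obtained there from magnetic depopulation. Two inputs are assumptions of this sketch rather than statements from the paper: the standard GaAs effective mass \(m^{*}=0.067\,m_{e}\), and the use of the density per occupied 1D subband in \(v_{F}\); with them, the result lands near the low end of the 6.12-9.87 \(\mu\)eV window quoted below.

```python
import scipy.constants as const

hbar, e, pi = const.hbar, const.e, const.pi

def thouless_ueV(n1d, n_subbands, l=1e-6, L=5e-6, m_star=0.067 * const.m_e):
    """E_Th = hbar * v_F * l / (2 L^2), with v_F = pi * hbar * n / (2 m*),
    returned in micro-eV. Assumes the density per occupied subband
    (n = n1d / n_subbands) and the GaAs effective mass 0.067 m_e."""
    v_f = pi * hbar * (n1d / n_subbands) / (2.0 * m_star)
    return hbar * v_f * l / (2.0 * L**2) / e * 1e6

# Densities quoted in the next paragraph, for 5 and 6 occupied subbands;
# mean free path l ~ 1 um and coupled length L = 5 um, as in the text:
print(f"{thouless_ueV(8.94e8, 5):.1f} ueV")  # ~6 ueV
print(f"{thouless_ueV(1.06e9, 6):.1f} ueV")
```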
From magnetic depopulation measurements [53] (see Supplemental Material), we estimate a 1D density of \(n_{1D}\sim 8.94\!\times\!10^{8}\,m^{-1}\) when the wire has 5 populated subbands, and \(n_{1D}\sim 1.06\!\times\!10^{9}\,m^{-1}\) when the wire has 6 populated subbands. The mean free path can be estimated from the typical size at which quantized 1D conduction is observed, which is \(\sim 1\,\mu\)m in shallow 2DEGs. From these estimates, we obtain a Thouless energy in the range of \(E_{th}\sim 6.12-9.87\,\mu\)eV, in good agreement with the typical size of the oscillations observed in our Coulomb drag signal. The nonlinearity of the current-voltage characteristics should also result in a deformation of the sinusoidal drag signal in AC measurements. Fig. 5(c) and (d) show the waveforms of the drag signal from \(-\pi/2\) to \(\pi/2\) at different drive currents (1.46 nA, 5.38 nA and 18.34 nA from bottom to top) for wire setups A [Fig. 5(c)] and B [Fig. 5(d)] respectively, with the bottom gate at -1.52 V and the top gate at -0.38 V. These waveforms were calculated by adding the first 9 harmonics of the drag signal. The shape of the waveform is drive-current dependent and its magnitude increases as the drive current is increased. Fig. 5(e) and (f) show the waveforms for \(I_{drive}=11.8\,nA\), at different top and bottom gate voltages, which are represented by the filled circles in Fig. 5(g). Due to the nonlinear I-V relation, we expect the waveforms to digress from the expected sinusoidal shape and to exhibit a significant dependence on the gate voltage values. The waveforms exhibit nonidentical characteristics upon current direction reversal, likely caused by a small momentum-transfer contribution to the drag signal or by different line resistances into the wires, changing the electrons' chemical potential. We also note that Joule heating from the drive current (\(\sim 1\) mV voltage drop at 10 nA) is unlikely to be at the origin of the I-V nonlinearity since, as shown in Fig. 6, the drag signal resulting from a 10 nA drive current exhibits a nonmonotonic temperature dependence down to \(\sim 180\) mK, a temperature much lower than the equivalent voltage temperature of the drive circuit (\(\sim 1.6\) K).

Figure 5: Waveform and I-V characteristics of the drag signal in two different setups. (a) Drag voltage as a function of drive current, for setup A, as shown in the top of panel c). (b) Same plot for setup B, with inverted current direction. The solid lines represent a cubic fit. The I-V relationship deviates from the linear behavior predicted by momentum transfer models. (c) The waveforms of the drag signal at different drive currents: 1.46 nA, 5.38 nA and 18.34 nA (from bottom to top) for wire setup A. (d) Same plot as (c) for setup B. The peak of the waveform increases in magnitude as the drive current is increased from 1.46 nA to 18.34 nA. (e) The waveform of the drag signal at different gate voltages for setup A. (f) Same waveform plot as (e) for setup B. (g) Drag voltage as a function of top gate and bottom gate voltages. The colored points on the plot correspond to the waveform plots in (e) and (f).

## IV Temperature dependence

In the Fermi liquid regime, Coulomb drag induced by charge density fluctuations is expected to depend quadratically on temperature. However, as presented in Fig. 6(a), the observed temperature dependence of the drag signal is nonmonotonic.
The observation of both an increasing drag signal with a decreasing temperature [45] and of a nonmonotonic temperature dependence [17; 46] are hallmarks of interaction effects within the Luttinger liquid model, albeit in a framework where the drag signal is induced by momentum transfer. However, to describe this effect in the diffusive limit of multichannel quantum wires, one must go beyond the usual approximations of the Fermi liquid and Luttinger liquid theories of drag. In particular, the three-particle interwire correlations remove the constraints of particle-hole asymmetry and may lead to a strong drag effect in the low-temperature regime. In the diffusive limit, \(T\tau\ll 1\), where \(\tau\) is the intrawire transport scattering time, the resulting temperature dependence of the third-order drag mechanism of transconductivity can be extracted from Ref. [47] with modifications appropriate for the 1D system. We find \(\sigma_{D}\sim R_{Q}^{-1}(\nu U_{0})^{3}L_{T}\propto\frac{1}{T}\) for the case of short-ranged interactions (strong screening), where \(L_{T}=\frac{v_{F}}{T}\) is the thermal de Broglie length, \(\nu\) is the 1D density of states, and \(U_{0}\) is the characteristic strength of the interwire interaction for forward scattering with small momentum transfer. The surprising feature of this result is that it is independent of \(\tau\). For long-ranged interactions, we find the same temperature dependence, but with a more rapid decay of drag with the interwire separation, namely \(\sigma_{D}\sim R_{Q}^{-1}(\nu U_{0})^{3}L_{T}/(\kappa d)^{3}\) for \(\kappa d\gg 1\), where \(\kappa\) is the inverse Thomas-Fermi screening radius. An extension of the formalism from Refs. [47; 48] to the ballistic limit of transport, \(T\tau>1\), results in the Fermi-liquid-like temperature dependence of the drag conductivity \(\sigma_{D}\sim R_{Q}^{-1}(\nu U_{0})^{3}(v_{F}\tau)(T\tau)^{2}\propto T^{2}\). Therefore, the three-particle mechanism of drag can result in both a nonmonotonic temperature dependence and an upturn of drag at low temperatures, even from the forward electron scattering at small momentum transfer between the wires. This analysis should be contrasted with the Arrhenius behavior predicted to occur in ballistic wires within the momentum transfer formalism [49; 45; 50]. We present the results of power-law fits of the drag temperature dependence in Figs. 6(b) and (c), in log-log and Arrhenius form, respectively. The blue solid lines in Figs. 6(b) and 6(c) indicate the regimes where the log-log plot and the Arrhenius plot are nearly linear, and the exponents extracted from these fits are \(V_{drag}\propto T^{\alpha}\), \(\alpha=-0.98\pm 0.04\) for the power-law function and \(V_{drag}\propto e^{\beta/T}\), \(\beta=1.028\pm 0.01\) for the Arrhenius function. Analysis at different subband occupancies leads to comparable power-law fits: \(V_{drag}\propto T^{\alpha}\), \(\alpha=-0.8\pm 0.2\). We note that, owing to a sign change at high temperature in our data, an offset voltage \(V_{0}\) has been included in the fit. Additional details about the fitting procedure can be found in the supplement. While the power-law exponent value is consistent with the three-particle mechanism for Coulomb drag described above, the limited range over which the drag signal increases with decreasing temperature prevents us from ruling out the possibility that a more conventional Arrhenius-like behavior is occurring.
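As a minimal sketch of the two fit forms used above, the code below fits a power law \(V=AT^{\alpha}+V_{0}\) and an Arrhenius form \(V=Ae^{\beta/T}+V_{0}\), each with the constant offset \(V_{0}\) that absorbs the high-temperature sign change. The data arrays are synthetic placeholders, not the measured drag voltages, and the function names are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(T, A, alpha, V0):
    return A * T**alpha + V0

def arrhenius(T, A, beta, V0):
    return A * np.exp(beta / T) + V0

rng = np.random.default_rng(1)
T = np.linspace(0.2, 1.5, 30)                      # temperatures (K), placeholder
V = 0.3 * T**-1.0 + 0.16 + 0.01 * rng.normal(size=30)  # synthetic drag voltage (uV)

(pA, p_alpha, pV0), _ = curve_fit(power_law, T, V, p0=(0.3, -1.0, 0.16))
(aA, a_beta, aV0), _ = curve_fit(arrhenius, T, V, p0=(0.1, 1.0, 0.16))
print(f"power law: alpha = {p_alpha:.2f}, offset V0 = {pV0:.3f} uV")
print(f"Arrhenius: beta  = {a_beta:.3f} K, offset V0 = {aV0:.3f} uV")
```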
Additional experimental and theoretical work will be required to confirm this conclusion.

## V Discussion and Summary

The results reported in this letter are fairly different from prior 1D drag results [51; 16; 27], where the CD signal appeared to be consistent with the momentum transfer model. The reason behind this discrepancy is not readily apparent. However, it is likely that a combination of the large subband occupancy in the wires, the significant interwire separation, and the sample's innate disorder could be the source of these fundamental differences in the nature of the dominant drag-inducing mechanism. It should also be noted that, as highlighted by recent studies [35; 36; 38; 52], observations of a negative and/or nonreciprocal CD are not uncommon in mesoscopic systems. Additional experimental and theoretical work will be required to determine the universality of rectification-induced drag across various material platforms and to assess the parametric onset of both momentum-transfer and rectification-induced drag. In summary, we present an experimental study of 1D Coulomb drag between quantum wires in the multiple subband regime. Our CD measurements deviate from the standard momentum transfer models by clearly violating the Onsager reciprocity relations, both upon layer reversal and current reversal. Subsequent measurements of the nonlinearity of the drag signal are consistent with a microscopic energy rectification model for Coulomb drag. However, the nonmonotonic temperature dependence of the drag signal highlights the importance of including electron-electron interactions beyond the Luttinger liquid framework in future theoretical descriptions of rectification-induced drag.

Figure 6: Temperature dependence of the CD signal. (a) Temperature dependence of the CD signal with \(N_{drive}\leq 1\) and \(N_{drag}\leq 4\) (black), \(N_{drive}\leq 1\) and \(N_{drag}\leq 3\) (red), and \(N_{drive}\leq 1\) and \(N_{drag}\leq 5\) (green). (b) Log-log plot of drag voltage and temperature for \(N_{drive}\leq 1\) and \(N_{drag}\leq 4\). The blue straight line is the linear fit for the high-temperature regime, and the offset is \(V_{0}=0.16451\,\mu V\). (c) Arrhenius plot of drag voltage and temperature for \(N_{drive}\leq 1\) and \(N_{drag}\leq 4\), with a linear fit (blue straight line) in the high-temperature regime. The offset is \(V_{0}=0.16451\,\mu V\).

## Acknowledgements

This work was performed, in part, at the Center for Integrated Nanotechnologies, an Office of Science User Facility operated for the U.S. Department of Energy (DOE) Office of Science. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International, Inc., for the U.S. DOE's National Nuclear Security Administration under contract DE-NA-0003525. The views expressed in the article do not necessarily represent the views of the U.S. DOE or the United States Government. This work was partially supported by the National High Magnetic Field Laboratory through the NHMFL User Collaboration Grants Program (UCGP). The National High Magnetic Field Laboratory is supported by the National Science Foundation through NSF/DMR-1644779 and the State of Florida. A. L. acknowledges support by the NSF Grant No. DMR-2203411 and the H. I. Rommes Faculty Fellowship provided by the University of Wisconsin-Madison Office of the Vice Chancellor for Research and Graduate Education with funding from the Wisconsin Alumni Research Foundation.
2303.14071
Improving Real-time Communication for Educational Metaverse by Alternative WebRTC SFU and Delegating Transmission of Avatar Transform
Maintaining real-time communication quality in the metaverse has always been a challenge, especially as the number of participants increases. We introduce a proprietary WebRTC SFU service to an open-source web-based VR platform to realize a more stable and reliable platform suitable for educational communication of audio, video, and avatar transform. We developed the web-based VR platform and conducted a preliminary validation of the implementation as a proof of concept; high performance on both the server and client sides was confirmed, which may indicate a better user experience in communication and imply a solution to realize an educational metaverse.
Yong-Hao Hu, Kenichiro Ito, Ayumi Igarashi
2023-03-24T15:31:03Z
http://arxiv.org/abs/2303.14071v1
Improving Real-time Communication for Educational Metaverse by Alternative WebRTC SFU and Delegating Transmission of Avatar Transform ###### Abstract Maintaining real-time communication quality in the metaverse has always been a challenge, especially as the number of participants increases. We introduce a proprietary WebRTC SFU service to an open-source web-based VR platform to realize a more stable and reliable platform suitable for educational communication of audio, video, and avatar transform. We developed the web-based VR platform and conducted a preliminary validation of the implementation as a proof of concept; high performance on both the server and client sides was confirmed, which may indicate a better user experience in communication and imply a solution to realize an educational metaverse. Metaverse, Real-time Communication, Web User Interface

## I Introduction

With the rising use of the metaverse, usually referring to a 3D virtual space accessible from electronic devices including computers or VR devices, in various situations, its application in education has also been conducted and discussed [1][2]. Similar to how Moodle1 serves as a representative Learning Management System (LMS), we consider it essential to build an inclusive platform in the metaverse for education. We chose Mozilla Hubs2 as the base for such a platform due to its open-source-driven, highly accessible browser-based nature, complete architecture, and well-proven track record. Footnote 1: [https://moodle.org/](https://moodle.org/) Footnote 2: [https://hubs.mozilla.com/](https://hubs.mozilla.com/) Mozilla Hubs currently has a limit of 24 participants per room3, and raising the limit decreases the communication quality, since real-time communication within 3D virtual spaces includes not only media (audio and video) but also spatial data such as avatar transform, which results in much heavier data transmission, especially with a high number of participants. Creating multiple rooms and redirecting participants is an alternative way to accommodate more participants; however, this approach may not be suitable for an inclusive online classroom where all participants may attend in the same room. Footnote 3: [https://support.mozilla.org/en-US/kb/room-capacity-hubs](https://support.mozilla.org/en-US/kb/room-capacity-hubs) The goal of this study includes achieving comparable or higher communication quality through audio, video, and spatial data in the same room without being impacted by the number of clients per room, for the purpose of developing an inclusive educational metaverse.

## II Development of Inclusive Educational Metaverse Platform

### _Requirements_

We consider being open-source, browser-based, and capable of maintaining communication quality to be the basic requirements for an education platform to be sustainable and inclusive. Open-source software, depending little on proprietary components, is more flexible to customize and robust against the discontinuation of proprietary software, while proprietary software such as Google Workspace or Microsoft 365 could still be utilized as tools with benefits to reduce the cost of operation and maintenance. A browser-based service, available to various devices with browsers installed, enhances accessibility for users in diverse situations. In contrast, services that are not browser-based and depend on a specific operating system require concurrent maintenance to comply with the OS update life cycle, which many applications fail to keep up with.
Stable and reliable communication is indispensable for education, and low communication quality owing to the number of students is undesirable. The current number of students per classroom in Japan is up to around 35 or 40, and we expect an online classroom to support at least the same capacity while keeping its communication quality. Therefore, an educational metaverse should not merely be open-source and browser-based, which are part of the reasons we adopted Mozilla Hubs, but should also maintain communication quality with little impact from the number of participants, which is what we attempt to improve by utilizing proprietary software as an alternative.

### _Method_

Mozilla Hubs' original architecture4 transmits spatial data through WebSocket on a mesh network, and transmits media using its own WebRTC Selective Forwarding Unit (SFU) named Dialog, which was formerly based on Janus5 and is currently based on Mediasoup6, both open-source WebRTC SFU libraries. Footnote 4: [https://hubs.mozilla.com/docs/system-overview.html](https://hubs.mozilla.com/docs/system-overview.html) Footnote 5: [https://janus.conf.meetecho.com/](https://janus.conf.meetecho.com/) Footnote 6: [https://mediasoup.org/](https://mediasoup.org/) We propose introducing an alternative WebRTC SFU solution besides Mediasoup for media transmission, and delegating the transmission of avatar transform to the alternative.

### _Implementing WebRTC SFU Alternative_

Sora, a WebRTC SFU provided by Shiguredo Inc. in Japan7, was chosen for implementation due to its higher client capacity (1:1000 broadcasting per room), more concise implementation, swift response to new browser updates, and more stable signaling through a hybrid of WebSocket and DataChannel. Footnote 7: [https://sora.shiguredo.jp/](https://sora.shiguredo.jp/) In the current architecture of Mozilla Hubs, a client joining a room uses data retrieved from Reticulum, the web server of Mozilla Hubs, to conduct signaling through Protoo8 to create WebRTC connections with other clients, and media starts being transmitted through Mediasoup, as shown in Fig. 1. Footnote 8: [https://protoo.versatica.com/](https://protoo.versatica.com/) Our implementation is shown in Fig. 2. Components in red are the differences from the default Mozilla Hubs architecture. Sora is capable of handling signaling, which Mediasoup is not9; therefore, Sora handles both signaling and data transmission in the proposed implementation. In our implementation, we host Reticulum, Dialog, and the Hubs frontend on the same server, and when a room manager chooses to use Sora, Dialog is paused and Sora's cloud service starts serving as the WebRTC SFU. Hence, the implementation provides a proprietary software solution that easily solves the existing problem when needed, but also avoids making the software dependent on it. Footnote 9: [https://mediasoup.org/documentation/v3/communication-between-client-and-server/](https://mediasoup.org/documentation/v3/communication-between-client-and-server/)

### _Delegate Avatar Transform Transmission to WebRTC SFU_

The current architecture of Mozilla Hubs synchronizes avatar transform through Reticulum, the web server, using the reliable WebSocket protocol. However, real-time synchronization between voice and avatar poses may be regarded as more important for communication than simply prioritizing reliability. WebRTC DataChannel is implemented on top of UDP while able to remain as reliable as TCP, making it suitable for the real-time transmission of avatar transform data.
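As an illustration of the kind of payload involved, the sketch below shows a hypothetical avatar-transform message: position and rotation (quaternion) for the body, head, and both hands, sent per tick over a data channel. The field names and the JSON encoding are assumptions for illustration only; they are not the actual wire format of Hubs or Sora.

```python
# Hypothetical avatar-transform update of the kind delegated to a DataChannel.
import json
from dataclasses import dataclass, asdict

@dataclass
class Pose:
    pos: tuple  # (x, y, z) position in metres
    rot: tuple  # (x, y, z, w) rotation quaternion

@dataclass
class AvatarTransform:
    client_id: str
    tick: int            # monotonically increasing frame counter
    body: Pose
    head: Pose
    left_hand: Pose
    right_hand: Pose

    def encode(self) -> bytes:
        # Compact JSON; a receiver can drop stale updates by comparing `tick`.
        return json.dumps(asdict(self), separators=(",", ":")).encode()

msg = AvatarTransform("user-42", 1337,
                      Pose((0, 1.6, 0), (0, 0, 0, 1)),
                      Pose((0, 1.7, 0), (0, 0.1, 0, 0.995)),
                      Pose((-0.3, 1.2, 0.2), (0, 0, 0, 1)),
                      Pose((0.3, 1.2, 0.2), (0, 0, 0, 1)))
print(len(msg.encode()), "bytes per update")
```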
The transmission of avatar transform was delegated to Sora's DataChannel, including the position and rotation of the body, head, and hands.

## III Preliminary Validation: Proof of Concept

A preliminary validation was conducted on the implementation with Sora, compared with the original Dialog implementation. For each condition, 12 devices typically used in educational settings were connected: 1 Apple Mac, 1 Windows 10 laptop, 2 iPads, 1 Chromebook, 1 Microsoft Surface, 3 iPhones, 1 Android smartphone, 1 Meta Quest 2, and 1 Pico 4. Server data and client data were obtained for 5 minutes, and no severe delay in media or avatar transform was observed for either condition. The server's average load was collected every minute (Table I). The results indicate that the proposed implementation using Sora reduces server load compared with the original implementation. Client data of bytes transmitted between clients was collected with the WebRTC getStats API on the Apple Mac device. Bitrates were calculated and plotted in Fig. 3, with the averages listed in Table II. The results indicate higher sent/received bitrates and more stable sent bitrates for the implementation with Sora.

## IV Conclusion

We introduced a proprietary WebRTC SFU service to Mozilla Hubs to improve media and avatar transform transmission. Preliminary validation results showed lower server load and higher bitrates, which implies a better user experience in communication and the feasibility of a metaverse for education.

## Acknowledgment

This work was partially supported by the following grants: JST Grant Number JPMJPF2202, JSPS KAKENHI Grant Number 22K19683.
2306.09198
A Review on Quantum Approximate Optimization Algorithm and its Variants
The Quantum Approximate Optimization Algorithm (QAOA) is a highly promising variational quantum algorithm that aims to solve combinatorial optimization problems that are classically intractable. This comprehensive review offers an overview of the current state of QAOA, encompassing its performance analysis in diverse scenarios, its applicability across various problem instances, and considerations of hardware-specific challenges such as error susceptibility and noise resilience. Additionally, we conduct a comparative study of selected QAOA extensions and variants, while exploring future prospects and directions for the algorithm. We aim to provide insights into key questions about the algorithm, such as whether it can outperform classical algorithms and under what circumstances it should be used. Towards this goal, we offer specific practical points in the form of a short guide. Keywords: Quantum Approximate Optimization Algorithm (QAOA), Variational Quantum Algorithms (VQAs), Quantum Optimization, Combinatorial Optimization Problems, NISQ Algorithms
Kostas Blekos, Dean Brand, Andrea Ceschini, Chiao-Hui Chou, Rui-Hao Li, Komal Pandya, Alessandro Summer
2023-06-15T15:28:12Z
http://arxiv.org/abs/2306.09198v2
# A Review on Quantum Approximate Optimization Algorithm and its Variants

###### Abstract

The Quantum Approximate Optimization Algorithm (QAOA) is a highly promising variational quantum algorithm that aims to solve combinatorial optimization problems that are classically intractable. This comprehensive review offers an overview of the current state of QAOA, encompassing its performance analysis in diverse scenarios, its applicability across various problem instances, and considerations of hardware-specific challenges such as error susceptibility and noise resilience. Additionally, we conduct a comparative study of selected QAOA extensions and variants, while exploring future prospects and directions for the algorithm. We aim to provide insights into key questions about the algorithm, such as whether it can outperform classical algorithms and under what circumstances it should be used. Towards this goal, we offer specific practical points in the form of a short guide. **Keywords:** Quantum Approximate Optimization Algorithm (QAOA), Variational Quantum Algorithms (VQAs), Quantum Optimization, Combinatorial Optimization Problems, NISQ Algorithms _All authors contributed equally to this work._

###### Contents

* 1 Introduction
* 2 Background
* 2.1 MaxCut Problem Overview
* 2.2 QUBO Problems and Applications
* 2.3 Classical Algorithms for MaxCut Problem
* 2.4 Variational Quantum Algorithms
* 2.4.1 Variational Quantum Eigensolver (VQE)
* 2.4.2 Quantum Adiabatic Algorithm (QAA)
* 2.4.3 Barren plateaus
* 2.5 The Quantum Approximate Optimization Algorithm (QAOA)

List of Figures

* 7 Performance of QAOA variants
* 8 Performance of QAOA variants on a simulator
* 9 Amount of resources employed by different QAOA variants

List of Tables

* 1 Summary of ansatz strategies for improving QAOA.
* 2 Summary of approaches in QAOA parameter optimization.
* 3 Summary of established theoretical bounds on the performance of QAOA and selected classical algorithms on optimization problems.
* 4 Summary of selected state-of-the-art experiments on various quantum devices.
* 5 Overview of the graph structures and instances used in experiment.
* 6 Summary statistics of the selected QAOA variants across all implementation combinations (simulation results).
* 7 Mean approximation ratio achieved in relation to circuit layer depth (\(p\)).
* 8 Mean approximation achieved for each variant for all problem types and sizes.
* 9 Summary of cosine similarity metrics across QAOA variants.

## 1 Introduction

Although fault-tolerant quantum computers are still years away, significant progress has been made in the development of Noisy Intermediate-Scale Quantum (NISQ) machines, and there is growing interest in finding useful algorithms meant to be run on these near-term quantum devices. As such, Variational Quantum Algorithms (VQAs) [1, 2, 3, 4, 5, 6] have been proposed to take advantage of current quantum systems through a hybrid quantum-classical optimization routine.
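As a concrete reference for this routine, which is described in detail next, the following is a minimal sketch of a hybrid loop: a classical optimizer repeatedly updates circuit parameters to minimize a cost evaluated from a (here, exactly simulated) parameterized quantum state. The single-qubit Hamiltonian and \(R_y\) ansatz are arbitrary placeholders, not a specific VQA from the literature.

```python
import numpy as np
from scipy.optimize import minimize

H = np.array([[1.0, 0.5], [0.5, -1.0]])      # toy observable (Hermitian)

def ansatz_state(theta):
    # |psi(theta)> = Ry(theta)|0>, a one-parameter trial state
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def cost(params):
    psi = ansatz_state(params[0])
    return float(psi.conj() @ H @ psi)        # expectation value <psi|H|psi>

# The classical optimizer drives the "quantum" cost evaluation.
result = minimize(cost, x0=[0.1], method="COBYLA")
print("optimized cost:", result.fun)          # approaches the lowest eigenvalue
print("exact ground energy:", np.linalg.eigvalsh(H)[0])
```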
The hybrid loop of a VQA involves a parameterized quantum circuit to be run on a quantum computer and an optimizer that updates the parameters on a classical machine by minimizing a cost function constructed from the outputs of the quantum circuit. In this way, VQAs often have the advantage of requiring only shallow quantum circuits, making them less susceptible to noise in NISQ devices. To date, VQAs have found use cases in various areas, including quantum chemistry simulations, machine learning, and optimization [7, 8, 9, 10, 11]. In particular, the Quantum Approximate Optimization Algorithm (QAOA) [12, 13] is one of the most promising VQAs and has attracted great interest in recent years. QAOA is designed to find approximate solutions to hard combinatorial optimization problems on quantum computers: it encodes the Hamiltonian related to the problem into a quantum circuit and leverages adiabatic time evolution and layering to optimize the variational parameters of the circuit, such that the approximate solution to the problem can be constructed by measuring the QAOA circuit with the optimal set of parameters. The fundamental building block, a single layer of the QAOA circuit, consists of a cost layer associated with the problem Hamiltonian and a mixer layer whose corresponding Hamiltonian does not commute with the problem Hamiltonian. The performance of QAOA is typically measured by the approximation ratio \(C_{\text{QAOA}}/C_{\text{max}}\), i.e., the ratio of the cost associated with the solution output by QAOA to that of the true optimal solution. Theoretically, such an approximation ratio increases with an increasing number of layers \(p\), as QAOA recovers the adiabatic evolution in the \(p\to\infty\) limit. QAOA is suitable for finding good approximate solutions to several optimization problems, such as Maximum Cut (MaxCut) [12], Maximum Independent Set (MIS) [14, 15], the Binary Paint Shop Problem (BPSP) [16], Binary Linear Least Squares (BLLS) [17], Max E3LIN2 [13], Multi-Knapsack [18], and, more generally, Quadratic Unconstrained Binary Optimization (QUBO) problems [19]. Consequently, applications of QAOA in the real world are many and far-reaching. Some recent examples include portfolio optimization [20, 21], tail assignment [22], object detection [23], maximum likelihood detection of binary symbols over a multiple-input and multiple-output channel [24], text summarization [25], maximum independent set [26], factorization (the Variational Quantum Factoring algorithm) [27, 28], protein folding [29], and wireless scheduling [30]. However, at the moment, the literature still contains many conflicting opinions on various aspects of the algorithm, such as for which problems, if any, QAOA can outperform classical algorithms, and whether it can provide any practical quantum advantage under the noise and errors of current quantum devices. Here we extensively study the available literature in order to provide a comprehensive review of the current status of QAOA and summarize existing results on different aspects of the algorithm. This review aims to be a guide for using QAOA, providing insights into key questions about the algorithm, that is, whether QAOA can outperform classical algorithms and under what circumstances it should be used. Additionally, we provide meaningful insights on QAOA's potential for achieving quantum advantage, and discuss promising research directions for the future.
In particular, we focus on the following aspects: a survey of various extensions and variants of the QAOA ansatz; strategies to improve parameter optimization; efficiency and performance analysis of the algorithm in various problem instances; and hardware-specific issues, including the effects of noise and hardware-tailored implementations. Moreover, we also implement and assess the efficiency and performance of some promising QAOA variants on the MaxCut problem, which is a paradigmatic combinatorial optimization problem commonly used to benchmark the potential of the algorithm [31]. The remainder of this paper is organized as follows. The Background section (Section 2) provides an overview of relevant hard combinatorial problems, such as the MaxCut problem and general QUBO problems, and related algorithms, including the Variational Quantum Eigensolver (VQE) and the Quantum Adiabatic Algorithm (QAA). The Analysis section (Section 3) provides a detailed examination of various aspects of QAOA, including the ansatz, computational efficiency, quality of solution, effects of noise and errors, and hardware-specific implementations (see Figure 4). Our Experimental Results (Section 4) provide quantitative evaluations and performance comparisons between different QAOA variants. In Section 5 we summarize our findings, highlight possible applications for QAOA, discuss its potential quantum advantage, and examine future directions for the research. Finally, in Section 6, we provide insights for a practical guide to QAOA by answering key questions about the algorithm, such as which QAOA variant or ansatz to use for a specific problem and how to effectively optimize it.

## 2 Background

Generally speaking, combinatorial optimization problems concern finding the optimal solution among a set of feasible solutions, given some constraints on a discrete set of variables. The objective function can either be minimized or maximized, and it can be seen as a (possibly weighted) sum of the clauses satisfied by a feasible solution. Some typical combinatorial optimization problems include the Knapsack, Traveling Salesman, and MaxCut problems [32]. However, due to the combinatorial nature of such problems, the solution space explodes with respect to the number of inputs, and the optimization process quickly becomes intractable. Generally, finding the exact solution to many combinatorial optimization problems belongs to the NP complexity class [33]. This means that classical algorithms cannot efficiently retrieve the optimal solution, since the time required scales exponentially with the number of inputs. In this context, approximate optimization algorithms are employed to find a good approximate solution in polynomial time [34], which can be formulated as follows. Given a combinatorial optimization problem defined on \(n\)-bit binary strings of the form \(\mathbf{x}=x_{1}\cdots x_{n}\), where the goal is to maximize a given classical objective function \(C(\mathbf{x}):\left\{0,1\right\}^{n}\rightarrow\mathbb{R}_{\geq 0}\), an approximate optimization algorithm aims to find a solution \(\mathbf{x}^{*}\) such that the approximation ratio \(\alpha\), defined as \[\alpha=\frac{C(\mathbf{x}^{*})}{C_{\max}}, \tag{1}\] with \(C_{\max}=\max_{\mathbf{x}}C(\mathbf{x})\), reaches some desired value. Ideally, this value should be as close to 1 as possible.
Despite the fact that the solution found by approximate algorithms may not be optimal, it generally comes with some optimality guarantees, which are typically lower bounds on the quality of the solution. For example, an algorithm is said to be \(\alpha\)-approximate for a problem if and only if it can find a solution within a factor \(\alpha\) (\(\leq 1\)) of the optimal solution for every instance of the problem [35]. Thus, should such an algorithm exist, the above criterion certifies that the approximate solution is at least \(\alpha\) times the optimum. However, for some optimization problems, the gap between the approximate solution and the optimal one cannot be reduced in polynomial time, suggesting the difficulty of finding tight lower bounds with respect to the optimal solution. This is known as the "hardness of approximation", and it implies that finding a polynomial-time approximation for the underlying problem is impossible unless P = NP [36]. A comprehensive list of state-of-the-art approximation algorithms for some key combinatorial optimization problems can be found in [35]. More formally, as outlined in Section 2.2, many optimization problems can be transformed into a quadratic unconstrained binary optimization (QUBO) form. However, QUBO problems are usually NP-complete [37], meaning that finding the solution classically requires traversing a solution space that grows exponentially with the problem size. On the other hand, quantum computing promises to enable exponentially faster computation due to the superposition nature of qubits. A quantum system's exponentially growing Hilbert space can naturally accommodate the solution space of a combinatorial optimization problem and, therefore, may provide advantages in solving such problems over classical machines. The Quantum Approximate Optimization Algorithm (QAOA) is designed to tackle QUBO problems by utilizing a quantum circuit to find approximate solutions. The objective is to address the inherent hardness of approximation present in classical computation by leveraging the capabilities of QAOA. It should be noted, however, that while QAOA has the potential to be applied to a wide range of optimization problems, its effectiveness is dependent on the specific problem characteristics (more details in Sections 3.3 and 3.4).

### MaxCut Problem Overview

The MaxCut problem is one of the most well-known optimization problems, and it is discussed here in detail. It involves finding a cut in a graph such that the vertices of the graph are divided into two complementary subsets, and the sum of the weights of the edges crossed by the cut is maximized. The MaxCut problem can be formulated as follows. Consider an undirected graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}\) is the set of vertices, \(\mathcal{E}\) is the set of edges, and \(w_{ij}\) is the weight corresponding to the edge \((i,j)\in\mathcal{E}\), which connects the vertices \(i\) and \(j\). The objective of MaxCut is to partition the graph vertices \(x_{i}\), for \(i=1,\ldots,|\mathcal{V}|\), into two complementary subsets labeled by \(0\) and \(1\), such that the weighted sum of the edges connecting vertices in different partitions, defined as \[C(\mathbf{x})=\sum_{i,j=1}^{|\mathcal{V}|}w_{ij}x_{i}(1-x_{j}), \tag{2}\] is maximized, where \(w_{ij}>0\), \(w_{ij}=w_{ji}\), \(\forall(i,j)\in\mathcal{E}\), and \(x_{i}\in\{0,1\}\). An example of the MaxCut problem is illustrated in Figure 1.
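A minimal sketch of Eq. (2) and of the approximation ratio of Eq. (1) is given below: the cut value \(C(\mathbf{x})\) is evaluated directly, and \(C_{\max}\) is found by exhaustive search, which is feasible only for tiny \(n\) (this is exactly why approximate algorithms are needed). The example graph is an arbitrary placeholder, not the one in Figure 1.

```python
from itertools import product

# A tiny arbitrary weighted graph: edge -> weight.
edges = {(0, 1): 1.0, (0, 2): 1.0, (1, 2): 1.0, (1, 3): 1.0, (2, 3): 1.0}

def cut_value(x, edges):
    # Each edge (i, j) contributes w_ij when its endpoints get different labels;
    # this equals C(x) = sum_ij w_ij x_i (1 - x_j) for symmetric weights.
    return sum(w for (i, j), w in edges.items() if x[i] != x[j])

n = 4
best = max(product((0, 1), repeat=n), key=lambda x: cut_value(x, edges))
c_max = cut_value(best, edges)
print("C_max =", c_max, "achieved by x =", best)
# Any candidate solution x* then has approximation ratio alpha = C(x*) / C_max.
```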
With general weights \(w_{ij}\), the problem is commonly known as weighted-MaxCut; the plain MaxCut problem is a special case of weighted-MaxCut where \(w_{ij}=1\) for all \((i,j)\in\mathcal{E}\). Since finding a cut that yields the maximum value of the objective function \(C\) is an NP-hard problem [38], our best hope for a polynomial-time algorithm lies in an approximate optimization approach. This means finding a partition \(\mathbf{x}^{*}\) which yields a value \(C(\mathbf{x}^{*})\) that is as close as possible to the maximum value \(C_{\max}=\max_{\mathbf{x}}C(\mathbf{x})\). Håstad [39] conjectured that achieving an approximation ratio higher than \(16/17\simeq 0.9412\) is NP-hard, highlighting the hardness of approximation for the MaxCut problem. Currently, the best-performing classical algorithm for the MaxCut problem is the Goemans-Williamson algorithm, which delivers a solution with \(\alpha\simeq 0.878\) [40, 41] (Section 2.3). For these reasons, MaxCut is considered the paradigmatic example of hard combinatorial optimization problems. As a result, the aspiration is to find good and efficient approximate solutions to MaxCut by leveraging the power of quantum algorithms. In this regard, QAOA is a quantum algorithm that has shown promise in finding solutions to hard combinatorial optimization problems such as MaxCut. More details of the performance analysis of QAOA and the comparison to its classical counterparts will be provided in Section 3.4.

Figure 1: Left: A problem graph with 6 vertices and 11 equal-weight edges. Right: The solution to the MaxCut problem, where the vertices are partitioned into two groups (red and blue) such that the number of edges crossed by the cut (black curve) is maximized, which is 8.

### QUBO Problems and Applications

A more general formulation of the MaxCut problem is represented by Quadratic Unconstrained Binary Optimization (QUBO) problems. QUBO problems belong to the NP-complete class. This mathematically assures that any NP-complete problem can be mapped to a QUBO one in polynomial time. Lucas [42] discussed how all of Karp's 21 NP-complete problems can be mapped to QUBO ones. Among them, some relevant optimization problems other than MaxCut are Graph Coloring [43], Number Partitioning, and Quadratic Knapsack [44]. An extensive list of QUBO applications is presented in [37]. In a QUBO problem, the vector of unknowns \(\mathbf{x}=(x_{1},\ldots,x_{n})\) is represented by decision variables taking discrete binary values, so that \(\mathbf{x}\in\{0,1\}^{n}\). Moreover, a QUBO problem is defined by a square symmetric matrix \(\mathbf{Q}\in\mathbb{R}^{n\times n}\). Given the cost function \[C(\mathbf{x})=\mathbf{x}^{T}\mathbf{Q}\mathbf{x}=\sum_{i,j=1}^{n}Q_{ij}x_{i}x _{j}, \tag{3}\] the aim of a QUBO problem is to find the optimal vector \(\mathbf{x}^{*}\) such that \[\mathbf{x}^{*}=\arg\min_{\mathbf{x}\in\{0,1\}^{n}}C(\mathbf{x}). \tag{4}\] QUBO problems can also be defined as maximization problems instead of minimization ones by simply inverting the sign of the cost function \(C(\mathbf{x})\), i.e., by flipping the sign of the coefficients in \(\mathbf{Q}\): \[\min_{\mathbf{x}\in\{0,1\}^{n}}C(\mathbf{x})=\max_{\mathbf{x}\in\{0,1\}^{n}}- C(\mathbf{x})=\max_{\mathbf{x}\in\{0,1\}^{n}}\sum_{i,j=1}^{n}(-Q_{ij})x_{i}x _{j}. \tag{5}\] It is important to note that QUBO problems are unconstrained, namely there are no constraints on the variables \(\mathbf{x}\). QUBO instances are also in one-to-one correspondence with Ising models [42].
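The sketch below evaluates the QUBO cost of Eq. (3), minimizes it exhaustively as in Eq. (4), and verifies the Ising correspondence under the substitution \(x_{i}=(z_{i}+1)/2\), which yields couplings \(J=\mathbf{Q}/4\), fields \(h=\mathbf{Q}\mathbf{1}/2\), and a constant offset irrelevant to the optimization. The 3-variable \(\mathbf{Q}\) matrix is an arbitrary example.

```python
import numpy as np
from itertools import product

Q = np.array([[-1.0, 2.0, 0.0],
              [ 2.0, -1.0, 2.0],
              [ 0.0, 2.0, -1.0]])

def qubo_cost(x, Q):
    x = np.asarray(x, dtype=float)
    return float(x @ Q @ x)

# Exhaustive minimization, Eq. (4) -- only feasible for tiny n.
x_star = min(product((0, 1), repeat=3), key=lambda x: qubo_cost(x, Q))
print("x* =", x_star, "with cost", qubo_cost(x_star, Q))

# Ising coefficients from x = (z + 1)/2, with Q symmetric.
ones = np.ones(3)
J, h, offset = Q / 4, Q @ ones / 2, float(ones @ Q @ ones) / 4
for x in product((0, 1), repeat=3):
    z = 2 * np.asarray(x) - 1
    assert abs(float(z @ J @ z + h @ z) + offset - qubo_cost(x, Q)) < 1e-12
print("Ising form reproduces the QUBO cost exactly, including the constant offset")
```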
Moreover, Lodewijks [45] discusses several mappings from NP-hard problems to QUBO problems and corrects the errors in the approach taken in [42]. That work also expands the range of NP-complete and NP-hard optimization problems that can be formulated as QUBO problems. Ising problems replace the original QUBO variables \(\mathbf{x}\in\{0,1\}^{n}\) with Ising variables \(\mathbf{z}\in\{-1,1\}^{n}\), such that \(z_{i}=2x_{i}-1\) for \(i=1,\ldots,n\). The final Ising Hamiltonian, which depends on \(\mathbf{z}\), is equivalent to Eq. (3) except for a constant irrelevant to the optimization. A more detailed explanation of the relationship between QUBO and Ising models can be found in [44]. Inspired by the adiabatic theorem, annealing methods are used to find the ground state of a physical system. Similarly, solutions to Ising problems are often obtained with annealing techniques [46]. Due to its equivalence with Ising models, QUBO represents a family of problems suitable to be solved by adiabatic quantum computing through quantum annealing [47]. As previously mentioned, many optimization problems can be reformulated as QUBO problems. Although QUBO problems are limited to quadratic interactions between variables, they can be extended to higher-order terms. For example, let us consider a problem with a third-order term \(x_{i}x_{j}x_{k}\). To convert this problem into a quadratic one, we can introduce an ancillary variable (also called a "gadget") \(x^{\prime}\coloneqq x_{i}x_{j}\), and express the original term as \(x_{i}x_{j}x_{k}=x^{\prime}x_{k}\). This allows us to rewrite the entire problem in terms of quadratic interactions between variables. Optimization problems with higher-order interactions are in general referred to as Polynomial Unconstrained Binary Optimization (PUBO) problems. Babbush et al. [48] analysed how to efficiently map PUBO problems to QUBO ones. The fundamental reason why QAOA is focused on QUBO instances is linked to hardware constraints. However, if the device can implement gates on more than two qubits, it could be more advantageous to reduce the number of interactions by increasing their order. It was shown that arbitrary combinatorial optimization problems can be mapped to PUBO problems through dualizing constraints [49]. Alternatively, they can also be formulated in the Lechner-Hauke-Zoller (LHZ), or parity, model, which is a lattice gauge model with nearest-neighbor four-body interactions [50, 51, 52].

### Classical Algorithms for MaxCut Problem

Despite the MaxCut problem being a well-known optimization problem in computer science with practical significance, finding the optimal solution is computationally challenging, as it is an NP-hard problem [32]. This means that no known algorithms can solve it in time polynomial in the size of the input. However, several approximation algorithms and heuristics can provide good solutions in a reasonable time for practical problem sizes [34]. For example, greedy algorithms are widely used because of their simplicity and efficiency: they make locally optimal choices at each iteration step in the hope of finding a globally optimal solution, but such a myopic strategy of focusing only on the current step often leads to suboptimal solutions, especially when the problem exhibits complex interactions between different variables or features. Local search algorithms are heuristic techniques that can overcome this limitation by systematically exploring the search space.
They start with an initial solution and iteratively improve it by considering a neighborhood of the current solution and moving to the best neighboring solution; the quality of the solution found profoundly depends on the initial solution and the quality of the neighborhood explored. However, local search algorithms may still get trapped in suboptimal solutions, especially if the search space is large or complex. Simulated annealing, a specific type of local search algorithm, can help avoid getting stuck in local optima. It uses a temperature parameter to control the probability of accepting a worse solution. In the context of the MaxCut problem, simulated annealing can provide good solutions, but it can be slow for large problem instances. Genetic algorithms are also popular metaheuristic algorithms that generate an initial population of possible solutions and apply genetic operators like selection, crossover, and mutation to generate new solutions, which are then evaluated using an objective function. They can provide reasonable solutions for complex optimization problems. However, they can be slow for large problem instances due to the need to maintain a population of solutions and evaluate each solution using the objective function. Another popular heuristic algorithm for MaxCut is spectral clustering, which involves using the eigenvectors of the graph Laplacian matrix to partition the graph into two parts; the eigenvectors of the Laplacian matrix capture the global structure of the graph and are able to identify the most significant cuts in the graph. Spectral clustering can provide good solutions for many MaxCut instances, despite being sensitive to the choice of the number of eigenvectors used for partitioning and the spectral gap between the eigenvalues. In addition, spectral clustering can be computationally expensive for large graphs. Optimization techniques like linear programming and semidefinite programming (SDP) are based on formulating the problem as a mathematical program and solving it using optimization algorithms. These techniques can provide strong theoretical guarantees on the quality of the solution, but they may require significant computational resources to solve large instances of the problem. In this regard, a prominent classical approximation algorithm for the MaxCut problem is the Goemans-Williamson algorithm, based on semidefinite programming relaxations. The algorithm transforms the MaxCut problem into an SDP problem by relaxing the binary constraints on the membership of each node in one of the two sets into a real-valued vector in a high-dimensional space. The solution of the SDP relaxation provides a set of real-valued vectors that can be rounded randomly to obtain a feasible solution for the original problem. The randomized rounding procedure maps each vector to one of the two sets via a random hyperplane, as detailed below. Notably, the algorithm guarantees an approximation ratio of at least \(0.87856\), ensuring that the obtained cut has weight at least \(87.856\%\) of the optimal cut's weight [40, 53]. Under the Unique Games Conjecture, this approximation ratio is the best possible for any polynomial-time algorithm [54]. The Goemans-Williamson algorithm can be summarized in the following steps [53]:
1. Given a graph \(G=(V,E)\) with \(n\) vertices and edge weights \(w_{ij}\), formulate the MaxCut problem as a QUBO that maximizes the objective function \(\sum_{i,j<i}w_{ij}x_{i}(1-x_{j})\), where \(x_{i}\in\{0,1\}\) indicates which side of the cut vertex \(i\) belongs to.
2. Relax this quadratic program by replacing the binary variables \(x_{i}\) with unit vectors \(y_{i}\in\mathbb{R}^{n}\) whose elements can be continuous, and \(x_{i}x_{j}\) with \(y_{i}^{T}y_{j}\), where the superscript \(T\) denotes the transpose operation. This gives a semidefinite program (SDP) that maximizes the objective function \(\sum_{i,j<i}w_{ij}(1-y_{i}^{T}y_{j})\), subject to \(y_{i}^{T}y_{i}=1\) for all \(i\in\{1,\ldots,n\}\), with \(Y=(y_{i}^{T}y_{j})\) positive semidefinite.
3. Solve the SDP using a polynomial-time algorithm, such as an interior point method, to obtain an optimal solution \(Y^{*}\).
4. Choose a random vector \(r\in\mathbb{R}^{n}\) from a Gaussian distribution and, for all \(i\), let \(h_{i}=\operatorname{sgn}(r^{T}y_{i})\), where \(\operatorname{sgn}(x)=1\) if \(x\geq 0\) and \(-1\) otherwise. This gives a partition of \(V\) into two sets: \(S_{+}=\{i\mid h_{i}=1\}\) and \(S_{-}=\{i\mid h_{i}=-1\}\).
5. Return the cut \((S_{+},S_{-})\) as the output of the algorithm.

Gaining a deep understanding of the classical approaches proposed to approximate the MaxCut problem can offer valuable insights into the problem's complexity and the limitations of classical computing resources. Furthermore, with the advent of quantum computing, this knowledge can inspire the development of novel quantum algorithms that can leverage the power of quantum mechanics to solve this problem more efficiently.

### Variational Quantum Algorithms

QAOA is part of a broader category of quantum algorithms known as Variational Quantum Algorithms (VQAs) [3, 55]. These algorithms share the properties of being targeted primarily at optimization problems on NISQ devices and of using the variational principle of quantum theory. The variational principle is used to find the lowest expectation value which can be obtained for a particular observable, typically the ground state energy, with respect to a trial wave function. This trial wave function makes the principle variational, as it is parameterized by a set of values which allows a general wave-function form to be fitted to the system and the minimum expectation value to be found. This is expressed in terms of a Hamiltonian \(\hat{H}\) and a trial wave function \(\ket{\psi}\), to find the ground state energy of the system \(E_{0}\), which is bounded as follows: \[E_{0}\leq\frac{\bra{\psi}\hat{H}\ket{\psi}}{\langle\psi|\psi\rangle}. \tag{6}\] Given this form, the objective of variational algorithms is to find a parametrization of \(\ket{\psi}\) which minimizes the expectation value of the Hamiltonian. This is achieved by approximating the eigenvector, \(\ket{\psi}\), of the Hermitian operator, \(\hat{H}\), with the lowest eigenvalue, \(E_{0}\), by iteratively improving upon an ansatz. The initial trial wave function and its first set of parameters form the ansatz, with the parameters typically chosen at random within a range expected to be reasonable in the context of the quantum system. The selection of the ansatz form is a problem of its own, which has many possible approaches depending on the Hamiltonian of the system, as explored in [56, 57, 58], but is inspired by the context of the problem to be solved.
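The bound of Eq. (6) can be illustrated numerically: for any trial state, the Rayleigh quotient \(\bra{\psi}\hat{H}\ket{\psi}/\langle\psi|\psi\rangle\) upper-bounds the ground energy \(E_{0}\). In the sketch below, the random Hermitian matrix and the random trial states are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
H = (A + A.conj().T) / 2                      # a random Hermitian "Hamiltonian"
E0 = np.linalg.eigvalsh(H)[0]                 # exact ground energy

for _ in range(5):
    psi = rng.normal(size=8) + 1j * rng.normal(size=8)   # unnormalized trial state
    rayleigh = (psi.conj() @ H @ psi).real / (psi.conj() @ psi).real
    assert rayleigh >= E0                     # the variational principle, Eq. (6)
    print(f"trial energy {rayleigh:+.4f} >= E0 = {E0:+.4f}")
```

In practice, of course, the trial state is not random but is shaped by an ansatz suited to the problem at hand, as the examples that follow illustrate.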
For example, in quantum chemistry, finding the ground state of a helium atom involves taking an ansatz wave function formed as the product of two hydrogen-atom wave functions, which is then improved upon from there [59]. Apart from such "problem-inspired ansatze", another architecture is the so-called hardware-efficient ansatze [60], which build arbitrary unitaries using single- and two-qubit gates that are native to the hardware in use. These have the advantage of reducing circuit depth and being versatile enough to solve a wide range of problems, as they are "problem-agnostic". Overall, this ambiguity and freedom of choice in forming an ansatz for a variational problem allows for a smooth transition to quantum computers, as they can encode Hamiltonians as sums of unitary operations. This linear combination of unitary operators also allows for creating an ansatz wave function that can be easily parameterized through Bloch sphere rotation angles. This principle translates very smoothly to quantum computers, as qubits are elementary and versatile manifestations of wave functions, which are measured to obtain the expectation value and energy of the system at the end of the quantum circuit, after going through a set of parameterized quantum gates that alter the system and its wave functions. This idea's simplest and most direct implementation is the Variational Quantum Eigensolver (VQE).

#### 2.4.1 Variational Quantum Eigensolver (VQE)

The Variational Quantum Eigensolver (VQE) is a quantum algorithm that employs a hybrid system, integrating both quantum and classical computing resources, to solve the eigenvalue problem for a given Hamiltonian operator. This technique was initially presented by Peruzzo et al. [61] as an alternative to the quantum phase estimation algorithm, by implementing a quantum chemistry problem on a hybrid system consisting of a photonic quantum processor and a conventional computer. This work was improved, and its theoretical framework reinforced and extended, in a subsequent work by McClean et al. [1]. The VQE algorithm, like all VQAs, operates via a parameterized quantum circuit, or ansatz, characterized by a set of parameters, \(\boldsymbol{\theta}\). To implement the variational principle on a quantum computer, a systematic method is required to vary the ansatz parameters until an optimal solution is found. The entire action of this variational operation can be represented by a unitary gate \(U(\boldsymbol{\theta})\). The ansatz acts on the initial state of the quantum circuit of \(N\) qubits, \(\ket{\psi_{0}}\), typically taken to be the ground state \(\ket{\boldsymbol{0}}\) (also expressed as \(\ket{0}^{\otimes N}\)), and generates an output \(U(\boldsymbol{\theta})\ket{\psi_{0}}=\ket{\psi(\boldsymbol{\theta})}\). From this construction, it is clear that \(U(\boldsymbol{\theta})\ket{\psi_{0}}\) is a normalized wave function, which allows for the expression of the optimization problem as \[\lambda=\min_{\boldsymbol{\theta}}\,\bra{\psi_{0}}U^{\dagger}(\boldsymbol{ \theta})\hat{H}U(\boldsymbol{\theta})\ket{\psi_{0}}. \tag{7}\] This is the state that is iteratively optimized by varying the parameters \(\boldsymbol{\theta}\) towards an optimal set of parameters,
These more expressive variational forms allow for better fine-tuning of the ansatz, resulting in a more accurate output state estimation of the eigenvalues. The VQE algorithm, as a VQA, relies on both a quantum and classical part. The quantum part consists of estimating a quantum circuit, and it evaluates the desired quantum states for the given set of parameters. Since it is not optimized to calculate the variations in the parameters towards an optimized set, this part is dealt with in the classical part of the algorithm. Indeed, the mathematics involved in optimization calculations is very efficient on modern classical computers. Combining quantum and conventional computers to handle different components of a larger problem is known as hybrid quantum-classical computing and is the backbone of any VQA [1]. Moreover, it is a powerful framework for many use cases, especially with NISQ devices, as quantum computers are less efficient and fault-tolerant than their conventional counterparts with tasks such as optimizing parameters. So the burden can be shared between the two to maximize overall performance. This hybrid regime allows for the outsourcing of tweaking the parameters to a classical computer which then passes the values back to the quantum computer to calculate the eigenvalues. The classical computer calculates the new parameters through optimization methods typically based on gradient descent [63, 64], which calculates a hyperplane of error or deviation from the ideal solution to find a minimum point that indicates the highest accuracy of the model. An attempt to experimentally prove the efficiency of this hybrid method was carried out by Otterbach et al. [65] by training a weighted MaxCut problem on 19 qubits. However, it should be noted that hybrid computing does not always provide the most efficient solution, as some algorithms are more powerful when using only quantum hardware, as demonstrated by Magann et al. [66]. Kandala et al. [60] have used a medium-sized quantum computer to optimize Hamiltonian problems with up to six qubits and over one hundred Pauli terms, determining ground-state energy for molecules up to \(\mathrm{BeH}_{2}\). The approach used a variational quantum eigensolver, efficiently prepared trial states tailored to available quantum processor interactions, and a robust stochastic optimization routine. Their results help elucidate requirements for scaling the method to larger systems and bridging gaps between high-performance computing problems and their implementation on quantum hardware. Recent VQE variations have also been proposed to efficiently tackle combinatorial optimization problems, similar to those targeted by the QAOA [11, 65]. #### 2.4.2 Quantum Adiabatic Algorithm (QAA) As mentioned in Section 2.2, the adiabatic theorem is vital in solving optimization problems. The adiabatic theorem states that starting in the ground state of a time-dependent Hamiltonian, if the Hamiltonian evolves slowly enough, the final state will be the ground state of the final Hamiltonian. Moreover, the adiabatic theorem can be generalized to any other eigenstate as long as there is no overlap (degeneracy) between different eigenstates across the evolution. The \(n\)-th eigenstate of the initial Hamiltonian will evolve into the \(n\)-th eigenstate of the final one. The Quantum Adiabatic Algorithm (QAA) [67, 68] was developed based on this principle to solve optimization problems on a quantum computer. 
It also falls under a more general computational paradigm called Adiabatic Quantum Computing (AQC) [69]. Specifically, we have an initial Hamiltonian \(\hat{H}_{M}\), whose ground state is typically easy to prepare, and a final Hamiltonian \(\hat{H}_{C}\), whose ground state encodes the solution to the optimization problem of interest. The adiabatic evolution path is then encapsulated in the transitional Hamiltonian, which is expressed as \(\hat{H}(t)=f(t)\hat{H}_{C}+g(t)\hat{H}_{M}\) with some slowly varying control functions such as \(f(t)=t/T\) and \(g(t)=1-t/T\), where \(t\in[0,T]\) and \(T\) is the total evolution time. The evolution operator will then be \(\hat{U}(t)\coloneqq e^{-i\int_{0}^{t}\mathrm{d}\tau\hat{H}(\tau)}\). It is worth noting that although the QAA requires a continuous evolution of the state, it can be emulated on a gate-based quantum computer by Trotterizing \(\hat{U}(t)\) in sufficiently small steps, namely, decomposing \(\hat{U}(t)\) into a sequence of small steps through the Trotter-Suzuki formula: \[\hat{U}(t)\approx\prod_{k=0}^{r-1}\exp\left[-i\hat{H}(k\Delta\tau)\Delta\tau \right]=\prod_{k=0}^{r-1}\exp\left[-if(k\Delta\tau)\hat{H}_{C}\Delta\tau \right]\exp\left[-ig(k\Delta\tau)\hat{H}_{M}\Delta\tau\right] \tag{9}\] where \(\Delta\tau\coloneqq t/r\). Here, we notice that as \(k\) increases, \(f(k\Delta\tau)\) increases while \(g(k\Delta\tau)\) decreases. Therefore, the effective time steps of \(\hat{H}_{C}\) (\(\hat{H}_{M}\)) will increase (decrease) linearly. Crosson et al. [70] conducted numerical simulations of the QAA for over 200,000 instances of MAX 2-SAT on 20 qubits with a unique optimal solution. They selected a subset of instances for which the success probability was less than \(10^{-4}\) at \(T=100\) and proposed three strategies to increase the success probability for all of these instances. The first strategy was to run the adiabatic algorithm more rapidly, which increased the success probability at shorter times for all instances. The second strategy was initializing the system in a random first excited state of the problem Hamiltonian, producing an average success probability close to the upper bound for most hard instances. The third strategy involved adding a random local Hamiltonian to the middle of the adiabatic path, which often increased the success probability. These strategies were also tested on the QAA version of the Grover search algorithm, but they did not improve the success probability. The authors concluded that their strategies might be helpful only for particularly challenging instances and could be tested on a quantum computer with higher qubit numbers.

#### 2.4.3 Barren plateaus

In training parameterized quantum circuits, such as VQE and QAOA, there is a major challenge related to the cost function landscape. In fact, for certain families of parameterized quantum circuits, the landscape of the cost function can be flat, meaning that the gradients with respect to the trainable parameters are exponentially small in the number of qubits, causing the optimization process to stall. This is known as the problem of barren plateaus [71], i.e., the gradient of the cost function with respect to any parameter vanishes exponentially in the number of qubits.
Formally, a cost function \(C(\mathbf{\theta})\) exhibits a barren plateau if, for all trainable parameters \(\theta_{i}\in\mathbf{\theta}\), the variance of the partial derivative of the cost function vanishes exponentially in the number \(n\) of qubits: \[\text{Var}_{\mathbf{\theta}}[\partial_{i}C(\mathbf{\theta})]\leq F(n), \tag{10}\] with \(F(n)\in O(b^{-n})\) for some constant \(b>1\). Eq. (10) implies that the gradient of the cost function will be, on average, exponentially small: due to Chebyshev's inequality, the probability that the partial derivative \(\partial_{i}C(\mathbf{\theta})\) deviates from its average (of zero) by a value larger than a given constant \(c\), with \(c>0\), is bounded by \(\text{Var}_{\mathbf{\theta}}[\partial_{i}C(\mathbf{\theta})]/c^{2}\), as illustrated in Eq. (11): \[\text{Pr}[|\partial_{i}C(\mathbf{\theta})|\geq c]\leq\frac{1}{c^{2}}\text{Var}_{ \mathbf{\theta}}[\partial_{i}C(\mathbf{\theta})] \tag{11}\] Quantum circuit training strategies that mitigate this effect are therefore becoming increasingly crucial for all variational algorithms.

### The Quantum Approximate Optimization Algorithm (QAOA)

The QAOA was first introduced by Farhi et al. [12] as a VQA able to find approximate solutions to the MaxCut problem, suitable to be run on NISQ devices. Inspired by the Trotterized version of the QAA (Section 2.4.2), QAOA was designed as a variational algorithm with repeated cost and mixer layers, denoted \(\hat{U}_{C}(\gamma_{k})\) and \(\hat{U}_{M}(\beta_{k})\), respectively, where \(k\) denotes the \(k\)-th layer. These layers are analogous to the exponentiated operators on the right-hand side of Eq. (9). However, instead of following predefined \(f\) and \(g\) functions, the parameters \(\gamma_{k}\) and \(\beta_{k}\) are trained variationally. In this sense, QAOA can be regarded as a discretized version of the QAA and a special case of VQE (Section 2.4.1). The key idea behind QAOA is to encode the objective function of the optimization problem into the cost Hamiltonian \(\hat{H}_{C}\) and to search for an optimal bitstring \(\mathbf{x}^{*}\) that will give a good approximation ratio \(\alpha\) with a high probability. In fact, the cost function \(C(\mathbf{x})\) can be mapped to a cost Hamiltonian \(\hat{H}_{C}\) such that \[\hat{H}_{C}\ket{\mathbf{x}}=C(\mathbf{x})\ket{\mathbf{x}}, \tag{12}\] where \(\ket{\mathbf{x}}\) is the quantum state encoding the bitstring \(\mathbf{x}\). The original QAOA consists of the following steps:

1. Define a cost Hamiltonian \(\hat{H}_{C}\) such that its highest energy state encodes the solution to the optimization problem. Define also a mixer Hamiltonian \(\hat{H}_{M}\) that does not commute with \(\hat{H}_{C}\). Typically, for the MaxCut problem of a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), \(\hat{H}_{C}\) and \(\hat{H}_{M}\) are given as: \[\hat{H}_{C} =\frac{1}{2}\sum_{(i,j)\in\mathcal{E}}w_{ij}(I-Z_{i}Z_{j}), \tag{13a}\] \[\hat{H}_{M} =\sum_{j\in\mathcal{V}}X_{j}, \tag{13b}\] where \(I\) is the identity operator and \(Z_{j}\) (\(X_{j}\)) is the Pauli-Z (-X) operator acting on the \(j\)-th qubit. In the problem Hamiltonian \(\hat{H}_{C}\), diagonal in the computational basis, each binary variable \(x_{i}\in\{0,1\}\) in the MaxCut problem is mapped to a Pauli-Z operator \(Z_{i}\) in the following way: \[x_{i}\rightarrow\frac{1}{2}(1-Z_{i}). \tag{14}\] The Hamiltonian \(\hat{H}_{C}\) in Eq.
2. Initialize the circuit in the state \(\ket{s}\): \[\ket{s}=\ket{+}^{\otimes n}=\frac{1}{\sqrt{2^{n}}}\sum_{\mathbf{x}\in\{0,1\}^ {n}}\ket{\mathbf{x}},\] (15) where \(n\) is the number of qubits, with \(n=|\mathcal{V}|\). The state \(\ket{s}\) is the highest energy state of the Pauli-X basis, i.e., the highest energy eigenstate of the mixer Hamiltonian \(\hat{H}_{M}\).

3. Construct the circuit ansatz by defining and applying the unitaries: \[\hat{U}_{C}(\gamma) =e^{-i\gamma\hat{H}_{C}}=\prod_{1\leq j<i\leq n}R_{Z_{i}Z_{j}}(-2w_{ ij}\gamma),\] (16a) \[\hat{U}_{M}(\beta) =e^{-i\beta\hat{H}_{M}}=\prod_{i=1}^{n}R_{X_{i}}(2\beta),\] (16b) where \(\gamma\) and \(\beta\) are variational parameters of the circuit. We call \(\hat{U}_{C}(\gamma)\) and \(\hat{U}_{M}(\beta)\) the cost and mixer layers, respectively. A single QAOA layer comprises one cost and one mixer layer, and such layers can be stacked to build a deeper circuit. As shown in Figure 2, each element in the mixer layer can be implemented with a single rotation gate \(R_{X}\). In contrast, each of the two-qubit Pauli-Z interactions in the cost layer is implemented through two CNOT gates sandwiching a local rotation gate \(R_{Z}\).

4. Define the total number of QAOA layers, \(p\geq 1\). Initialize the \(2p\) variational parameters \(\mathbf{\gamma}=(\gamma_{1},\gamma_{2},\dots,\gamma_{p})\) and \(\mathbf{\beta}=(\beta_{1},\beta_{2},\dots,\beta_{p})\) such that \(\gamma_{k}\in[0,2\pi)\) and \(\beta_{k}\in[0,\pi)\) for \(k=1,\dots,p\). The final state output by the circuit is therefore given by \[\ket{\psi_{p}(\mathbf{\gamma},\mathbf{\beta})}=e^{-i\beta_{p}\hat{H}_{M}}e^{-i\gamma _{p}\hat{H}_{C}}\dots e^{-i\beta_{1}\hat{H}_{M}}e^{-i\gamma_{1}\hat{H}_{C}} \ket{s}.\] (17)

5. The expectation value of the Hamiltonian \(\hat{H}_{C}\) with respect to the ansatz state \(\ket{\psi_{p}(\mathbf{\gamma},\mathbf{\beta})}\), which corresponds to the cost obtained by the quantum algorithm for the underlying problem, is estimated through repeated measurements of the final state in the computational basis: \[F_{p}(\mathbf{\gamma},\mathbf{\beta})=\,\bra{\psi_{p}(\mathbf{\gamma},\mathbf{\beta})}\hat{H}_ {C}\ket{\psi_{p}(\mathbf{\gamma},\mathbf{\beta})}\] (18)

6. A classical optimization algorithm is employed to iteratively update the parameters \(\mathbf{\gamma}\) and \(\mathbf{\beta}\) (a minimal end-to-end sketch of these steps is given after this list).

Figure 2: Implementation of the elements of mixer (left) and cost (right) layers based on the cost and mixer Hamiltonians, \(\hat{H}_{C}\) and \(\hat{H}_{M}\). By \(\left(e^{-i\beta_{k}\hat{H}_{M}}\right)_{v_{i}}=:\left(\hat{U}_{M}(\beta_{k}) \right)_{v_{i}}\) we mean the element of \(\hat{U}_{M}(\beta_{k})\) generated by the vertex \(v_{i}\), i.e., \(e^{-i\beta_{k}(\hat{H}_{M})_{v_{i}}}=e^{-i\beta_{k}X_{i}}=R_{X_{i}}(2\beta_{k})\), where \(\beta_{k}\) is the variational parameter for layer \(k\). Similarly for \(\left(e^{-i\gamma_{k}\hat{H}_{C}}\right)_{v_{i},v_{j}}\): the cost unitary is given by \(e^{-i\gamma_{k}(\hat{H}_{C})_{ij}}=R_{Z_{i}Z_{j}}(-2w_{ij}\gamma_{k})\), which can be decomposed as shown on the right.
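To tie the six steps together, here is a minimal self-contained statevector sketch in plain Python/numpy, with scipy's COBYLA as the classical optimizer of step 6. The unweighted triangle graph, the depth \(p=2\), and the single random start are illustrative assumptions; the dense statevector simply stands in for the repeated measurements of step 5.

```python
import numpy as np
from itertools import product
from scipy.optimize import minimize

# Steps 1-6 for MaxCut on a toy triangle graph, simulated with a statevector.
edges, n, p = [(0, 1), (1, 2), (0, 2)], 3, 2

# Step 1: H_C of Eq. (13a) is diagonal; store its diagonal (the cut values).
basis = list(product([0, 1], repeat=n))
cut = np.array([sum(b[i] != b[j] for i, j in edges) for b in basis], float)

X = np.array([[0, 1], [1, 0]], complex)

def apply_mixer(state, beta):
    g = np.cos(beta) * np.eye(2) - 1j * np.sin(beta) * X  # e^{-i beta X}, Eq. (16b)
    state = state.reshape([2] * n)
    for q in range(n):
        state = np.moveaxis(np.tensordot(g, state, axes=([1], [q])), 0, q)
    return state.reshape(-1)

def F(params):  # steps 2-5: prepare |psi_p> and evaluate Eq. (18)
    gammas, betas = params[:p], params[p:]
    state = np.full(2 ** n, 1 / np.sqrt(2 ** n), complex)  # step 2: |s>
    for gamma, beta in zip(gammas, betas):                 # step 3: p layers
        state = np.exp(-1j * gamma * cut) * state          # cost layer (diagonal)
        state = apply_mixer(state, beta)                   # mixer layer
    return float(np.dot(np.abs(state) ** 2, cut))

# Steps 4 and 6: random angles, then classical maximization of F_p.
rng = np.random.default_rng(1)
res = minimize(lambda x: -F(x), rng.uniform(0, np.pi, 2 * p), method="COBYLA")
print(f"F_p at optimum: {-res.fun:.3f} (the true maximum cut value is 2)")
```

Note that a single COBYLA run from one random start can stall in a local optimum; this initialization issue is revisited in Section 3.2.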
The goal of the aforementioned routine is to find the optimal set of parameters \((\mathbf{\gamma}^{*},\mathbf{\beta}^{*})\) such that the expectation value \(F_{p}(\mathbf{\gamma},\mathbf{\beta})\) is maximized: \[(\mathbf{\gamma}^{*},\mathbf{\beta}^{*})=\arg\max_{\mathbf{\gamma},\mathbf{\beta}}F_{p}(\mathbf{ \gamma},\mathbf{\beta})\] (19) An example of a QAOA circuit instance is shown in Figure 3. At the end of the optimization procedure, the approximation ratio \(\alpha\) is given by: \[\alpha=\frac{F_{p}(\mathbf{\gamma}^{*},\mathbf{\beta}^{*})}{C_{\max}}, \tag{20}\] and the state \(|\psi_{p}(\mathbf{\gamma}^{*},\mathbf{\beta}^{*})\rangle\) will encode the solution to the optimization problem. It is worth noting that, instead of the ground state, QAOA is typically initialized with the highest energy eigenstate of the mixer Hamiltonian, such as in the case of MaxCut. Nevertheless, the adiabatic theorem still holds in such cases. Apart from the digitized adiabatic quantum computing approach, on which the standard QAOA implementation (Section 2.5) is based, an analog version of QAOA that can be run on quantum annealers was recently proposed by Barraza et al. [72].

## 3 QAOA Analysis

The QAOA has generated significant interest in the quantum computing community as a promising method for solving combinatorial optimization problems. This section provides an in-depth analysis of QAOA and its various associated aspects (Figure 4). Our analysis covers a range of topics, including ansatz variants (Section 3.1), parameter optimization strategies (Section 3.2), computational resource efficiency (Section 3.3), quality of solution (Section 3.4), noise and error considerations (Section 3.5), and hardware-specific approaches (Section 3.6). We evaluate the strengths and limitations of QAOA in light of recent advancements and studies in the literature. Our analysis aims to illuminate the capabilities and potential of QAOA and emphasize the challenges that must be addressed to fully harness its power in practical applications.

Figure 3: Schematic of the hybrid workflow of QAOA with \(p\) layers.

Figure 4: General scheme of QAOA and its features.

### 3.1 Ansatz Variants

An ansatz is an educated guess about the form of an unknown function, made in order to facilitate the solution of an equation or other problem. In the context of QAOA, the ansatz concerns the structure of the quantum circuit, which defines the operators \[\hat{U}(\mathbf{\gamma},\mathbf{\beta})=e^{-i\beta_{p}\hat{H}_{M}}e^{-i\gamma_{p}\hat{H }_{C}}\cdots e^{-i\beta_{1}\hat{H}_{M}}e^{-i\gamma_{1}\hat{H}_{C}}, \tag{21}\] where such operators are layered \(p\) times. The choice of ansatz typically depends on the problem type, such as combinatorial problems represented as graphs [73], or problems strongly influenced by hardware design [74, 75, 76]. However, ansatz design must balance specificity and generality to avoid overfitting and maintain applicability to a wide range of problems. For this reason, designing optimal ansatze for QAOA is an extensively researched and widely investigated topic. This section introduces and discusses various prominent designs and variations of the QAOA ansatz. These variations encompass several approaches to constructing the optimization methodology and address many of the shortcomings of the original algorithm. They are summarized in Table 1.

#### 3.1.1 Multi-Angle QAOA

A straightforward approach to enhanced ansatz design was introduced by Herrman et al. [77].
The authors proposed a multi-angle ansatz for QAOA (ma-QAOA), which improves the approximation ratio by increasing the number of variational parameters. In ma-QAOA, new parameters are introduced into the circuit so that each element of the cost and mixer layers has its own angle, instead of one angle for the cost operator and one for the mixer operator, as follows: \[\hat{U}_{C}(\mathbf{\gamma}_{l})=e^{-i\sum_{a=1}^{m}\gamma_{l,a}\hat{H}_{C,a}}= \prod_{a=1}^{m}e^{-i\gamma_{l,a}\hat{H}_{C,a}} \tag{22a}\] \[\hat{U}_{M}(\mathbf{\beta}_{l})=e^{-i\sum_{v=1}^{n}\beta_{l,v}\hat{H}_{M,v}}= \prod_{v=1}^{n}e^{-i\beta_{l,v}\hat{H}_{M,v}}, \tag{22b}\] where \(\mathbf{\gamma}_{l}=(\gamma_{l,1},\gamma_{l,2},...,\gamma_{l,m})\), \(\mathbf{\beta}_{l}=(\beta_{l,1},\beta_{l,2},...,\beta_{l,n})\), \(l\) is the QAOA layer, \(n\) is the number of qubits (nodes), and \(m\) is the number of clauses (edges) of the problem (graph). The authors referred to the matrices \(\hat{H}_{C}\) and \(\hat{H}_{M}\) as \(C\) and \(B\), respectively. The total number of circuit parameters becomes \((n+m)p\), where \(p\) is the number of QAOA layers. The vanilla QAOA can be treated as a special case of ma-QAOA in which all the parameters of a given cost or mixer layer have the same value. As such, ma-QAOA is at least as powerful as the standard algorithm: the approximation ratio \(\alpha\) achieved by ma-QAOA is better than or equal to that of the vanilla QAOA. Despite a more complex parameter optimization, empirical results suggest that ma-QAOA may require shallower circuits. In this regard, a follow-up study by Shi et al. [93] proposed to reduce the number of ma-QAOA parameters by exploiting the natural symmetries of the input graphs. This approach removed approximately 33% of the parameters while having little to no impact on the objective function. Moreover, a connection between ma-QAOA and Continuous-Time Quantum Walks (CTQW) on dynamic graphs was investigated by Herrman [94], who showed that ma-QAOA is equivalent to a restriction of CTQW on dynamic graphs. A possible advantage of relating ma-QAOA to CTQW on dynamic graphs is that well-studied CTQW phenomena, such as hitting times, might be investigated to improve our understanding of ma-QAOA and help find optimal parameters.

#### 3.1.2 QAOA+

Another example of an improvement in the ansatz is the work of Chalupnik et al. [78].
To address the problem that the originally proposed form of the QAOA ansatz does not provide a sufficient performance advantage over classical counterparts in problems such as MaxCut [95], the authors proposed an alternative ansatz, which they call QAOA+.

\begin{table} \begin{tabular}{p{142.3pt} p{142.3pt} p{142.3pt}} \hline \hline **Ansatz** & **Main Idea** & **Enhancement \& Applications** \\ \hline ma-QAOA [77] & Multi-angle ansatz with a unique parameter for each element of cost and mixer Hamiltonians & Improves approximation ratio for MaxCut while reducing circuit depth \\ QAOA+ [78] & Augments traditional QAOA with an additional multi-parameter problem-independent layer & Higher approximation ratios for MaxCut on random regular graphs \\ DC-QAOA [79, 80] & Adds a problem-dependent counterdiabatic driving term to the QAOA ansatz & Improves the convergence rate of the approximation ratio while reducing circuit depth \\ ab-QAOA [81] & Incorporates local fields into the operators to reduce computation time & Computation time reduction for combinatorial optimization \\ ADAPT-QAOA [82] & Iterative version of QAOA with systematic selection of mixers based on gradient criterion & Can be problem-specific and addresses hardware constraints \\ Recursive QAOA [83] & Non-local variant of QAOA that iteratively reduces problem size by eliminating qubits & Overcomes locality constraints and achieves better performance \\ QAOAnsatz [84] & Extends the original formulation with broader families of operators and allows for encoding of constraints & Adaptable to a wider range of optimization problems with hard and soft constraints \\ GM-QAOA [85] & Uses Grover-like selective phase shift mixing operators & Solves \(k\)-Vertex Cover, Traveling Salesperson Problem, Discrete Portfolio Rebalancing \\ Th-QAOA [86] & Replaces standard phase separator with a threshold function & Solves MaxCut, Max \(k\)-Vertex Cover, Max Bisection \\ Constraint Preserving Mixers [87] & Constructs mixers that enforce hard constraints & Solves optimization problems with hard constraints \\ WS-QAOA [88] & Modifies the initial state and mixer Hamiltonian based on the optimal solution to the relaxed QUBO problem & Solutions guaranteed to retain the GW bound for the MaxCut problem \\ FALQON [66] & Uses qubit measurements for feedback-based quantum optimization, avoiding classical optimizers & Produces monotonically improving approximate solutions as circuit depth grows while bypassing classical optimization loops \\ FALQON+ [89] & Combines FALQON’s initialization with QAOA for better parameter initialization & Improves initialization of standard QAOA for non-isomorphic graphs with 8 to 14 vertices \\ FQAOA [90] & Utilizes fermion particle number preservation to intrinsically impose constraints in QAOA process & Improves performance in portfolio optimization, applicable to Grover adaptive search and quantum phase estimation \\ Quantum Dropout [91] & Selectively drops out clauses defining the quantum circuit while keeping the cost function intact & Improves QAOA performance on hard cases of combinatorial optimization problems \\ ST-QAOA [92] & Uses an approximate classical solution to construct a problem instance-specific circuit & Achieves same performance guarantee as the classical algorithm, outperforms QAOA at low depths for MaxCut problem \\ Modified QAOA [31] & Modifies cost Hamiltonian with conditional rotations & Improves approximation ratio for MaxCut at \(p=1\) \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of ansatz strategies for improving QAOA.
This variant augments the traditional \(p=1\) QAOA ansatz with an additional multi-parameter problem-independent layer of parameterized \(ZZ\) gates and a layer of mixer \(X\) gates. The QAOA+ ansatz allows one to obtain higher approximation ratios than \(p=1\) QAOA while keeping the circuit depth below that of \(p=2\) QAOA, whose performance it roughly matches, as benchmarked on the MaxCut problem for random regular graphs. The added circuit depth beyond the vanilla QAOA grows only in the number of qubits, as a set of \(2N-1\) additional parameters for \(N\) qubits. They additionally showed that the proposed QAOA+ ansatz, while using a larger number of trainable classical parameters than the standard QAOA, in most cases outperforms the alternative multi-angle QAOA ansatz of [77].

#### 3.1.3 Digitized counterdiabatic QAOA

In pursuit of reducing the computational complexity, and thereby circuit depth, of QAOA, Chandarana et al. [79] proposed a new variant of the algorithm coined Digitized Counterdiabatic QAOA (DC-QAOA). This variant is motivated by the observation that the adiabatic evolution of the Hamiltonians in vanilla QAOA results in unnecessary computational cost and circuit depth, which is difficult to implement on near-term devices. The method utilizes counterdiabatic (CD) driving to speed up the optimization process of the variational algorithm. This is achieved by extending the time evolution operator with an additional variational parameter, \[U(\gamma,\beta)\to U(\gamma,\beta,\alpha), \tag{23}\] that represents a CD operator. This operator is represented as \[U_{\mathrm{CD}}(\alpha)=\prod_{j=1}^{L}\exp(-i\alpha A_{t}^{q}), \tag{24}\] where \(A_{t}^{q}\) is the respective \(q\)-local CD operator chosen from the CD pool \(A\). This pool of operators is defined through the nested commutator approach to the adiabatic gauge potential [96] as \[A_{\lambda}^{(l)}=i\sum_{k=1}^{l}\alpha_{k}(t)\underbrace{[H_{a},[H_{a},\dots[H_{a}}_{2k-1},\partial_{\lambda}H_{a}]]]. \tag{25}\] The authors applied this QAOA variant to problems such as Ising models, classical optimization problems, and the \(k\)-spin model, demonstrating that it outperforms the standard QAOA in all cases. Wurtz and Love [80] propose a similar algorithm, CD-QAOA, also inspired by the use of counterdiabaticity to accelerate the convergence of QAOA, minimizing circuit depth and improving solution quality.

#### 3.1.4 Adaptive bias QAOA

Inspired by previous work that introduced bias fields in quantum annealing [97], Yu et al. [81] proposed a modified version of QAOA called the adaptive bias QAOA (ab-QAOA), which incorporates adaptive bias fields into the mixer operators of QAOA to accelerate the convergence of the algorithm. Essentially, in this approach, \(n\) additional parameters \(\{h_{j}\}\) that comprise the bias fields are introduced in the \(n\)-qubit QAOA circuit; they enter both the modified mixer Hamiltonian \[\hat{H}_{M}^{\mathrm{ab}}(\{h_{j}\})=\sum_{j\in\mathcal{V}}\left(X_{j}-h_{j}Z _{j}\right), \tag{26}\] and the initial state, which is the product ground state of \(\hat{H}_{M}^{\mathrm{ab}}(\{h_{j}\})\).
These local fields are not optimized but rather updated according to the following prescription, \[h_{j}\to h_{j}-\ell\left(h_{j}-\left\langle\psi_{p}^{\mathrm{ab}}\big{|}Z_{j} \big{|}\psi_{p}^{\mathrm{ab}}\right\rangle\right), \tag{27}\] where \(\ell\) is the learning rate and \(\big{|}\psi_{p}^{\mathrm{ab}}\big{\rangle}\) is the state output by the level-\(p\) ab-QAOA circuit, that is, \[\big{|}\psi_{p}^{\mathrm{ab}}\big{\rangle}=\prod_{k=1}^{p}e^{-i\beta_{k}\hat{H} _{M}^{\mathrm{ab}}(\{h_{j}\})}e^{-i\gamma_{k}\hat{H}_{C}}\left|\psi_{0}^{ \mathrm{ab}}(\{h_{j}\})\right\rangle. \tag{28}\] The method was shown to substantially reduce the computation time of QAOA for a fixed level of accuracy and the same number of gates. The computation time for ab-QAOA to converge to a desired accuracy was polynomially shorter than that of the vanilla QAOA. Moreover, this improvement further increases with the problem size, paving the way for a quantum advantage of QAOA in combinatorial optimization problems.

#### 3.1.5 ADAPT-QAOA

Zhu et al. [82] addressed the ansatz selection problem by proposing an iterative version of QAOA called the Adaptive Derivative Assembled Problem Tailored-QAOA (ADAPT-QAOA). Instead of the standard mixer Hamiltonian, ADAPT-QAOA systematically selects the QAOA mixer from a pre-defined pool of operators \(\hat{A}_{k}\) that changes from one layer to the next: \[\ket{\psi_{p}(\mathbf{\gamma},\mathbf{\beta})}=\left(\prod_{k=1}^{p}e^{-i\beta_{k} \hat{A}_{k}}e^{-i\gamma_{k}\hat{H}_{C}}\right)\ket{s}. \tag{29}\] In each step, the operator \(\hat{A}_{k}\) is selected by maximizing the energy gradient, given by the expectation value of the commutator of the pool operator with the cost Hamiltonian over the ansatz of the previous step, namely \[-i\bra{\psi_{k-1}(\mathbf{\gamma},\mathbf{\beta})}e^{i\hat{H}_{C}\gamma_{k}}[\hat{H}_ {C},\hat{A}_{k}]e^{-i\hat{H}_{C}\gamma_{k}}\ket{\psi_{k-1}(\mathbf{\gamma},\mathbf{ \beta})},\] where \(\gamma_{k}\) is initialized to a certain value \(\gamma_{0}\). Once \(\hat{A}_{k}\) is selected, all parameters are optimized again, and if the cost function has not reached a target value, a new layer is added in the same manner. In simulations on MaxCut problems, ADAPT-QAOA converged faster than the standard QAOA while reducing the number of CNOT gates and optimization parameters by about 50% each, particularly when entangling gates were included in the operator pool. Such a speedup is attributed to the concept of shortcuts to adiabaticity [98, 99]. This concept has been crucial in the enhancement of many ansatz designs that provide improved variations of QAOA, such as digitized and counterdiabatic frameworks [79, 80, 100]. However, a drawback of the method is that the selection of the mixing operators requires an additional number of measurements that depends on the size of the operator pool.

#### 3.1.6 Recursive QAOA

The performance of QAOA can be limited by the \(Z_{2}\) symmetry of the QAOA states and the geometric locality of the ansatz; that is, the cost operators include interactions only between qubits that are nearest neighbors with respect to the underlying graph. To address this, Bravyi et al. [83] proposed the Recursive QAOA (RQAOA) as a non-local variant of QAOA that iteratively reduces the size of the problem.
At each step, RQAOA uses the output distribution of QAOA to compute the \(ZZ\)-correlations across all edges of the graph, i.e., \(M_{ij}=\bra{\psi(\mathbf{\gamma},\mathbf{\beta})}Z_{i}Z_{j}\ket{\psi(\mathbf{\gamma},\mathbf{\beta})},~{}\forall(i,j)\in\mathcal{E}\). Then, it selects the pair(s) with the largest magnitude of the correlation and imposes a parity constraint, \[Z_{j}=\text{sgn}(M_{ij})Z_{i}. \tag{30}\] This effectively eliminates one or more qubits from the Hamiltonian by imposing a constraint on them. RQAOA then reruns the QAOA circuit on the reduced Hamiltonian and repeats the process until the problem reaches a predefined cutoff size. At this point, RQAOA solves the remaining problem exactly using classical methods and reconstructs the final solution by reinserting the eliminated qubits. While RQAOA is less studied than the vanilla version, research interest is increasing as it emerges as a promising QAOA variant on NISQ devices [65, 101, 102]. For example, Bae and Lee [103] compared the performance of the level-1 QAOA with that of RQAOA applied to the MaxCut problem on complete graphs with \(2n\) vertices. They analytically demonstrated that in this particular scenario, the level-1 RQAOA achieves approximation ratio 1, while the approximation ratio of the original QAOA at \(p=1\) is strictly upper bounded as \[\alpha\leq 1-\frac{1}{8n^{2}}.\]

#### 3.1.7 Quantum alternating operator ansatzes

The process of ansatz design in QAOA is very versatile and can be extended to much more far-reaching constructions. One such extension is the remodeling of QAOA into the Quantum Alternating Operator Ansatz (QAOAnsatz) by Hadfield et al. [84]. As introduced in Section 2.5, in QAOA the ansatz structure alternates between applying unitaries based on the cost and mixer Hamiltonians. The extended framework allows for alternating between a more general set of operators: it considers general parameterized families of unitary operators instead of only those corresponding to time evolution under a fixed local Hamiltonian. This altered ansatz structure allows a broader set of problems to be solved by this family of algorithms, especially optimization problems with hard constraints, which always need to be satisfied and define feasible subspaces, and soft constraints, whose violations need to be minimized [84]. The ansatz supports representing a larger, potentially more useful, set of states than the original formulation, with potential long-term impact on a broad array of application areas [104]. A novelty of the QAOAnsatz is its different approach to encoding the constraints of a graph into the ansatz. While the standard approach is to add "penalties" to the cost Hamiltonian, Hadfield et al. [104] proposed to modify the mixer Hamiltonian into the XY-mixer, partial mixers, and others to encode different constraints. It is worth noting that a recent effort to unify these two approaches into a single method was carried out by Ruan et al. [105] in the so-called Unified Quantum Alternating Operator Ansatz (UQAOA). As depicted in Figure 5, QAOAnsatz proposes an alternative and more generic way of defining the different parts of the ansatz.
Figure 5: Representation of the QAOAnsatz.

A QAOAnsatz circuit is characterized by two families of parameterized operators in a Hilbert space \(\mathcal{H}\): one of phase-separation operators, \(\hat{U}_{P}(\gamma)\), that depends on the objective function \(f\); and one of mixing operators, \(\hat{U}_{M}(\beta)\), which depends on the domain and its structure, where \(\beta\) and \(\gamma\) are real parameters. A depth-\(p\) circuit consists of \(p\) alternating applications of operators from these two families, i.e., \[Q_{p}(\mathbf{\gamma},\mathbf{\beta})=\hat{U}_{M}(\beta_{p})\hat{U}_{P}(\gamma_{p}) \cdots\hat{U}_{M}(\beta_{1})\hat{U}_{P}(\gamma_{1}). \tag{31}\] The QAOAnsatz consists of states representable by the application of this type of circuit to a suitable initial state \(\ket{s}\): \[\ket{\psi(\mathbf{\gamma},\mathbf{\beta})}=Q_{p}(\mathbf{\gamma},\mathbf{\beta})\ket{s}. \tag{32}\] It provides a framework that allows almost any combinatorial optimization problem to be modeled as a QAOA problem, with the objective functions encoded as sums of Pauli-operator-based Hamiltonians forming the phase separator and mixing unitary operators. Many combinations and variations of these mixer unitaries have been analyzed in the recent literature. For example, the original formulation of QAOA [12, 13] made use of transverse-field-based \(X\)-mixers for unconstrained problems, while the XY-model Ring and Clique mixers are effective for Hamming-weight-constrained problems [106, 107]. Cook et al. [106] compared the efficacy of classical states and Dicke states as initial states, assessed the impact of two distinct XY-Hamiltonian mixing operators, and conducted an analysis of solution distributions via Monte Carlo sampling. Their findings indicate that Dicke states enhance performance compared to easily prepared classical states. They suggest that the complete graph mixer outperforms the ring mixer, though with a trade-off between improved performance and longer circuit depth. An intriguing aspect of their study was that the standard deviation of solution distributions decreases exponentially with the number of rounds, which has implications for the feasibility of finding better solutions in deeper algorithm rounds. Despite this, they found that high-quality solutions share patterns with a discretized version of the quantum adiabatic algorithm, suggesting potential avenues for efficient angle selection strategies. Below we introduce a few notable QAOAnsatz variants with different mixer/phase separator designs.

Grover Mixer QAOA: For both constrained and unconstrained optimization problems, the use of Grover-like selective phase-shifting operators has been shown to be effective. One such example is the work of Bartschi and Eidenbenz [85], in which the Grover Mixer QAOA (GM-QAOA) variant is proposed. This variation of QAOAnsatz makes use of Grover-like selective phase shift mixing operators, inspired by the Grover search quantum algorithm [108, 109]. The variant works for any NP optimization problem for which an equal superposition of all feasible solutions can be efficiently prepared. This design works particularly well for constrained optimization problems where not all variable assignments are feasible solutions, such as \(k\)-Vertex-Cover. GM-QAOA has significant benefits over the original algorithm, such as not being susceptible to Trotterization errors or any other Hamiltonian simulation errors, since its operators are implemented exactly using only standard gate sets.
The design of the variant also allows solutions sharing an objective value to be sampled with the same amplitude, which significantly increases efficiency and stability. The authors demonstrate the prowess of this framework on a set of significant optimization problems. One such problem is the Traveling Salesman Problem, for which an efficient algorithm is presented to prepare a superposition of all possible permutations of \(n\) numbers over \(O(n^{2})\) qubits. Another problem to which GM-QAOA is applied in this demonstration is the hard-constraint \(k\)-Vertex-Cover problem, as a standard benchmark against other combinatorial optimization problems. Finally, in the problem of discrete portfolio rebalancing, GM-QAOA is demonstrated to outperform other existing QAOA approaches, as it can restrict mixing to the feasible subspace and provide transitions between all feasible states.

Threshold QAOA: Another variation within the alternating operator ansatz framework is provided by Golden et al. [86], based on the above discussion of various QAOA mixers. The authors presented a variation of QAOA in which the standard phase separation operator is replaced by a threshold function, returning a value of 1 for solutions with an objective value above a defined threshold and a value of 0 otherwise. This variation is coined Threshold QAOA (Th-QAOA); the threshold value is varied over the course of the optimization. Although the algorithm is versatile enough to be constructed using any previously studied QAOA mixer, in this work the authors focused on the combination with Grover mixers, which have been shown to be effective for solving constrained [110] and unconstrained [85] optimization problems. This application of Th-QAOA combined with the Grover mixer is called GM-Th-QAOA. It was demonstrated that the algorithm can be classically simulated up to 100 qubits with relative ease because of memory optimization techniques applicable to its implementation. The authors found that this variation of QAOA outperforms other variants in terms of the approximation ratio for a range of optimization problems, including MaxCut, Max-\(k\)-Vertex-Cover, and Max Bisection.

Constraint Preserving Mixers: A framework for constructing mixing operators that enforce hard constraints in quantum optimization problems was introduced by Fuchs et al. [87]. The authors generalized the XY-mixer, designed to preserve the subspace of "one-hot" states, to the case of subspaces spanned by a set of computational basis states. The underlying mathematical structure is exposed to minimize the cost of the mixer in terms of CNOT gates, particularly when Trotterization is considered. This work also introduces efficient decomposition algorithms for basis gates and analyzes several examples of more general cases. Govia et al. [111] proposed the Free Axis Mixer-QAOA (FAM-QAOA), a quantum alternating operator ansatz that adds additional variational parameters in the XY-plane of the mixer Hamiltonian. They explore the resulting Hilbert space expansion and Z-phase error mitigation, showing that the ansatz outperforms the standard QAOA, especially at low depths.

#### 3.1.8 Warm-starting QAOA

Inspired by recent progress in the study of continuous relaxations of NP-hard combinatorial optimization problems, Egger et al. [88] proposed to "warm-start" QAOA based on the solution to the relaxed QUBO problem, i.e., one with continuous variables instead of binary ones.
They considered two types of continuous-value relaxations: quadratic programming (QP), which corresponds to the case of QUBO where the matrix \(\mathbf{Q}\) in Eq. (3) is positive semidefinite, and semidefinite programming (SDP) when \(\mathbf{Q}\) is not positive semidefinite. In the simplest variant of Warm-Starting QAOA (WS-QAOA), one replaces the initial \(n\)-qubit equal-superposition state \(\ket{+}^{\otimes n}\) with the state \[\ket{s^{*}}=\bigotimes_{i=0}^{n-1}\hat{R}_{Y}(\theta_{i})\ket{0}^{\otimes n}, \tag{33}\] where \(\theta_{i}=2\arcsin\bigl{(}\sqrt{c_{i}^{*}}\bigr{)}\) and \(c_{i}^{*}\in[0,1]\) is the \(i\)-th coordinate of the optimal solution to the continuous-valued relaxation QP. Moreover, the mixer Hamiltonian is modified accordingly to \(\hat{H}_{M}^{\mathrm{(ws)}}=\sum_{i=0}^{n-1}\hat{H}_{M,i}^{\mathrm{(ws)}}\), where \[\hat{H}_{M,i}^{\mathrm{(ws)}}=-\sin(\theta_{i})\hat{X}-\cos(\theta_{i})\hat{Z} =\begin{pmatrix}2c_{i}^{*}-1&-2\sqrt{c_{i}^{*}(1-c_{i}^{*})}\\ -2\sqrt{c_{i}^{*}(1-c_{i}^{*})}&1-2c_{i}^{*}\end{pmatrix}\!. \tag{34}\] Therefore, the action of the mixer unitary \(\hat{U}_{M}^{(\text{ws})}=e^{-i\beta\hat{H}_{M}^{(\text{ws})}}\) on qubit \(i\) can be implemented via the single-qubit rotations \(\hat{R}_{Y}(\theta_{i})\hat{R}_{Z}(-2\beta)\hat{R}_{Y}(-\theta_{i})\). In another variant of WS-QAOA, the optimum of the continuous-valued relaxation is randomly rounded before being used as the initial state. A notable example is the Goemans-Williamson (GW) random-hyperplane rounding of SDP relaxations for the MaxCut problem. The most significant advantage of this approach is that it provides initial states that already enjoy the best approximation guarantee available classically in polynomial time, and it retains the GW bound at any number of layers \(p\). Therefore, the solution output by WS-QAOA is at least as good as that given by GW rounding. Furthermore, WS-QAOA can be readily incorporated into the workflow of recursive QAOA, and this variant (WS-RQAOA) was shown in numerical simulations to give the best MaxCut results for both random and fully connected graphs with 20 and 30 nodes. The WS-QAOA approach thus provides an advantage over the standard QAOA at low depth, which is particularly important for implementation on NISQ devices.

#### 3.1.9 FALQON

In the literature discussed thus far, QAOA and its variants have primarily been hybrid algorithms, using quantum computation alongside classical optimization of variational parameters. This proves beneficial in the NISQ era, as current devices are not fault-tolerant or stable enough to handle an entire optimization problem independently, and they do not need to. However, this may hinder performance on future quantum computers, as there will be an efficiency bottleneck in passing information back and forth between quantum and classical computers. A solution to this potential problem is to let the quantum computer carry out the entire optimization loop itself. An introduction to this framework is given in the work by Magann et al. [66], in which the authors introduced a feedback-based strategy for quantum optimization. The algorithm is designed around qubit measurements that constructively assign values to the quantum circuit parameters. This procedure results in an estimate of combinatorial optimization solutions that improves monotonically with the circuit depth.
Crucially, this measurement-based feedback loop enables approximate solutions without classical optimization, as the entire process can be carried out on the quantum device. This purely quantum optimization loop is achieved through a direct connection to Quantum Lyapunov Control (QLC) [89], a control strategy that uses feedback to identify controls that steer the dynamics of a quantum system. The authors demonstrated the capabilities of this algorithm, which they coined the Feedback-based ALgorithm for Quantum OptimizatioN (FALQON), on combinatorial optimization problems such as MaxCut on 3-regular and randomly generated non-isomorphic graphs. FALQON is defined similarly to the original QAOA, with "drift" and "control" Hamiltonians, \(H_{p}\) and \(H_{d}\), respectively, rather than cost and mixer ones, although they have similar forms. The authors begin by considering a quantum system whose dynamics are governed by \[i\frac{\mathrm{d}}{\mathrm{d}t}\ket{\psi(t)}=\left(H_{p}+H_{d}\beta(t)\right) \ket{\psi(t)}, \tag{35}\] and seek to minimize \(\langle H_{p}\rangle=\bra{\psi(t)}H_{p}\ket{\psi(t)}\), which is done by designing \(\beta(t)\) such that \[\frac{\mathrm{d}}{\mathrm{d}t}\bra{\psi(t)}H_{p}\ket{\psi(t)}=A(t)\beta(t)\leq 0, \tag{36}\] where \(A(t)\equiv\bra{\psi(t)}i[H_{d},H_{p}]\ket{\psi(t)}\) and \(\beta(t)=-A(t)\). This is the essential step in circumventing the need for a classical optimizer, since the updated parameters are taken directly from expectation value measurements. Although the algorithm is designed for purely quantum computation on fault-tolerant devices that do not yet exist, it still finds use on NISQ devices as a way to improve the initialization of the standard QAOA. In the original QAOA, the initial parameters are chosen at random; with the help of FALQON, these seed parameters can be better chosen after a few iterations of the purely quantum algorithm. It was demonstrated that this hybridization of FALQON and QAOA, coined FALQON+ [89], allows QAOA to start from a parameter set that has a higher success probability and a smoother solution landscape. Running the algorithm on a set of non-isomorphic graphs with 8 to 14 vertices, the authors found a significant increase in approximation ratio and success probability with a minimal increase in circuit depth and noise degradation. This makes FALQON+ highly suitable for NISQ devices as a warm-start technique that utilizes the best features of both the fully quantum and hybrid approaches.

#### 3.1.10 Fermionic QAOA

Yoshioka et al. [90] proposed the Fermionic Quantum Approximate Optimization Algorithm (FQAOA) to solve combinatorial optimization problems with constraints, utilizing fermion particle number preservation to impose these constraints intrinsically. Many such problems feature constraints that can negatively impact optimization algorithms when treated as soft constraints in the cost function. FQAOA tackles this issue by using fermion particle number preservation to impose the constraints intrinsically throughout the QAOA process. The authors offer a systematic guideline for designing the "driver" Hamiltonian \(H_{d}\) for problem Hamiltonians with constraints. In the context of the quantum adiabatic theorem, the driver Hamiltonian is used to slowly transform the system Hamiltonian from its initial form to a cost-function-based problem Hamiltonian, ultimately leading from the ground state of \(H_{d}\) to optimal solutions of the cost function (Section 2.4.2).
They suggest choosing the initial state to be a superposition of states satisfying the constraint and the ground state of the driver Hamiltonian. In the FQAOA ansatz, the mixer unitary is generated by the driver Hamiltonian, which is carefully designed to satisfy specific conditions ensuring that the constraints of the combinatorial optimization problem are intrinsically imposed. The driver Hamiltonian introduces hybridization between different basis states by representing non-local fermions with hopping terms. This hybridization allows the algorithm to effectively explore different solutions in the search space. FQAOA has demonstrated substantial performance advantages over existing methods in portfolio optimization problems. According to the authors, the Hamiltonian design guideline is valuable not only for QAOA but also for Grover adaptive search and quantum phase estimation in solving constrained combinatorial optimization problems. This compatibility enables the application of existing software tools developed for fermionic systems in quantum computational chemistry to constrained optimization challenges.

#### 3.1.11 Other approaches

Wang et al. [91] proposed a method to deal with hard combinatorial optimization problems where the energy landscape is rugged and the global minimum is located in a narrow region of the cost function landscape. In such a problem, the global minimum must satisfy a set of clauses \(C\), encoded in the cost Hamiltonian of QAOA. The authors found that the problem mainly originates from the QAOA circuit rather than the cost function. They therefore decided to exploit the combinatorial nature of the problem by selectively dropping out the clauses defining the quantum circuit, \[\hat{H}_{C^{\prime}}=\sum_{c_{i}\in C^{\prime}\subset C}\hat{H}_{c_{i}}, \tag{37}\] while keeping the cost function intact to ensure the uniqueness of the global minimum. This quantum dropout of clauses helps smoothen the energy landscape, making parameter optimization easier. The numerical results confirmed QAOA's performance improvements with various types of quantum dropout implementations, and that the dropout of clauses in the circuit does not affect the solution. Wurtz and Love [92] introduced the Spanning Tree QAOA (ST-QAOA) to solve MaxCut problems using an ansatz derived from an approximate classical solution. In the ST-QAOA ansatz, a classical solver is first used to find an approximate solution to the MaxCut problem, typically represented by a spanning tree of the graph. This approximate solution is then used to construct a problem-instance-specific circuit with \(r\) rounds of gates. The circuit is designed to reflect the problem structure, using classical algorithmic insights to tailor the quantum circuit to the given problem instance. When \(r=1\), ST-QAOA is guaranteed to match the performance of the classical solver. As the number of rounds increases, the ST-QAOA ansatz approaches the exact solution of the MaxCut problem. This approach achieves the same performance guarantee as the classical algorithm and can outperform the vanilla QAOA at low depths. An additional modification to the ansatz construction of QAOA is given by the work of Li et al. [112], in which the authors proposed modifications both to the QAOA ansatz and to the prescription for choosing the variational parameters of the quantum circuit.
First, the Gibbs objective function is defined and shown to be superior to the energy expectation value, \(\langle E\rangle\), as an objective function for optimizing the variational parameters. Second, the authors describe an Ansatz Architecture Search (AAS) algorithm for searching the discrete space of quantum circuit architectures near QAOA to find a better ansatz. The Gibbs objective function is defined as \[f=-\log\left\langle e^{-\eta E}\right\rangle, \tag{38}\] where \(\eta>0\) is a hyper-parameter based on the general properties of the class of problems and \(E\) is the energy of the Ising model used to encode the optimization problem; \(f\) has a form similar to the Gibbs free energy in statistical mechanics, hence the name. The exponential profile rewards the optimization procedure for increasing the probability of low energies and de-emphasizes the shape of the probability distribution at higher energies. The authors propose greedy search as an affordable strategy for AAS. A model instance \(\mathcal{I}\) is defined on a graph \(\mathcal{G}^{\mathcal{I}}\) with \(m\) vertices, while \(\mathcal{G}^{\mathcal{A}}\) is the ansatz graph obtained from \(\mathcal{G}^{\mathcal{I}}\) by removing some edges. The QAOA prescription is to set \(\mathcal{G}^{\mathcal{A}}=\mathcal{G}^{\mathcal{I}}\); AAS instead searches through architectures corresponding to graphs obtained by removing edges from \(\mathcal{G}^{\mathcal{I}}\). Given an instance \(\mathcal{I}\), the search starts with \(\mathcal{G}^{\mathcal{A}}=\mathcal{G}_{m}\) at level 0. Then, level by level, ansatze are expanded by removing one two-qubit gate from the best ansatz of the previous level and scored, and the best of them is selected as the output of that level. The output architectures at level \(l\) thus have \(l\) two-qubit gates (i.e., edges of the graph) removed. It was found that applying these modifications to a complete-graph Ising model results in a 244.7% median relative improvement in the probability of finding a low-energy state while using 33.3% fewer two-qubit gates. Villalba-Diez et al. [31] also proposed a modification to the cost Hamiltonian. Their improved ansatz, "Modified QAOA", performs a conditional rotation of \(\gamma\) on each node connected to another node when the latter is in state \(\ket{1}\). This is done by concatenating two gates, \(U_{3}(\gamma/2,0,0)\) and \(U_{3}(-\gamma/2,0,0)\), followed by a conditional CX rotation on each pair of nodes. They tested their approach in simulations for up to 30 network nodes at \(p=1\) and observed a significant increase in the approximation ratio achieved compared to the vanilla QAOA.

### 3.2 Parameter Optimization

In this section, we discuss key aspects of parameter optimization in QAOA. This covers various approaches to finding good initial parameters, which becomes increasingly important as the depth and complexity of the QAOA circuit increase. The original proposal of the algorithm suggests a random selection of initial parameters within a range believed to be close to the optimal parameters. However, this method can often hinder the algorithm's performance, especially when the cost function landscape is rugged and contains numerous local minima. Therefore, developing efficient and reliable parameter optimization strategies is crucial to achieving optimal QAOA performance. This section also covers the selection of appropriate classical optimization algorithms for investigating the parameter space, and strategies for overcoming barren plateaus (Table 2). A small numerical illustration of the initialization issue is given below.
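The sketch below reuses the triangle MaxCut toy instance of Section 2.5 and contrasts random restarts at \(p=2\) with a simple layer-copying warm start, loosely in the spirit of the parameter-fixing and interpolation strategies of [15, 113]; it is a simplified stand-in, not their exact procedures, and the dense \(8\times 8\) operators and COBYLA optimizer are assumptions made for brevity.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

# Random initialization vs. a layer-copying warm start on triangle MaxCut.
n, edges = 3, [(0, 1), (1, 2), (0, 2)]
I2, X, Z = np.eye(2), np.array([[0., 1.], [1., 0.]]), np.diag([1., -1.])

def op_at(op, q):  # I x ... x op x ... x I acting on qubit q
    out = np.array([[1.0]])
    for k in range(n):
        out = np.kron(out, op if k == q else I2)
    return out

HC = 0.5 * sum(np.eye(2 ** n) - op_at(Z, i) @ op_at(Z, j) for i, j in edges)
HM = sum(op_at(X, q) for q in range(n))
s = np.full(2 ** n, 1 / np.sqrt(2 ** n), complex)

def F(params):  # F_p for p = len(params) // 2, params = (gammas..., betas...)
    pp = len(params) // 2
    psi = s
    for k in range(pp):
        psi = expm(-1j * params[pp + k] * HM) @ (expm(-1j * params[k] * HC) @ psi)
    return float(np.real(psi.conj() @ HC @ psi))

rng = np.random.default_rng(7)

# Strategy A: best of five random initializations at p = 2.
rand = max(-minimize(lambda x: -F(x), rng.uniform(0, np.pi, 4),
                     method="COBYLA").fun for _ in range(5))

# Strategy B: optimize p = 1 first, then copy its angles into layer 2.
g1, b1 = minimize(lambda x: -F(x), rng.uniform(0, np.pi, 2), method="COBYLA").x
warm = -minimize(lambda x: -F(x), np.array([g1, g1, b1, b1]), method="COBYLA").fun

print(f"random init (best of 5): {rand:.3f}   warm start: {warm:.3f}")
```

On such a small instance both strategies usually succeed; the point is that the warm start tends to spend far fewer function evaluations, which is the effect the strategies reviewed below aim for at scale.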
#### 3.2.1 Finding good initial parameters

To address the issue of randomly selected initial parameters, researchers have proposed various techniques for finding better starting points. One such approach is provided by Zhou et al. [15], who developed an efficient parameter optimization procedure for QAOA applied to MaxCut problems. They proposed heuristic strategies for initializing optimizations, allowing them to find quasi-optimal \(p\)-level QAOA parameters in \(O[\text{poly}(p)]\) time, whereas random initialization requires \(2^{O(p)}\) optimization runs for similar performance. Their heuristic strategies, the INTERP and FOURIER heuristics, showed promising results in finding optimal parameters for QAOA at large \(p\). Lee et al. [113] proposed a parameter fixing strategy for QAOA training to address the difficulty of finding optimal parameters at large \(p\) values and achieve higher approximation ratios. The algorithm initializes the QAOA circuit with optimal parameters from previous layers, finds the best parameters for each layer, fixes them, and then adds another layer on top of the previous one with new parameter values to optimize. Jain et al. [114] proposed a more efficient initialization of QAOA using Graph Neural Networks (GNNs). This approach builds on the precedent of warm-start techniques, aiming to start the initialization process closer to the target parameters. The GNN approach generalizes across graph instances and increasing graph sizes, speeding up inference time across graphs. After GNN initialization, the authors explore several optimizers, including quantum-aware/agnostic methods and machine learning techniques such as reinforcement learning, making the training process an end-to-end differentiable pipeline. Another technique to improve initial parameters is based on parameter transferability across graphs, as proposed by Galda et al. [115]. They showed that optimal QAOA parameters converge around specific values and that their transferability among different QAOA instances can be predicted based on local characteristics of the subgraphs composing the original graph. Building on this idea, Shaydulin et al.
[116] proposed reusing optimal QAOA parameters for a given problem as an initial point for similar problem instances, demonstrating that this approach not only improves solution quality but also reduces the number of evaluations required. Shaydulin et al. [117] also developed QAOAKit, a Python framework that includes a set of pre-optimized parameters and circuit templates for QAOA, leveraging parameter transferability to generate high-quality initial guesses for parameter optimization. Sack and Serbyn [118] suggested initializing QAOA parameters based on the Trotterized quantum annealing (TQA) method, parameterized by a single variable, the Trotter time step. They established a heuristic way of finding the optimal time step based on the performance of the TQA protocol, showing that this method of initialization can avoid the issue of false minima for a wide range of time steps, allowing QAOA to find solutions comparable to the best outcomes obtained from an exponentially scaling number of random initializations.

\begin{table} \begin{tabular}{l l} \hline \hline **Problem Addressed** & **Approach** \\ \hline \hline \multirow{6}{*}{Initial Parameter Search} & Heuristic strategies for initializing optimizations [15] \\ & Layer-by-layer optimization [113] \\ & Warm-start techniques using GNNs [114] \\ & Transferability among different QAOA instances based on local characteristics of subgraphs [115] \\ & Parameter reusability for similar problem instances [116, 117] \\ & Parameter initialization using TQA method [118] \\ \hline \multirow{3}{*}{Gradient-free Parameter Optimization} & Comparison of gradient-based and gradient-free methods [119, 120, 121] \\ & Genetic algorithm approach for optimization [122] \\ & Robust control optimization techniques [123] \\ \hline \multirow{6}{*}{Gradient-based Parameter Optimization} & Gradient-based approaches with machine learning techniques [124] \\ & Gradient-based optimization with tensor networks [125] \\ & Stochastic Gradient Descent in quantum context [63] \\ & Surrogate-model based optimization [126] \\ & Policy gradient-based reinforcement learning algorithm for QAOA [127] \\ & BFGS optimization algorithm for QAOA [128] \\ \hline \multirow{5}{*}{Machine Learning for Parameter Optimization} & Parameter correlation and machine learning models [129] \\ & Meta-learning for QAOA optimization [130, 131] \\ & Clustering for setting QAOA parameters [19] \\ & GNN-based prediction of QAOA parameters [132] \\ & RL and KDE techniques for parameter optimization [133, 101] \\ \hline \multirow{2}{*}{Remedies for Barren Plateaus} & Incremental growth of circuit depth during optimization [134] \\ & Parameter concentration as an inverse polynomial with respect to the problem size [135] \\ \hline \multirow{2}{*}{Parameter Concentration \& Symmetry} & Inverse polynomial concentration of optimal parameters [135] \\ & Exploiting symmetry in objective functions, cost Hamiltonians, and QAOA parameters to improve optimization [136, 15, 93, 137] \\ \hline Analytical Solutions for Optimal Parameters & Deriving analytical solutions for optimal parameters [138] \\ \hline \end{tabular} \end{table} Table 2: Summary of approaches in QAOA parameter optimization.

#### 3.2.2 Selecting an appropriate optimizer

Selecting an appropriate optimization algorithm is crucial for enhancing the performance of QAOA.
This subsection reviews various optimization approaches for improving QAOA parameters, including gradient descent, policy gradient, the BFGS algorithm, and machine learning techniques. We categorize these methods into three broad categories: gradient-free, gradient-based, and machine learning.

Gradient-free methods: In optimization algorithms, gradient-free methods have gained attention due to their computational efficiency. For example, McClean et al. [1] found that significantly fewer function evaluations are required when using gradient-free methods in VQE. Shaydulin et al. [116] were among the first to benchmark different gradient-free optimizers on QAOA. They compared the solution quality of QAOA produced by six different gradient-free optimizers: BOBYQA, COBYLA, NEWUOA, Nelder-Mead, PRAXIS, and SBPLX. It was found that, for a fixed number of allowed function evaluations, using BOBYQA within the APOSMM framework, which allows for parallel optimization, led to the best performance. Despite this, QAOA's performance was severely challenged with increasing layers across all gradient-free optimizers tested, suggesting the hardness of parameter optimization even at modest circuit depths. More recently, Fernandez-Pendas et al. [119] investigated the performance of twelve different classical optimizers for QAOA optimization and found that gradient-based methods like Adam and SPSA have computational times up to two orders of magnitude higher than gradient-free methods such as COBYLA, Powell, and Nelder-Mead, despite achieving similarly good results. In a broader context of hybrid quantum-classical algorithms, Bonet-Monroig et al. [120] tested four commonly used gradient-free optimization methods, SLSQP, COBYLA, CMA-ES, and SPSA, on finding ground-state energies of a range of small chemistry and material science problems. The study, although not explicitly focused on QAOA, demonstrated the necessity of tailoring and hyperparameter-tuning known optimization techniques for inherently noisy variational quantum algorithms, and highlighted that the variational landscape one finds in a VQA is highly problem- and system-dependent. Similarly, in a study focused on variational hybrid quantum-classical algorithms, specifically the variational quantum linear solver, Pellow-Jarman et al. [121] examined the impact of several gradient-free and gradient-based classical optimizers on the performance of these algorithms. They analyzed both the average rate of convergence and the distribution of average termination cost values of the classical optimizers, considering the effects of noise. Their findings indicate that realistic noise levels on NISQ devices pose a significant challenge to the optimization process, negatively affecting all classical optimizers. However, they found that the gradient-free optimizers Simultaneous Perturbation Stochastic Approximation (SPSA) and Powell's method and the gradient-based optimizers AMSGrad and BFGS performed best in the noisy simulation and were less affected by noise than other methods. In particular, SPSA emerged as the best-performing method. Conversely, the COBYLA, Nelder-Mead, and Conjugate-Gradient methods were the most heavily affected by noise, with even slight noise levels significantly impacting their performance. The study suggests that if noise levels can be significantly improved, gradient-based methods, which performed better than the gradient-free methods with only shot noise present, may be preferred in the future. Acampora et al.
[122] addressed the remaining shortcomings of the above gradient-free optimization methods with a new approach. The authors proposed an evolutionary approach to optimization using genetic algorithms. A genetic algorithm is a search heuristic that reflects the process of natural selection, where the fittest individuals are selected for reproduction to produce the offspring of the next generation. The authors argued that such a population-based heuristic could more efficiently process candidate solutions and converge to an optimal parameter set for a QAOA circuit. This evolutionary approach is demonstrated on noisy quantum devices and compared with popular gradient-free optimizers (COBYLA, Nelder-Mead, Powell's modified method, and SPSA) when solving the MaxCut problem for graphs with 5 to 9 nodes. The authors found that the proposed genetic algorithm statistically outperforms the other gradient-free algorithms in achieving higher approximation ratios for the same QAOA circuits. Despite these promising results, the authors noted that genetic algorithms are known to have limitations, such as scalability in the number of parameters to learn and premature convergence, encouraging further research building on this proposal toward a more reliable gradient-free optimization solution. In another recent work, Cheng et al. [139] introduced a novel gradient-free optimizer called Double Adaptive-Region Bayesian Optimization (DARBO), which demonstrated robustness against measurement noise and quantum noise. This optimizer explores the QAOA landscape using a Gaussian process surrogate model and iteratively suggests optimal parameters restricted to two auto-adaptive regions. Incorporating the two adaptive regions makes it more robust to noise and to different initial parameters. Upon benchmarking against other optimizers (Adam, COBYLA, SPSA) for the MaxCut problem on weighted 3-regular graphs, DARBO showed superior performance in simulations with measurement shot noise. Furthermore, the authors also demonstrated that DARBO remained effective on superconducting quantum computers despite hardware noise when integrated with proper quantum error mitigation techniques. In addition to the previously discussed methods, robust control optimization techniques have also been explored in the context of QAOA. Dong et al. [123] demonstrated that the error of QAOA simulation can be significantly reduced by robust control optimization techniques, specifically by sequential convex programming (SCP). This approach ensures error suppression in situations where the source of the error is known but not necessarily its magnitude. The study showed that robust optimization improves the objective landscape of QAOA and the overall circuit fidelity in the presence of coherent errors and errors in initial state preparation.

Gradient-based Approaches: Gradient-based strategies are among the most commonly used methods for parameter optimization and come in various flavors. Gradient descent is a first-order iterative optimization algorithm for finding a local minimum of a differentiable function: it takes repeated steps in the direction opposite to the gradient, since the gradient points in the direction of steepest ascent. Combining the gradient descent method with machine learning, Crooks [124] optimized QAOA on a classical computer using automatic differentiation and stochastic gradient descent with QuantumFlow, a quantum circuit simulator implemented with TensorFlow.
The authors amortized the training cost of QAOA circuits by training the variational parameters on batches of problem instances (graphs), thus alleviating the training procedure. Streif and Leib [125] also simulated QAOA on classical hardware, using tensor networks instead of making parameter updates through repeated calls to the QPU. As a notable variant in the gradient descent family, Stochastic Gradient Descent (SGD) has been widely used in machine learning for training deep neural networks; its accelerated optimization and ease of use make it an option for efficient exploration of quantum circuits via classical simulation. However, implementing stochastic gradient descent directly on a quantum computer is demanding, requiring many measurements for each gradient component, which can be computationally costly and complex. Sweke et al. [63] make use of SGD by replacing the exact partial derivative at each optimization step with an estimator of the partial derivative, providing numerical results and a practical way to utilize SGD in a quantum context. Sung et al. [126] introduced a surrogate-model-based algorithm called Model Gradient Descent (MGD), which is also inherently stochastic. This method incorporates a least-squares quadratic model to estimate the gradient of the objective function, allowing previously evaluated points to be reused and leading to more efficient optimization. Empirical comparisons with other popular optimizers (SGD, SPSA, BOBYQA, Nelder-Mead) suggested that stochastic optimizers such as MGD may be advantageous in realistic settings, since they are more robust to variations in problems and show good tolerance to noise. The work by Yao et al. [127] is motivated by the fact that the original QAOA selects initial parameters at random and optimizes them through gradient-based methods, which can be computationally expensive, vulnerable to noise in NISQ devices, and may hinder the optimization process. Noting this, the authors suggested a new method for selecting and optimizing the variational parameters of the model. This is achieved by assigning a probability distribution to randomly selected initial parameters and using this distribution as the basis for a policy-gradient-based reinforcement learning algorithm that finds optimal variational parameters. It was demonstrated that this policy-gradient-based model (PG-QAOA) does not require derivatives to be computed explicitly and can perform well even when the objective function is not smooth with respect to the error. This probability-based approach also has the advantage of resisting perturbations and noise. In this sense, policy-gradient-based reinforcement learning algorithms are well suited for optimizing the variational parameters of QAOA in a noise-robust fashion, opening the way for the development of reinforcement learning techniques for continuous quantum control. Additionally, similar to the MGD extension of gradient descent, the vanilla policy gradient method [127] can also be extended with a surrogate model to reduce the variance in the estimation of the policy gradient, which is dubbed the Model Policy Gradient (MPG) method [126]. Like MGD, MPG showed good tolerance to noise and robustness to problem variations. In the work by Lotshaw et al. [128], the effectiveness of circuit optimization using BFGS was assessed against exact optimization software and brute-force solutions for several graphs up to \(p=2\).
**Machine Learning Approaches.** In addition to gradient-based methods, machine learning has been investigated as a different way to find optimal parameters for QAOA and to enhance the optimization process. Alam et al. [129] applied machine learning techniques to accelerate QAOA optimization based on the correlation among parameters (a schematic sketch of this regression-style parameter prediction appears below). The authors noted a correlation among parameters of the lower-depth and higher-depth QAOA layers, so they exploited it by training machine learning models to predict variational parameters close to the optimal values. Then, these quasi-optimal parameters are fine-tuned with classical optimizers to generate the final solution. The authors trained four machine learning models: Gaussian Process Regression (GPR), Linear Regression (LM), Regression Tree (RTREE), and Support Vector Machine Regression (RSVM). They found that the proposed machine-learning-based approaches can shorten the optimization iterations by 44.9% on average. A meta-learning approach based on classical Long Short-Term Memory (LSTM) neural networks was also investigated by Wang et al. [130] and Wilson [131], where similar conclusions were drawn on the goodness of meta-learner techniques for QAOA optimization. In [130], QAOA with meta-learning (MetaQAOA) for the MaxCut problem was proposed: an LSTM neural network was used as a black-box optimizer in order to help find optimal QAOA parameters. Numerical simulations showed that MetaQAOA converged faster and to a better value than the vanilla QAOA with local optimization methods such as Nelder-Mead and L-BFGS-B. Similarly, Wilson [131] compared the performance of an LSTM meta-learner to evolutionary strategies, L-BFGS-B, and Nelder-Mead approaches, confirming the results of Wang et al. [130] and showing that the meta-learner applied to a QAOA problem finds the global optima more frequently than all other optimizers tested in the paper, is more resistant to noise, and generalizes easily to larger problems even when trained on small problems. Moussa et al. [19] made use of unsupervised ML, namely clustering, for setting the QAOA angles without optimization. They considered several inputs to the clustering algorithm, including the angle values, instance features, and the output of a variational graph autoencoder. Their findings demonstrated that such a method is effective in learning how to set QAOA parameters: it achieved results comparable to those obtained through exhaustive angle optimization, translating to significant reductions in the required circuit calls at the cost of only a small reduction in the approximation ratio of less than 1-2%. Deshpande and Melnikov [132] used a Graph Neural Network (GNN) to predict QAOA's optimal parameters for unseen MaxCut instances, with up to nine vertices and a depth of \(p=3\). The GNN could predict quasi-optimal QAOA parameters within 2.7% of the optimal parameter solution, which could then be used as a warm start for QAOA. However, the scalability of this approach is uncertain, as it requires many training instances for the GNN to perform well in the prediction tasks.
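As a schematic of the regression-style parameter prediction referenced above, the following sketch fits a Gaussian process to map simple graph features to previously optimized angles. The feature and angle values here are made-up illustrative numbers, not data from [129]:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Hypothetical setup: rows of X are simple graph features (node count,
# edge count, average degree); rows of Y are (gamma, beta) pairs previously
# optimized with a classical optimizer on training graphs.
X = np.array([[5, 6, 2.4], [6, 9, 3.0], [7, 11, 3.1], [8, 14, 3.5]])
Y = np.array([[0.62, 0.31], [0.58, 0.29], [0.55, 0.28], [0.53, 0.27]])

model = GaussianProcessRegressor().fit(X, Y)
# Predicted quasi-optimal angles for an unseen graph, to be fine-tuned classically.
print(model.predict(np.array([[9, 16, 3.6]])))
```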
Reinforcement Learning (RL) has also been investigated in the VQA framework to improve parameter optimization [140, 141]. On this matter, some papers explored the potential of RL in assisting QAOA's optimization for finding optimal parameters [101, 133]. Khairy et al. [133] tried to find optimal QAOA parameters through two different machine learning methods: an RL technique and a Kernel Density Estimation (KDE) approach. The RL framework trains a policy network to optimize QAOA circuits. The network exploits geometrical regularities in the QAOA energy landscapes to efficiently find high-quality solutions for unseen test instances with only a few hundred quantum circuit evaluations. The KDE technique is used to create a generative model of optimal QAOA parameters, which can be employed to generate new parameters and quickly solve test instances. Extensive simulations with IBM Qiskit Aer demonstrated that both of the proposed methods are effective in finding good QAOA parameters, achieving superior approximation ratios compared to other commonly used off-the-shelf optimizers. Finally, Patel et al. [101] used RL to enhance the performance of the RQAOA (Section 3.1.6). In particular, they compared the original RQAOA [83] with a proposed RL-enhanced RQAOA variant (RL-RQAOA). The latter trains the circuit's parameters via RL and uses correlations between qubits to perform variable elimination at every iteration of the algorithm. Through simulations over an ensemble of randomly generated weighted \(d\)-regular graphs, the authors empirically showed that a \(p=1\) RL-RQAOA consistently outperforms RQAOA and simple classical RL agents.

**Others.** Barkoutsos et al. [142] proposed using the Conditional Value-at-Risk (CVaR) instead of the sample mean of the expectation values as the objective function. CVaR is a risk metric that estimates the expected loss of a portfolio under a specified confidence level by focusing on the tail end of the loss distribution. The researchers demonstrated that employing the CVaR aggregation function in QAOA enabled the algorithm to achieve optimal solutions more rapidly in simulations and on real quantum hardware, such as the IBMQ Poughkeepsie 20-qubit device. To address the difficulty of finding optimal parameters at large \(p\) values and to achieve a higher approximation ratio, Lee et al. [113] proposed a parameter-fixing strategy for QAOA training (a minimal sketch of this loop is given at the end of this subsection). Starting from \(p=1\) and continuing to a desired \(p\) level, the algorithm initializes QAOA with the optimal parameters from the previous layers. The algorithm finds the best parameters for each layer, fixes them, and then adds another layer on top of the previous one with new parameter values to optimize. In their study, Lotshaw et al. [128] developed a search heuristic for QAOA parameter optimization that is effective for a wide range of graphs and significantly decreases computational expense compared with the widely used Broyden-Fletcher-Goldfarb-Shanno (BFGS) search with random seeding. They achieved this by examining how optimized angle patterns across a wide range of graphs can help efficiently identify suitable approximate angles. They demonstrated that identifying consistent patterns among the optimized variational parameters is an efficient heuristic for solving MaxCut problems with up to 9 vertices and up to \(p=3\) QAOA.
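The parameter-fixing loop of [113] mentioned above can be rendered as follows (our own minimal sketch, assuming the toy `energy` helper from the earlier sketch is in scope):

```python
import numpy as np
from scipy.optimize import minimize

fixed_g, fixed_b = [], []                     # frozen angles of earlier layers
for p in range(1, 4):
    def obj(new):                             # optimize only the new layer's (gamma, beta)
        gammas = np.array(fixed_g + [new[0]])
        betas = np.array(fixed_b + [new[1]])
        return -energy(np.concatenate([gammas, betas]))
    res = minimize(obj, x0=np.array([0.1, 0.1]), method="COBYLA")
    fixed_g.append(res.x[0]); fixed_b.append(res.x[1])
    print(f"p={p}: <C> = {-res.fun:.4f}")
```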
#### 3.2.3 Overcoming barren plateaus

A significant challenge in training parameterized quantum circuits, such as QAOA, is addressing the issue of barren plateaus in the cost function landscape (Section 2.4.3). Barren plateaus are regions where the gradients with respect to the trainable parameters vanish exponentially in the number of qubits, causing the optimization process to stall. To tackle this problem, several approaches have been proposed in the literature, aiming to improve the optimization process and help QAOA converge to better solutions. One promising strategy to overcome this issue is presented in the work of Skolik et al. [134], in which a layerwise learning strategy is developed. During optimization, the circuit depth is grown incrementally, with only subsets of parameters being updated at every training step. In this approach, the circuit structure and the number of parameters are successively grown while the circuit is trained, and the randomization effects are contained within subsets of the parameters at all training steps. This avoids initializing on a plateau and reduces the probability of creeping onto a plateau during training. The authors demonstrated the success of this strategy in an image classification problem with a general parameterized quantum circuit, obtaining an average generalization error 8% lower than that of standard learning schemes, but the methodology applies to all VQAs, such as QAOA. The phenomenon of parameter concentration in QAOA circuits has been investigated by Akshay et al. [135]. They found that optimal parameters concentrate as an inverse polynomial in the problem size, which is beneficial for improving circuit training. Empirical investigation revealed two symmetric branches of optimal parameters, within which parameters concentrated in a fixed range of values as the number of qubits varied. This concentration effect allows training a depth-\(p\) QAOA on a subset of \(w<n\) qubits and asserting that these parameters are nearly optimal on \(n\) qubits and \(p\) layers, thereby reducing training time. Dupont et al. [143] investigated the growth and spread of the entanglement resulting from optimized and randomized QAOA circuits used to solve the MaxCut problem on different graphs. The study finds a volume-law entanglement barrier between the initial and final states: an entanglement barrier \(S\sim N\) must be crossed for large-depth QAOA circuits, making entanglement-based simulation methods challenging. This study also investigates the entanglement spectrum in connection with random matrix theory. The results are compared with a quantum annealing protocol, and implications for the simulation of QAOA circuits with tensor-network-based methods are discussed. Finally, a promising general attempt at tackling the problem of barren plateaus stems from the idea of beginning the optimization process closer to the target parameters (Section 3.2). This may seem like an obvious solution; however, it is challenging to execute in practice because the solution landscape is almost entirely unknown in most variational problems.
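Empirically, the flatness discussed in this subsection is usually diagnosed by the variance of a gradient component over random initializations; a toy probe in that spirit (assuming the `energy` helper from the earlier sketch; with only 3 qubits it cannot exhibit the exponential scaling, it merely illustrates the measurement):

```python
import numpy as np

rng, eps, grads = np.random.default_rng(1), 1e-4, []
for _ in range(200):
    theta = rng.uniform(-np.pi, np.pi, size=4)    # random p = 2 angles
    e = np.zeros(4); e[0] = eps
    grads.append((energy(theta + e) - energy(theta - e)) / (2 * eps))
print("Var[dE/dgamma_1] over random angles:", np.var(grads))
```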
#### 3.2.4 Parameter transferability and reusability

One promising approach to finding optimal QAOA parameters lies in the transferability and reusability of optimal parameters across different problem instances. This concept builds on the observation that optimal parameters tend to concentrate around specific values and that these values can be transferred from one problem instance to another based on their local characteristics. Galda et al. [115] provided a theoretical foundation for parameter transferability in QAOA. They showed that optimal QAOA parameters converge around specific values, and that the transferability of these parameters among different QAOA instances can be predicted and described based on the local characteristics of the subgraphs composing the original graph. This observation provides a method for identifying categories of combinatorial optimization problems where QAOA and other VQAs can offer significant speedup. Based on this idea, Shaydulin et al. [116] proposed to reuse the optimal QAOA parameters for a given problem as an initial point for similar problem instances. They demonstrated that this not only improves the quality of the solution by avoiding local optima but also reduces the number of evaluations required to reach it. Building on this, Shaydulin et al. [117] leveraged the property of optimal parameter transfer across similar graphs and proposed QAOAKit, a Python framework that includes a set of pre-optimized parameters and circuit templates for QAOA. Given an input graph, quasi-optimal parameters are obtained through a graph isomorphism certificate for the input graph, which is then employed as a key to extract the angles from QAOAKit's database. If optimal angles are not present in the database for a specific graph instance, the system provides the closest fixed angles instead.

#### 3.2.5 Leveraging parameter symmetries

Parameter symmetries can be leveraged to simplify the optimization process and eliminate degeneracies in the parameter space, contributing to more efficient QAOA performance. This subsection discusses various works exploring and exploiting the symmetries of QAOA parameters for improved optimization, focusing on symmetry in objective functions, cost Hamiltonians, and QAOA parameters.

**Symmetry in Objective Functions and Cost Hamiltonians.** Shaydulin and Wild [136] established a connection between the symmetries of the objective function and of the cost Hamiltonian on the one hand and the QAOA energy on the other. They showed that excluding terms connected by symmetry can significantly reduce the cost of evaluating the QAOA energy (see the orbit sketch below). They used fast graph automorphism solvers to compute the problem's symmetries. Their approach provides a median speedup of 4.06 for \(p=1\) on 71.7% of the graphs considered; however, on a benchmark where 62.5% of the graphs are known to be hard for automorphism solvers, the automorphism calculation could, in the worst-case scenario, require more time than it saves. Shaydulin et al. [137] investigated the correlation between QAOA and the inherent symmetries of the target function to be optimized. They revealed how the symmetries of the objective function result in invariant measurement outcome probabilities among states connected by those symmetries, regardless of the number of layers or algorithm parameters employed. Using machine learning techniques, the authors leveraged these symmetry considerations to predict the QAOA performance accurately: by analyzing a small set of graph symmetry properties, they could predict the minimum QAOA depth required to achieve a desired approximation ratio on the MaxCut problem.
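The term-exclusion idea of [136] can be illustrated on a small graph: edges in the same automorphism orbit contribute identical expectation values, so only one representative per orbit needs to be evaluated. A minimal sketch with networkx (our own construction, enumerating automorphisms by brute force rather than with the specialized solvers used in [136]):

```python
import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher

G = nx.cycle_graph(6)                                   # toy instance
autos = list(GraphMatcher(G, G).isomorphisms_iter())    # all automorphisms of G

orbits = {}
for u, v in G.edges():
    orbit = frozenset(frozenset((a[u], a[v])) for a in autos)
    orbits.setdefault(orbit, []).append((u, v))

# <C> = sum over orbits of (orbit size) * <C_e> for one representative edge e.
for members in orbits.values():
    print(f"representative edge {members[0]}, multiplicity {len(members)}")
```

For the 6-cycle all edges fall into a single orbit, so one edge expectation value suffices for the whole energy.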
**Symmetry in QAOA Parameters.** Akshay et al. [135] investigated the phenomenon of parameter concentration in QAOA circuits, finding that the optimal parameters concentrate as an inverse polynomial in the problem size, which is beneficial for improving circuit training. From this empirical investigation, the authors identified two symmetric branches of optimal parameters; within these branches, parameters concentrated in a fixed range of values as the number of qubits varied. The concentration effect allows training a depth-\(p\) QAOA on a subset of \(w<n\) qubits and asserting that these parameters are nearly optimal on \(n\) qubits and \(p\) layers, thereby reducing training time. In other words, if parameter concentration occurs, the optimal set of parameters for \(n\) qubits is polynomially close to the optimal set of parameters for \(n+1\) qubits. Shi et al. [93] investigated the connection between symmetries in the input graphs and redundancy in ma-QAOA parameters [77], revealing that symmetries can allow a reduction of the number of parameters without decreasing the quality of the solution. The authors analyzed all the connected, non-isomorphic graphs with eight nodes, noticing that in over two-thirds of these graphs, the same approximation ratio on the MaxCut problem can be obtained while reducing the number of parameters by \(28.1\%\), exploiting the natural symmetries of the graphs. Furthermore, they demonstrated that in \(35.9\%\) of the graphs, the aforementioned reduction could be accomplished by utilizing the largest symmetry. On the other hand, in the cases where the reduction in the number of parameters led to a decrease in performance, utilizing the largest symmetry led to a mere \(6.1\%\) decrease in the cost while successfully reducing the parameter count by \(37.1\%\). Zhou et al. [15] suggested an approach to parameterizing QAOA that simplifies the optimization process by decreasing the dimension of the search space. This was achieved by identifying and eliminating degeneracies arising from inherent symmetries in the parameter space, including time-reversal symmetry and \(\mathbb{Z}_{2}\) symmetry (the former is verified numerically in the sketch below), before searching for patterns in optimal QAOA parameters. Moreover, for QAOA applied to MaxCut problems on unweighted \(d\)-regular (udR) graphs, there was an additional symmetry, due to the structure of the problem, that created redundancy. This process of removing degeneracies creates a more navigable parameter space, facilitating a smoother and more effective search for optimal QAOA parameters. Apart from using classical optimizers in a variational loop, optimal QAOA parameters can sometimes be derived analytically. Wang et al. [138] derived analytical expressions for the optimal parameters of level-1 QAOA applied to MaxCut on general graphs. Although the analysis can theoretically be extended to higher \(p\) values, the number of terms involved quickly becomes intractable for direct calculation. Therefore, the authors proposed a fermionic representation for a specific instance of MaxCut: the ring of disagrees, or the one-dimensional antiferromagnetic ring. This approach translates the QAOA-induced evolution of the system into quantum control of a group of independent spins, allowing for the derivation of analytical solutions for any \(p\), thus simplifying the search for optimal parameter values. By exploring symmetries among parameter values, the authors identified a lower-dimensional sub-manifold that could minimize the search effort. Furthermore, they conducted a numerical investigation into the parameter landscape and empirically demonstrated that all minima are global minima.
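The time-reversal symmetry exploited in [15], namely that negating all angles \((\boldsymbol{\gamma},\boldsymbol{\beta})\to(-\boldsymbol{\gamma},-\boldsymbol{\beta})\) leaves the energy unchanged, is easy to verify numerically on the toy instance (assuming the `energy` helper from the earlier sketch):

```python
import numpy as np

theta = np.array([0.37, -0.81, 0.22, 0.53])   # p = 2 angles: (gammas, betas)
print(energy(theta), energy(-theta))          # equal up to floating-point error
```

The equality holds because negating all angles conjugates the statevector, which leaves every measurement probability, and hence \(\langle C\rangle\), unchanged; this is why such degeneracies can be removed from the search space without loss.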
### Computational Resource Efficiency

In this section, we delve into the computational resource efficiency of QAOA, focusing mainly on the time it takes to reach desired solutions. We explore the speedup potential of QAOA over classical algorithms, highlighting instances where it has demonstrated superiority (Section 3.3.1). However, obstacles that hinder the realization of quantum speedups with QAOA also exist, such as the challenges in parameter optimization and the overheads of error correction in early fault-tolerant quantum computers (Section 3.3.2). Moreover, we highlight several strategies aimed at enhancing the runtime performance of QAOA (Section 3.3.3), in addition to the ones that focus on improving parameter optimization discussed in Section 3.2. By examining these aspects, we aim to provide a comprehensive analysis of the computational resource efficiency of QAOA, offering insights into its potential for enabling quantum advantage in optimization problems.

#### 3.3.1 Speedup potential

The QAOA is considered one of the leading candidate algorithms to achieve a quantum advantage over classical algorithms and quantum annealing. One key aspect in determining this advantage is the ability of QAOA to solve problems faster than other algorithms, thereby achieving a quantum speedup. This subsection will explore recent research efforts that shed light on QAOA's potential for achieving quantum speedup in different problem domains. Based on the amortization of the training cost by optimizing batches of MaxCut problem instances, Crooks [124] argued that an analysis of the computational complexity of QAOA hinges on the number of gates needed to implement a single instance of the quantum evolution. Since the required number of QAOA steps for a given performance does not appear to be strongly correlated with the problem size, the primary limiting factor on a gate-based quantum computer is the number of two-qubit gates required to implement a single round of the algorithm. Based on this analysis, QAOA for MaxCut on an \(n\)-node graph is expected to require \(O(n^{2}p)\) gates and to have a run time of \(O(np)\) (assuming \(O(n)\) gates can be applied in parallel), where \(p\) is the number of layers in the QAOA circuit. In contrast, the classical Goemans-Williamson (GW) algorithm requires a run time of \(O(nm)\) for irregular graphs (ignoring logarithmic factors), where \(m\) is the number of edges. These findings suggest that QAOA, even at modest depths, may offer a speedup over its classical counterpart for dense graphs. QAOA also exhibits speedup potential in the Minimum Vertex Cover (MVC) problem, as demonstrated by Zhang et al. [144]. In their study, they applied QAOA with varying numbers of layers (\(2\leq p\leq 10\)) to find the MVC of an undirected graph consisting of 10 vertices and 16 edges. They found that QAOA was able to solve the MVC problem effectively within a computational time complexity of \(O[\text{poly}(k)+\text{poly}(p)]\), where \(k\) represents the number of iterations in the optimization process. In contrast, the time complexity of a competitive decision algorithm for MVC is \(O(2n^{2}+n^{4})\), with \(n\) being the total number of vertices in the graph. Based on this comparison, the authors concluded that QAOA provides an exponential acceleration for large MVC problems. In another study, Ebadi et al.
[26] implemented QAOA on 2D Rydberg atom arrays with up to 289 qubits. In their experiment, QAOA was applied to find the Maximum Independent Set (MIS) on 115 randomly generated graphs of various sizes, with the number of vertices ranging from 80 to 289. They benchmarked the results of the quantum algorithm against a classical counterpart, Simulated Annealing (SA) [145]. On graph instances in the deep-circuit regime, where \(\delta_{\text{min}}>1/T\), that is, where the minimum energy gap \(\delta_{\text{min}}\) of a graph is large enough to be resolved within the duration of the quantum evolution \(T\), QAOA exhibited a superlinear quantum speedup compared to SA. Specifically, for SA, the probability of observing an MIS scales as \(P_{\text{MIS}}=1-\exp(-C\eta^{-1.03})\), where \(\eta\) is a parameter characterizing the difficulty of reaching the global optimum for a particular graph. On the other hand, QAOA gave rise to an improved scaling, \(P_{\text{MIS}}=1-\exp(-C\eta^{-0.63})\). Since the runtime needed to find a solution by repeating the experiment is proportional to \(1/P_{\text{MIS}}\), the smaller exponent in the scaling for QAOA thus implies a quantum speedup over SA. However, it should be noted that the observed speedup is specific to graph instances in the deep-circuit regime, and it remains an open question whether this advantage can be extended to more general cases. QAOA has demonstrated its ability to provide quantum speedup in optimization problems and other domains. One notable example is the unstructured search problem, where Jiang et al. [5] proposed a novel quantum algorithm by incorporating the QAOA circuit and replacing the original diffusion operator in Grover's algorithm with the transverse field. Their approach showcased the potential of QAOA in unstructured search, achieving a near-optimal query complexity of \(T\sim O(\sqrt{N})\) with an intermediate number of layers (\(p\gg 1\)), where \(N\) represents the search space size. This result demonstrates a quadratic quantum speedup similar to Grover's original algorithm. In a separate study, Niu et al. [146] investigated the dependence of the success probability of QAOA on its circuit depth \(p\) by analyzing its performance for the state transfer problem in a one-dimensional qubit chain of length \(N\) using two-qubit XY Hamiltonians and single-qubit Hamiltonians. The authors derived analytical expressions for the scaling of QAOA's success probability as a function of the circuit depth. Interestingly, they established a connection between the state transfer problem and Grover's search algorithm, where the \(p\)-th Grover iteration for searching the transferred state from an initial state can be represented by the unitary realized by a depth-\(p\) QAOA circuit. In the low-depth limit, \(p\to 1\), the total number of steps required to achieve the target state is of order \(O(N)\), producing the quadratic Grover-like speedup. More recently, An and Lin [147] demonstrated that QAOA can also solve a Quantum Linear System Problem (QLSP) nearly optimally, with \(O(\kappa\,\text{poly}(\log(\kappa/\epsilon)))\) runtime, where \(\kappa\) is the condition number and \(\epsilon\) is the target accuracy. In contrast, the best classical algorithm, the conjugate gradient method, has a time complexity of \(O(N\sqrt{\kappa}\log(1/\epsilon))\), where \(N=2^{n}\) is the size of the system. QAOA thus has an exponential advantage with respect to \(N\) over the classical algorithm.
It also outperforms various quantum algorithms for this problem, such as the HHL algorithm [148] and those based on the Linear Combination of Unitaries (LCU) [149] and Quantum Signal Processing (QSP) [150].

#### 3.3.2 Obstacles to quantum speedup

While QAOA has demonstrated quantum speedup in specific problem domains (see Section 3.3.1), it is crucial to recognize that there are still numerous obstacles to overcome in achieving a quantum advantage in a more general context. Challenges related to circuit depth, entanglement, optimization of variational parameters, and noise pose significant hurdles. For example, Guerreschi and Matsuura [151] attempted to assess the feasibility of achieving a quantum speedup using QAOA in a more realistic setting that accounts for decoherence and dissipation effects. They aimed to determine the minimum number of qubits required for QAOA running on a quantum computer to outperform state-of-the-art classical solvers on combinatorial problems. They concluded that, in such a scenario, QAOA must run within a minute at most to compete with classical solvers on problems with fewer than 400 variables, and that a quantum speedup for a specific combinatorial problem would only become possible once several hundred qubits are available. They suggested that the reason for the exponential cost of the QAOA protocol is not the complexity of the quantum circuits but rather the challenge of optimizing the variational parameters used in the algorithm. In this sense, the efficiency of QAOA is strongly tied to its depth, i.e., the number of layers (\(p\)) chosen, since a larger QAOA depth also implies more parameters to optimize. The challenge of parameter optimization is further supported by the work of Herrman et al. [77]. In their ma-QAOA (Section 3.1.1) proposal, the performance of the standard QAOA is enhanced by adding more parameters to the circuit. Even though the approximate solution found by ma-QAOA is better than or equal to that of the standard QAOA, the increased number of variational parameters makes the optimization process more challenging. The authors noted that each iteration of the optimization algorithm was slower for ma-QAOA than for the standard QAOA, as the number of gradient components increases linearly with the number of variables to optimize. However, it should be noted that, from a theoretical perspective, the number of layers in ma-QAOA is always less than or equal to the number of layers required for the standard QAOA to achieve the same approximation ratio. This implies that ma-QAOA potentially requires fewer samples or shallower circuits than the standard version. Further studies are needed to compare the performance of the two algorithms with an equal number of parameters and to analyze their convergence properties. Overall, empirical results indicate that finding good parameters for ma-QAOA typically requires polynomial time. The challenge of finding optimal parameters was noted by Shaydulin et al. [116], who conducted a benchmark study comparing six different derivative-free local optimization methods for QAOA. With a budget of 1000 optimization steps and an optimistic assumption of 1000 measurements needed to obtain the statistics for calculating the cost function value, the estimated time cost for a complete QAOA optimization is about 16 minutes, which is orders of magnitude greater than the runtime of classical state-of-the-art AKMAXSAT solvers [151].
They also observed that as the number of QAOA layers increased, the fraction of problems solved within a given optimization budget decreased significantly across all the derivative-free methods tested. These findings suggest that achieving a quantum speedup with QAOA, even at low depths, is challenging. Moreover, problems such as barren plateaus [152, 71] further exacerbate this situation, and better parameter optimization strategies will be needed to improve the runtime of QAOA. More details on parameter optimization in QAOA can be found in Section 3.2. Even though QAOA is a heuristic algorithm designed to be run on NISQ devices, Sanders et al. [153] considered its utility on small fault-tolerant surface-code quantum computers with around a million physical qubits or fewer in the search for practical quantum advantages. They estimated the resources required to implement several heuristic quantum algorithms, including QAOA for the Sherrington-Kirkpatrick (SK) model and the Low Autocorrelation Binary Sequence (LABS) problem, on early fault-tolerant quantum processors. The study revealed that the substantial overhead of state distillation in error correction introduces significant slowness and inefficiency in executing these algorithms. As a result, they concluded that any quantum optimization algorithm offering only a quadratic speedup is unlikely to produce any quantum advantage on these processors unless significant improvements in the surface code implementation, such as faster state distillation, are in place. However, the prospects for an error-corrected quantum advantage on a modest processor are significantly more promising with quartic speedups [154]. To this end, McClean et al. [155] advocated for a better understanding of the structure of classical optimization problems and of the underlying physical mechanism of quantum optimization algorithms. This is because, for problems lacking structure, quadratic speedups are the best one could hope for.

#### 3.3.3 Runtime performance improvements

Despite the many factors that make a quantum speedup with QAOA challenging, numerous methods have been proposed in the recent literature to enhance the efficiency with which the algorithm reaches desired solutions. Most proposals focus on improving the parameter optimization process, extensively surveyed in Section 3.2. Below we highlight a few techniques that target other aspects of the algorithm. The efficiency of QAOA in reaching optimal solutions has been enhanced through various modifications to its ansatz design. One such modification is the ADAPT-QAOA (Section 3.1.5) proposed by Zhu et al. [82], which empirically demonstrated faster convergence than the original QAOA due to shortcuts to adiabaticity. By incorporating entangling gates in the mixer operator pool, ADAPT-QAOA achieves a reduction in both the number of variational parameters and the number of CNOT gates by approximately 50%, while simultaneously yielding better results than the original algorithm as the number of layers increases. Another approach to improving the runtime of QAOA is the introduction of the ab-QAOA (Section 3.1.4), as proposed by Yu et al. [81]. In ab-QAOA, local fields are integrated into the operators themselves. Numerical simulations conducted on MaxCut problems have shown that ab-QAOA significantly decreases the QAOA runtime compared to the original algorithm for a given level of accuracy.
Significantly, this improvement in runtime increases with problem size, resulting in computation times polynomially shorter than those of the vanilla QAOA with respect to the number of nodes in the graph. Additionally, ab-QAOA requires the same number of quantum gates and measurements, making it a promising candidate for achieving a quantum advantage in combinatorial optimization problems. Li et al. [23] hierarchically analyzed the effects of influential factors on QAOA's runtime performance and proposed a 3-level improvement of the hybrid quantum-classical optimization for object detection. At the first level, they achieved a significant speedup of more than 13 times by selecting the L-BFGS-B classical optimizer, which improved the efficiency of the classical optimization process. For the second-level improvement, the authors constrained the QAOA circuit parameters to the range \((0,\pi)\), exploiting the symmetry of these parameters. This constraint led to a runtime acceleration of up to 5.5 times, with the expectation of further improvements for deeper QAOA circuits. Moreover, an acceleration of more than 1.23 times was obtained using parameter regression. Finally, at the third level, they empirically demonstrated that the circuit achieves better fidelity when gate operations are optimally rescheduled, especially for deeper circuits: a shorter critical path not only makes the QAOA circuit execution faster, it also mitigates the impacts of noise and decoherence. Larkin et al. [156] proposed a new metric for evaluating the runtime performance of QAOA in solving the MaxCut problem. They focused on the probability of observing a sample above a certain threshold. Given a desired approximation ratio value, QAOA's efficiency is measured by the time needed to observe at least one sample with the desired approximation ratio, with a probability of at least 50%. In this sense, the efficiency can also be seen as the number of circuit repetitions before a cut value above a fixed approximation ratio is observed. Specifically, they considered the approximation ratio \(\alpha\) and calculated the likelihood of observing a cut value above \(\alpha C_{\text{max}}\) in the first \(K\) samples. When this probability exceeded 50%, they took \(K\) as the expected number of repetitions needed to obtain the desired approximation. By doing so, in the training phase of the algorithm, the authors managed to reduce the execution time for MaxCut on random 3-regular graphs by two orders of magnitude compared to previous estimates reported in [151]. This was achieved by reducing the number of samples used to calculate the average approximation ratio and also by treating single samples as candidate solutions without waiting for the parameter optimization to converge. The performance of QAOA was also evaluated in comparison with some of the best classical alternatives, i.e., an exact solver (AKMAXSAT) [157], an approximate solver (GW) [53], and a heuristic solver (DESOUSA2013) [158]. Thanks to the improved efficiency of the proposed parameter optimization process and approximation ratio calculation, the runtime performance of QAOA was competitive with the aforementioned solvers. However, further studies are required to assess the effectiveness and scalability of this approach on different types of graphs and larger instance sizes.
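This repetition-count metric is straightforward to compute from a simulated output distribution. A minimal sketch in our own notation (assuming the toy helpers from the earlier sketch), using \(1-(1-q)^{K}\geq 0.5\) for a per-shot success probability \(q\):

```python
import numpy as np

alpha = 0.9                                      # target approximation ratio
psi = qaoa_state([0.6], [0.3])                   # illustrative trained p = 1 angles
q = float(np.sum(np.abs(psi) ** 2 * (cost >= alpha * cost.max())))
K = int(np.ceil(np.log(0.5) / np.log(1 - q)))    # repetitions for >= 50% success
print(f"per-shot success probability {q:.3f}, repetitions needed K = {K}")
```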
### Quality of Solution

Since its inception, there has been a persistent pursuit of establishing rigorous mathematical frameworks to analyze the performance of the QAOA and thereby provide specific theoretical bounds. These theoretical bounds play a crucial role in understanding the capabilities and limitations of this quantum optimization algorithm. On the one hand, they allow us to compare the performance of QAOA with classical optimization algorithms, providing a basis for evaluating the quantum advantage it may offer. This analysis aids in determining the feasibility and potential real-world impact of QAOA in various application domains. On the other hand, it enables us to assess the algorithm's inherent limitations and identify scenarios where it may be suboptimal or inefficient. This understanding helps guide the development of alternative optimization approaches or identify areas where the algorithm can be improved. In the following, we will survey previous studies that explore the performance guarantees (Section 3.4.1) and limitations (Section 3.4.2) of QAOA as well as of some classical algorithms on different instances of optimization problems. Although most theoretical studies have primarily focused on low-depth QAOA analysis due to the complexity involved at higher depths, results applicable to higher depths, and even to any depth, also exist. These results are summarized in Table 3. In presenting these results, we will focus on whether QAOA can outperform classical algorithms in specific problems. Furthermore, we will highlight empirical evidence (Section 3.4.3) that explores the potential advantages of QAOA compared to quantum annealing or classical algorithms.

\begin{table}
\begin{tabular}{l l l l}
\hline \hline
**Problem** & **Graph/Hypergraph** & **Algorithm** & **Performance** \\
\hline
MaxCut & 2-regular & QAOA\({}_{p}\) [12, 138, 159] & \(\geq\frac{2p+1}{2p+2}\) \\
 & 3-regular & QAOA\({}_{1}\) [12] & \(\geq 0.6924\) \\
 & & QAOA\({}_{2}\) [160] & \(\geq 0.7559\) \\
 & & QAOA\({}_{3}\) [160] & \(\geq 0.7924\) \\
 & \(D\)-regular with girth \(>3\) & QAOA\({}_{1}\) [138] & \(\geq\frac{1}{2}+\frac{0.303}{\sqrt{D}}\) \\
 & & 1-step threshold algorithm [95] & \(\geq\frac{1}{2}+\frac{0.335}{\sqrt{D}}\) \\
 & \(D\)-regular with girth \(>5\) & QAOA\({}_{2}\) [161] & \(\geq\frac{1}{2}+\frac{0.407}{\sqrt{D}}\) \\
 & & 2-step threshold algorithm [162] & \(\geq\frac{1}{2}+\frac{0.417}{\sqrt{D}}\) \\
 & & Any 1-local algorithm [161] & \(\leq\frac{1}{2}+\frac{1/\sqrt{2}}{\sqrt{D}}\approx\frac{1}{2}+\frac{0.707}{\sqrt{D}}\) \\
 & \(D\)-regular with girth \(>2p+1\) & \(p\)-local ALR algorithm [162] & \(\geq\frac{1}{2}+\frac{2/\pi}{\sqrt{D}}\approx\frac{1}{2}+\frac{0.6366}{\sqrt{D}}\) \\
 & & QAOA\({}_{p}\) [163] & \(\geq\frac{1}{2}+\frac{0.6408}{\sqrt{D}}\) at \(p=11\) \\
 & & Gaussian wave process [164] & \(\geq\frac{1}{2}+\frac{c}{\sqrt{D}}\) with \(c>2/\pi\) not optimized \\
 & Bipartite \(D\)-regular & QAOA\({}_{p}\) with \(p\lesssim\epsilon\log n\) [165] & \(\leq\frac{1}{2}+O(\frac{1}{\sqrt{D}})\) \\
SK model & Infinite size & QAOA\({}_{p}\) [166] & \(\geq 0.6393\) at \(p=11\) \\
 & & SDP algorithms [167, 168, 169] & \(\geq 2/\pi\approx 0.6366\) \\
 & & AMP algorithm (no OGP) [170, 171] & \(\geq(1-\epsilon)P_{\rm opt}\) with \(P_{\rm opt}\approx 0.7632\) \\
Max-3XOR & \(D\)-regular & QAOA\({}_{1}\) [13] & \(\geq\frac{1}{2}+O(\frac{1}{\sqrt{D}\ln D})\) \\
 & \(D\)-regular with random signs or no overlapping constraints & QAOA\({}_{1}\) [13, 172] & \(\geq\frac{1}{2}+O(\frac{1}{\sqrt{D}})\) \\
Max-\(k\)XOR & \(D\)-regular with random signs or no overlapping constraints & QAOA\({}_{1}\) [173] & \(\geq\frac{1}{2}+\frac{0.377}{\sqrt{D}}\) at \(k=5\) \\
 & & 1-step threshold algorithm [173] & \(\geq\frac{1}{2}+\frac{0.370}{\sqrt{D}}\) at \(k=5\) \\
 & \(D\)-regular with girth \(>2p+1\) & QAOA\({}_{p}\) [163] & \(\geq\frac{1}{2}+O(\frac{1}{\sqrt{D}})\) \\
MIS & Random graphs with bounded average degree & QAOA\({}_{p}\) with \(p\lesssim\epsilon\log n\) [174] & \(\leq 0.854\) \\
Diluted \(k\)-spin model & Even \(k\geq 4\) & QAOA\({}_{p}\) with \(p\lesssim\epsilon\log n\) [175] & \(<P_{\rm opt}(k)\) (bounded away from optimality due to OGP) \\
Fully connected \(k\)-spin model & Even \(k\geq 4\) & QAOA\({}_{p}\) [176] & \(<P_{\rm opt}(k)\) (bounded away from optimality due to OGP) \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Summary of established theoretical bounds on the performance of QAOA and selected classical algorithms on optimization problems.

#### 3.4.1 Solution guarantees

In the original QAOA proposal by Farhi et al. [12], the authors focused on applying QAOA of fixed depth \(p\), denoted as \(\text{QAOA}_{p}\) for convenience, to the MaxCut problem on 2-regular and 3-regular graphs. On 2-regular graphs, also known as the ring of disagrees, they conjectured that on rings of size \(n>2p\), \(\text{QAOA}_{p}\) can achieve \(\alpha\geq(2p+1)/(2p+2)\), which was later proved by Mbeng et al. [159]. This means that at large \(p\), QAOA can produce solutions arbitrarily close to the true optima on the ring of disagrees. On 3-regular graphs, they focused on the single-layer QAOA, \(\text{QAOA}_{1}\). They proved that with the optimal set of angles, \(\text{QAOA}_{1}\) on any 3-regular graph will always produce a cut whose size is at least \(0.6924\) times the size of the optimal cut, which translates to an approximation ratio \(\alpha\geq 0.6924\), an improvement over the average outcome of random guessing, which yields \(\alpha=0.5\). However, generalizing their analysis to higher values of \(p\) becomes challenging due to the double-exponential growth of the complexity of the classical algorithm used to find optimal parameters, which scales as \(O(2^{2^{p}})\). Wurtz and Love [160] subsequently extended the worst-case guarantees on 3-regular graphs to \(\text{QAOA}_{2}\) and \(\text{QAOA}_{3}\). They found that for \(p=2\), \(\alpha\geq 0.7559\) on 3-regular graphs with girth \(>5\). The girth of a graph is the length of its shortest cycle; girth \(>5\) therefore means a graph contains no triangles, squares, or pentagons. For \(p=3\), \(\alpha\geq 0.7924\) on graphs with girth \(>7\). The girth requirements stem from the large loop conjecture, which was one of the bases of their analysis. This conjecture posits that the worst-case graphs for fixed \(p\) are \(p\)-trees, which have no cycles shorter than \(2p+2\). While this study did confirm the theoretical performance increase with increasing \(p\), it also showed that up to \(p=3\), QAOA is not able to outperform the best classical algorithms on 3-regular graphs in the worst-case scenario, such as the GW algorithm [53], which gives a performance guarantee of \(0.8786\) on any type of graph, and another classical algorithm based on Semidefinite Programming (SDP) [177], which achieves at least \(0.9326\) on graphs of maximum degree \(3\).
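Closed-form expressions underpin several of the \(p=1\) guarantees that follow. For instance, for MaxCut on a triangle-free \(D\)-regular graph with \(m\) edges, the depth-one expected cut fraction derived in [138] can be written (in our notation) as
\[\frac{\langle C\rangle}{m}=\frac{1}{2}+\frac{1}{2}\sin(4\beta)\sin\gamma\,\cos^{D-1}\gamma,\]
and maximizing over the two angles (at \(\beta=\pi/8\) and \(\tan\gamma=1/\sqrt{D-1}\)) gives \(\frac{1}{2}+\frac{1}{2\sqrt{e}}\frac{1}{\sqrt{D}}(1+o(1))\), i.e., the \(0.303/\sqrt{D}\) improvement discussed next.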
Another extension to [12] was provided by Wang et al. [138]. They showed that on any triangle-free \(D\)-regular graph, where \(D\) denotes the degree, the optimized \(\text{QAOA}_{1}\) can provide an approximation ratio of at least \(\frac{1}{2}+\frac{c}{\sqrt{D}}\) with \(c\approx 0.303\), which outperforms the lower bound of the best known local classical algorithm at the time, the threshold algorithm, which gives \(c\approx 0.281\) [178]. However, Hastings [95] soon argued that this bound is not tight and showed through a direct calculation that the 1-step threshold algorithm with the optimal parameter exhibits a larger improvement over random assignment than \(\text{QAOA}_{1}\) in the asymptotic limit \(D\to\infty\), i.e., \(\alpha\geq\frac{1}{2}+\frac{c}{\sqrt{D}}\) with \(c\approx 0.335\). Moreover, it was pointed out that the threshold algorithm belongs to a broader class of local algorithms called local tensor algorithms. Numerical calculations revealed that for all \(2\leq D<1000\), a single step of this classical algorithm would outperform \(\text{QAOA}_{1}\) on all triangle-free MaxCut instances. Later, Marwaha [161] generalized the 1-step threshold algorithm to an \(n\)-step version with \(n\) parameters. In particular, they first showed by direct calculation that for degrees \(2\leq D<500\) and every \(D\)-regular graph \(G\) of girth \(>5\), while \(\text{QAOA}_{2}\) has a larger expected cut fraction than \(\text{QAOA}_{1}\) on \(G\), there exists a 2-local classical MaxCut algorithm that outperforms \(\text{QAOA}_{2}\) for all \(G\). They concluded that this is likely to hold for all \(D\), since the coefficient \(c\) in the improved fraction for the 2-step threshold algorithm stabilizes at around \(0.417\) in the asymptotic limit, compared with \(c\approx 0.407\) for \(\text{QAOA}_{2}\). One may wonder whether an optimal value exists for the maximum cut on random \(D\)-regular graphs. Interestingly, insights into this question emerge from another problem known as the Sherrington-Kirkpatrick (SK) model, which serves as a mean-field model for spin glasses in physics. This model describes a classical spin system with all-to-all couplings. For \(n\) spins, the Hamiltonian, or the cost function, of the SK model is given by \[C(\mathbf{z})=\frac{1}{\sqrt{n}}\sum_{i<j}J_{ij}z_{i}z_{j}, \tag{39}\] where \(z_{i}\in\{-1,1\}\), \(\forall i\), and each \(J_{ij}\) is independently chosen from a distribution with mean \(0\) and variance \(1\). Making use of the so-called replica trick, Parisi [179] first computed the ground state energy of the SK model in the infinite-size limit, that is, \[\lim_{n\to\infty}\min_{\mathbf{z}}\frac{C(\mathbf{z})}{n}\simeq 0.7632. \tag{40}\] Farhi et al. [166] first conducted an extensive study of the performance of QAOA in solving the SK model in the limit of infinite problem size and of the comparison with the classical algorithm based on SDP. Although a classical Approximate Message Passing (AMP) algorithm [170, 171] was recently proposed to efficiently find an approximate solution that is asymptotically close to the true optimum of the SK model, i.e., the Parisi value in Eq. (40), the analysis relies on the assumption of no Overlap Gap Property (OGP) [180]. The OGP refers to the geometric structure of near-optimal solutions of a problem. It implies that the overlap between any two nearly optimal solutions does not take values in a certain nontrivial interval; the overlap is either big or small, and there is no middle ground.
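To build intuition for Eq. (39) and the Parisi value, the following toy sketch (our own construction, with names of our own choosing such as `sk_energy`) draws a random SK instance and runs a greedy single-spin-flip local search, which generally terminates below the Parisi value:

```python
import numpy as np

rng, N = np.random.default_rng(0), 200
J = np.triu(rng.normal(size=(N, N)), k=1) / np.sqrt(N)
J = J + J.T                                   # symmetric couplings, zero diagonal

def sk_energy(z):                             # C(z)/N for Eq. (39)
    return z @ J @ z / (2 * N)

z = rng.choice([-1, 1], size=N)
while True:                                   # greedy single-spin-flip ascent
    gains = -2 * z * (J @ z) / N              # change in C/N from flipping each spin
    i = int(np.argmax(gains))
    if gains[i] <= 1e-12:
        break
    z[i] = -z[i]
print("greedy local optimum C(z)/N:", sk_energy(z), "(Parisi value ~ 0.7632)")
```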
The best-performing classical algorithm known to date without such an assumption is the SDP-based ALR algorithm [167] and its variants [168, 169], which yield \(C/n=2/\pi\approx 0.6366\). In [166], the authors introduced a formula for calculating the typical-instance energy of QAOA applied to the SK model for given parameters at an arbitrary fixed \(p\), denoted by \(V_{p}(\boldsymbol{\gamma},\boldsymbol{\beta})\), where \(\boldsymbol{\gamma}\) and \(\boldsymbol{\beta}\) are the parameters of the QAOA circuit. This formula can be evaluated on a classical computer with a run time complexity of \(O(16^{p})\). In this study, they numerically computed the QAOA performance for \(1\leq p\leq 12\), from which they found that the performance with optimal parameters, \(\bar{V}_{p}\), monotonically increases with increasing \(p\). While for \(9\leq p\leq 12\) the values of \(V_{p}(\boldsymbol{\gamma},\boldsymbol{\beta})\) were not optimized but evaluated at parameters extrapolated from those at lower \(p\), which can be seen as lower bounds on \(\bar{V}_{p}\), it was found that at \(p=11\) QAOA already outperforms the classical SDP algorithm mentioned above, with \(\bar{V}_{11}\geq 0.6393>2/\pi\). At \(p=12\), QAOA produces \(\bar{V}_{12}\geq 0.6466\). They also demonstrated the concentration property: with probability tending to one in the infinite-size limit, measurements of the QAOA circuit will produce strings whose energies concentrate at the calculated value \(V_{p}(\boldsymbol{\gamma},\boldsymbol{\beta})\). This implies landscape independence [181], meaning that optimal parameters found for one large instance will also be good for other large instances. Returning to the question of the optimal value for the MaxCut problem on random \(D\)-regular graphs, Dembo et al. [182] addressed it by revealing the connection between MaxCut and the SK model. They proved that the maximum cut of a random \(D\)-regular graph tends to \(\frac{1}{2}+\frac{P_{*}}{\sqrt{D}}\) as \(D\to\infty\), where \(P_{*}\approx 0.7632\) is exactly the Parisi value, i.e., the ground state energy of the SK model. Recently, an enhancement in the algorithmic performance on the MaxCut problem was presented by Barak and Marwaha [162]. The authors considered classical \(p\)-local algorithms, in which a vertex's assignment depends on its radius-\(p\) neighborhood. It was proved that for every \(p\), there exists a polynomial-time classical \(p\)-local algorithm for all \(D\)-regular graphs of girth \(>2p+1\) that outputs a cut value lower bounded by \(\frac{1}{2}+\frac{c}{\sqrt{D}}-O(\frac{1}{\sqrt{D}})\), where \(c=2/\pi\approx 0.6366\). In the limit of large degree and girth, the cut fraction tends to \(\frac{1}{2}+\frac{2/\pi}{\sqrt{D}}\), therefore surpassing that given by \(\text{QAOA}_{1}\) and \(\text{QAOA}_{2}\) [161, 95]. It is worth noting that in this limit, the performance guarantee is the same as that of the SDP algorithms for the SK model [167, 168, 169]. Nevertheless, neither of these two classical algorithms achieves the optimal Parisi value. Can QAOA at larger depths match or outperform this classical algorithm on the MaxCut problem? Boulebnane and Montanaro [183] took an essential step towards addressing this question by establishing an exact expression for the \(\text{QAOA}_{p}\) energy on sparse random \(D\)-regular graphs in the infinite-size limit for any \(p\).
In the limit \(D\to\infty\), this expression is closely connected to an analogous expression for the SK model [166], allowing parameters obtained for one model to be mapped to the other. Therefore, in this limiting case, the optimal QAOA parameters can be determined with the techniques proposed in [166], with a time complexity that is exponential in the number of QAOA layers but not in the problem size. The authors additionally proposed an efficient Monte Carlo algorithm for estimating the QAOA energy on finite-size instances of the SK model. However, its applicability is limited to low depths, i.e., \(p\leq 3\). Around the same time, Basso et al. [163] also derived an iterative formula for evaluating the QAOA performance on any large-girth \(D\)-regular graph for any fixed \(p\). While solving this iteration for any finite \(D\) requires a time complexity of \(O(p16^{p})\), in the \(D\to\infty\) limit the complexity reduces to \(O(p^{2}4^{p})\). Therefore, the authors could perform numerical evaluations up to \(p=20\) in the large-\(D\) limit. It was discovered that at \(p=11\) and beyond, QAOA achieves a cut fraction better than \(\frac{1}{2}+\frac{2/\pi}{\sqrt{D}}\), beating the classical local algorithm for MaxCut on large-girth random regular graphs considered by Barak and Marwaha [162]. This result is reminiscent of the finding made by Farhi et al. [166] for the infinite-size SK model, where \(\text{QAOA}_{11}\) and beyond also outperform the classical algorithms yielding a ground-state energy estimate of \(2/\pi\). Indeed, Basso et al. [176] showed that the iterative formula for MaxCut on large-girth regular graphs also gives the ensemble-averaged performance of QAOA on the SK model defined on the complete graph; hence, these two observations are not coincidental. The authors also conjectured that as \(p\to\infty\), QAOA will be able to achieve the Parisi value on random \(D\)-regular graphs as \(D\to\infty\), and if this were true, it could optimally solve the SK model, too. However, this remains to be rigorously proven. On the other hand, Hastings [164] demonstrated that a simple modification of the classical algorithm, the Gaussian wave process, can also achieve some other \(c>2/\pi\) in the improved cut fraction, although \(c\) was not optimized in this work. Therefore, it is still unclear whether QAOA can outperform any classical algorithm for MaxCut on \(D\)-regular graphs. Another class of optimization problems that has garnered significant attention in the field is Constraint Satisfaction Problems (CSPs). These problems involve assigning values to variables from a given domain while adhering to constraints. For example, the combinatorial problem Max-E3LIN2 over \(n\) Boolean variables (bits) consists of a set of clauses, each containing exactly three problem variables (E3), where a clause is deemed satisfied if its variables sum to 0 or 1 modulo 2 (LIN2), representing even or odd parity. The objective is to find an assignment of the variables that maximizes the number of satisfied constraints. Additionally, if each variable is guaranteed to appear in no more than \(D\) clauses, it is an instance of bounded degree \(D\). This problem belongs to a class of CSPs known as Max-\(k\)XOR, where each clause represents the XOR operation performed on \(k\) variables or their negations. Therefore, Max-E3LIN2 is equivalent to Max-3XOR, while the MaxCut problem can be seen as a special case of Max-2XOR in which all the clauses exhibit even parity. Farhi et al.
[13] considered solving the Max-3XOR problem of bounded degree with \(\text{QAOA}_{1}\). They proved that for any instance, \(\text{QAOA}_{1}\) can achieve an average fraction of satisfied constraints of at least \(\frac{1}{2}+O(\frac{1}{\sqrt{D}\ln D})\). When applied to "typical" instances with random signs, i.e., where the parity of each constraint is chosen at random with equal probability, the scaling of the satisfied fraction improves to \(\frac{1}{2}+O(\frac{1}{\sqrt{D}})\). This matches that of a classical algorithm on both the set of instances with random signs and those with "triangle-free" constraints (also known as no overlapping constraints), where any two variables are involved in at most one constraint and share no neighbors outside of that specific constraint [184]. Lin and Zhu [172] extended the analysis of the performance of \(\text{QAOA}_{1}\) to general CSPs with typicality, including Max-\(k\)XOR and Max-\(k\)SAT. Max-\(k\)SAT is similar to Max-\(k\)XOR but differs in that each constraint represents the logical OR operation applied to \(k\) variables or their negations. They demonstrated that for such problems with bounded degree, \(\text{QAOA}_{1}\) can efficiently find, on average, an assignment that satisfies a \(\mu+O(\frac{1}{\sqrt{D}})\) fraction of constraints, where the constant \(\mu\) denotes the expected fraction of constraints satisfied by a random assignment. Moreover, it was shown that on triangle-free instances, QAOA can also give an advantage of \(O(\frac{1}{\sqrt{D}})\) over a random assignment. In a subsequent study, Marwaha and Hadfield [173] pointed out that a 1-local algorithm such as \(\text{QAOA}_{1}\) on CSPs of bounded degree cannot distinguish between instances with random signs and those with triangle-free constraints, which explains the identical performance scaling in these two settings. Specifically, the authors compared the performance of \(\text{QAOA}_{1}\) and a generalization of the classical threshold algorithm [178] on triangle-free Max-\(k\)XOR problems of bounded degree \(D\). While the previous studies [172, 184] did not focus on finding the best possible constant \(c\) in the improved satisfying fraction \(\frac{c}{\sqrt{D}}\), in this study they numerically optimized and evaluated it for both algorithms at every \(k<200\), for each \(D<300\) and for \(D\rightarrow\infty\). At \(k=2\), the results verified the asymptotic performance for MaxCut (a special case of Max-2XOR) [95, 185], in which the threshold algorithm outperforms \(\text{QAOA}_{1}\). On Max-3XOR, while \(\text{QAOA}_{1}\) beats the threshold algorithm for some values of \(D\leq 27\), as \(D\) increases, the threshold algorithm performs better. This observation is also reflected in their asymptotic performances as \(D\rightarrow\infty\). Importantly, they found that when \(k>4\), \(\text{QAOA}_{1}\) starts outperforming the threshold algorithm in the large-degree limit, demonstrating a quantum advantage. However, this does not rule out the possibility that a different local tensor algorithm [95] will match or outperform QAOA at larger \(k\). More recently, efforts have been made to analyze the performance of higher-depth QAOA on Max-\(k\)XOR. For instance, Basso et al. [163] showed that the iterative formula they developed to evaluate the performance of QAOA at any \(p\) on MaxCut can be readily generalized to Max-\(k\)XOR instances on large-girth regular hypergraphs.
They effectively employed this formula to identify optimal QAOA parameters and assess performance for \(3\leq k\leq 6\) and \(1\leq p\leq 14\), showing that QAOA is capable of producing solutions for Max-\(k\)XOR problems that approach the true optima more closely as \(p\) increases. Building upon this progress, a subsequent study by Basso et al. [176] further extended the established equivalence between MaxCut and the SK model (a 2-spin model) [182]. They generalized this equivalence to encompass the relationship between Max-\(k\)XOR and the fully connected \(k\)-spin model, which characterizes ensembles of combinatorial optimization problems with random all-to-all \(k\)-body couplings. They showed that QAOA's performance at any constant depth \(p\) on the \(k\)-spin model matches its performance on Max-\(k\)XOR asymptotically on random sparse Erdős-Rényi hypergraphs and large-girth \(D\)-regular hypergraphs in the \(D\rightarrow\infty\) limit. Given such equivalence, it would be interesting to see whether the state-of-the-art classical algorithm for the \(k\)-spin models, the AMP algorithm [170, 171], can outperform QAOA at larger \(k\), primarily when \(k\geq 4\) is even, where the OGP is known to exist and AMP is bounded away from optimality [186]. In addition to optimization problems, theoretical analysis has been conducted to evaluate QAOA's performance on a Boolean satisfaction problem, random \(k\)-SAT. Like Max-\(k\)SAT, random \(k\)-SAT comprises a set of clauses, each being a randomly generated Boolean formula involving \(k\) variables. However, the goal in \(k\)-SAT is to find an assignment that satisfies all the clauses exactly. Therefore, the performance of QAOA is measured by the success probability with which the algorithm outputs a satisfying assignment. Since QAOA is run repeatedly until satisfiability is reached, a success probability \(p_{\text{succ}}\) translates to an expected running time of \(1/p_{\text{succ}}\). Based on a technique for estimating "generalized multinomial sums", Boulebnane and Montanaro [187] derived analytical bounds on the average success probability of QAOA on random \(k\)-SAT in the limit of infinite problem size, which hold for fixed, sufficiently small QAOA parameters and when \(k\) is a power of \(2\). In particular, the authors showcased the performance of QAOA on random \(8\)-SAT instances. They computed the analytical bounds for QAOA with up to \(p=10\). Moreover, numerical calculations of the median running time and of the inverse of the success probability were performed for up to \(p=60\). Both analytical and numerical results suggested that QAOA with a depth \(p\approx 14\) would match the performance of a random-local-search-based classical algorithm, WalkSATlm [188], with an estimated running time \(\lesssim 2^{0.33n}\), where \(n\) is the number of problem variables. QAOA is expected to outperform WalkSATlm with larger numbers of layers. However, the extent of the advantage is still unclear because the numerical estimates of the median running time for small instances (\(12\leq n\leq 20\)) start to deviate significantly from the theoretical and numerical results based on the success probability as \(p\) gets large.

#### 3.4.2 Performance limitations

In order to understand the full capability of QAOA and to determine the circumstances in which it surpasses classical algorithms, it is equally crucial to explore its performance limitations as it is to understand its solution guarantees. Bravyi et al.
[83] first realized a limitation of fixed-depth QAOA in solving the MaxCut problem on bipartite \(D\)-regular graphs. For graphs with \(n\) vertices, they proved that the approximation ratio of \(\text{QAOA}_{p}\) is at most \(\frac{5}{6}+\frac{\sqrt{D-1}}{3D}\sim\frac{5}{6}+O(\frac{1}{\sqrt{D}})\) for any constant \(D\geq 3\), as long as \(p<(\frac{1}{3}\log_{2}n-4)/(D+1)\). As the degree \(D\) gets large, this bound approaches \(\frac{5}{6}\approx 0.833\), which falls behind the GW bound \(0.8786\), indicating that QAOA, in this case, cannot outperform the best classical algorithm. As pointed out in [173], such obstruction in the \(O(\log n)\)-depth regime is related to the No Low-energy Trivial States (NLTS) conjecture [189]. The NLTS conjecture suggests the existence of families of local Hamiltonians whose low-energy states are all nontrivial, where a "trivial" state is one that can be obtained by evolving a product state \(|S\rangle\) with a low-depth quantum circuit \(U\). Bravyi et al. [83] showed that when the local Hamiltonians correspond to the MaxCut instances, both the initial state \(|S\rangle=|+\rangle^{\otimes n}\) and the associated QAOA circuit \(U\) possess a \(\mathbb{Z}_{2}\) symmetry, i.e., invariance under a global spin flip. This symmetry property gives rise to the NLTS property, i.e., nontrivial low-energy states, and leads to the observed obstruction.

Marwaha and Hadfield [173] later generalized this obstruction from Max-2XOR (MaxCut) to Max-3XOR instances. Although the QAOA circuits for Max-\(k\)XOR at odd \(k\) lack the global \(\mathbb{Z}_{2}\) symmetry, the authors identified partial \(\mathbb{Z}_{2}\) symmetry within these instances; that is, the corresponding unitary has \(\mathbb{Z}_{2}\) spin-flip symmetry only with respect to some large, fixed subset of vertices \(V_{+}\subsetneq V\). Notably, such partial symmetry leads to a constant-fraction obstruction for QAOA at sub-logarithmic depth in solving Max-3XOR. The study established an upper bound on the satisfying fraction of \(0.99+O(\frac{1}{\sqrt{D}})\) in the large-\(D\) limit. However, the authors did not focus on optimizing the constant in this analysis. Furthermore, it was conjectured that such constant-fraction obstruction exists for some instances of Max-\(k\)XOR at every \(k\) when QAOA operates at sub-logarithmic depths. Numerically, they evaluated the performance of QAOA\({}_{1}\) and the local threshold algorithm on triangle-free, large-\(D\) Max-\(k\)XOR instances for \(k\) up to \(200\). They found that both algorithms are bounded away from the optimum, a satisfying fraction of \(\frac{1}{2}+\frac{P(k)}{2}\sqrt{\frac{k}{D}}\), with \(P(k)\) being the generalized Parisi value for Max-\(k\)XOR.

Another well-known limitation of QAOA stems from the locality constraint. For MaxCut on \(D\)-regular graphs of girth \(>5\), Barak and Marwaha [162] proved that every \(1\)-local algorithm, quantum or classical, can only produce a maximum cut of at most \(\frac{1}{2}+\frac{c}{\sqrt{D}}\), where \(c=1/\sqrt{2}\approx 0.7071\). This result falls short of the true optimum, which is \(\frac{1}{2}+\frac{P_{*}}{\sqrt{D}}\) with \(P_{*}\approx 0.7632\) [182]. More generally, when the algorithm is run at a fixed depth \(p\), the measurement outcomes of a particular qubit depend solely on the qubits within its \(p\)-neighborhood, that is, qubits that are within distance \(p\) of the given qubit on the graph.
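This \(p\)-neighborhood is easy to make concrete. The following minimal sketch (our illustration, not code from the cited works; it assumes the networkx library) extracts the subgraph that a depth-\(p\) QAOA effectively "sees" around a given qubit. On large random \(D\)-regular graphs these patches are small and tree-like for small \(p\), which is precisely the structure exploited by the locality-based bounds discussed next.

```python
import networkx as nx

def p_neighborhood(graph: nx.Graph, vertex, p: int) -> nx.Graph:
    """Return the subgraph of all vertices within graph distance p of `vertex`."""
    return nx.ego_graph(graph, vertex, radius=p)

# On a large random 3-regular graph, a low-depth QAOA only "sees" a tiny patch.
G = nx.random_regular_graph(3, 1000, seed=0)
for p in (1, 2, 3):
    patch = p_neighborhood(G, 0, p)
    print(f"p={p}: QAOA sees {patch.number_of_nodes()} of {G.number_of_nodes()} vertices")
```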
Consequently, if these neighborhoods are small, QAOA does not "see" the whole graph and, in some cases, becomes limited in its algorithmic performance. In light of this, Farhi et al. [165] refined the bound in [83] by utilizing the property that local neighborhoods of random \(D\)-regular graphs resemble trees. It was proved that when QAOA does not see the whole graph, i.e., when \(p\lesssim\epsilon\log n\) with \(\epsilon\) being a small constant, the upper bound on bipartite random \(D\)-regular graphs is \(\frac{1}{2}+O(\frac{1}{\sqrt{D}})\).

In another study by Farhi et al. [174], the focus was shifted to the Maximum Independent Set (MIS) problem on sparse random graphs with average degree \(D\). They proved that in the low-depth regime \(p\lesssim\epsilon\log n\), QAOA fails to produce an independent set larger than \(0.854\) times the optimal as \(D\to\infty\). The proof uses the OGP exhibited by the large independent sets of random graphs with bounded average degree [190]. The OGP has been identified as an obstruction for various classical algorithms, ranging from local algorithms [190, 191, 192] to AMP algorithms [186]. This result establishes that the OGP also obstructs QAOA, limiting its performance in generating large independent sets. Moreover, in the "worst" case, where QAOA is applied to the MIS on bipartite random \(D\)-regular graphs, the approximation ratio approaches \(0\) at large \(D\) [165].

Chou et al. [175] generalized this obstruction-by-OGP result on the MIS to random sparse instances of CSPs. In particular, they focused on the diluted \(k\)-spin glass model, where the interactions are sampled from a random sparse \(k\)-uniform hypergraph. They demonstrated that the OGP exhibited by diluted \(k\)-spin glasses, for every even \(k\geq 4\) as previously shown by Chen et al. [192], poses an obstacle for QAOA at sub-logarithmic depths. Building on the relation to the mean-field \(k\)-spin glasses, they further extended this obstruction to the signed random Max-\(k\)XOR instances for even \(k\geq 4\), where each variable in every clause carries a random sign.

What happens when QAOA can see the whole graph? Basso et al. [176] tried to address this question by studying the fully connected \(k\)-spin models, where the locality-based arguments used in previous studies of sparse instances [174, 165, 175] no longer apply. Exploiting the equivalence of the performance of QAOA on dense and sparse graphs in the asymptotic limit, the authors established that any constant-\(p\) QAOA is still bounded away from optimality on dense \(k\)-spin models for any even \(k\geq 4\) due to the obstruction by the OGP. This finding reveals a hardness of approximation for QAOA in a new regime where the whole graph is seen. However, it is essential to note that potential quantum advantage is still possible in these problems, particularly when QAOA surpasses sub-logarithmic depths, as classical AMP algorithms have also exhibited suboptimality [186].

#### 3.4.3 Empirical evidence

While theoretical analysis of QAOA has shown its potential to outperform classical optimization algorithms in certain problems (Sections 3.4.1 and 3.4.2), it is equally important to examine its empirical performance to assess its practical utility. Empirical investigations of QAOA's performance become more relevant at higher depths, as obtaining rigorous theoretical bounds for a wide range of problem instances becomes challenging.
Moreover, these empirical studies can provide insights into areas of the quantum algorithm that could be improved to enhance performance.

**Simulator-based studies.** Crooks [124] was among the first to conduct an analysis comparing the performance of QAOA on a quantum circuit simulator with a classical solver to look for a quantum advantage. The empirical study showed that QAOA optimized via stochastic gradient descent could achieve an approximation ratio exceeding that of the classical GW algorithm, even with a modest circuit depth. By considering \(10\)-node random graphs generated from the Erdos-Renyi configuration with a \(50\%\) edge probability, the author measured the average approximation ratio of QAOA for up to \(p=32\) on the MaxCut problem on these graphs. Following the theory [12], the average approximation ratio obtained by QAOA monotonically improves as the number of layers increases; at \(p=6\), QAOA starts to outperform the classical GW algorithm. Furthermore, the author also compared the performance of both algorithms at various problem sizes, ranging from \(n=6\) to \(n=17\), where \(n\) is the number of nodes or vertices. It was observed that both algorithms maintained their performance even as the graph size increased, suggesting that QAOA's solution quality, at a fixed circuit depth, remains insensitive to problem size. Notably, for \(p\geq 8\), the quantum algorithm consistently outperforms the GW algorithm across different problem sizes, demonstrating a potential quantum advantage.

Lotshaw et al. [128] later conducted a more comprehensive study by investigating an exhaustive set of MaxCut instances on all connected non-isomorphic graphs with \(n\leq 9\) vertices. However, the depth of QAOA was limited to \(p\leq 3\). In addition to the approximation ratio, they introduced another measure of performance: the probability of obtaining the optimal solution. As the QAOA depth and the graph size increased, they observed a convergence of the approximation ratio across different graph structures, resulting in a narrower distribution. Moreover, it was found that on most graphs with \(n\leq 9\), QAOA exceeds the worst-case GW bound even by \(p=3\), consolidating the viability of modest-depth QAOA to outperform the classical algorithm in many instances. Interestingly, while the average probability of obtaining an optimal solution increased with larger \(p\), the distributions of this probability widened with increasing \(p\). This contrasts with the distributions of the approximation ratio, indicating that the probability of success is more sensitive to the graph structure.

On the other hand, as the QAOA depth increases, a significant challenge arises from the growing complexity of the algorithm, particularly the amount of entanglement in the QAOA circuit. A recent study on the \(p=1\) QAOA indicated that excessive entanglement might hinder the algorithm's performance on problems such as the Hamming ramp and bush of implications [155]. In another study by Chen et al. [193], the authors pointed out that removing excess entanglement introduced by intermediate layers of the QAOA circuit might yield improved outcomes on the MaxCut problem. QAOA circuits with large depths can also create an entanglement barrier between the initial and final states, complicating both their classical simulation and subsequent benchmarks [143, 194]. To this end, Sreedhar et al.
[195] performed numerical simulations of QAOA that restrict the allowed entanglement in the circuit by using Matrix Product States (MPS) with reduced bond dimensions. The bond dimension acts as a parameter bounding the system entanglement. Utilizing layer-wise optimization, they could extend the analysis to high depths, up to \(p=100\). They also employed a deterministic method to sample only one final bitstring of the algorithm. Under such a restricted simulation, QAOA still provided successful results for small bond dimensions (comparable to the system size), for \(p\approx 30\) and up to \(60\) qubits. Based on their findings, they concluded that entanglement plays a minor role in solving the MaxCut and Exact Cover 3 problems studied: provided the depth \(p\) of the algorithm is sufficiently high, QAOA can solve the optimization problem exactly or approximately even when the bond dimension is small. Therefore, even if high-depth QAOAs have an entanglement barrier that inhibits the classical simulability of the algorithm [143, 193], such entanglement might have little impact on QAOA's ability to find optimal solutions. However, the interplay between entanglement and circuit depth, and their impact on QAOA performance in other problem instances, remains an open research problem.

Besides its depth, parameter initialization is also crucial for achieving an advantage with QAOA, as reported by Zhou et al. [15]. They proposed heuristic strategies to optimally initialize QAOA's parameters, as discussed in Section 3.2.1. They discovered that by employing these strategies, quasi-optimal parameters for QAOA at depth \(p\) can be determined in \(O[\text{poly}(p)]\) time. In contrast, random initialization would necessitate \(2^{O(p)}\) optimization runs to attain comparable performance. The researchers evaluated the performance of QAOA using these optimized parameter values for up to \(p=50\). They observed that, on average, the approximation ratio obtained by QAOA improved exponentially (or stretched-exponentially) when applied to random graphs. They also compared the performance of QAOA\({}_{3}\) initialized with an interpolation-based heuristic strategy, INTERP, to that of quantum annealing, showing that QAOA was able to converge to a better solution even on difficult instances where adiabatic quantum annealing failed due to small spectral gaps. This was attributed to QAOA's ability to use non-adiabatic mechanisms to overcome the challenges of vanishing spectral gaps.

In addition, Akshay et al. [110] showed that the density of the graph also plays a vital role in QAOA's final result. Given a constraint satisfaction problem with \(n\) variables and \(m\) clauses (constraints), its density \(\alpha_{d}\) is defined as the clause-to-variable ratio, i.e., \(\alpha_{d}=m/n\). For any fixed ansatz, there seem to be high-density instances that are inaccessible to the quantum algorithm, thus limiting its performance. From empirical investigation, it turned out that higher-depth versions of QAOA are necessary for achieving satisfactory results for densities \(\alpha_{d}>1\).

Subsequently, Herrman et al. [196] extensively evaluated the impact of various graph characteristics on the performance of QAOA with up to \(p=3\) for the MaxCut problem. The authors investigated all connected non-isomorphic graphs with at most eight vertices and identified some interesting predictors of QAOA's success, such as graph symmetries, odd cycles, and density.
Despite the limited scope of the investigation, the authors found that graphs without odd cycles have a \(12\%\) higher mean probability of obtaining an optimal solution than those with odd cycles. Moreover, the number of edges, clique size, and small odd cycles are positively correlated with QAOA's success, as are bipartite, Eulerian, and distance-regular graph structures. On the other hand, the diameter of the graph is negatively correlated with the expected cost \(F_{p}(\mathbf{\gamma},\mathbf{\beta})\) and, therefore, the approximation ratio. These correlations between graph structure and QAOA's performance can be used to identify problem instances where the quantum algorithm is likely to exhibit a quantum advantage.

When working with QAOA, as well as any other quantum algorithm, an important question to consider is when it is advantageous to use it over classical algorithms: in an algorithm selection scenario, it is crucial to thoroughly evaluate factors such as problem size, structure, and available quantum resources before deciding whether to use QAOA or a classical solver. Lykov et al. [197] aimed to identify under which conditions QAOA could achieve quantum advantage over classical algorithms, in terms of both quality of solution and runtime performance. Specifically, inspired by parameter transferability, they adopted a fixed-angle approach similar to the ones proposed in Refs. [115, 117, 125], such that with just one round of circuit sampling, one could speed up QAOA while maintaining good performance compared to slower conventional approaches. However, their analysis indicated that multi-shot circuit sampling was necessary after all to match the classical solution quality. They observed that classical heuristic solvers were capable of producing high-quality approximate solutions in linear time complexity, which is very difficult to beat. According to their results, the main obstacle to QAOA's advantage over classical solvers is the exponential sampling time required for large graph sizes. Therefore, even if an experiment might demonstrate an advantage for intermediate values of \(n\) and \(p\), such advantage would be lost with larger problems, independently of the rate of quantum circuit sampling. They suggested that a QAOA circuit must be implemented with depth \(p>11\) to match the performance of classical algorithms for large graph sizes and hope to see a quantum advantage.

In another study, Moussa et al. [198] used ML techniques to detect MaxCut problem instances where QAOA is most likely to obtain a quantum advantage over the classical GW algorithm in terms of the approximation ratio. The proposed ML model achieved up to 96% cross-validated accuracy in its predictions. The capacity to predict scenarios where quantum solutions are likely to surpass classical approaches allows for strategic allocation of computational resources, thus establishing a methodological foundation for quantum advantage. It was also shown that QAOA outperformed GW on most instances of 4-regular graphs up to 24 nodes at depth \(p=10\). This result partially corroborates the assertions made by Crooks [124] a few years prior, albeit under different experimental conditions. While such a finding holds for the specific parameters and scenarios studied, the possibility of divergent outcomes under different conditions or with varying parameters cannot be discounted. This emphasizes the inherent complexity in analyzing QAOA's performance, which is susceptible to variations in multiple factors.
Nevertheless, through ML and explainability methods, Moussa et al. [198] were able to demonstrate that spectral properties of the graph and basic graph density were the most influential graph features affecting the approximation ratio, deepening what was found in [15]. Similarly, Deshpande and Melnikov [132] employed GNNs as a tool to choose whether to use QAOA or a classical solver for MaxCut instances based on their approximation ratios. Specifically, they used a GNN trained on [199] to predict QAOA's performance, with a relative error of less than 19.7%. This could allow for a comparison between the performance of QAOA and that of a classical solver, enabling the choice of the most appropriate solver for a particular optimization problem.

**Quantum hardware-based evaluations.** In order to outperform classical solvers, QAOA must demonstrate optimal performance on real quantum hardware, as opposed to just noiseless simulations. Most of the works in the literature proposed a QAOA implementation on a superconducting hardware platform, the prevalent technology for building quantum computers. One of the earliest experimental demonstrations of QAOA on real quantum hardware was conducted by Otterbach et al. [65], who translated a clustering task into a MaxCut problem that QAOA\({}_{1}\) could solve. The study implemented the quantum algorithm on a Rigetti 19Q quantum processor of 19 functional superconducting transmon qubits. Bayesian optimization was employed to optimize QAOA's parameters. The results showed that the algorithm reached the optimal solution in significantly fewer steps than expected by drawing cluster assignments uniformly at random. Although this study provided an early experimental demonstration of the effectiveness of QAOA, it did not draw a comparison with leading classical solvers or other methods.

QAOA's performance was benchmarked against quantum annealing by Willsch et al. [200] on a set of weighted MaxCut and 2-SAT problems with up to 16 and 18 variables, respectively, executed on the IBM Q 16 Melbourne quantum computer and the D-Wave 2000Q quantum annealer. Three different measures were used to evaluate the algorithm's performance: the probability of finding the ground state, the energy expectation value, and a ratio closely related to the approximation ratio. The IBM Q processor produced poor results when solving a nontrivial 2-SAT problem with eight variables, even though the \(p=1\) QAOA on the simulator had yielded good results for the same problem. This suggests that the limitations and noise of the QPU may significantly impact QAOA's performance. The study also found that for the set of problem instances considered, QAOA with \(p=1,\ldots,5\) could not compete with quantum annealing when no minor embedding was necessary: the D-Wave machine was even able to outperform QAOA executed on a simulator. Interestingly, a correlation was also observed between instances hard for quantum annealing and those hard for QAOA.

Bengtsson et al. [201] implemented QAOA on their proprietary quantum hardware platform, consisting of two superconducting transmon qubits and one parametrically modulated coupler. They applied QAOA up to a depth of \(p=2\) and solved small instances of the NP-complete Exact-Cover problem with a success probability of 96.6%. This high success probability was achieved regardless of the optimizer used, thanks to the high gate fidelities of their quantum hardware.
The authors benchmarked their quantum hardware's measured state probabilities and cost functions against those of an ideal quantum computer without any noise. They found excellent agreement between the two, indicating low coherent and incoherent error rates. However, the authors noted that even with high gate fidelities, high algorithmic fidelity is not guaranteed. They predicted that increasing the depth of QAOA up to \(p=3\) on their hardware would not yield a higher success probability, since it would result in a longer circuit and hence in a reduced total fidelity, which they estimated to be \(94.2\%\). Overall, their results demonstrated promising progress toward the practical implementation of QAOA for solving optimization problems on small quantum devices. However, the challenge of scaling up quantum hardware to solve larger and more complex problems still remains a major issue.

Harrigan et al. [202] studied three families of graph problems with QAOA up to \(p=5\) on a superconducting quantum processor with 23 physical qubits. The problems included hardware grid problems, whose topology matches the device's, MaxCut on 3-regular graphs, and the SK model; the latter two families required compilation to be implemented on the hardware. The study demonstrated a robust performance of QAOA on hardware grid problems, even for the largest instances with 23 qubits. Notably, the effect of noise on the approximation ratio was observed to be essentially independent of the problem size \(n\), in agreement with the simulation results in [124]. Moreover, the performance increased with circuit depth up to \(p=3\); however, the noise degraded the performance for \(p=4\) and \(p=5\). This is in accordance with an earlier experimental result provided by Alam et al. [203], who pointed out that noise in real hardware limits the optimal number of layers for any QAOA instance. On the other hand, it is important to note that most real-world instances of combinatorial optimization problems cannot be mapped to hardware-native topologies. Instead, they require compilation, which involves routing qubits with swap networks. This additional overhead can significantly impact the performance of the algorithm. For the two compiled families of problems investigated, i.e., MaxCut on 3-regular graphs and the SK model, QAOA's performance decreased with problem size but still provided an advantage over random guessing. This study highlights the challenge of using near-term quantum computers to optimize problems on graphs that differ from hardware connectivity.

Some experiments on trapped-ion systems were conducted recently. For example, Pagano et al. [204] applied QAOA to approximate the ground-state energy of both the quantum and classical Ising model and investigated the algorithm's performance on a trapped-ion quantum computer with up to 40 qubits. Specifically, they investigated QAOA's performance as a function of the number of qubits, ranging from 20 to 40. They observed that the performance does not degrade significantly as the system size increases, consistent with previous findings [124, 202]. The study also indicated that increasing the number of QAOA layers from \(p=1\) to \(p=2\) did not significantly improve the performance of QAOA due to the limitations imposed by the hardware, such as decoherence and bit-flip errors.
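The performance measures that recur throughout these hardware studies are straightforward to estimate from raw measurement data. The sketch below is our illustration (not code from the cited experiments); the `energy` callback is a generic stand-in for the problem-specific cost function, and it computes the three quantities used in [200]: ground-state probability, energy expectation, and a normalized, approximation-ratio-like quantity, from a dictionary of measured bitstring counts.

```python
from typing import Callable, Dict

def performance_measures(counts: Dict[str, int],
                         energy: Callable[[str], float],
                         e_min: float, e_max: float):
    """Estimate ground-state probability, mean energy, and a normalized
    ratio (1 at the optimum, 0 at the worst state) from bitstring counts."""
    shots = sum(counts.values())
    e_avg = sum(energy(b) * c for b, c in counts.items()) / shots
    # Exact comparison is fine for integer-valued costs such as MaxCut.
    p_gs = sum(c for b, c in counts.items() if energy(b) == e_min) / shots
    ratio = (e_max - e_avg) / (e_max - e_min)
    return p_gs, e_avg, ratio
```

For the small instances considered in these studies, the extremal energies \(e_{\min}\) and \(e_{\max}\) can be obtained by brute force.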
Further extensive benchmarks of QAOA were conducted by Baker and Radha [21] on various QPUs from IBM (5 different devices), Rigetti (Aspen-10 with 32 qubits), and IonQ (11 qubits). The study examined QAOA performance up to \(p=5\) for portfolio optimization, where the quality of solutions was measured using the Wasserstein distance. The results showed that the solution quality peaked at \(p=5\) for most QPUs with \(n=2\) qubits and at \(p=4\) for the trapped-ion QPU with \(n=3\) qubits. The study also observed an increase in performance with \(p\) using variants of the more general QAOAnsatz at \(p=2\) for \(n=2\) and \(n=3\). Interestingly, among the IBM QPUs, the authors found that a QPU with a lower quantum volume produced higher-quality solutions than QPUs with a higher quantum volume and the same qubit topology. This highlights the need for application-specific benchmarking, as general benchmarking metrics may not predict application performance. Additionally, the study observed that the quality of solutions produced by all QPUs varied at a level much larger than the stochastic noise from the finite number of shots, suggesting that variability should be regarded as a QPU performance benchmark for given applications.

Finally, QAOA performance on photonic quantum hardware was also demonstrated. Qiang et al. [205] realized a fully programmable two-qubit quantum processor with large-scale silicon photonic circuits based on the linear-combination protocol. The authors programmed the device to implement 98 different two-qubit unitary operations with an average quantum process fidelity of \(93.2\pm 4.5\%\), a two-qubit QAOA with \(p=1\), and efficient simulation of Szegedy directed quantum walks. They applied QAOA to three examples of CSPs, with classical fidelities between experiment and theory of \(99.88\pm 0.10\%\), \(96.98\pm 0.56\%\), and \(99.48\pm 0.27\%\), respectively.

### Noise and Errors Considerations

As mentioned in Section 2.4.2, the QAOA is a quantum algorithm inspired by adiabatic quantum computing [67, 68, 69] that leverages layering to achieve the desired solution state. Theoretically, its approximation ratio (\(\alpha\)) to the true solution increases as the number of alternating cost and mixer layers (\(p\)) increases. However, in reality, the performance gain from an increasing number of layers may be seriously challenged by the noise and errors accumulated in a deeper circuit on NISQ-era devices. Besides circuit depth, increasing system sizes (e.g., for larger graphs) and higher hardware noise rates can also contribute to an increase in the total amount of noise accumulated and hinder the algorithm's effectiveness in practical settings.

Recent findings have revealed that noise and errors pose substantial challenges to the scalability and performance of VQAs such as QAOA. Both local and correlated noise can adversely affect QAOA, as discussed in Section 3.5.1. Consequently, while QAOA may demonstrate a potential quantum advantage in certain noiseless settings, its performance on near-term hardware can be severely compromised, as noted in Section 3.5.2. However, there is hope for overcoming some of these challenges, as various error mitigation techniques have been proposed to improve QAOA's practicality in the near future. Section 3.5.3 explores some of these techniques in more detail.

#### 3.5.1 Characterizing the sources of noise

In general, noise can be classified into two types: local (uncorrelated) and correlated.
Effects of local, uncorrelated noise and errors on different VQAs have been under active investigation in recent years [206, 207, 208, 209, 1]. For example, Kungurtsev et al. [211] highlight that, due to the noise inherent in near-term quantum devices, the evaluations of the objective function are systematically biased, necessitating a different perspective on the convergence analysis of the classical optimization procedures. In the case of QAOA, typical local quantum noise channels, including dephasing, bit-flip, and depolarizing channels, were considered. Theoretical studies showed that the QAOA performance, characterized by the output state fidelity and the approximation ratio, degrades as a power law with the noise strength, with the power being proportional to the system size [212, 213]. Based on these analytical results, one can model the trade-off: a deeper QAOA circuit with more layers gives a better approximation ratio per se, but at the expense of greater performance degradation due to noise. In other words, an optimal number of layers can be determined for QAOA at any given noise rate.

Moreover, the scalability of QAOA can also be affected by noise. A noisy implementation of QAOA would be effective if it could produce a measurement result from the intended, noiseless quantum state distribution. Using a local noise model, Lotshaw et al. [214] showed that the number of measurements needed to achieve the above goal increases exponentially with the gate error rates, which translates to exponential time complexity, assuming that the number of measurements is proportional to the time taken to reach a solution. Such a measurement scaling, therefore, significantly limits the scalability of QAOA implementations on near-term devices. Another factor that affects QAOA's scalability is intimately tied to the variational nature of the QAOA circuit. Wang et al. [152] showed that even local noise, which includes depolarizing noise and certain kinds of Pauli noise, can induce barren plateaus, where the gradient of the cost function landscape vanishes exponentially with increasing circuit depth. Such noise-induced barren plateaus (NIBPs) are conceptually distinct from the noise-free barren plateaus first introduced by McClean et al. [71] (Section 2.4.3), as the gradient vanishes with increasing problem sizes at every point on the cost function landscape, rather than probabilistically. Furthermore, they cannot be mitigated with strategies that are used to avoid the noise-free ones, such as layerwise training [134, 215]. NIBPs not only pose a challenge to parameter optimization in QAOA even with gradient-free optimizers, inhibiting its near-term scalability, but could also destroy any potential quantum speedup.

Conversely, local noise sources may not always be detrimental and, in some cases, may aid the algorithm's optimization process. One example was shown by Campos et al. [216], who applied the layerwise training technique to QAOA, where instead of training all the parameters in a variational circuit at once, layers of the circuit are added to the training routine successively. They observed that training saturation occurs, that is, the fidelity between the output and the target states stops improving past a certain number of layers. Interestingly, the authors also showed that local coherent dephasing noise could remove such training saturation, recovering the effectiveness of layerwise learning.
Further investigation is needed to determine if other noise sources beyond this simple noise model can play a similar role.

On the other hand, correlated errors, such as crosstalk [217, 218], non-Markovian \(1/f\) noise [219], and interactions with environmental fluctuators [220], are shown to be relevant and prevalent in NISQ devices. The implications of crosstalk noise in measurements for QAOA were studied by Maciejewski et al. [221]. The authors introduced a correlated measurement noise model that can be efficiently characterized by Diagonal Detector Overlapping Tomography (DDOT) performed on IBM's and Rigetti's superconducting quantum processors. It allows estimating \(k\)-local crosstalk noise in an \(N\)-qubit device using \(\mathcal{O}(k2^{k}\log N)\) quantum circuits. Furthermore, through a numerical simulation on random MAX-2-SAT instances and the SK model on an 8-qubit system, the authors demonstrated that while the correlated readout noise has a mild effect on the optimization, it could still alter the energy landscape of QAOA, leaving the solution in sub-optimal regions. Another correlated error, denoted as the precision error, resides in the misspecification of QAOA's parameters due to imperfect control and is more prominent as the QAOA order, i.e., the number of alternating layers, grows. Quiroz et al. [222] found analytically that precision errors lead to an exponential decrease in the success probability of QAOA with increasing QAOA order and noise strength, which was also supported by the numerical results of the QAOA variants for Grover's search and the 1D transverse-field Ising model. Another study, conducted by Karamlou et al. [28] on IBM's ibmq_boeblingen quantum computer, also revealed that a coherent error induced by the residual ZZ-coupling between the transmon qubits is the major bottleneck that limits the performance of QAOA on near-term superconducting quantum processors.

More recently, the effects of both temporally and spatially correlated noise on the performance of QAOA were studied by Kattemolle and Burkard [223], based on a toy error model in which every qubit interacts with a single binary fluctuator that can travel through space or time. Curiously, it was observed in numerical simulations with the SK model on 6 qubits that the QAOA performance improves as the noise correlation strength increases at fixed local error rates. This suggests a certain degree of noise resilience of QAOA and that the correlation by itself may not have a negative impact on variational algorithms like QAOA. However, further studies are required to test the generalizability of this result.

#### 3.5.2 Quantum advantage in the presence of noise

As mentioned in Section 3.4, QAOA in the noiseless limit is potentially able to outperform the best classical algorithm in solving certain types of optimization problems, thus achieving a quantum advantage. However, as alluded to earlier, noise in the quantum hardware and environment poses great challenges to its performance. It is, therefore, unclear if such quantum advantage could still be retained in more realistic settings. Two approaches have been taken to address the above question: theoretical analysis and empirical studies using real quantum hardware or noisy simulators. On the theoretical front, recent studies concluded that substantial quantum advantages are unlikely for QAOA at the noise levels of current devices, especially for large and dense problems.
Based on relative entropy inequalities, Stilck Franca and Garcia-Patron [224] provided theoretical bounds on various quantities, such as the output energy of noisy quantum circuits and the maximum depth a quantum circuit can have before efficient classical algorithms outperform it. Applying these bounds to QAOA on the SK model and random 3-regular graphs, it was found that to match the performance of classical devices, the error rates of current quantum computers would need to improve by a couple of orders of magnitude, reaching the level of the expected fault-tolerance threshold. Building on these results, Weidenfeller et al. [225] performed a detailed analysis of scaling QAOA on superconducting qubit-based devices. They concluded that even though using appropriate SWAP strategies may help alleviate the conundrum [226], for dense problems, the required gate error rates would still lie far below the fault-tolerance threshold, based on the entropic argument. Similarly, De Palma et al. [227] used techniques from quantum optimal transport and considered simple noise models such as one-qubit depolarizing noise with probability \(P\). They proved that with noisy quantum circuits at depths \(L\sim\mathcal{O}(P^{-1})\), the probability of observing a single string with better energy than that output by an efficient classical algorithm is exponentially small in the number of qubits, thus providing a stronger statement than the previous result [224], which held only for the expectation of the output. Moreover, Gonzalez-Garcia et al. [228] adopted a different approach, i.e., a random circuit model, to study the propagation of arbitrary single-qubit errors in variational quantum circuits and reached a similar conclusion. Applying such a model to the QAOA circuit, they estimated that the required error rate for a possible quantum advantage scales as \(P\sim 1/(nD)\), with \(n\) being the number of qubits and \(D\) the circuit depth, which translates to a value lower than \(10^{-6}\) (assuming \(n=1000\) for a potential quantum advantage to start becoming practically useful [151]) with a two-dimensional random circuit architecture.

Such a conclusion is supported by various empirical studies on noisy simulators and different quantum hardware platforms, demonstrating noise's detrimental effects on QAOA's performance. For example, Alam et al. [203] empirically demonstrated, both in simulation and on IBM's quantum computer IBM-MQX4, that QAOA's performance is noise-sensitive and that its sensitivity is higher for high-depth QAOA. Their results thus suggest that it is impractical to gain better QAOA performance in realistic settings through unlimited layering and that the noise in the target hardware limits the optimal number of layers for any QAOA instance. Another piece of evidence comes from the work of Harrigan et al. [202], where the authors studied various families of problems with QAOA on Google's superconducting quantum processor. The problems considered include the hardware grid problems, whose topology matches that of the device, and those that are not native to the hardware and require compilation to be implemented, such as MaxCut on 3-regular graphs and the SK model with all-to-all connectivity. On the one hand, with the hardware grid problems, QAOA showed robust performance measured in terms of the approximation ratio, where the effect of noise showed minimal dependence on the number of qubits \(n\).
Furthermore, by averaging over 130 hardware grid problems with \(n>10\), it was reported that QAOA reached its maximum performance at \(p=3\) on the current hardware. On the other hand, for non-hardware-native problems, evident performance degradation was observed as the number of qubits increased. It is worth noting that a similar \(n\)-independent performance of the \(p=1\) QAOA on a hardware-native Ising model was also observed on a trapped-ion quantum computer with up to 40 qubits [204]. However, at \(p=2\), QAOA with 20 qubits showed a similar performance to the \(p=1\) circuit, suggesting that decoherence and errors accumulated during longer evolution times already balanced out the 2% expected performance gain of one additional optimization layer. Overall, these results demonstrate the negative impact of noise and the difficulty in scaling and achieving any quantum advantage from the near-term implementations of QAOA, particularly for non-hardware-native problems.

Heuristically, the challenge in achieving quantum advantages with QAOA can be attributed to the following facts [229]. On the one hand, in order to achieve a quantum advantage, the depth of QAOA in its original form should grow at least logarithmically with the problem size [174, 165, 83]. At the same time, implementing QAOA on sparsely connected hardware would require routing qubits via SWAP networks, which generally incur a linear cost [224, 202]. These two observations combined lead to the circuit depth scaling as \(O(n\log n)\) with the number of variables \(n\). On the other hand, QAOA's performance, measured in terms of, e.g., output state fidelity or success probability, could suffer an exponential decay with a growing system size in the presence of a constant error rate [213, 212]. Therefore, it is unlikely that QAOA circuits with more than logarithmic depth could lead to any quantum advantage without significantly improved quantum hardware or error correction.

#### 3.5.3 Noise mitigation techniques

In the meantime, many mitigation strategies have also been proposed to reduce the negative impact of noise on QAOA and other VQAs. Gate count is a primary contributor to noise, and different hardware will have different connectivity that will impact the depth of the quantum circuit, in some instances dramatically. This is because the circuit must be modified to align with the qubit coupling map of the device, and the gates must be translated into the set of native gates used by the particular device. This process is often called compiling. Therefore, a general strategy to mitigate the effects of errors on current quantum devices is to reduce the number of gates in the compiled quantum circuits. This can be accomplished by, for example, optimizing the SWAP networks, which are needed for running QAOA on non-hardware-native problems [226]. Hashim et al. [226] considered two methods to improve the SWAP networks: using an overcomplete two-qubit gate set for decomposing the relevant quantum gates and SWAP networks, as well as a technique called equivalent circuit averaging, which averages over a set of randomized but logically equivalent circuits to mitigate coherent errors. As a result, an around 60% average reduction in error (total variation distance) was observed for depth-1 QAOA on four transmon qubits on a superconducting quantum processor.
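The compilation overhead described above can be made tangible with a few lines of Qiskit. The sketch below is our own illustration (the instance size and gate set are illustrative assumptions, not taken from the cited works): it routes a fully connected QAOA-style cost layer onto a line-shaped coupling map and reports the resulting CX count, which grows well beyond the bare two-qubit term count once the decomposition and SWAP insertion are accounted for.

```python
# Rough illustration of compilation cost: routing an all-to-all QAOA cost
# layer onto a line-connected device inflates the two-qubit gate count.
from itertools import combinations
from qiskit import QuantumCircuit, transpile
from qiskit.transpiler import CouplingMap

n = 6
qc = QuantumCircuit(n)
for i, j in combinations(range(n), 2):  # complete-graph cost layer
    qc.rzz(0.1, i, j)

routed = transpile(qc, coupling_map=CouplingMap.from_line(n),
                   basis_gates=["cx", "rz", "sx", "x"],
                   optimization_level=3, seed_transpiler=1)
print("CX count after routing:", routed.count_ops().get("cx", 0))
```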
Other methods have also been proposed to optimize the compilation process to reduce the gate counts and improve the success probability of the compiled circuit, such as re-ordering of the multi-qubit CPHASE gates and the use of variation-aware compilation policies [230, 231, 232].

Compared with single-qubit gates, two-qubit gates such as the CNOT are typically more erroneous on a quantum computer. Therefore, the CNOT gate is an important target for noise mitigation. Majumdar et al. [233] suggested two hardware-independent methods of reducing the total number of CNOT gates in the QAOA circuit. The first method is based on edge coloring of the input graph to minimize the depth of the circuit, and it reduces up to \(\lfloor\frac{n}{2}\rfloor\) CNOT gates in the first layer of the QAOA circuit, with \(n\) being the number of vertices in the graph. The other method is based on a Depth First Search (DFS) algorithm, which reduces \(n-1\) CNOT gates in the circuit, although it moderately increases the depth of the circuit in doing so. The depth of the circuit is proportional to the height of the DFS tree, which can be \(n-1\) in the worst case. Therefore, an analytical condition was derived to balance this trade-off while maintaining the lowest error probability. This condition was also satisfied in the experimental implementations on IBM's ibmq_manhattan, in which the authors demonstrated that the edge-coloring-based method outperformed the standard QAOA, and the DFS-based method outperformed both of these implementations. In a subsequent paper by the same group [234], the DFS-based method was improved. The authors proposed an \(O(\Delta\cdot n^{2})\) greedy heuristic algorithm, where \(\Delta\) is the maximum degree of the graph. This algorithm finds a spanning tree of lower height, reducing the overall depth of the circuit while still maintaining the \(n-1\) reduction in the number of CNOT gates in the ansatz. It was shown numerically that this new algorithm achieves nearly a factor of 10 increase in the success probability for each iteration of QAOA for the MaxCut problem.

Other strategies that target more specific errors or are more tailored to QAOA also exist. For example, to combat the crosstalk effects in measurement noise mentioned in Section 3.5.1, a mitigation technique was proposed to correct the marginal probability distributions affected by the correlated measurement noise [221]. Tested on IBM's (Rigetti's) devices with 15 (23) qubits, such a technique led to an average reduction of errors by a factor \(>22\) (\(>5.5\)). Moreover, based on the numerical simulations, it led to a noticeable improvement in QAOA's performance across multiple graph instances. Likewise, despite the detrimental nature of the precision errors unveiled in [222], these errors may be effectively mitigated via digitization of the variational parameters in a binary representation, at the cost of an increased circuit depth.

Proposals also exist that take advantage of symmetries possessed by the problem instances. Specifically, one such error mitigation technique is called symmetry verification, which was initially proposed for quantum simulation [235, 236]. Shaydulin and Galda [229] extended this method to QAOA, i.e., by determining the classical symmetries of the objective function preserved by the QAOA ansatz and then projecting the QAOA state into the symmetry-restricted subspace.
Since the ideal state remains within this subspace (i.e., the eigenspace of the corresponding symmetry operators), an error that pushes the system out of the subspace can be partially corrected by projecting the system back onto the correct eigenspace. This technique was effective in solving four MaxCut instances on graphs with 3 and 4 nodes on IBM's ibmq_jakarta processor, leading to improvements in output state fidelity, expected objective function value, and probability of sampling the optimal solution. A similar approach, based on symmetry verification through mid-circuit postselection, was later extended to the quantum alternating operator ansatz, a generalization of the original QAOA, and was shown to be effective in solving the TSP [237]. Another example is a proposal by Streif et al. [238], which aims to reduce the overhead of a bit-flip error correcting code by leveraging the Local Particle Number Conservation (LPNC) in certain ansatze, such as the XY quantum alternating operator ansatz. This technique can detect and correct symmetry-breaking errors, and it is advantageous in the case of high error rates and/or deep circuits. In a more recent study, Weidinger et al. [239] introduced a decoding scheme to mitigate the errors that arise in a special variant of QAOA called parity QAOA [52, 240], which makes use of the parity transformation to encode the optimization problem into a 2D local Hamiltonian that is suitable for implementation on current quantum devices with a planar architecture. This error mitigation technique exploits the redundant information introduced during the parity transformation, where each physical qubit is mapped to multiple logical qubits, and was shown to be advantageous in numerical experiments where a depolarizing error on all circuit gates was considered. Last but not least, recent advances in general exponential error suppression schemes also offer some hope for making QAOA more practical in the near future [241, 242, 243]. However, to solve issues such as NIBPs, quantum technologists must work towards more accurate gates and, eventually, fault-tolerant quantum computing.

### Hardware-Specific Approaches

There has been significant research interest in leveraging specific hardware to enhance the performance of QAOA across various platforms, such as trapped ions [244, 245, 246, 247], neutral atoms [248, 240], superconducting qubits [244, 250, 251, 252, 253, 254, 255], and photonic quantum computers [74, 205]. The goals of these approaches include overcoming hardware connectivity limitations and mitigating noise-related issues to broaden the applicability of QAOA to a wide range of combinatorial optimization problems. A selection of the latest experimental realizations of the QAOA is reported in Table 4. Additionally, hardware implementations provide an opportunity to validate the effectiveness of error mitigation techniques, as discussed in Section 3.5.3. However, it is essential to note that different architectures have advantages and disadvantages. Different physical models display different natures in the interactions between qubits, each presenting its unique implementation challenges.

| **Problem** | **Graph** | **Hardware** | \(n\) | \(p\) | **Device and remarks** |
| --- | --- | --- | --- | --- | --- |
| SK model | Complete | Superconducting | 8–72 | 1 | Rigetti Aspen-M-3 [244]. Quantum-enhanced iterative algorithm with a truncated 1-layer QAOA ansatz embedded; outperformed classical greedy threshold. |
| SK model | Complete | Trapped ions | 7–32 | 1 | Quantinuum H2 [245]. 1-layer QAOA embedded in Instantaneous Quantum Polynomial (IQP) circuits; \(\alpha_{\text{avg}}=0.985\). |
| Max-3SAT | Random | Trapped ions | 6–20 | 1–20 | IonQ Harmony and Quantinuum H1-1 [246]. At fixed \(n\), \(\alpha_{\text{avg}}\) degraded beyond a certain \(p\) due to noise; at fixed \(p\), \(\alpha_{\text{avg}}\) remained insensitive to growing \(n\). |
| MaxCut | Non-planar | Trapped ions | 20 | 1–16 | Quantinuum H1-1 [247]. Solution quality increased monotonically with \(p\) up to 10 for all problem instances, with largest \(\alpha=0.94\). |
| MaxCut | Hardware-grid / non-planar | Superconducting | 1–23 | 1–5 | Google Sycamore [202]. On hardware-grid problems, performance was independent of \(n\) and peaked at \(p=3\). On 3-regular graphs, performance degraded as \(n\) increased. |
| MaxCut | Heavy-hex | Superconducting | 27 | 1, 2 | IBM ibmq_mumbai [225]. \(p=2\) QAOA produced better cuts than \(p=1\). |
| MaxCut | 4 nodes | Neutral atoms | 4 | 1–3 | ColdQuanta [248]. \(\alpha=0.669\) (\(p=1\)), \(0.687\) (\(p=2\)), \(0.628\) (\(p=3\)). |
| Ising model | Hardware-native with cubic interactions | Superconducting | 127 | 2 | IBM ibm_washington [249] and D-Wave Advantage_system4.1 & 6.1. Quantum annealing outperformed QAOA on all instances. With dynamical decoupling [250], \(p=2\) QAOA marginally outperformed \(p=1\). |
| Ising model | Hardware-native | Trapped ions | 20–40 | 1, 2 | U Maryland group [204]. Performance and runtime were approximately independent of system size. \(p=2\) QAOA performed similarly to \(p=1\) at 20 qubits. |
| CSPs | – | Photonic | 2 | 1 | U Bristol group [205]. Classical fidelities with theory of \(\approx 99.88\%\), \(96.98\%\), and \(99.48\%\) achieved for the three 2-bit CSPs considered. |
| MIS | Random | Neutral atoms | 39–289 | 1–5 | Harvard U group [26]. On a 179-vertex graph instance, \(\alpha\) improved with increasing \(p\) up to \(p=4\). On instances with large enough spectral gaps, QAOA achieved a speedup over simulated annealing. |

Table 4: Summary of selected state-of-the-art experiments on various quantum hardware platforms based on different technologies (superconducting, trapped ions, neutral atoms, and photonics). \(n\) represents the number of qubits used in the experiment, \(p\) is the number of QAOA layers investigated, and \(\alpha_{\text{(avg)}}\) indicates the (average) approximation ratio. Note that in some experiments not all combinations of \(n\) and \(p\) were investigated.

In the case of superconducting qubits, the mapping of interactions is predetermined by the device's architecture. When considering neutral atoms, the limitations arise from the decay of qubit couplings. Dlaska et al. [240] introduced an innovative four-qubit Rydberg parity gate, which facilitates the use of the parity architecture [50]. This architecture offers a scalable remapping of qubits, enabling more efficient interconnection between them. By laser-coupling these atoms to highly excited Rydberg states and employing adiabatic laser pulses, they were able to manipulate computational basis states by imprinting a dynamical phase on them. The gate, fully programmable and adjustable, enables a direct and straightforward implementation of the parity architecture, essential for encoding complex interaction graphs in atomic arrays.
They numerically demonstrated implementations of QAOA for small-scale test problems, paving the way for experimental investigations beyond numerical simulations.

The underlying interaction graph of the Sherrington-Kirkpatrick (SK) Hamiltonian has all-to-all connectivity, which prevents direct implementation on planar quantum processors. Rabinovich et al. [75] utilized ion-native Hamiltonians to develop ansatz families that can prepare ground states of general problem Hamiltonians. By testing their algorithm on 6-qubit SK Hamiltonians, they demonstrated that overcoming symmetry protection allows for the minimization of arbitrary instances. This work highlights the potential of trapped-ion-based quantum processors for solving a broad range of combinatorial optimization problems. Rajakumar et al. [76] presented a method for constructing arbitrary unweighted and weighted coupling graphs using global entangling operations in quantum spin systems. They provided upper bounds on the number of operations required and proposed a mixed-integer program for finding optimal sequences. Their approach is less susceptible to noise compared to standard gate-based compilation, suggesting that global entangling operations may be more efficient for dense, unweighted coupling operations. Further research is needed to establish tighter upper bounds and explore how the complexity of compilation affects quantum algorithms.

In addition to connectivity, another feature that can be leveraged, depending on the underlying model of the device, is the utilization of higher energy levels beyond qubits, known as qudits, which have \(d\) energy levels. For instance, in the realm of photonic quantum computers, one can leverage their inherent capabilities to employ qudits rather than qubits. For a graph \(k\)-coloring problem, choosing \(d=k\) allows an efficient mapping of the problem to the hardware. In neutral-atom-based quantum computers, qudits offer exciting possibilities for solving real-world problems more efficiently compared to classical methods. Deller et al. [255] have explored the use of qudits in the context of the \(k\)-graph coloring and electric vehicle charging problems with global power constraints. They introduced the use of the momentum operator \(L_{x}\) as a mixing operator for optimization problems with bounded integer variables. Their numerical simulations compared QAOA solutions with gradient-based and global evolutionary classical optimizers, with the latter showing better results for the instances considered.

Photonic quantum computers operate on a distinct paradigm of quantum computation known as measurement-based quantum computing (MBQC), as opposed to the gate-based model. Proietti et al. [74] presented an MBQC QAOA for photonic quantum computers to solve the Max-\(k\)-Cut problem. They developed an MBQC algorithm using diagonal unitary evolution and demonstrated an up to 30-fold improvement in terms of cluster state dimension when compared to gate-based QAOA circuit algorithms. This work highlights the advantages of tailoring algorithms for photonic quantum computing. One of the biggest technological challenges in photonic quantum computation is single-photon loss. To tackle this issue, Vikstal et al. [256] discussed how standard qubits can be replaced with cat qubits, which are created as a superposition of two distinct coherent states of light with opposite phase. Such states are generally challenging to realize and manipulate.
Despite this, in the context of QAOA, and for the Exact Cover problem in particular, cat qubits were found to exhibit a performance advantage (i.e., higher fidelity at fixed resources).

## 4 Experimental Results

Quantum computing has made significant strides in recent years, and there is growing interest in systematically comparing VQAs, particularly the QAOA, across both simulators and real devices. Understanding the performance and efficiency of these algorithms is essential for their practical application and the development of quantum computing. In this section, we present preliminary experimental results obtained from our analysis of the QAOA and its variants. We focus on assessing their performance (approximation ratio), efficiency (number of optimization iterations times circuit depth), and the effect of noise and errors on real quantum computers.

### Related Tools

Table 4 presents a summary of selected state-of-the-art experiments performed on various quantum hardware platforms based on different technologies (superconducting, trapped ions, neutral atoms, and photonics). There is significant interest in comparisons across a range of combinatorial problems (such as the SK model, MaxCut, Max-3SAT, Ising models, and CSPs), a wide range of hardware architectures and sizes (from 2 to 289 qubits), and many different aspects of QAOA variants (for example, a varied number of QAOA layers, from 1 to 20). It is also interesting to note that the product of these last two quantities (number of qubits and number of layers) has been proposed as a benchmark to check the scalability of QAOA on hardware [247]. To facilitate these comparisons, tools such as QPack [257] and QAOAKit [117] have been developed. QPack is an application-oriented cross-platform benchmarking suite for quantum computers and simulators, which utilizes scalable QAOA and VQE applications to provide a holistic insight into quantum performance. It collects quantum execution data and transforms it into benchmark scores for application-oriented quantum benchmarking, including sub-scores based on runtime, accuracy, scalability, and capacity performance. QAOAKit is a Python toolkit for QAOA built for exploratory research. It serves as a unified repository of preoptimized QAOA parameters and circuit generators for common quantum simulation frameworks, allowing researchers to reproduce, compare, and extend known results from various sources in the literature.

### Implementation and Systematic Evaluation of QAOA Variants

Our study focuses on the experimental results obtained using our custom implementations of the QAOA and its variants rather than relying on existing software tools. As cloud-based quantum computing platforms for VQAs are increasingly prevalent [258], we utilized IBM Quantum's cloud-based infrastructure to experimentally evaluate the performance of several QAOA variants on different MaxCut problems.

**Problem types.** The MaxCut problems used for the experiments were generated beforehand and encompassed a diverse set of graph structures. This diversity was primarily motivated by the understanding that the performance of the QAOA ansatz depends significantly on the specific problem it is applied to. The chosen graph structures included complete graphs, 3-regular graphs, and random graphs with edge probabilities varying between 0.3 and 0.5 (Figure 6).
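As a concrete sketch of this setup (our reconstruction with networkx; the exact generation script is not reproduced here), such a benchmark set can be produced as follows. Note that 3-regular graphs only exist for an even number of nodes.

```python
# Hypothetical reconstruction of the benchmark-set generation described above.
import random
import networkx as nx

def make_dataset(sizes=range(4, 19), seed=0):
    rng = random.Random(seed)
    graphs = []
    for n in sizes:
        graphs.append(("complete", nx.complete_graph(n)))
        if (3 * n) % 2 == 0:  # 3-regular graphs require an even number of nodes
            graphs.append(("3-regular",
                           nx.random_regular_graph(3, n, seed=rng.randint(0, 10**6))))
        p_edge = rng.uniform(0.3, 0.5)  # edge probability between 0.3 and 0.5
        graphs.append(("random",
                       nx.gnp_random_graph(n, p_edge, seed=rng.randint(0, 10**6))))
    return graphs
```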
The rationale for choosing these diverse types was to thoroughly evaluate the QAOA variants under various conditions, reflecting the inherent diversity in real-world quantum optimization problems. Complete graphs represent densely connected systems where each node is connected to every other node. On the other hand, 3-regular graphs provide a more uniform and structured network topology. Lastly, random graphs simulate less predictable scenarios, which might present unique challenges to the optimization process. Moreover, we varied the size of the graphs, with sizes ranging from 4 to 18 nodes. This decision is rooted in the understanding that the complexity of the problem, here represented by the number of nodes, could substantially impact the efficacy of the QAOA ansatz. By examining smaller graphs (4 nodes), we tested the efficiency of the QAOA variants in solving more straightforward problems. We then investigated how these variants scale with increasing problem complexity as we increased the number of nodes. It is crucial to note that while our selection of graph structures and sizes was diverse and aimed to capture everyday use cases, it covered only a subset of potential quantum optimization scenarios. However, our research is ongoing, and we are actively working on expanding our dataset to encompass additional use cases, including various graph types, sizes, and problem types beyond MaxCut. To ensure a comprehensive evaluation, we generated multiple instances for each combination of parameters, enabling us to better understand the QAOA variants' performance across different problem instances and graph structures (Table 5).

Figure 6: \((a)\) complete, \((b)\) 3-regular, and \((c)\) random graph from our generated dataset.

QAOA variants. We evaluated an array of QAOA variants, namely: "QAOA" [12], "QAOA+" [78], "FALQON+" (referred to simply as "FALQON" in this section) [66, 89], "ModifiedQAOA" [31], "WS-QAOA" [88], and "ma-QAOA" [77]. Each of these variants was chosen to represent a different methodological approach to improving the QAOA's performance. Together they illustrate the broad spectrum of ways researchers have found to enhance the QAOA, such as adding features, speeding up optimization, adapting the ansatz, and utilizing previously computed information for a warm start. To examine the performance of these variants under various conditions, we used varying layer depths, reported here in the range of 1 to 8 layers.

Optimization. For the optimization process, we utilized the COBYLA optimizer provided by the scipy library to tune the variational parameters in the QAOA circuits, in line with the results in [119]. The optimizer iterated until convergence based on the specified tolerance level (0.0002) or until 500 iterations, whichever came first. The maximum number of iterations only influenced ma-QAOA, which often reached this cap and would have needed many thousands of iterations to achieve just a slightly higher approximation ratio. We observed that this did not affect the overall trends that we discuss below. However, we will include the unconstrained versions in a future release of our results. The same optimization settings were used across all variants to ensure a fair comparison. Except for "WS-QAOA", which provides a specific initialization of the parameters, we randomly initialized the parameters from uniform distributions, \(\gamma\in(0,2\pi)\) and \(\beta\in(0,\pi)\).
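To make this setup concrete, the following minimal sketch mirrors the workflow described above: it generates one small instance of each graph structure with networkx, evaluates a depth-\(p\) QAOA state, and tunes the parameters with scipy's COBYLA using the same tolerance, iteration cap, and random initialization ranges. It is a simplified stand-in for our actual Qiskit-based pipeline, using a plain numpy statevector simulation instead of a circuit backend; all function names are illustrative, and the brute-force optimum used for the approximation ratio is only feasible at these small sizes.

```python
import numpy as np
import networkx as nx
from scipy.optimize import minimize

def maxcut_values(graph):
    """Cut value of every bitstring: the diagonal of the MaxCut cost Hamiltonian."""
    n = graph.number_of_nodes()
    z = np.arange(2 ** n)
    bits = (z[:, None] >> np.arange(n)) & 1       # bit q of each basis state
    c = np.zeros(2 ** n)
    for u, v in graph.edges():
        c += bits[:, u] != bits[:, v]             # an edge is cut iff its endpoints differ
    return c

def qaoa_expectation(params, c, n):
    """<C> after p alternating cost/mixer layers, via a dense statevector."""
    p = len(params) // 2
    gammas, betas = params[:p], params[p:]
    psi = np.full(2 ** n, 2 ** (-n / 2), dtype=complex)     # uniform superposition |+>^n
    for gamma, beta in zip(gammas, betas):
        psi = psi * np.exp(-1j * gamma * c)                 # e^{-i*gamma*C} is diagonal
        cos, s = np.cos(beta), -1j * np.sin(beta)
        for q in range(n):                                  # mixer e^{-i*beta*X_q} on each qubit
            psi = psi.reshape(2 ** (n - q - 1), 2, 2 ** q)  # axis 1 corresponds to qubit q
            a, b = psi[:, 0, :].copy(), psi[:, 1, :].copy()
            psi[:, 0, :], psi[:, 1, :] = cos * a + s * b, s * a + cos * b
            psi = psi.reshape(-1)
    return float(np.real(np.sum(np.abs(psi) ** 2 * c)))

n, p, rng = 8, 2, np.random.default_rng(0)
graphs = {                                        # one instance per structure
    "complete": nx.complete_graph(n),
    "3-regular": nx.random_regular_graph(3, n, seed=0),
    "random": nx.gnp_random_graph(n, rng.uniform(0.3, 0.5), seed=0),
}
for name, g in graphs.items():
    c = maxcut_values(g)
    x0 = np.concatenate([rng.uniform(0, 2 * np.pi, p),     # gamma in (0, 2*pi)
                         rng.uniform(0, np.pi, p)])        # beta in (0, pi)
    res = minimize(lambda x: -qaoa_expectation(x, c, n), x0,
                   method="COBYLA", tol=2e-4, options={"maxiter": 500})
    ratio = -res.fun / c.max()                    # approximation ratio vs. brute-force optimum
    cos_sim = x0 @ res.x / (np.linalg.norm(x0) * np.linalg.norm(res.x))
    print(f"{name}: ratio={ratio:.3f}, circuit calls={res.nfev}, "
          f"cosine similarity to x0={cos_sim:.3f}")
```

The final print line also reports the number of objective evaluations, the analogue of our circuit-call count, and the cosine similarity between the initial and optimized parameter vectors, two diagnostics we return to later in this section.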
Implementation & Hardware. We implemented the QAOA and its variants in Python using the Qiskit quantum computing framework. Each QAOA variant was implemented as a separate class, inheriting from a base QAOA class that contains the shared functionality. This modular design allows for easy extension and comparison of different QAOA variants. Our implementations were executed both on a quantum simulator (Qiskit) and on real quantum devices provided by the IBM Quantum Experience. To assess the noise resilience of the proposed QAOA variants, we conducted experiments on actual IBM quantum devices, specifically the IBM Quantum Falcon r5.11H quantum processors, which include ibm_oslo, ibm_lagos, ibm_nairobi, and ibm_perth. The results presented here are part of an ongoing study, and the comprehensive details, source code, and detailed outcomes will be disclosed in a forthcoming research paper.

### Experimental Results and Discussion

The experimental results provide insights into the performance of different QAOA variants when applied to various graph types and sizes. Using the mean approximation ratios as a key performance metric (Figures 7, 8 and 9; Tables 6, 7, 8 and 9), we identify some general trends and patterns and discuss the trade-offs between approximation performance, resource consumption, and the consequent implications for problem-solving efficacy.

Variation of approximation ratio across graph types: As illustrated in Figure 7, the graph type (complete, regular, or random) significantly influences the approximation ratio achieved by a QAOA variant. Both in simulations and on real quantum hardware, all variants demonstrate superior approximation ratios when applied to complete and regular graphs, achieving mean ratios as high as 0.98 for complete graphs and 0.94 for regular graphs in simulations. Conversely, when applied to random graphs, the approximation ratio drops to at most 0.92. This trend becomes even more pronounced as the number of nodes increases (Figure 8). In complete graphs, most variants manage to retain a rather high mean approximation ratio of over 0.9 for graphs of up to at least 18 nodes. On the other hand, in random graphs of 18 nodes the approximation ratio typically lies between 0.6 and 0.8.

\begin{table} \begin{tabular}{l c c} \hline \hline Graph Type & Number of nodes & Instances for each combination \\ \hline Complete & 4 to 18 & 1 \\ 3-regular & 4 to 18 & 3 to 8 \\ Random (edge \(p\in[0.3,0.5]\)) & 4 to 18 & 3 to 8 \\ \hline \hline \end{tabular} \end{table}
Table 5: Overview of the graph structures and instances used for our experiments. A total of 78 graphs were generated for the results we report here.

This stark variance emphasizes the fundamental influence of the underlying graph structure on a QAOA variant's performance. It substantiates the observations discussed in Section 3.4.3 and underscores the need to tailor the selection of a QAOA variant to the specific graph structure at hand. A clear understanding of the interplay between graph types and QAOA variants is essential for achieving optimal quantum optimization results.

Degradation in approximation ratio with increasing graph size: Figure 8 highlights a tendency for mean approximation ratios to decrease as the number of graph nodes increases in regular and random graphs, and, to a lesser degree, in complete graphs.
However, this downtrend is not universally applicable to all QAOA variants and is particularly noticeable when scaling up from a minimal number of nodes (specifically, 4). Beyond this point, the descending trend can still be observed, albeit with a less pronounced impact. Notably, FALQON and vanilla QAOA appear to exhibit greater robustness in the face of this trend, suggesting a relative resilience against the deteriorating effect of increasing graph sizes. This trend holds true even in the absence of noise. For instance, for QAOA, the approximation ratio falls from 0.98 and 0.94 (regular and random, respectively) for 4 nodes to 0.83 and 0.76 for 18 nodes. Similarly, FALQON's approximation ratio falls from 0.97 and 0.88 (regular and random, respectively) for 4 nodes to 0.78 and 0.80 for 18 nodes. These results indicate that larger graph sizes might make the optimization problem more challenging, and the effectiveness of QAOA variants diminishes as the graph size grows.

Figure 7: Mean approximation ratios of the QAOA and its variants on different MaxCut problems. For each variant, the results from noiseless simulations are reported on the left and those from real hardware on the right.

Figure 8: Impact of graph size and type on approximation ratios by QAOA variant. A declining trend of mean approximation ratios with increasing graph size is observed, along with a strong dependence on graph type. (Results from simulations.)

Relative efficiency of QAOA variants: Our experimental findings highlight the variation in the performance of different QAOA variants across varying graph types and sizes. For instance, FALQON and the standard QAOA show the steadiest performance across varying graph sizes, as shown in Figure 8. As another example, the performance of WS-QAOA reveals occasional surges in approximation ratio. This variability, especially apparent in 10-node regular graphs, stems from its warm-starting algorithm, which occasionally produces near-optimal parameters early in the optimization process. These observations underscore the necessity of choosing an appropriate QAOA variant to achieve optimal results, as some variants may be better suited to specific problem instances or graph structures. However, it is important to note that the relative efficiency of a QAOA variant depends not only on its performance in isolation but also on how it compares to other variants under the same conditions. For example, while ma-QAOA might offer high approximation ratios, it also requires significantly more gates, especially as the number of nodes increases (Figure 9). Therefore, the relative efficiency of a QAOA variant should be evaluated considering both its performance and the computational resources it requires.

Balancing approximation ratio and resource use: The trade-off between the mean approximation ratio and the computational resources required by each QAOA variant is a vital consideration. As illustrated in Figure 9 and summarized in Table 6, the variants present distinct trade-offs between their approximation capabilities and the amount of computational resources they demand. Some variants achieve higher approximation ratios but require more gates, have higher circuit depth, or need more circuit evaluations, resulting in increased computation time and resource usage. For example, comparing the ma-QAOA and WS-QAOA variants, ma-QAOA might provide higher approximation ratios on larger graphs; however, it also requires significantly more resources in terms of effective circuit depth (circuit depth \(\times\) circuit calls).
On the other hand, WS-QAOA could be less resource-intensive while still delivering reasonably high approximation ratios. However, further complicating the matter, WS-QAOA includes a relatively expensive classical warm-starting step in its initialization, which contributes to its total resource consumption but not to its quantum resource consumption. The choice between these two QAOA variants would depend on the specific problem instance, the available computational resources, and the desired balance between performance and resource usage.

Effect of circuit layer depth: Based on our experiments (Table 7), it is clear that the circuit layer depth has a varying impact on the approximation ratio achieved. While it plays a significant role, it appears to be less significant than the other factors discussed, at least for depths up to \(p=8\). For instance, the QAOA variant shows an overall increase in the approximation ratio as the layer depth progresses from 1 to 8. There are minor fluctuations in this trend at depths 5 and 6, but the general upward pattern persists. This suggests that the QAOA variant tends to perform better with an increased layer depth. Similarly, the FALQON variant shows a consistent improvement in the approximation ratio with increasing layer depth. In general, the ma-QAOA exhibits the greatest improvement with increased depth, while other variants show either a moderate rise (such as QAOA and WS-QAOA) or a more limited response (such as QAOA+ and ModifiedQAOA).

\begin{table} \begin{tabular}{l r r r r r r} \hline \hline & **QAOA** & **QAOA\(+\)** & **FALQON** & **ModifiedQAOA** & **WS-QAOA** & **ma-QAOA** \\ \hline **Mean Approximation Ratio** & 0.87 & 0.84 & 0.87 & 0.72 & 0.79 & 0.88 \\ **Average Circuit Depth** & 96.46 & 97.31 & 104.73 & 167.70 & 44.79 & 90.73 \\ **Number of Circuit Calls** & 108.25 & 287.16 & 104.31 & 95.72 & 99.28 & 450.97 \\ \hline \hline \end{tabular} \end{table}
Table 6: Summary statistics of the selected QAOA variants across all implementation combinations (simulation results).

Noise-free simulations vs. real quantum hardware: The detrimental effect of running on noisy hardware is clearly evidenced when comparing the performance of the various QAOA variants in noise-free simulations versus real quantum hardware (Table 8). The real quantum hardware results show a significant reduction in the mean approximation ratios across all the variants, as expected and as discussed in Section 3.5. It is interesting to observe that in most instances the general trends associated with different graph types persist (Figure 7). This highlights that the underlying structure of a graph, whether it is complete, regular, or random, plays a pivotal role in determining the performance of each QAOA variant.
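The qualitative gap between the two rows of Table 8 below can be illustrated with a simulator alone. The sketch that follows builds a depth-1 QAOA MaxCut circuit in Qiskit and runs it on a noiseless AerSimulator and on one configured with a crude two-qubit depolarizing noise model. The noise model and its error rate are illustrative stand-ins for a real device (our hardware runs used the IBM backends directly), and the helper names are our own.

```python
import networkx as nx
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator
from qiskit_aer.noise import NoiseModel, depolarizing_error

def qaoa_maxcut_circuit(graph, gammas, betas):
    """Depth-p QAOA circuit for MaxCut with standard cost and mixer layers."""
    n = graph.number_of_nodes()
    qc = QuantumCircuit(n)
    qc.h(range(n))                          # |+>^n initial state
    for gamma, beta in zip(gammas, betas):
        for u, v in graph.edges():
            qc.rzz(2 * gamma, u, v)         # cost layer, e^{-i*gamma*C} up to global phase
        for q in range(n):
            qc.rx(2 * beta, q)              # transverse-field mixer
    qc.measure_all()
    return qc

def mean_cut(counts, graph):
    """Sample mean of the cut value over measured bitstrings."""
    total = sum(counts.values())
    exp = 0.0
    for bitstring, hits in counts.items():
        bits = bitstring[::-1]              # Qiskit count keys are little-endian
        exp += sum(bits[u] != bits[v] for u, v in graph.edges()) * hits / total
    return exp

graph = nx.random_regular_graph(3, 6, seed=1)
qc = qaoa_maxcut_circuit(graph, gammas=[0.8], betas=[0.4])

# Illustrative noise model: depolarizing error on every two-qubit gate.
noise = NoiseModel()
noise.add_all_qubit_quantum_error(depolarizing_error(0.02, 2), ["cx"])

for label, backend in [("noiseless", AerSimulator()),
                       ("depolarizing", AerSimulator(noise_model=noise))]:
    tqc = transpile(qc, basis_gates=["cx", "rz", "sx", "x"])  # expose cx to the noise model
    counts = backend.run(tqc, shots=4000).result().get_counts()
    print(label, "mean cut:", round(mean_cut(counts, graph), 3))
```

Transpiling to a cx-based basis is what lets the depolarizing channel act on the rzz interactions; on real hardware, the native two-qubit gates and their error rates play this role.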
\begin{table} \begin{tabular}{l r r r r r r r r} \hline \hline **depth** (\(p\)) & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline **QAOA** & 0.810 & 0.854 & 0.860 & 0.865 & 0.901 & 0.892 & 0.917 & 0.909 \\ **QAOA+** & 0.789 & 0.856 & 0.855 & 0.858 & 0.855 & 0.853 & 0.838 & 0.889 \\ **FALQON** & 0.807 & 0.849 & 0.865 & 0.890 & 0.890 & 0.885 & 0.899 & - \\ **ModifiedQAOA** & 0.684 & 0.746 & 0.726 & 0.734 & 0.733 & 0.724 & 0.725 & 0.735 \\ **WS-QAOA** & 0.785 & 0.756 & 0.770 & 0.806 & 0.804 & 0.796 & 0.790 & 0.817 \\ **ma-QAOA** & 0.846 & 0.990 & 0.999 & 0.994 & 0.998 & 0.948 & 0.960 & 0.992 \\ \hline \hline \end{tabular} \end{table}
Table 7: Mean approximation ratio achieved in relation to circuit layer depth (\(p\)).

\begin{table} \begin{tabular}{c c c c c c c} \hline \hline & **QAOA** & **QAOA+** & **FALQON** & **ModifiedQAOA** & **WS-QAOA** & **ma-QAOA** \\ \hline **Simulation** & \(0.899\pm 0.008\) & \(0.867\pm 0.009\) & \(0.881\pm 0.007\) & \(0.743\pm 0.011\) & \(0.802\pm 0.014\) & \(0.936\pm 0.008\) \\ **Real hardware** & \(0.702\pm 0.019\) & \(0.710\pm 0.014\) & \(0.716\pm 0.020\) & \(0.705\pm 0.016\) & \(0.727\pm 0.016\) & \(0.732\pm 0.016\) \\ \hline \hline \end{tabular} \end{table}
Table 8: Mean approximation ratio achieved by each variant across all problem types and sizes. Comparison between real hardware runs (IBMQ) and noise-free simulations.

Figure 9: Composite depth of the QAOA and its variants on different MaxCut problems using simulators, namely the depth of the circuit times the number of iterations required. This measure provides a hardware-agnostic indication of the computational time required on the quantum device.

Proximity of optimal parameters to initial random guess: The cosine similarity metrics presented in Table 9 provide insight into the effectiveness of the COBYLA optimization method in exploring the parameter space for each QAOA variant. The values indicate how close the optimal parameters are to their initial random guesses. For most variants, such as QAOA, QAOA+, ModifiedQAOA, WS-QAOA, and ma-QAOA, high mean cosine similarity values (greater than 0.96) suggest that the optimization process may not be thoroughly exploring the parameter space. This is further corroborated by the low standard deviations, indicating little deviation from the initial parameters across all problem instances. On the other hand, FALQON's significantly lower mean cosine similarity (approximately 0.51) and higher standard deviation hint at a more extensive exploration of the parameter space. In the case of WS-QAOA, the high cosine similarity is expected due to its design, which encourages starting near optimal parameters. This analysis underscores the influence of the optimization process on the performance of QAOA variants, and highlights the need for more effective parameter exploration strategies.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline & **QAOA** & **QAOA+** & **FALQON** & **ModifiedQAOA** & **WS-QAOA** & **ma-QAOA** \\ \hline **Mean cosine similarity** & 0.988 & 0.963 & 0.511 & 0.986 & 0.978 & 0.976 \\ **Standard Deviation** & 0.009 & 0.037 & 0.257 & 0.016 & 0.067 & 0.015 \\ \hline \hline \end{tabular} \end{table}
Table 9: A summary of cosine similarity metrics across QAOA variants, indicating the proximity of optimal parameters to the initial random guess. High similarity suggests that the COBYLA optimization method may not be exploring the parameter space effectively. The exception is FALQON, which shows substantial divergence from the initial parameters, indicating more extensive exploration. WS-QAOA is expected to start near the optimal parameters due to its design.

### Limitations and Future Work

We have presented a curated set of experimental results and observations from our analysis of the QAOA and its variants. While this account provides a comprehensive overview, the extensive nature of the QAOA landscape means that numerous findings and results could not be included in this review paper. We are currently compiling a more extensive research paper that will delve deeper into the experiments, source code, and outcomes, providing a more granular examination of the performance of various QAOA variants, including newer variants such as ADAPT-QAOA and Recursive QAOA.
We are also expanding the graph sizes and graph types in our investigation to include an array of MaxCut, weighted MaxCut, and other problems, thereby offering a more robust comparison across different graph types and sizes. Moreover, we are broadening our exploration to include a more diverse range of optimizers, enhancing our understanding of the efficiency of various optimization techniques in the context of QAOA, and our experimental setup has been scaled up to larger numbers of nodes. The released code will also provide an interactive interface for the implemented QAOA variants, allowing for greater exploration and comparison of their performance across a broader range of problem instances and graph structures.

## 5 Discussion

In this section we provide a comprehensive discussion of our findings. We begin with a brief overview of all the results, followed by an exploration of proposed applications of the QAOA. We then offer some insights into the quantum advantage of QAOA and conclude with suggestions for future research directions.

### Summary of Analysis

QAOA ansatz improvements. Ansatz selection in QAOA is crucial for solving problems efficiently and balancing specificity and generality to avoid overfitting. Multiple approaches have been proposed, including ma-QAOA [77], QAOA+ [78], ab-QAOA [81], and PG-QAOA [127], which introduce new parameters, layers, and optimization techniques to enhance the traditional QAOA ansatz. ADAPT-QAOA [82] provides an iterative method that selects QAOA mixers from a pool of operators, leading to faster convergence and reduced resource requirements. RQAOA [83, 101, 102] is a non-local variant that iteratively reduces the problem size and employs classical methods for the remaining problem. WS-QAOA [88] initializes QAOA based on the solution to relaxed QUBO problems and demonstrates the ability to retain the performance guarantees of classical algorithms, such as the GW bound when solving the MaxCut problem. Quantum Alternating Operator Ansatzes [84, 86, 104, 105, 111] offer a more flexible framework by alternating between general sets of operators, making them suitable for a broader range of problems with hard and soft constraints. FALQON [66, 89] and FQAOA [90] tackle quantum optimization from different angles, with FALQON focusing on feedback-based optimization without classical intervention and FQAOA using fermion particle number preservation to impose constraints intrinsically. Both methods show potential performance advantages over existing techniques.
Lastly, Quantum Dropout [91] offers an innovative approach to handling hard cases in combinatorial optimization problems, selectively modifying the quantum circuit to improve QAOA performance.

Parameter optimization. Parameter optimization is a crucial aspect of QAOA, especially as the depth and complexity of the algorithm increase [15]. Given that complexity grows alongside depth, methods can be developed to reduce the ansatz depth. The parameters of the QAOA ansatz span a parameter space, and they must be chosen effectively to govern a proper run of the QAOA. Various techniques have been proposed to address the challenge of finding good initial parameters, such as heuristic strategies [15], parameter fixing [113], graph neural networks [114], and parameter transferability across graphs [115, 116]. Classical optimization algorithms, both gradient-based [124, 127] and gradient-free [119], have been utilized to explore the QAOA parameter space. Gradient-based approaches like gradient descent and policy gradient are widely used for optimizing the parameters of quantum circuits like QAOA, but they can incur high computational times compared to gradient-free methods [119]. Various machine learning techniques can be experimented with to provide more effective parametrizations that automatically learn and employ the ideal parameter patterns. Machine learning approaches, including reinforcement learning [101, 133] and meta-learning [130], have been applied to train models and find optimal parameters for QAOA, often resulting in faster convergence and better performance. Overcoming barren plateaus in the cost function landscape is a significant challenge in training parameterized quantum circuits like QAOA [71]. Strategies such as layerwise learning and warm-start techniques have been proposed to mitigate this issue [88, 134, 259]. Furthermore, the ability to transfer and reuse optimal parameters across different problem instances can potentially enhance the optimization process [115, 116]. Parameter symmetry is also explored in quantum algorithms like QAOA. Utilizing parameter symmetries can significantly enhance the optimization process in QAOA by reducing degeneracies and enabling more efficient performance [93, 135, 136, 137]. This approach can provide several advantages, including reducing the QAOA energy evaluation cost, accurately predicting QAOA performance, and improving circuit training, as optimal parameters concentrate at a rate inversely polynomial in the problem size [135, 136, 137]. Research also indicates that symmetries can minimize the number of parameters without compromising solution quality, and inherent symmetries can aid in the identification and elimination of degeneracies in the parameter space, leading to a more effective search for optimal parameters [15, 93]. Certain studies even suggest that optimal QAOA parameters can sometimes be derived analytically, further simplifying the search process [138].

Efficiency and performance. QAOA is considered a leading candidate for achieving quantum advantage, both in terms of runtime efficiency and quality of the solution. Several works have already highlighted its speedup potential compared to classical algorithms in optimization [26, 124, 144] and other problem domains [5, 146, 147].
Despite this, quantum speedup with QAOA on near-term devices is unlikely [151], primarily due to challenges in optimizing the variational parameters in large QAOA circuits [71, 77, 151, 152]. The prospects for achieving quantum advantage on early fault-tolerant quantum computers are also limited for algorithms offering only quadratic speedups, owing to the substantial overhead of error correction, which slows down the algorithm significantly [153]. Problem structure should be exploited to achieve high-order speedups when implementing QAOA and other quantum algorithms [155]. Meanwhile, many approaches to improve the runtime efficiency of QAOA have been proposed, primarily targeting the parameter optimization process. Other improvement strategies have also been explored, including modifying the QAOA ansatz [81, 82], optimizing gate operations [23], and reducing the number of samplings [156], among others. On the other hand, the solution quality, typically characterized by the approximation ratio in solving optimization problems, is another important metric for evaluating the effectiveness of QAOA. Extensive efforts have been made to understand its theoretical solution guarantees, that is, the lower bounds on QAOA's performance in asymptotic limits when applied to various problems, and to compare them with state-of-the-art classical algorithms (see Table 3). Most notably, QAOA at \(p\geq 11\) could outperform the assumption-free classical SDP algorithms in solving the MaxCut problem on \(D\)-regular graphs with high girth (\(>2p+1\)) [163] and the SK model [166], which are intimately related [182]. At \(p=1\), QAOA was also demonstrated to surpass the threshold algorithm in the Max-\(k\)XOR problem on bounded-degree hypergraphs with random signs or no overlapping constraints, for \(k>4\) [173]. However, there are known obstacles arising from symmetry [83, 173] and locality [162, 165, 174, 175], which prevent QAOA at modest depths from achieving optimal solutions. Even for problems where QAOA is not constrained by locality, such as the fully connected \(k\)-spin models, constant-depth QAOA was shown to be bounded away from optimality [176]. Empirically, studies have also been conducted to benchmark the performance of QAOA against classical solvers, using both simulators and real hardware. Several factors were identified as having significant impacts on the quality of the solution provided by QAOA, including the circuit depth [124], entanglement [143, 155, 193, 194], parameter optimization [15], and properties of the underlying graph [110, 196]. Moreover, multiple strategies have been employed to assess the conditions under which it is advantageous to use QAOA over classical algorithms [132, 197, 198]. In summary, while research on QAOA's efficiency and performance has yielded promising results in certain problem instances, classical solvers remain highly competitive in solving a wide range of optimization problems, making it challenging to achieve quantum advantage with QAOA. This challenge is further compounded by the detrimental effect of noise in current quantum computers. Therefore, attaining a general and definitive quantum advantage with QAOA remains an open question. However, progress has been made in understanding the key aspects that influence the effectiveness of QAOA, such as circuit depth, parameter optimization, graph structures, and more, and corresponding strategies have been proposed to improve these aspects.
Future research and experimental method standardization are needed to assess the extent to which these proposals could make QAOA more advantageous than its classical counterparts.

Hardware considerations. Both local and correlated errors in current hardware pose significant challenges to QAOA's scalability and performance. In the case of local errors, theoretical studies suggest that QAOA performance suffers an exponential degradation with increasing noise strength [212, 213]. This translates into an exponential time complexity to make QAOA effective, severely limiting its scalability [214]. Another major challenge is that, even with just local noise, the parameter optimization of QAOA suffers from noise-induced barren plateaus [152], which cannot be mitigated with techniques that are effective in the noise-free setting [71]. The adverse effects of correlated errors have also been investigated, including crosstalk noise [221], precision errors [222], and the coherent error induced by residual ZZ-couplings [28]. As a result, both theoretical [224, 225, 227, 228] and empirical studies [202, 203, 204] suggest that quantum advantage is unlikely at the noise levels of current devices. To match the performance of classical devices, the error rates in quantum computers would need to improve significantly, even reaching below the fault-tolerant threshold for dense and non-hardware-native graphs. Despite this, error mitigation techniques, such as optimizing SWAP networks [226], gate reduction strategies [230, 231, 232, 233, 234], exploiting problem symmetries [229, 238, 239], and other error-specific strategies [221, 222, 253, 260], have been proposed for NISQ devices. Furthermore, over the past few years, devices, especially those utilizing trapped ions, have matured significantly. This progress has allowed for a monotonic improvement in the performance of QAOA with up to 10 layers for graphs consisting of 20 qubits [247]. Hardware-specific approaches focus on leveraging the capabilities of platforms such as trapped ions, neutral atoms, superconducting qubits, and photonic quantum computers to enhance QAOA's performance. For example, research into qudits and long-range interaction platforms has demonstrated potential improvements in optimization problems on neutral atom devices [255]. To address the primary issue with photonic devices, single-photon loss, cat qubits have been employed instead of standard qubits, resulting in a more noise-resilient QAOA [256]. In the context of this particular quantum computation model, significant efforts have also been devoted to fully utilizing the MBQC capabilities of photonic devices [74, 261]. To achieve improved QAOA performance, the utilization of pulse-level access on IBM devices has been proposed [250, 262]. A four-qubit Rydberg parity gate was introduced for encoding arbitrarily connected graphs in trapped neutral atom systems [240]. Ion-native Hamiltonians were used to develop ansatz families for encoding all-to-all connectivity in planar quantum processors [75]. Lastly, methods for constructing arbitrary coupling graphs using global entangling operations in quantum spin systems were also presented [76].

### Potential Applications and Use Cases

Having discussed various aspects of QAOA in depth, it is important to explore where the algorithm can be, and has been, applied to modern problems.
The implementation of quantum optimization algorithms is naturally limited by the capabilities of near-term quantum hardware, as discussed in previous sections, which makes empirical demonstrations valuable as benchmarks for expected outcomes. An example of such an empirical benchmark was provided by Lotshaw et al. [128]. Near-term applications of these algorithms are exhibited by results such as those given by Niroula et al. [25], who demonstrated the potential of near-term quantum computers, through QAOA, in addressing industry-relevant **constrained-optimization problems** by considering an extractive summarization problem. They used the Quantum Alternating Operator Ansatz algorithm with a Hamming-weight-preserving XY mixer (XY-QAOA) on a trapped-ion quantum computer. They effectively implemented XY-QAOA circuits on up to 20 qubits, emphasizing the necessity of direct constraint encoding into the quantum circuit. Their results show that the right choice of algorithm and compatible NISQ hardware can already solve constrained optimization problems relevant to modern industry and that quantum advantage is near for variational quantum algorithms. A similar demonstration of choosing the right quantum algorithm for a problem, showing potential quantum advantage, is the work of Bravyi et al. [102] using RQAOA for approximate vertex \(k\)-coloring of a graph. More specifically, RQAOA was applied to tackle the MAX-\(k\)-Cut problem, a notoriously tricky task in graph theory that involves finding an approximate \(k\)-vertex coloring of a graph. Vertex coloring is a crucial problem that finds applications in many fields, such as scheduling, mobile radio frequency assignment, and register allocation, making it a pertinent issue in theoretical computer science and various industries. Their study discovered that level-1 RQAOA is surprisingly competitive, often achieving higher approximation ratios than the best-known generic classical algorithm based on rounding an SDP relaxation for ensembles of randomly generated 3-colorable constant-degree graphs (Section 3.1.6). This suggests that RQAOA could be a powerful tool for NISQ devices and points towards its potential for outperforming classical solutions in certain instances of the graph coloring problem. Tabi et al. [43] proposed a space-efficient quantum optimization algorithm tailored explicitly to the graph coloring problem. Given the varying strengths and limitations of current quantum computing architectures, they underscored the necessity for flexible circuit design approaches. Their space-efficient algorithm drastically reduces the number of qubits required for problem encoding, while simultaneously lowering the number of layers and optimization iteration steps needed to reach the optimal solution. Though their proposed circuits are inherently deeper than traditional methodologies, the exponential reduction in qubit usage makes this an intriguing prospect for real-world applications, particularly given the constraints of current quantum hardware. The authors validated the performance of their approach through numerous numerical simulations. They concluded that analogous space-efficient embedding techniques could enhance other graph-related quantum optimization methods, which they earmarked for future exploration. In a related application, Cook et al.
[106] investigated the use of the QAOAnsatz (Section 3.1.7) on the Maximum \(k\)-Vertex Cover problem, a complex task with notable real-world implications in areas such as network security and social network analysis. Their study revealed that improved performance was achieved by using Dicke states and the complete graph mixer. Though challenged by the increasing complexity in subsequent rounds, the results exhibit a promising trend that aligns with the Quantum Adiabatic Algorithm, highlighting the feasibility and potential of QAOA in addressing complex optimization problems. Another example of a real-world application of QAOA, and variational quantum algorithms more generally, is given by the works of Azad et al. [263] and Mohanty et al. [210], which tackled the Vehicle Routing Problem (VRP), an NP-hard optimization problem of considerable interest to both science and industry. It generalizes the traveling salesperson problem and consists of finding the most efficient vehicle routes to serve a set of customers. These works showed that VQAs perform comparably to classical solvers even in the presence of noise, making them relevant for near-term use of quantum algorithms on industry-relevant problems in supply chain management and scheduling. A similar result was achieved by Vikstal et al. [22], who applied the QAOA to the Tail Assignment Problem (TAP), the task of assigning individual aircraft to a given set of flights while minimizing the overall cost. In the **financial industry**, Baker and Radha [21] benchmarked the performance of QAOA in portfolio optimization, offering another practical application. Their study focused on the solution quality determined by the normalized and complementary Wasserstein distance, suggesting that this measurement can serve as an application-specific performance benchmark. Their findings indicated that the solution quality improved as the QAOA circuit depth increased, reaching its peak at \(p=5\) with 2 qubits for most tested systems and at \(p=4\) with 3 qubits on a trapped-ion processor. These results suggest the potential of QAOA and its variants in addressing financial optimization tasks. A study by Hodson et al. [20] further corroborates the application of QAOA in portfolio optimization. They experimentally analyzed the performance of a discrete portfolio optimization problem relevant to the financial industry on an idealized simulator of a gate-model quantum computer. Their study demonstrated the potential tractability of their application on NISQ hardware, with portfolios identified within 5% of the optimal adjusted returns and the optimal risk for a small eight-stock portfolio. Their work also involved designing novel problem encodings and hard-constraint mixers for the QAOAnsatz, demonstrating a method to tailor quantum algorithms to specific industry use cases. Along the same lines, Hegade et al. [264] investigated the complex task of discrete mean-variance portfolio optimization using the variant DC-QAOA (Section 3.1.3). In the domain of **Hamiltonian simulation**, another promising application of QAOA has emerged. Lotshaw et al. [265] explored the use of QAOA in simulating frustrated Ising Hamiltonians, which serve as toy models crucial for understanding novel magnetic materials. The ground state properties of these materials are computationally intensive to calculate with traditional methods due to their inherent complexity.
Using QAOA, the authors examined Ising spin models on unit cells of square, Shastry-Sutherland, and triangular lattices, finding that a modest number of measurements was sufficient to identify the ground states of the 9-spin Hamiltonians. Remarkably, this efficiency persisted even when the Hamiltonians induced frustration in the system, showcasing the potential of QAOA in physical simulations. In more hybrid approaches, Brady et al. [266] combined Quantum Annealing with QAOA to prepare the ground state of a quantum system. They discovered that the combination of both methods worked best most of the time and concluded that the optimal protocol for minimizing the energy of a quantum state under time constraints is often of the "bang-anneal-bang" form, rather than the "bang-bang" pulse structure previously expected on the basis of Pontryagin's principle [267]. In **communication**, Cui et al. [268] applied QAOA to the problem of maximum likelihood (ML) detection of binary symbols transmitted over a multiple-input and multiple-output (MIMO) channel. They demonstrated that a QAOA-based ML detector could approach the performance of a classical ML detector, revealing the potential for large-scale classical optimization problems to be effectively addressed on NISQ computers. In the work of Chandarana et al. [269], a hybrid classical-quantum ansatz was proposed, which uses counterdiabatic protocols to solve the **protein folding** problem on a tetrahedral lattice. The ansatz is inspired by digitized-counterdiabatic quantum computation, which can accelerate adiabatic quantum algorithms and compress quantum circuits [79, 80]. The authors applied this algorithm to various proteins with different numbers of amino acids and qubits, and it was shown to outperform state-of-the-art quantum algorithms, such as the original QAOA, in terms of convergence and circuit depth. These demonstrations were performed on several quantum hardware platforms, including trapped-ion and superconducting systems, achieving high success probabilities and efficient ground state convergence. The authors noted the remaining difficulties for such an approach to solving complex optimization problems with quantum computers. However, the proposed digitized counterdiabatic protocol opens an avenue for applying problem-inspired ansatzes to industrial use cases on NISQ devices. In **computer vision**, Li et al. [23] investigated partially occluded object detection within the broader framework of QUBO. They proposed a three-tiered improvement approach for hybrid quantum-classical optimization for object detection, resulting in significant execution speedup; an over 13-fold speedup was achieved by selecting L-BFGS-B as the classical optimizer. They also demonstrated that optimally rescheduling gate operations, especially in deeper circuits, resulted in better circuit fidelity at the third level. The findings of this study shed light on the potential benefits of QAOA for object detection tasks. Very recently, Date et al. [270] proposed using quantum computers to accelerate the **training of machine learning models**. They formulated three machine learning problems (linear regression, support vector machine, and balanced \(k\)-means clustering) as QUBO problems and suggested solving them with adiabatic quantum computing. In this context, QAOA can be employed as an alternative method to find good approximate solutions to such problems.
This could potentially pave the way for using QAOA in the deep learning realm for the training of neural networks in the future. Other potential future directions from [271] include the experimental implementation of non-Gaussian gates, exploration of quantum advantages in the presence of decoherence, development of specialized QNNs, investigation of joint architectures, and deeper exploration of fundamental quantum physics concepts in QNNs.

### The Quantum Advantage of QAOA

The QAOA shows promising potential for realizing quantum advantage over classical algorithms in specific problem instances and under certain conditions. This approach targets classical optimization problems, promising increased efficiency and improved solution quality. However, this advantage is not yet fully realized due to various challenges, including noise and hardware limitations in near-term quantum devices and the competitiveness of state-of-the-art classical solvers. In terms of computational runtime efficiency, instances where QAOA has demonstrated superiority include solving the MaxCut problem on dense graphs, offering exponential acceleration for large Minimum Vertex Cover (MVC) problems, and achieving a superlinear quantum speedup compared to Simulated Annealing (SA) for the Maximum Independent Set (MIS) problem on specific graph instances. Moreover, QAOA has shown potential in other areas, such as unstructured search problems and Quantum Linear System Problems (QLSP), where it outperforms various classical and quantum algorithms. In terms of solution quality, multiple studies comparing QAOA to classical algorithms on various optimization problems, including MaxCut, Max-\(k\)XOR, and other Constraint Satisfaction Problems (CSPs), have revealed that QAOA can outperform classical algorithms under specific conditions or for certain problems. For instance, QAOA surpasses classical threshold algorithms for Max-\(k\)XOR problems when \(k>4\) in the large degree limit. Furthermore, compared to classical local algorithms, QAOA provides superior solutions for MaxCut on large-girth random regular graphs at depth \(p=11\) and beyond. Despite these encouraging results, other studies have found that classical algorithms can still match or surpass QAOA's performance in many scenarios. A simple modification of a classical algorithm, the Gaussian wave process, has achieved a larger improvement over random assignment than QAOA\({}_{1}\) in the asymptotic limit as \(D\rightarrow\infty\) for MaxCut on triangle-free \(D\)-regular graphs. There also exists a 2-local classical MaxCut algorithm that consistently outperforms QAOA\({}_{2}\) for all \(D\)-regular graphs of girth \(>5\). Furthermore, QAOA can encounter limitations in scenarios where it cannot outperform the best classical algorithm, such as solving the MaxCut problem on bipartite \(D\)-regular graphs. It can also face restrictions due to the locality constraint, which limits its algorithmic performance when the entire graph is not visible to the algorithm. When applied to problems with the Overlap Gap Property (OGP), such as the MIS problem on sparse random graphs or Max-\(k\)XOR with even \(k\geq 4\), QAOA can face obstructions limiting its performance. Despite these challenges, a potential quantum advantage is still achievable in some problems, particularly when QAOA surpasses sub-logarithmic depths, as classical Approximate Message Passing (AMP) algorithms exhibit suboptimality there.
Consequently, more research is needed to understand the extent of QAOA's capabilities and limitations and identify improvement areas or alternative optimization approaches. One area that can be improved to bolster the QAOA's efficacy is its ansatz design. Recent advancements in ansatz variants such as ab-QAOA, ADAPT-QAOA, and QAOAnsatz offer significant improvements over the standard QAOA. The ab-QAOA drastically reduces computation time and achieves faster convergence, with improvements increasing proportionally with problem size. The ADAPT-QAOA, applying the principle of shortcuts to adiabaticity, exhibits enhanced convergence speed and reduces the number of CNOT gates and optimization parameters by about half, streamlining quantum computation. The QAOAnsatz introduces more flexibility in defining parts of the ansatz, thereby expanding the range of solvable problems and enabling a larger and potentially more useful set of states to be represented than is possible with the original formulation. These advancements could accelerate the realization of quantum advantage in solving combinatorial optimization problems. Other ansatz variants, such as WS-QAOA, have also shown potential advantages over the standard QAOA at low depth, critical for implementation on NISQ devices. Likewise, the FALQON+ algorithm has illustrated a significant increase in approximation ratio and success probability with only a minor increase in circuit depth and noise degradation, making it suitable for NISQ devices. Furthermore, FQAOA has demonstrated substantial performance advantages in portfolio optimization problems by effectively handling constraints. Other approaches like Quantum Dropout and ST-QAOA have also indicated improvements in performance for specific types of combinatorial optimization problems. The performance of QAOA is also strongly tied to its depth (i.e., the number of layers), as a larger depth implies more parameters to optimize. Finding optimal parameters often requires polynomial time, making it difficult to achieve a quantum speedup with QAOA, even at low depths. Additional complications, such as barren plateaus, aggravate this challenge and necessitate improved parameter optimization strategies. Therefore, parameter optimization is crucial for QAOA's quantum advantage. Various techniques have been proposed to effectively initialize QAOA parameters, achieving better results than random initializations. Many works have also shown that when coupled with diverse optimization strategies such as gradient-based methods, gradient-free techniques, and certain machine learning approaches, QAOA often displays rapid convergence to optimal solutions and better solution quality. Moreover, parameter concentration in certain problem instances, as well as symmetries, can be exploited to further increase the efficiency of parameter optimization. There remain, however, substantial obstacles to achieving a quantum advantage in more practical settings, i.e., when running QAOA on real quantum computers. Challenges related to various noise sources, including local and correlated errors, present significant hurdles. While strategies to mitigate some of these errors have been proposed, the error rates of quantum devices need to continue to improve in order to address issues such as noise-induced barren plateaus. Therefore, work towards achieving consistent quantum advantage with QAOA remains an ongoing and active area of research. Nonetheless, QAOA exhibits strong compatibility with NISQ devices, enhancing its near-term applicability.
Some QAOA variants display resilience to noise, making them well-suited for NISQ devices. Empirical evidence suggests that QAOA could effectively manage industry-relevant constrained-optimization problems on these quantum platforms, efficiently addressing scalability and compatibility issues. This versatility, combined with the successful application of QAOA to various real-world problems, underscores its adaptive nature and broad range of use cases.

### Open Questions and Future Directions

The QAOA represents a noteworthy milestone in quantum computing with its potential for addressing various optimization problems. Despite significant progress, numerous research opportunities remain in the realm of QAOA. Unlocking the full potential of this quantum optimization algorithm may require a combination of theoretical understanding, empirical studies, algorithmic improvements, parameter optimization, and a thorough exploration of entanglement and problem structures, among other factors.

Theoretical insights & mathematical frameworks: Enhancing our theoretical understanding of QAOA, primarily at higher depths, is fundamental. This includes developing rigorous theoretical frameworks that provide a comprehensive understanding of QAOA's behavior, performance, and inherent limitations compared to classical optimization algorithms. Further research is needed to understand the interplay between entanglement and circuit depth, particularly how they collectively impact QAOA's performance at higher depths. This will also facilitate the development of problem-specific algorithms and efficient ansatz designs. Research on how different problem structures interact with QAOA's functionality and performance is also essential. This understanding can offer valuable insights into improving performance and applicability across diverse problem sets. Some specific questions to be addressed in this regard include:

* A rigorous proof of QAOA achieving the Parisi value would be an important result. Basso et al. [176] conjectured that as \(p\to\infty\), QAOA would be able to achieve the Parisi value on random \(D\)-regular graphs as \(D\to\infty\). If true, this would mean QAOA could also optimally solve the SK model, as well as the MaxCut problem [182].
* The Overlap Gap Property (OGP) exhibited by certain problem instances has been shown to pose obstacles for QAOA in achieving optimal solutions. These challenges arise when QAOA does not see the whole graph (e.g., when operating at sub-logarithmic depths) [192], as well as when it is able to see the whole graph (e.g., on fully-connected \(k\)-spin models) [176]. In comparison, the classical Approximate Message Passing (AMP) algorithm can provably find solutions arbitrarily close to the true optima of \(k\)-spin models in the absence of OGP (conjectured for \(k=2\)), but it also faces suboptimality when \(k\geq 4\) is even. It therefore remains an open question whether QAOA can achieve a quantum advantage in scenarios involving the OGP.
* For instances with exceedingly small spectral gaps, QAOA can overcome the adiabatic limitations. Zhou et al. [15] observed this for small problem sizes, but how this tendency could scale and allow QAOA to solve problems out of reach for adiabatic evolution is still unknown.
* Rajakumar et al. [76] proposed a method enabling the construction of arbitrary coupling operations on quantum spin systems, using global Ising operations and single-qubit bit flips.
This method exhibits promising scaling properties, suggesting potential advantages for dense, unweighted coupling operations like those in QAOA. However, an optimal operation sequence for arbitrary graphs remains elusive, likely pointing to an NP-hard problem. Future work may focus on developing efficient solutions for specific types of graphs, seeking tighter upper bounds on construction, and formally establishing this problem's NP-hard nature. Further research is also needed to explore how the complexity of compilation affects quantum algorithms when constructing arbitrary unweighted and weighted coupling graphs using global entangling operations in quantum spin systems.
* Boulebnane and Montanaro [187] derived analytical bounds on the average success probability of QAOA on random \(k\)-SAT in the limit of infinite problem size. Further research could explore QAOA's performance on other Boolean satisfiability problems and instances, and investigate the relationship between QAOA's success probability and running time.
* Dupont et al. [143] studied the growth and spread of entanglement resulting from optimized and randomized QAOA circuits, which could have implications for the simulation of QAOA circuits with tensor network-based methods. Further research could be conducted on these connections and their impact on overcoming barren plateaus.

Parameter optimization strategies and initialization techniques: Efficient methods for finding optimal parameters and initializing the algorithm are crucial for improving QAOA's performance, especially in the presence of noise and hardware limitations. Strategies include overcoming barren plateaus, exploiting parameter symmetries, and leveraging machine learning or reinforcement learning techniques. Furthermore, the process of parameter initialization significantly influences QAOA's overall performance. Enhancing strategies for optimal parameter initialization and understanding their impact on performance at higher depths are critical future research directions. In this direction, some promising avenues for future investigation include:

* It has been noticed that for similar graphs, the parameters of the QAOA ansatze share some similarities [181]. However, it is unclear whether this phenomenon is due to the small graphs that have been investigated or whether this is more of a universal property that connects the ansatze of closely related graphs.
* The performance of BFGS for larger parameter vectors (i.e., for \(p\geq 3\)) in QAOA optimization is unclear [128]. It would be interesting to evaluate the behavior and effectiveness of BFGS for parameter optimization in QAOA circuits with larger depths.
* How can parameter transferability and reusability be further utilized for optimizing QAOA performance? Galda et al. [115] provided a theoretical foundation for parameter transferability in QAOA. However, more research is needed to understand how this can be applied across a wide range of problem instances and optimization tasks.
* Explore additional ways to leverage parameter symmetries for more efficient QAOA performance. Shaydulin et al. [137] investigated the correlation between QAOA and the inherent symmetries of the target function, using machine learning techniques to predict QAOA performance accurately. Additional research could be conducted to improve the understanding of symmetry in objective functions, cost Hamiltonians, and the QAOA parameters themselves, leading to more efficient optimization processes.
* Can optimal QAOA parameters be derived analytically? Wang et al. [138] derived analytical expressions for the optimal parameters of the level-1 QAOA applied to MaxCut on general graphs. However, it is unclear whether this approach can be extended to higher values of \(p\) or other problem instances. Investigating the possibility of deriving analytical expressions for optimal QAOA parameters in various problem instances and optimization tasks could potentially simplify the search for optimal parameter values and improve the overall efficiency of the QAOA algorithm.
* The effectiveness of unsupervised machine learning approaches, such as clustering, in setting QAOA parameters has been demonstrated [19]. However, how well these methods can generalize to other problems and larger instances remains to be seen. The same goes for reinforcement learning [100, 133]. Investigating the applicability and generalization of unsupervised learning and RL methods for QAOA parameter optimization in a broader range of problem settings and larger problem instances would be valuable.
* Can better parameter optimization strategies mitigate the challenge of achieving a quantum speedup with QAOA, even at low depths, due to factors such as barren plateaus [71, 152]?

Understanding performance, benchmarking, and comparison with classical algorithms: Developing application-specific benchmarking methods and evaluating QAOA's performance across various problem instances will help determine its advantages over classical algorithms and guide researchers in choosing the most appropriate solver for a given optimization problem. **Benchmarking against classical algorithms** will help identify the conditions under which QAOA can consistently surpass classical benchmarks, which is crucial to harnessing its full potential. This includes exploring cases where QAOA can offer a quantum advantage. **Empirical studies** will also provide practical insights into the performance of QAOA. Simulator-based studies can shed light on how the algorithm performs in real-world scenarios. Further empirical investigations that complement theoretical studies can contribute to a more holistic understanding of the algorithm. Finally, opportunities exist to **adapt or enhance classical algorithms** for quantum optimization. This could lead to significant advancements in quantum computing, including novel solutions that outperform existing quantum optimization algorithms. Along these directions, some areas for further research include:

* Machine Learning (ML) models have been employed to predict QAOA's performance and identify situations where QAOA is most likely to outperform classical algorithms [132, 198]. Based on their findings, Moussa et al. [198] suggested exploring the potential of graph sparsification to enhance QAOA's performance. ML models like these provide valuable insights into the behavior of QAOA and offer guidance on leveraging its advantages, helping uncover opportunities for achieving quantum advantage.
* Marwaha and Hadfield [173] found that QAOA starts to outperform the threshold algorithm when \(k>4\). However, this does not rule out the possibility that a different local tensor algorithm will match or outperform QAOA at larger \(k\). Can QAOA outperform other local tensor algorithms at larger \(k\)?
* Larkin et al. [156] proposed a new metric for evaluating the runtime performance of QAOA and developed a method based on it to reduce the execution time of QAOA for solving MaxCut on 3-regular graphs.
The effectiveness and scalability of this technique on other problems remain to be investigated. Furthermore, studies on different performance metrics may provide more insights into how to improve QAOA's performance across different problems. * Can the observed quantum speedup in specific problem domains be extended to more general cases? For instance, the speedup shown by QAOA in the Maximum Independent Set (MIS) problem on 2D Rydberg atom arrays remains an open question for general cases [26]. **Noise and error effects, mitigation techniques, and fault tolerance:** Understanding the impact of various noise types on QAOA's performance is essential. Developing noise mitigation strategies, error suppression schemes, and fault-tolerant techniques will improve the practicality of QAOA in real-world settings with decoherence and hardware limitations. Specific strategies to **reduce noise** in QAOA and other Variational Quantum Algorithms (VQAs), such as optimizing SWAP networks, reducing the total number of CNOT gates, and leveraging equivalent circuit averaging, continue to be promising future directions. Techniques such as symmetry verification have been proposed for **mitigating errors** in QAOA. This approach has shown effectiveness in improving output state fidelity, expected objective function value, and probability of sampling the optimal solution. Some more specific potential areas for future research involve: * According to Campos et al. [216], local coherent dephasing noise can remove training saturation in layer-wise learning, potentially aiding the optimization process. However, it is unclear if there are other noise sources beyond the simple noise model that can play a similar role. Further investigation is needed to determine this. * It was observed in numerical simulation that QAOA performance improves as noise correlation strength increases at fixed local error rates, suggesting a certain degree of noise resilience [223]. However, further studies are required to test the generalizability of this result. * Optimizing SWAP networks can help reduce the negative impact of noise on QAOA and other VQAs [226]. More research should be done on improving SWAP network optimization techniques to enhance performance in practical settings. * Is it possible to mitigate Noise-Induced Barren Plateaus (NIBP) with novel error mitigation strategies [152]? * What are the prospects for quantum advantage of QAOA in the presence of noise and errors? For example, a quantitative analysis of the error rates required to maintain the advantage of QAOA in noiseless scenarios would provide valuable insights. To further address this question, an empirical demonstration of quantum advantage of QAOA on real quantum devices is crucial. **Hardware-specific challenges and scalability:** Addressing hardware limitations such as connectivity, decoherence, and gate error rates is vital for scaling up QAOA to solve larger problems. **Leveraging specific hardware** to enhance QAOA performance has shown potential. However, different architectures possess their specific advantages and disadvantages that need thorough exploration. Investigating QAOA's performance on different hardware platforms, including trapped ions, neutral atoms, superconducting qubits, photonic quantum computers, and the use of higher energy levels (qudits), will provide a deeper understanding of its capabilities and limitations. For example, the utilization of higher energy levels beyond qubits presents an interesting prospect.
In photonic quantum computers, qudits can be more efficiently mapped to the hardware for various problems. Tailoring algorithms for photonic quantum computing also presents many other advantages, such as potential improvements in terms of cluster state dimension compared to QAOA gate-based circuit algorithms. Along these lines, several promising opportunities for further study consist of the following: * A systematic investigation of the impact of compilation and of routing qubits with swap networks on QAOA's performance on real hardware is needed. This additional overhead can significantly impact the performance of the algorithm, especially when solving problems on graphs that differ from hardware connectivity [202]. * When using ML models to predict the performance of QAOA [198], adding some hardware-related features will assist in deciding whether or not to choose QAOA over classical algorithms on a specific quantum hardware. * Recent observations showed that a decomposed implementation could significantly reduce the performance of QAOA. Hence, it would be interesting to investigate the effect on QAOA of a direct hardware implementation of a combination of a controlled arbitrary phase and a SWAP gate [254]. * Dlaska et al. [240] concluded that it is not yet known how experimental investigations beyond numerical simulations for QAOA will perform using the innovative four-qubit Rydberg parity gate and parity architecture. * According to Proietti et al. [74], we should continue developing and tailoring algorithms for photonic quantum computing, as their MBQC QAOA for photonic quantum computers showed an up to 30-fold improvement in terms of cluster state dimension when compared to QAOA gate-based circuit algorithms. * Explore improvements in the surface code implementation, such as faster state distillation, to potentially produce quantum advantages on early fault-tolerant quantum processors for algorithms offering only quadratic speedups [153, 154]. **Alternative optimization approaches and problem-specific algorithms:** Exploring alternative optimization methods and tailoring algorithms to specific problem structures will undeniably enhance the performance and applicability of the QAOA across a diverse range of optimization problems. It is worth investigating strategies such as FALQON, FQAOA, and others. Furthermore, the concept of **shortcuts to adiabaticity** has played a crucial role in enhancing many ansatz designs. As such, exploring this concept could lead to more efficient algorithms with faster convergence and reduced resource requirements. In addition to these alternative optimization approaches, there is uncharted territory for potential improvements within the existing QAOA framework. This includes adopting "warm-starting" approaches, which could provide superior capabilities, unique features, and enhanced practical use. These algorithmic enhancements could contribute significantly to the ongoing evolution of QAOA, broadening its scope and effectiveness. * New ansatz designs: research suggests exploring ma-QAOA to simplify parameter optimization [77, 93], investigating ab-QAOA for scalability and quantum advantage in combinatorial optimization problems [80], and leveraging QAOAnsatz for a broader set of problems [84]. * Herrman [94] investigated the connection between ma-QAOA and Continuous-Time Quantum Walks (CTQW) on dynamic graphs, showing their equivalence.
Research could be done on investigating well-studied CTQW phenomena, such as hitting times, to improve our understanding of ma-QAOA and help find optimal parameters. * Further research on WS-QAOA [88] could improve the initialization of QAOA parameters based on the solution to the relaxed QUBO problem, potentially enhancing performance at low depths, which is particularly important for implementation on NISQ devices. * Research methods to deal with hard cases of combinatorial optimization problems where the energy landscape is rugged, such as the approach proposed by Wang et al. [91] using selective clause dropout. * Applying QAOA to various Constraint Satisfaction Problems (CSPs) can deepen our understanding of the algorithm's effectiveness and applicability. Evaluating its performance across these problems, especially those with unique properties, is a valuable future research direction. ## 6 A Practical Guide to QAOA The Quantum Approximate Optimization Algorithm is a versatile quantum algorithm that can be used to solve combinatorial optimization problems on quantum computers. This guide aims to provide researchers with detailed information on when and how to use QAOA effectively. It answers key questions such as which QAOA variant or ansatz to use for a specific problem, the benefits of using QAOA for different problem types, and how to tune and optimize the parameters of the QAOA ansatz. It also provides practical guidance on applying QAOA to real-world problems. ### Which QAOA Ansatz Variant Should I Use for My Problem? QAOA and its variants have been applied to a range of combinatorial optimization problems and various other problems relating to graph theory, constrained optimization, linear systems, and unstructured search. Some specific problems that QAOA and its variants have been successfully applied to include MaxCut, the Traveling Salesman Problem, \(k\)-Vertex Cover, Discrete Portfolio Rebalancing, the Sherrington-Kirkpatrick (SK) Hamiltonian, \(k\)-graph coloring, electric vehicle charging problems, and Max-\(k\)-Cut. In light of this, when selecting the appropriate QAOA ansatz variant for a given problem, it is vital to consider the problem type, hardware constraints, and the trade-off between specificity and generality. Some QAOA variants seem particularly suitable for problems with hard constraints that always need to be satisfied and soft constraints whose violations need to be minimized. They effectively solve optimization problems with rugged cost function landscapes and numerous local minima and have been demonstrated to sometimes outperform classical algorithms or provide some quantum advantage in such instances. In addition, QAOA's performance can benefit from leveraging problem symmetries, odd cycles, density, and other structural features. QAOA also seems to work well for hardware grid problems and problems where higher-depth versions are necessary for achieving satisfactory results. Below we summarize the key features of some notable QAOA ansatz variants and which problems they might be good for. **Standard QAOA**: This basic version works well for various combinatorial optimization problems like MaxCut, Minimum Vertex Cover (MVC), Constraint Satisfaction Problems (CSPs), and Maximum Independent Set (MIS). Use it as a starting point for problems with an unknown structure or simple instances. **Multi-Angle QAOA (ma-QAOA)**: This ansatz introduces new parameters into the circuit so that each element of the cost and mixer layers has its own angle.
It achieves better approximation ratios than vanilla QAOA and may require shallower circuits. It is suitable for combinatorial problems represented as graphs where a more complex parameter optimization is acceptable, as it may require shallower circuits but provides better or equal approximation ratios. The connection between ma-QAOA and Continuous-Time Quantum Walks (CTQW) on dynamic graphs might help find optimal parameters. Exploiting the natural symmetries of input graphs can reduce the number of ma-QAOA parameters by approximately 33% with little to no impact on the objective function. **QAOA+**: This ansatz augments the traditional \(p=1\) QAOA with additional multi-parameter problem-independent layers of parameterized ZZ gates and a layer of mixer X gates. It is suitable for problems like MaxCut on random regular graphs and achieves higher approximation ratios than \(p=1\) QAOA while keeping the circuit depth low. The added circuit depth beyond the vanilla QAOA grows only with the number of qubits, using a set of \(2N-1\) parameters for \(N\) qubits. QAOA+ outperforms the alternative multi-angle QAOA ansatz in most cases. **Digitized Counterdiabatic QAOA (DC-QAOA)**: This ansatz reduces computational complexity and circuit depth by utilizing counterdiabatic driving to speed up the optimization process. It is suitable for problems such as Ising models, classical optimization problems, and \(k\)-spin models. DC-QAOA seems to outperform the standard QAOA in all cases when applied to these problems. **Adaptive Bias QAOA (ab-QAOA)**: This ansatz incorporates adaptive bias fields into the mixer operators to accelerate convergence. The local fields (bias fields) are not optimized but updated according to a specific prescription. It is suitable for faster convergence and reduced computation time in combinatorial optimization problems and has a polynomially shorter computation time than vanilla QAOA. The improvement in computation time further increases with problem size. **Adaptive Derivative-Assembled Problem-Tailored QAOA (ADAPT-QAOA)**: This ansatz iteratively selects the QAOA mixer from a pre-defined pool of operators, maximizing the gradient of the commutator of the pool operator and the cost Hamiltonian over the ansatz of the previous step. It was reported to converge faster than standard QAOA while reducing the number of CNOT gates and optimization parameters by about 50% each, particularly when entangling gates are included in the operator pool. The drawback of this method is that the selection of mixing operators requires additional measurements depending on the size of the operator pool. **Recursive QAOA (RQAOA)**: This non-local variant of QAOA iteratively reduces the size of the problem. It is designed to address the limitations caused by the \(\mathbb{Z}_{2}\) symmetry of QAOA states and the geometric locality of the ansatz. It has shown promising results on NISQ devices and applies to combinatorial optimization problems like MaxCut. **Quantum Alternating Operator Ansatzes (QAOAnsatz)**: This ansatz alternates between a more general set of operators, allowing it to solve a broader range of problems, especially those with hard constraints defining feasible subspaces and soft constraints to minimize violations. QAOAnsatz encompasses many variants, such as the XY mixer and the Grover mixer, that can be tailored to different problems or constraints.
**Warm-Starting QAOA (WS-QAOA)**: This approach initializes QAOA parameters based on the solution to the relaxed QUBO problem (continuous variables instead of binary ones). It provides initial states with the best approximation guarantee available classically in polynomial time and can be incorporated into the workflow of recursive QAOA. **Feedback-based ALgorithm for Quantum OptimizatioN (FALQON)**: This algorithm uses qubit measurements to assign values to quantum circuit variational parameters constructively, enabling approximate solutions without classical optimization. While it was designed for the nearly fault-tolerant era where deep circuits are feasible, it can also be used as a warm-start technique for NISQ devices, improving the initialization of standard QAOA. **Fermionic QAOA (FQAOA)**: This approach is suitable for solving combinatorial optimization problems with constraints, utilizing fermion particle number preservation to intrinsically impose these constraints and using a fermion-based driver Hamiltonian for problem Hamiltonians with constraints. **Spanning Tree QAOA (ST-QAOA)**: In this ansatz, an approximate solution is constructed with \(r\) rounds of gates explicitly tailored to the given problem instance using insights from the classical algorithm in the pre-computation routine. When \(r=1\), the ST-QAOA is guaranteed to match the performance of the classical solver. As the number of rounds increases, the ST-QAOA approaches the exact solution to the MaxCut problem. It achieves the same performance guarantee as the classical algorithm and can outperform vanilla QAOA at low depths. **Ansatz Architecture Search (AAS)**: This approach optimizes the QAOA ansatz and variational parameters by searching the discrete space of quantum circuit architectures near QAOA. It starts with a greedy search strategy: at each level, candidate ansatzes obtained by removing two-qubit gates from the best ansatz of the previous level are scored, and the best of them is selected as the output of this level. This method has shown significant improvement in the probability of finding low-energy states while using fewer two-qubit gates. **Hardware-specific ansatz designs**: Tailor your ansatz for better performance on specific platforms such as trapped ions, neutral atoms, superconducting qubits, and photonic quantum computers. In the meantime, it is essential to note that while QAOA can be adapted and extended to various problems, it might not always be the most efficient or best-suited method for every problem. For example, problems that do not have a well-defined graph representation, do not fit into the combinatorial optimization framework, or are not easily represented by Hamiltonians are less likely to benefit from it. QAOA may also not be a good fit for certain problems due to performance limitations, locality constraints, and obstructions caused by specific properties of the problems [162, 83, 165]. For example, it might not be suitable for problems that require a large depth or high entanglement due to limitations in quantum hardware and noise-related issues. Finally, the algorithm may also be less suitable when the graph topology differs significantly from the quantum hardware's connectivity, as this may introduce substantial overhead in compiling and routing qubits.
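Before turning to problem-specific guidelines, the following minimal, self-contained sketch shows the standard-QAOA baseline in practice: a depth-\(p=1\) statevector simulation of QAOA for MaxCut, optimized with the gradient-free COBYLA method discussed later in this guide. The triangle graph, initial angles, and optimizer choice are illustrative assumptions of ours rather than a prescription from the surveyed works.

```python
import numpy as np
from itertools import product
from scipy.optimize import minimize

# MaxCut instance: triangle graph (illustrative choice)
n, edges = 3, [(0, 1), (1, 2), (0, 2)]

# Diagonal of the cost operator: cut size of each computational basis state
bits = np.array(list(product([0, 1], repeat=n)))          # qubit 0 = slowest-varying bit
cut = sum((bits[:, u] != bits[:, v]).astype(float) for u, v in edges)

def qaoa_energy(params):
    """Negative expected cut of the depth-1 QAOA state |gamma, beta>."""
    gamma, beta = params
    psi = np.full(2 ** n, 2 ** (-n / 2), dtype=complex)   # uniform superposition |+>^n
    psi = np.exp(-1j * gamma * cut) * psi                 # cost layer e^{-i gamma C} (diagonal)
    c, s = np.cos(beta), -1j * np.sin(beta)               # mixer e^{-i beta X} on each qubit
    for q in range(n):
        psi = np.moveaxis(psi.reshape([2] * n), q, 0)
        psi = np.moveaxis(np.stack([c * psi[0] + s * psi[1],
                                    s * psi[0] + c * psi[1]]), 0, q).reshape(-1)
    return -np.real(np.vdot(psi, cut * psi))              # minimize the negative expected cut

res = minimize(qaoa_energy, x0=[0.4, 0.4], method="COBYLA")
print("approximation ratio:", -res.fun / cut.max())
```

Swapping the `method` argument, or the energy function itself, is a cheap way to compare the optimizers discussed in the next subsection on small instances before moving to hardware.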
Since the choice of QAOA variant deeply depends on the specific problem instance and available quantum hardware, the following guidelines can be helpful: * MaxCut problems: Several QAOA variants have been suggested for MaxCut problems, including QAOA+, Warm-Starting QAOA (WS-QAOA), Recursive QAOA (RQAOA), Multi-Angle QAOA (ma-QAOA), Adaptive Bias QAOA (ab-QAOA), and Measurement Based Quantum Computing QAOA (MBQC QAOA). They all seem particularly well-suited for solving MaxCut problems, with multiple research papers mentioning their success in achieving a high approximation ratio. QAOA+ improves over the traditional \(p=1\) QAOA ansatz using additional layers of parameterized gates, providing higher approximation ratios. WS-QAOA leverages the best classical approximation guarantees, while RQAOA iteratively reduces the problem size, showing promise for NISQ devices. MBQC QAOA demonstrates improvements when applied to photonic quantum computers. The variant choice depends on the specific problem instance and available quantum hardware. * Combinatorial optimization problems with constraints: Adaptive Bias QAOA (ab-QAOA) and Fermionic QAOA (FQAOA) are suggested for these problems and are particularly applicable when the constraints are integral to the problem structure. ab-QAOA accelerates convergence by incorporating adaptive bias fields, while FQAOA utilizes fermion particle number preservation to impose constraints intrinsically. Grover Mixer QAOA (GM-QAOA) is suggested for CSPs, as it uses Grover-like selective phase shifting operators for efficient optimization and is not susceptible to Trotterization or Hamiltonian simulation errors. Other approaches might also be suitable; the choice of optimal QAOA variant depends on the problem's structure and constraints. * Ising models: Digitized Counterdiabatic QAOA (DC-QAOA) and Ansatz Architecture Search (AAS) have been proposed as suitable variants for Ising models. DC-QAOA reduces computational complexity by leveraging counterdiabatic driving, while AAS modifies both the QAOA ansatz and parameter selection process to improve low-energy state identification. DC-QAOA and AAS are not the only possible approaches for Ising models, but they are highlighted here for their unique approaches to reducing computational complexity and identifying low-energy states, respectively. * Graph-based problems: Recursive QAOA (RQAOA), Spanning Tree QAOA (ST-QAOA), and Graph Neural Network (GNN) initialization are recommended for graph-based problems. RQAOA iteratively reduces problem size and works well for graph-based problems, while ST-QAOA approaches the exact solution to the MaxCut problem as the number of rounds increases. GNN-based initialization can speed up inference time across graphs. Adaptive Derivative-Assembled Problem-Tailored QAOA (ADAPT-QAOA) is also a good choice for graph-based problems because it demonstrates faster convergence than the original QAOA due to shortcuts to adiabaticity. * Other problems: QAOA has also shown potential for solving problems such as the Sherrington-Kirkpatrick (SK) Hamiltonian problems, \(k\)-graph coloring and electric vehicle charging problems with global power constraints, the Exact Cover problem, the \(k\)-spin model, and ground-state energy calculations for small chemistry and material science problems. Quantum Alternating Operator Ansatzes (QAOAnsatz) have been proposed as a versatile extension of QAOA that works well for optimization problems with both hard and soft constraints. ### How to Optimize the QAOA Parameters?
Initialization and optimization of QAOA parameters are crucial for achieving good performance. Different QAOA ansatz designs and hardware platforms will require different parameter optimization methods. There is no universal solution for the selection of the parameter optimization method. Nevertheless, it is possible to extract valuable insights and broad recommendations for choosing an appropriate method for QAOA applications. #### Gradient-free methods Gradient-free methods are, in general, more suitable for problems with moderate circuit depths and limited computational resources. Gradient-free methods often employed include BOBYQA, COBYLA, NEWUOA, Nelder-Mead, PRAXIS, and SBPLX. BOBYQA within the APOSMM framework has shown the best performance for a fixed number of functional evaluations. **Strengths:** * Computationally efficient * Requires fewer function evaluations when compared to gradient-based methods * Some methods, such as SPSA and Powell's method, are less affected by noise than other methods **Weaknesses:** * Performance can be challenged with an increasing number of layers * Some methods, like COBYLA, Nelder-Mead, and Conjugate-Gradient, are significantly affected by noise **When to use:** Gradient-free methods can be appropriate when computational efficiency is important and you are working with problems with modest circuit depths. Some gradient-free methods may be preferred when noise levels are low or moderate. #### Gradient-based approaches Gradient-based approaches are more robust to variations in problems and noise but may require more computational resources. Often-employed gradient-based approaches include Gradient Descent, Stochastic Gradient Descent (SGD), Model Gradient Descent (MGD), and BFGS. **Strengths:** * Widely used for parameter optimization * Can handle problems with a smooth objective function * Can perform well even if the objective function is not smooth with respect to the error (policy-gradient-based reinforcement learning) * Some methods (e.g., AMSGrad and BFGS) perform better than gradient-free methods in the presence of shot-noise **Weaknesses:** * Can be computationally expensive * Vulnerable to noise in NISQ devices * May require many measurements for each gradient component **When to use:** Gradient-based approaches may be appropriate when the objective function is smooth and when you require a method that can handle a wide range of problems. These methods may also be preferred when noise levels are low or when working with problems requiring a robust optimization process. #### Machine Learning Approaches Machine Learning approaches can provide faster convergence and better optimization results but may require more training instances and computational resources. Some ML approaches include Gaussian Process Regression (GPR), Linear Regression (LM), Regression Tree (RTREE), Support Vector Machine Regression (RSVM), Long Short-Term Memory (LSTM) neural networks, Graph Neural Networks (GNNs), and Reinforcement Learning (RL).
**Strengths:** * Can exploit correlations and patterns among parameters * Can accelerate QAOA optimization * Can generalize across different problem instances and graph sizes **Weaknesses:** * Scalability can be an issue, particularly for more complex problems * May require many training instances for good performance **When to use:** Machine learning approaches can be appropriate when working with problems with correlations or patterns among parameters or when you require a method that can generalize across different problem instances and graph sizes. These approaches may also be suitable when working with large datasets or complex problems requiring advanced optimization techniques. #### Other considerations Certain optimization problems can showcase specific attributes such as barren plateaus, transferability and reusability of parameters, or the existence of parameter symmetries. These characteristics can be harnessed to enhance the optimization procedure. For instance, adopting layerwise learning strategies can aid in bypassing barren plateaus, and exploiting parameter symmetries can simplify the optimization process. Some general strategies and ideas include: **Warm-starting QAOA**: Initializing QAOA parameters based on the solution to the relaxed QUBO problem can provide a good starting point for optimization. It provides an advantage over standard QAOA at low depth, which is particularly important for NISQ devices. **FALQON's purely quantum optimization loop**: By using FALQON's measurement-based feedback loop, one can assign values to quantum circuit variational parameters, improving the initialization of standard QAOA. It is primarily designed for fault-tolerant quantum devices that do not yet exist, limiting current applicability. **FQAOA**: This method is designed to solve combinatorial optimization problems with constraints, utilizing fermion particle number preservation to impose these constraints intrinsically. **AAS**: This technique searches the discrete space of quantum circuit architectures near QAOA to find a better ansatz. **Layerwise learning strategy**: Layerwise learning strategies grow the circuit incrementally and update only subsets of parameters during optimization. This can help to avoid initializing on a plateau and reduce the probability of creeping onto a plateau during training. This can improve the optimization process and help QAOA converge to better solutions. **Parameter Transferability and Reusability**: Optimal QAOA parameters can be transferred and reused across different problem instances based on their local characteristics. This can improve the quality of the solution and reduce the number of evaluations required to reach it. **Constraint-based optimization**: Constraining QAOA circuit parameters to the range \((0,\pi)\) to exploit their symmetry can result in runtime acceleration of up to 5.5 times. **Parameter regression**: Using parameter regression to optimize QAOA parameters can achieve an acceleration of more than 1.23 times. **Leveraging Parameter Symmetries**: Parameter symmetries can simplify the optimization process and eliminate degeneracies in the parameter space. Exploiting these symmetries can make the search for optimal QAOA parameters more efficient. **Exploit natural symmetries**: Exploiting the natural symmetries of input graphs can reduce the number of parameters by approximately 33% with little to no impact on QAOA performance.
**Use Continuous-Time Quantum Walks**: Relate ma-QAOA to CTQW on dynamic graphs to leverage well-studied CTQW phenomena for finding optimal parameters. **Heuristic strategies for parameter initialization**: Optimal initialization of QAOA's parameters can be determined in \(O(\text{poly}(p))\) time, while random initialization would necessitate \(2^{O(p)}\) optimization runs to attain comparable performance. **Hardware-specific optimizations**: Consider the unique properties and limitations of your chosen quantum hardware platform when optimizing QAOA parameters. Depending on the hardware used, this may involve leveraging connectivity, qudits, global entangling operations, or higher energy levels. ### Practical Considerations for Implementing QAOA When applying QAOA to a problem, consider the following practical advice: * Choose an ansatz that suits the problem type and hardware design while balancing specificity and generality. * Employ Ansatz Architecture Search (AAS) if you seek to optimize both the QAOA ansatz and variational parameters. This method uses a greedy search strategy, exploring the space of quantum circuit architectures to find a more suitable ansatz for your problem. * Assess the impact of different mixer designs on your QAOA algorithm and the trade-offs between improved performance and longer circuit depth. * Experiment with different parameter initialization and optimization methods, such as heuristic strategies, parameter fixing, and Graph Neural Networks (GNNs), to find the best approach for your specific problem type and hardware platform. * Select an appropriate optimization algorithm for your problem, considering the trade-offs of gradient-free methods, gradient-based approaches, and machine learning techniques. * Explore the transferability and reusability of optimal parameters across different problem instances based on local characteristics of subgraphs. * Leverage parameter symmetries to simplify optimization, eliminate degeneracies in the parameter space, and improve QAOA performance. Investigate symmetries in objective functions, cost Hamiltonians, and QAOA parameters themselves. * To overcome barren plateaus in cost function landscapes, consider employing layerwise learning strategies, incremental circuit depth growth, or initializing closer to target parameters. * When working with large-scale problems or complex optimization landscapes, be prepared to employ advanced optimization techniques, hybrid quantum-classical approaches, or specialized ansatz variations to overcome challenges and achieve desired results. * To overcome noise and error challenges, consider implementing error mitigation techniques such as gate count reduction, SWAP network optimization, symmetry verification, and leveraging hardware-specific features. ## Acknowledgements We thank Dr. Michele Grossi and Michal Stechly for their invaluable discussions and insights that have greatly benefited this project. We would like to express our gratitude to the Quantum Open Source Foundation (QOSF) for organizing the mentorship program that enabled the formation of our team. We acknowledge the use of IBM Quantum services for this work. The views expressed are those of the authors, and do not reflect the official policy or position of IBM or the IBM Quantum team. In this paper we used ibm_oslo, ibm_lagos, ibm_nairobi and ibm_perth which are some of the IBM Quantum Falcon r5.11H devices. 
The authors disclosed receipt of financial support for the publication of this article from Quantum Neural Technologies S.A. **Author Contributions:** Conceptualization, KB; methodology, KB, AC, RL and AS; software, major contributions: KB, DB, AC, RL, AS; minor contributions: CC, KP; formal analysis and investigation, all authors contributed equally; writing--original draft, and writing--review and editing, all authors contributed equally; supervision, KB; project administration, KB, AC and RL.
2310.03435
Variational Inference for GARCH-family Models
The Bayesian estimation of GARCH-family models has been typically addressed through Monte Carlo sampling. Variational Inference is gaining popularity and attention as a robust approach for Bayesian inference in complex machine learning models; however, its adoption in econometrics and finance is limited. This paper discusses the extent to which Variational Inference constitutes a reliable and feasible alternative to Monte Carlo sampling for Bayesian inference in GARCH-like models. Through a large-scale experiment involving the constituents of the S&P 500 index, several Variational Inference optimizers, a variety of volatility models, and a case study, we show that Variational Inference is an attractive, remarkably well-calibrated, and competitive method for Bayesian learning.
Martin Magris, Alexandros Iosifidis
2023-10-05T10:21:31Z
http://arxiv.org/abs/2310.03435v1
# Variational Inference for GARCH-family Models ###### Abstract The Bayesian estimation of GARCH-family models has been typically addressed through Monte Carlo sampling. Variational Inference is gaining popularity and attention as a robust approach for Bayesian inference in complex machine learning models; however, its adoption in econometrics and finance is limited. This paper discusses the extent to which Variational Inference constitutes a reliable and feasible alternative to Monte Carlo sampling for Bayesian inference in GARCH-like models. Through a large-scale experiment involving the constituents of the S&P 500 index, several Variational Inference optimizers, a variety of volatility models, and a case study, we show that Variational Inference is an attractive, remarkably well-calibrated, and competitive method for Bayesian learning. Variational inference, Volatility, GARCH, Bayes ## I Introduction The classical estimation procedures for GARCH-family models are the frequentist maximum likelihood, quasi maximum likelihood, and the generalized method of moments approaches [6]. Recently, there has been a growing interest in using Bayesian estimation techniques, as they offer several advantages over the traditional approaches [7]. For instance, in the frequentist approach, models are compared with no other means than their likelihood, whereas Bayes factors and marginal likelihoods allow comparisons of non-nested models. Bayesian estimation provides reliable results in finite samples, and, e.g., can uncover the full value-at-risk density. Maximum likelihood estimators present some limitations when the errors are heavy-tailed and may not be asymptotically Gaussian [11], and positivity of the conditional variance and stationarity requirements can lead to complex non-linear inequalities, making constrained optimization cumbersome. For the Bayesian estimation of GARCH-family models, Monte Carlo (MC) sampling is the predominant approach [e.g., 11, 1, 28]. Indeed, the recursive nature of the conditional variance makes the joint posterior of unknown parametric form, and the choice of the sampling algorithm is crucial. The griddy Gibbs sampler has extensively been used in this context [e.g., 5, 3], along with importance sampling [e.g., 9, 13], and the Metropolis-Hastings (MH) algorithm [18, 8]. Different extensions of the MH algorithm have been proposed [e.g., 19], along with the use of alternative methods [e.g., 2]. For a broader overview, see, e.g., the surveys [7, 28], or the textbook [1]. The ability of Bayesian methods to address uncertainty via the posterior distribution gained much attention in Machine Learning (ML). In the last decade, sophisticated Bayesian methods have been advanced for training high-dimensional ML models, and the theory of Bayesian neural networks has been extensively developed [see, e.g., 15, for a review]. Under the complexity of typical ML models, sampling methods do not scale up in high dimensions and are difficult to apply. Variational Inference (VI) stands as a successful and feasible alternative, widely exploited in ML applications [12, 25, 14]. VI reduces the typical Bayesian integration problem to a simpler optimization problem aimed at finding the "best" approximation of the true posterior distribution in the sense described in Sec. II-B. Despite its consolidated use in ML, VI has not received much attention in econometrics and finance as a feasible Bayesian alternative to MC sampling.
In particular, the use of VI as a tool for the Bayesian estimation of GARCH-family models remains unaddressed. Though VI has been used in volatility forecasting with ML models [22, 21], there have been few self-contained VI applications concerning GARCH models [17, 16, 27]. This paper fills this gap and addresses the feasibility and appropriateness of VI as an alternative to MC sampling and maximum likelihood estimation by conducting extensive experiments on the constituents of the S&P500 index. We show how Gaussian VI can be implemented and applied at a large scale as an unconstrained optimization problem through appropriate parameter transforms. We validate the results over several in-sample and out-of-sample performance metrics. Along with a focused study on the Microsoft Inc. stock data, with different optimization algorithms, we show VI is an effective, robust, and competitive approach for the Bayesian estimation of various GARCH-family models. ## II Methods ### _GARCH models_ A major task of financial econometrics is modeling volatility in asset returns. Volatility is considered a measure of risk for which investors demand a premium for investing in risky assets. Empirical observations of financial returns reveal some stylized facts. Whereas returns are nearly uncorrelated, they contain higher-order dependences. The correlation of absolute and squared returns is positive and persistent. Autocorrelated daily volatility thus appears to be predictable, and Autoregressive Conditional Heteroskedasticity (ARCH) models provide the basis for the most popular parameterizations of this dependence. Let \(\varepsilon_{t}\) be a random variable whose mean and variance are defined conditionally on the information set \(\mathcal{F}_{t-1}\) (the \(\sigma\)-algebra generated by \(\varepsilon_{t-j},j\geq 1\)). For the ARCH model, \(\mathbb{E}(\varepsilon_{t}|\mathcal{F}_{t-1})=0\) and the conditional variance \(h_{t}=\mathbb{E}\!\left(\varepsilon_{t}^{2}|\mathcal{F}_{t-1}\right)\) is a parametric function of \(\mathcal{F}_{t-1}\)[26]. The sequence \(\{\varepsilon_{t}\}\) may be observed or be an error innovation sequence of an econometric model: \(\varepsilon_{t}=r_{t}-\mu_{t}(r_{t})\), with \(r_{t}\) an observable random variable (e.g., a daily return) and \(\mu_{t}(r_{t})\) the conditional expectation of \(r_{t}\) given \(\mathcal{F}_{t-1}\). The parametric form of the ARCH model reads: \[r_{t} =\mu+\varepsilon_{t},\] \[\varepsilon_{t} =h_{t}^{1/2}z_{t},\ \ z_{t}\sim\text{iid}\ \mathcal{N}(0,1),\] \[h_{t} =\omega+\alpha_{1}\varepsilon_{t-1}^{2},\] with \(t=1,\ldots,T\) and, to ensure \(h_{t}>0\) and identification, \(\omega>0\), \(0<\alpha_{1}<1\). We shall assume \(\mu\equiv 0\). \(h_{t}\) is the conditional and time-dependent volatility of \(\varepsilon_{t}\). The way \(\varepsilon_{t}\) is defined guarantees white noise properties, since \(z_{t}\) is a sequence of iid variables. Normality is a typical assumption for the iid sequence \(\{z_{t}\}\), but leptokurtic alternatives are also used. In the ARCH equation defining the parametric form for the conditional variance, the linear function of the squared innovation at \(t-1\) can be generalized to a higher-order ARCH\((q)\): \[h_{t}=\omega+\sum_{j=1}^{q}\alpha_{j}\varepsilon_{t-j}^{2},\] where \(\omega>0\), \(\alpha_{j}\geq 0\), with at least one \(\alpha_{j}>0\).
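For illustration, a minimal simulation of the ARCH(1) process just defined is sketched below; the parameter values and seed are arbitrary assumptions of ours, chosen only to satisfy \(\omega>0\) and \(0<\alpha_{1}<1\).

```python
import numpy as np

rng = np.random.default_rng(0)
T, omega, alpha1 = 1000, 0.1, 0.3             # illustrative values: omega > 0, 0 < alpha1 < 1

eps, h = np.zeros(T), np.zeros(T)
h[0] = omega / (1.0 - alpha1)                 # start at the unconditional variance
for t in range(T):
    if t > 0:
        h[t] = omega + alpha1 * eps[t - 1] ** 2
    eps[t] = np.sqrt(h[t]) * rng.standard_normal()   # eps_t = h_t^{1/2} z_t, z_t iid N(0,1)

r = eps                                        # returns, with mu = 0 as assumed in the text
```

The simulated levels are nearly uncorrelated, while the squared innovations display the positive, persistent autocorrelation mentioned among the stylized facts above.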
Note that the volatility, the object of modeling, is not observed: using \(\varepsilon_{t}^{2}\) is an immediate solution, but alternatives exist if, e.g., the data is available at intraday frequencies. For the ARCH family, the decay rate in the autocorrelation of \(\varepsilon_{t}^{2}\) is too rapid compared to the observed time series: the so-called Generalized ARCH (GARCH) is a predominant alternative. In a GARCH\((p,q)\) model, the conditional variance is not only a function of the lagged innovations but also of its own lags: \[h_{t}=\omega+\sum_{j=1}^{q}\alpha_{j}\varepsilon_{t-j}^{2}+\sum_{j=1}^{p}\beta_{j}h_{t-j}.\] By far the most widely used model has been the GARCH\((1,1)\). Sufficient conditions for the positivity of conditional variances are \(\omega>0\), \(\alpha_{j}\geq 0\), \(j=1,\ldots,q\) and \(\beta_{j}\geq 0\), \(j=1,\ldots,p\). Identifiability requires at least one \(\beta_{j}>0\) and one \(\alpha_{j}>0\), and for stationarity \(\sum\alpha_{j}+\sum\beta_{j}<1\). GARCH models have been extended and generalized in many different directions. Among these, the empirical evidence of asymmetry in volatility clustering motivates the GJR-GARCH [10] model, which assumes the response of the variance to a shock not to be independent of its sign: \[h_{t}=\omega+\sum_{j=1}^{q}\alpha_{j}\varepsilon_{t-j}^{2}+\sum_{j=1}^{o}\gamma_{j}I(\varepsilon_{t-j}>0)\varepsilon_{t-j}^{2}+\sum_{j=1}^{p}\beta_{j}h_{t-j},\] with \(I(\cdot)\) an indicator function, defines the GJR-GARCH\((p,o,q)\) (with \(o=0\) we simply write GJR-GARCH\((p,q)\)). It must hold \(\omega>0\), \(\alpha_{j}\geq 0\), \(\beta_{j}\geq 0\), \(\sum\alpha_{j}+\sum\gamma_{j}\geq 0\), and \(\sum\alpha_{j}+1/2\sum\gamma_{j}+\sum\beta_{j}<1\). The Exponential GARCH (EGARCH) model is another popular extension. The family of EGARCH\((p,q)\) models can be defined with \[\log h_{t}=\omega+\sum_{j=1}^{q}g_{j}(z_{t-j})+\sum_{j=1}^{p}\beta_{j}\log h_{t-j}.\] In our analyses we adopt the version of [20] where \(g_{j}(z_{t-j})=\alpha_{j}z_{t-j}+\psi_{j}(|z_{t-j}|-\mathbb{E}(|z_{t-j}|))\). The model does not impose any restriction on the parameters because, since the equation is on the log variance instead of the variance itself, the positivity of the variance is automatically satisfied. This is a big advantage in model estimation. For a concise presentation of the advantages and limitations of the EGARCH model, refer, e.g., to [26]. By including \(\gamma_{j}I(\varepsilon_{t-j}>0)\varepsilon_{t-j}^{2}\), \(j=1,\ldots,o\) terms in the above conditional variance equation, one defines the GJR-EGARCH\((p,o,q)\). In our analyses, we furthermore adopt the Fractionally Integrated GARCH (FIGARCH). The FIGARCH model [4] conveniently explains the slow decay in autocorrelation functions of squared observations of typical daily return series. With the FIGARCH, the effect of the lagged \(\varepsilon_{t}^{2}\) on \(h_{t}\) decays hyperbolically as a function of the lag length. The FIGARCH\((p,d,m)\) process is defined as: \[(1-L)^{d}\phi(L)\varepsilon_{t}^{2}=\bar{\omega}+(1-\beta(L))v_{t},\] where \(L\) is the lag operator, \(\phi(L)=\sum_{j=1}^{m-1}\phi_{j}L^{j}\), \(\beta(L)=\sum_{j=1}^{p}\beta_{j}L^{j}\), \(v_{t}=\varepsilon_{t}^{2}-h_{t}\), and \(d\) is the order of fractional differencing that guides the long-memory properties of the process [26].
Of relevance for estimation is its equivalent ARCH\((\infty)\) representation of the model: \[h_{t}=\omega+\sum_{k=1}^{\infty}\lambda_{k}\varepsilon_{t-k}^{2}, \tag{1}\] where \(\omega>0\), and the \(\lambda_{k}\geq 0\) are recursively defined. For the FIGARCH \((1,d,1)\), \(\delta_{1}=d\), \(\lambda_{1}=\phi-\beta+d\), \(\delta_{k}=\frac{k-1-d}{k}\,\delta_{k-1}\), \(\lambda_{k}=\beta\lambda_{k-1}+\delta_{k}-\phi\delta_{k-1}\), with the constraints \(\omega>0\), \(0\leq d\leq 1\), \(0\leq\phi\leq(1-d)/2\), \(0\leq\beta\leq d+\phi\), sufficient to ensure the positivity of the conditional variance [4]. #### II-A1 Estimation With a possibly misspecified but convenient standard likelihood function and the assumption that the dynamic of the volatility process is correctly specified, the models described earlier are generally estimated via Quasi Maximum Likelihood (QML). Under a Gaussian likelihood, the QML objective generally reads: \[\ell(\mathbf{\nu})=\sum_{t=1}^{T}\biggl{(}\log h_{t}(\mathbf{\nu})+\frac{\varepsilon_{t}^{2}}{h_{t}(\mathbf{\nu})}\biggr{)}, \tag{2}\] where \(\mathbf{\nu}\) collects all the relevant parameters, e.g., for the GARCH(1,1), \(\mathbf{\nu}=(\omega,\alpha_{1},\beta_{1})\), and the dependence of the conditional variance on it is made explicit. Constrained gradient descent procedures are effective for minimizing (2). Sec. II-C discusses using parameter transforms to perform unconstrained optimization. Eq. (2) implies a recursive relation whose implementation is expensive. For initialization, it is common to back-cast \(\max\{p,o,q\}\) values with the average value of \(\{r_{t}^{2}\}\). ### _Variational Inference_ #### II-B1 General principle Let \(y\) denote the data and \(p(y|\mathbf{\theta})\) the likelihood of the data based on a postulated model with \(\mathbf{\theta}\in\Theta\) a \(d\)-dimensional vector of model parameters. Let \(p(\mathbf{\theta})\) be the prior distribution on \(\mathbf{\theta}\). The goal of Bayesian inference is the posterior distribution \(p(\mathbf{\theta}|y)=p(y,\mathbf{\theta})/p(y)\), where \(p(y)=\int_{\Theta}p(y|\mathbf{\theta})p(\mathbf{\theta})d\mathbf{\theta}\). Bayesian inference is generally difficult since the marginal likelihood \(p(y)\) is often intractable and of unknown form, and Variational Inference (VI) is an attractive alternative. VI is an approximate method that approximates the posterior distribution by a probability density \(q(\mathbf{\theta})\) (called the variational distribution) belonging to some tractable class of distributions \(\mathcal{Q}\). VI thus turns the Bayesian inference problem into that of finding the best approximation \(q^{\star}(\mathbf{\theta})\in\mathcal{Q}\) to \(p(\mathbf{\theta}|y)\) by minimizing the Kullback-Leibler (KL) divergence from \(q(\mathbf{\theta})\) to \(p(\mathbf{\theta}|y)\): \[q^{\star}=\operatorname*{arg\,min}_{q\in\mathcal{Q}}\text{KL}(q||p(\mathbf{\theta}|y))=\operatorname*{arg\,min}_{q\in\mathcal{Q}}\int q(\mathbf{\theta})\log\frac{q(\mathbf{\theta})}{p(\mathbf{\theta}|y)}d\mathbf{\theta}.\] It can be shown that the KL minimization is equivalent to the maximization of the so-called Lower Bound (LB) on \(\log p(y)\) [e.g., 27]: \[\mathcal{L}(q)\coloneqq\int q(\mathbf{\theta})\log\frac{p(y|\mathbf{\theta})p(\mathbf{\theta})}{q(\mathbf{\theta})}d\mathbf{\theta}=\mathbb{E}_{q}[h_{\mathbf{\zeta}}(\mathbf{\theta})], \tag{3}\] with \(h_{\mathbf{\zeta}}(\mathbf{\theta})=\log p(y|\mathbf{\theta})+\log p(\mathbf{\theta})-\log q_{\mathbf{\zeta}}(\mathbf{\theta})\).
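As a concrete frequentist reference point for what follows, here is a minimal sketch of QML estimation of a GARCH(1,1) by minimizing (2) over unconstrained parameters, using exponential/logistic transforms of the kind formalized in Sec. II-C; the simulated data, initialization, and optimizer choice are illustrative assumptions of ours.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit as f          # logistic function

rng = np.random.default_rng(1)

def simulate_garch(T, omega, alpha, beta):
    # Simulate r_t = eps_t with Gaussian innovations and GARCH(1,1) variance
    eps, h = np.zeros(T), np.zeros(T)
    h[0] = omega / (1.0 - alpha - beta)       # unconditional variance
    for t in range(T):
        if t > 0:
            h[t] = omega + alpha * eps[t - 1] ** 2 + beta * h[t - 1]
        eps[t] = np.sqrt(h[t]) * rng.standard_normal()
    return eps

def inv_transform(theta):
    # omega > 0, 0 < alpha < 1, 0 < beta < 1 - alpha (Proposition 2 below, with o = 0)
    omega, alpha = np.exp(theta[0]), f(theta[1])
    return omega, alpha, f(theta[2]) * (1.0 - alpha)

def qml_loss(theta, r):
    omega, alpha, beta = inv_transform(theta)
    h = np.empty(len(r))
    h[0] = np.mean(r ** 2)                    # back-cast initialization
    for t in range(1, len(r)):
        h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    return np.sum(np.log(h) + r ** 2 / h)     # Eq. (2), with eps_t = r_t since mu = 0

r = simulate_garch(2000, omega=0.1, alpha=0.1, beta=0.8)
res = minimize(qml_loss, np.zeros(3), args=(r,), method="BFGS")
print(inv_transform(res.x))                   # should roughly recover (0.1, 0.1, 0.8)
```

Working on the unconstrained scale in this way is exactly what makes the Gaussian variational approximation of the next subsections applicable.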
In fixed-form VI, the parametric form of the variational posterior is set. Typically, the target is a Gaussian distribution of mean \(\mathbf{\mu}\) and covariance \(\Sigma\), and \(q_{\mathbf{\zeta}}\) lies in the set \(\mathcal{Q}\) of Gaussian distributions, with \(\mathbf{\zeta}=\{\mathbf{\mu},\text{vec}(\Sigma)\}\) a vector of parameters. VI seeks the parameter \(\mathbf{\zeta}^{\star}\) optimizing (3). The standard approach for maximizing the LB is based on a stochastic gradient descent update, whose basic form is \[\mathbf{\zeta}_{t+1}=\mathbf{\zeta}_{t}+\delta\,\left[\mathcal{I}_{\mathbf{\zeta}}^{-1}\hat{\nabla}_{\mathbf{\zeta}}\mathcal{L}(q_{\mathbf{\zeta}})\right]\Bigr{|}_{\mathbf{\zeta}=\mathbf{\zeta}_{t}}, \tag{4}\] where \(t\) denotes the iteration, \(\delta\) the step size, and \(\hat{\nabla}_{\mathbf{\zeta}}\mathcal{L}(q_{\mathbf{\zeta}})\) a stochastic estimate of the Euclidean gradient. In place of Euclidean gradients, the recent literature adopts natural gradients, leading to improved step directions by accounting for the information geometry of the variational distribution [see, e.g., 15]. With natural gradients, \(\mathcal{I}_{\mathbf{\zeta}}\) is the corresponding Fisher Information Matrix; otherwise, it is the identity matrix \(I\), of size equal to the number of trainable parameters \(d\). #### II-B2 Algorithms A major aspect of implementing (4) is the gradient computation. Methods requiring the actual computation of the gradients of the loss, such as the reparametrization trick [12], are expensive to implement at a large scale within the recurrent form of the likelihood (2). Furthermore, the use of automatic differentiation is not a widespread practice in econometrics and finance, which largely adopt numerical differentiation. The approaches discussed here rely on the use of the log-derivative trick for evaluating the gradient of the expectation \(\mathbb{E}_{q_{\mathbf{\zeta}}}[h_{\mathbf{\zeta}}(\mathbf{\theta})]\) as an expectation of a gradient: \[\hat{\nabla}_{\mathbf{\zeta}}\mathcal{L}(q_{\mathbf{\zeta}})=\mathcal{I}_{\mathbf{\zeta}}^{-1}\mathbb{E}_{q_{\mathbf{\zeta}}}[\nabla_{\mathbf{\zeta}}[\log q_{\mathbf{\zeta}}(\mathbf{\theta})]\,h_{\mathbf{\zeta}}(\mathbf{\theta})]. \tag{5}\] Algorithm 1 sketches the gradient-free optimization approach. Note that at each iteration, the expectation in (5) is approximated with \(S\) samples from the posterior \(q_{\mathbf{\zeta}_{t}}\). Different optimization algorithms differ in how \(\mathbf{\zeta}\) is defined (e.g., whether it updates a natural parameter), in how natural-gradient computations are performed, and in the adoption of alternative forms for (4) (e.g., using a retraction in manifold optimization). ML research widely adopts a Gaussian prior of zero mean and covariance matrix \(\tau I\), with \(\tau>0\).
```
Set hyperparameters (here \(\beta\), \(S\), \(\tau\)), \(t=0\)
Set initial values \(\mathbf{\zeta}_{0}\)
repeat
    Simulate \(\mathbf{\theta}_{s}\sim q_{\mathbf{\zeta}_{t}}\), for \(s=1,\ldots,S\)
    \(h_{\mathbf{\zeta}_{t}}(\mathbf{\theta})=\log p(\mathbf{\theta})+\log p(y|\mathbf{\theta})-\log q_{\mathbf{\zeta}_{t}}(\mathbf{\theta})\)
    \(\hat{\nabla}_{\mathbf{\zeta}_{t}}\mathcal{L}=\frac{1}{S}\sum_{s=1}^{S}\nabla_{\mathbf{\zeta}}\log q_{\mathbf{\zeta}}(\mathbf{\theta}_{s})|_{\mathbf{\zeta}=\mathbf{\zeta}_{t}}\times h_{\mathbf{\zeta}_{t}}(\mathbf{\theta}_{s})\)
    \(\mathbf{\zeta}_{t+1}=\mathbf{\zeta}_{t}+\beta\hat{\nabla}_{\mathbf{\zeta}_{t}}\mathcal{L}\)
    \(t=t+1\)
until stopping criterion is met
```
**Algorithm 1** General form of a gradient-free VI optimizer We briefly introduce the four state-of-the-art optimizers adopted in the empirical analysis. The gradient in (5) is often called a black-box gradient. Despite the terminology, the black-box approach is not to be understood as an opaque mechanism, but as a transparent and accessible solution for computing the lower bound's derivatives without explicitly requiring the model's derivatives. The expectation in (5), as in Algorithm 1, is computed as an average of products between the easy-to-derive gradients of the variational log-likelihood \(\nabla_{\mathbf{\zeta}}\log q_{\mathbf{\zeta}}(\mathbf{\theta}_{s})\) computed at \(\mathbf{\zeta}=\mathbf{\zeta}_{t}\) and the \(h\)-function \(h_{\mathbf{\zeta}_{t}}(\mathbf{\theta}_{s})\), so that the computation of \(\hat{\nabla}_{\mathbf{\zeta}_{t}}\mathcal{L}\) involves only \(h\)-function queries, and not its gradients w.r.t. \(\mathbf{\zeta}\). Black-box VI (BBVI) [25] uses the rule (4) applied to Euclidean black-box gradients, computed as in (5). Quasi-Black-box VI (QBVI) [17] extends BBVI using natural gradients. QBVI relies on a natural-parameter parametrization of the variational posterior enabling natural-gradient updates without requiring the explicit computation and inversion of the Fisher matrix. This is a relevant computational advantage. BBVI and QBVI are broadly applied under a diagonal covariance matrix specification and a log-variance parametrization, as they cannot guarantee the positive definiteness of the variational covariance matrix. Conversely, the two are of low complexity as matrix operations (especially inversion) are straightforward. Manifold Gaussian Variational Bayes (MGVB) is a black-box approach, boosted by natural gradients, relying on manifold optimization to grant the positive definiteness of the full covariance matrix [27]. MGVB solves the positive definiteness issue while allowing for the additional modeling flexibility provided by its full covariance specification. Certain theoretical issues and some approximations that MGVB relies upon are resolved by the Exact Manifold Gaussian Variational Bayes (EMGVB) approach, which further improves the computation of the natural gradients [16]. ### _Transformations_ It is clear that the application of Gaussian VI is problematic for the heavily constrained volatility models of Sec. II-A. For instance, a Gaussian posterior is incompatible with the \(\omega>0\) or the \(0<d<1\) requirements and plausibly with a Gaussian covariance structure (Gaussian copula). Moreover, the adoption of Gaussian priors is also inadequate. These issues can be fixed with appropriate parameter transforms. In Algorithm 1, two steps are critical: (i) sampling from the Gaussian variational posterior and (ii) evaluating the model log-likelihood.
By adopting the VI Gaussian framework, the unconstrained components of a sample \(\mathbf{\theta}\) from \(q_{\mathbf{\zeta}=\{\mathbf{\mu},\text{vec}(\Sigma)\}}\) need to be appropriately transformed into the valid constrained space for evaluating the log-likelihood. This is done, e.g., in [27] for the GARCH(1,1), and is aligned with the well-adopted rationale for VI in medium-scale ML models of [14]. Let \(\mathbf{\nu}\) denote a \(d\)-dimensional parameter parametrizing a GARCH-family model \(m\), and \(\mathcal{C}_{m}\) the constrained parameter space where \(\mathbf{\nu}\) lives. Let \(\psi_{m}:\mathcal{C}_{m}\mapsto\mathbb{R}^{d}\) be a transform that maps \(\mathbf{\nu}\in\mathcal{C}_{m}\) to \(\mathbf{\theta}\in\mathbb{R}^{d}\). Let \(\psi_{m}^{-1}\) denote the corresponding inverse transform, i.e., \(\mathbf{\nu}=\psi_{m}^{-1}(\mathbf{\theta})\). This is the relevant transform in VI; we will show that such \(\psi_{m}^{-1}\) exists and is of simple form for the models in Sec. II-A. Through \(\psi_{m}^{-1}\) we can apply the update (4), map posterior samples \(\mathbf{\theta}\sim q_{\mathbf{\zeta}}\) into \(\mathcal{C}_{m}\) as \(\psi_{m}^{-1}(\mathbf{\theta})\), evaluate the likelihood, and approximate the expectation in (5). Similarly, we can, e.g., compare the QML estimates with the mean of transformed posterior samples, interpretable as approximations of the mean of the transformed posterior living in \(\mathcal{C}_{m}\): \[\mathbb{E}_{q_{\mathbf{\zeta}^{\star}}}\left[\psi_{m}^{-1}(\mathbf{\theta}^{\star})\right]\approx\frac{1}{S}\sum_{s=1}^{S}\psi_{m}^{-1}(\mathbf{\theta}_{s}^{\star}),\,\mathbf{\theta}_{s}^{\star}\sim q_{\mathbf{\zeta}^{\star}}. \tag{6}\] In principle, QML could optimize \(\ell(\psi_{m}^{-1}(\mathbf{\theta}))\), yet the use of constrained optimization is preponderant (e.g., in Python's arch and R's rugarch packages), so that parameter transforms are not relevant for standard QML estimation. Indeed, we are unaware of any work presenting such transformations. As fundamental for applying VI, and for future reference, we summarize them in the following propositions. For a certain parameter \(\lambda\) of the model \(m\), let \(\theta_{\lambda}\) denote its corresponding element in \(\mathbf{\theta}\). Let \(f\) denote the logistic function. **Proposition 1** (Inverse transforms for the FIGARCH): _The FIGARCH constraints \(\omega>0\), \(0\leq d\leq 1\), \(0\leq\phi\leq(1-d)/2\), \(0\leq\beta\leq d+\phi\), are satisfied by the following inverse transforms:_ \[\omega=\exp(\theta_{\omega}), d=f(\theta_{d}),\] \[\phi=f(\theta_{\phi})(1-d)/2, \beta=f(\theta_{\beta})(\phi+d).\] Proof.: The result follows immediately from [4, footnote 19]. **Proposition 2** (Inverse transforms for the GJR-GARCH): _The GJR-GARCH(p,o,q) constraints \(\omega>0\), \(\alpha_{i}\geq 0\), \(\sum\gamma_{k}+\sum\alpha_{i}\geq 0\), \(\beta_{j}\geq 0\), \(\sum(\alpha_{i}+1/2\gamma_{k}+\beta_{j})<1\), for \(j=1,\ldots,p\), \(k=1,\ldots,o\), \(i=1,\ldots,q\), are, for the GJR-GARCH(1,1) model, satisfied by the following inverse transforms:_ \[\omega=\exp(\theta_{\omega}), \alpha=f(\theta_{\alpha}),\] \[\gamma=f(\theta_{\gamma})(2(1-\alpha)+\alpha)-\alpha, \beta=f(\theta_{\beta})(1-\alpha-1/2\gamma).\] _For the general GJR-GARCH\((p,o,q)\) case, inverse transforms can be computed as in Algorithm 2._ Proof.: The transform for \(\theta_{\omega}\) is obvious.
As \(\gamma\) and \(\beta\) are still to be determined, the constraints imply that \(\alpha\) can lay anywhere in \([0,1]\), so \(\alpha=f(\theta_{\alpha})\). It is required that \(\gamma+\alpha>0\), and \(\alpha+\beta+1/2\gamma<1\). Yet \(\beta\) is to be determined, so \(1/2\gamma<1-\alpha\). The two give \(-\alpha<\gamma<2(1-\alpha)\): we first map \(\theta_{\gamma}\) to the interval \((0,2(1-\alpha)+\alpha)\) and then shift it by \(-\alpha\). I.e., \(\gamma=f(\theta_{\gamma})[2(1-\alpha)+\alpha]-\alpha\). Now map \(\theta_{\beta}\) into \((0,1-\alpha-1/2\gamma)\), that is, \(\beta=f(\theta_{\beta})(1-\alpha-1/2\gamma)\). With \(p\geq 0,o\geq 0,q\geq 0\), the same interval-partitioning reasoning is sequentially repeated. The last proposition applies to the ARCH (\(o=p=0\)) and GARCH models (\(o=0\)) as special cases. Similarly, one can transform the possibly constrained trainable parameters of any postulated distribution for the iid \(\{z_{t}\}\) innovations. For example, for a \(GED(\lambda)\) distribution, \(\lambda=2+\theta_{\lambda}^{2}+\epsilon\), where \(\epsilon>0\) is a small pedestal granting that \(\lambda>2\) holds strictly. ```
\(s=1-\epsilon\);  \(\omega=\exp(\theta_{\omega})\)
for \(i=1,\ldots,q\) do
    \(\alpha_{i}=f(\theta_{\alpha_{i}})\,s\);  \(s=s-\alpha_{i}\)
end for
for \(i=1,\ldots,o\) do
    if \(q\geq i\) then \(\gamma_{i}=f(\theta_{\gamma_{i}})(2s+\alpha_{i})-\alpha_{i}\)
    else \(\gamma_{i}=f(\theta_{\gamma_{i}})\,2s\)
    end if
    \(s=s-\gamma_{i}/2\)
end for
for \(i=1,\ldots,p\) do
    \(\beta_{i}=f(\theta_{\beta_{i}})\,s\);  \(s=s-\beta_{i}\)
end for
``` **Algorithm 2** Inverse transformation for the GJR\((p,o,q)\) Fig. 1: Lower bound optimization (left) and posterior distribution of the \(\gamma\) parameter (right). GJR-GARCH(1,1), Microsoft Inc. data. ## III Experiments ### _Data_ For our empirical analyses, we use daily close-to-close log returns for the constituents of the S&P500 index. Our data covers 1383 trading days, from 1 January 2018 to 30 June 2023, divided into train and test sets with a 75%-25% split following chronological order. We use 488 stocks, since some constituents changed and their time series are incomplete. ### _Models and optimization_ To assess how satisfactory VI is in volatility modeling, we adopt the following volatility models: ARCH(1), GARCH(1,1), GJR-GARCH(1,1), EGARCH(0,1), EGARCH(1,1), GJR-EGARCH(1,1), FIGARCH(1,d,1). The case study on the Microsoft Corp. data additionally includes the GARCH(2,1), EGARCH(2,1), and the FIGARCH(0,d,1). The analyses adopt the BBVI, QBVI, MGVB, and EMGVB optimizers for VI (the first two under a diagonal variational covariance matrix). For the FIGARCH models, we implement (1) by the method of [23]. As a baseline for comparison, we adopt QML estimates and a Monte Carlo Markov Chain sampler (MC). An MC reference for VI is advisable as it provides a benchmark for highlighting biases and assessing the quality of the Gaussian variational approximation. For consistency, in VI, we adopt the same set of hyperparameters for all the experiments and optimizers. In particular, we use a learning rate of \(0.005\), \(50\) MC draws for approximating the expectation (5), a diagonal normal prior of unit variance, and initial values \(\mathbf{\mu}_{0}=0\), \(\Sigma_{0}=0.1I\). To increase the stability of the learning process, we update the gradients with a momentum factor of \(0.4\). Both MC and VI algorithms are run for a longer-than-required number of iterations. This avoids tuning the number of iterations on a case-by-case basis (which is unfeasible with hundreds of stocks) and provides reasonable guarantees that the algorithms converged.
For VI, we observe that typically 1500 iterations are sufficient for the LB to reach a plateau (Fig. 1), yet we terminate the training after 2500 iterations. As opposed to ML, in statistics overfitting is a concern addressed at the model-selection stage: for example, in maximum likelihood estimation, the dynamics of the likelihood on a (usually nonexistent) test set are ignored. Early-stopping criteria for MC/VI based on the test loss would lead to non-comparability with the fully in-sample optimized maximum likelihood estimates. The relevant data and codes for the experiments are available at github.com/mmagris/GARCHVI. ### _Results_ #### III-C1 General results To assess the effectiveness of VI as a Bayesian procedure, we adopt four performance metrics on the training and test sets. For VI, performances are computed as averages over \(7,000\) inverse-transformed samples from the estimated variational posterior \(q_{\mathbf{\zeta}^{*}}(\mathbf{\theta})\); for MC, over the last \(7,000\) samples of the Markov chain. The metrics are the Negative Log-Likelihood (NLL), the Root Mean Squared Error (RMSE), the Mean Absolute Deviation (MAD), and the QLIKE loss [24]. For the last three, as proxies for the observed conditional variances, we adopt squared returns [24]. Table I presents the overall estimation results for the 488 stocks. For a performance metric \(M_{E}^{x}\) computed on a data subsample \(x\) as the result of an estimation procedure \(E\), the entries in the table correspond to mean performance deviations from the QML benchmark expressed as percentages, i.e., \(100(M_{E}^{x}/M_{QML}^{x}-1)\), and their standard deviations across the S&P 500 stocks. We note the predominance of positive signs, indicating that QML is, overall, the preferred estimation approach from a merely quantitative perspective. However, VI methods can sometimes outperform QML in certain model/loss combinations. Clearly, the train NLL deviations are always positive, confirming that the QML estimates are optimal in this sense. At a general level, the typical magnitude of the ratios is in the sub-1% order, indicating that the MC/VI estimation procedures are indeed effective w.r.t. QML and each other. Among the VI optimizers, we do not observe patterns indicating the dominance of one optimizer over another, implying that they all determine a comparable variational approximation and, thus, comparable performance. At the same time, the homogeneity in the results indicates that VI is robust with respect to the choice of the optimizer; all the optimizers appear adequate for the problems analyzed. By applying Chebyshev's inequality with, e.g., a margin of 3 standard deviations, the differences are broadly insignificant, indicating that the Gaussian variational approximation for the unconstrained parameters is plausible, at least for capturing the first two moments of the margins. Regardless of the VI optimizer and volatility model, these results show that VI firmly stands as a valid estimation alternative to MC sampling. The in-sample and out-of-sample loss in performance w.r.t. QML is negligible, while the Bayesian framework enables several advantages, as for Sec. I. #### III-C2 Case study We provide further analyses for the Microsoft Corp. data. Table II summarizes the results regarding the training processes and the parameters' estimates. The takeaway, justified below, is that differences in performance metrics are broadly negligible, and the differences in the estimated parameters are minor.
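For reference, the following hedged sketch shows one way the RMSE, MAD, and QLIKE losses (with squared returns as the variance proxy) and the percentage deviations of Table I might be computed; the function names are illustrative and do not necessarily match the released code at github.com/mmagris/GARCHVI.

```
import numpy as np

def qlike(h, proxy):
    # QLIKE loss of variance forecasts h against a variance proxy
    return np.mean(np.log(h) + proxy / h)

def losses(h, r):
    proxy = r ** 2                          # squared returns as variance proxy
    return {"RMSE": np.sqrt(np.mean((h - proxy) ** 2)),
            "MAD": np.mean(np.abs(h - proxy)),
            "QLIKE": qlike(h, proxy)}

def pct_dev_from_qml(m_e, m_qml):
    # Table I entries: 100 * (M_E^x / M_QML^x - 1)
    return 100.0 * (m_e / m_qml - 1.0)
```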
Fig. 2: Distribution of the train (top left) and test (top right) NLL, and confidence bounds for the conditional volatility (bottom). GJR-GARCH(1,1), Microsoft Inc. data.
The impact of the chosen estimation method is secondary, promoting VI as a solid alternative to MC and QML. Performance metrics are broadly overlapping, and the differences are not statistically significant except for the train NLL, consistently minimized by QML. It is instructive to look at the performance metrics achieved by the different optimizers. For all the models, the MSFT findings align with the percentages reported in Table I. On both the training and test sets, the values of the performance metrics are remarkably aligned between QML and the Bayesian estimators, also for the additional models. Switching to a Bayesian framework does not harm with respect to QML performance on either set. The estimated variance indicates broad non-significance in the differences across the Bayesian estimates, except for \(NLL^{\text{train}}\). We do not include the additional table with all the cross-testing results but rather discuss this case visually in Figure 2. The top-right panel of Figure 2 shows that the hypothesis \(NLL^{\text{test}}_{QML}=NLL^{\text{test}}_{VI}\) cannot be rejected. Conversely, its rejection on the training set validates the above. Extending the analysis to the value of the optimized LB, we observe that all the VI optimizers are rather equivalent for Bayesian inference, targeting a similar optimum. Clearly, the diagonal BBVI and QBVI optimizers do not reach the same LB optimum that MGVB and EMGVB do (see, e.g., the GJR-GARCH(1,1) case in Table II and Figure 1), yet the differences are well within \(1\%\), both on the training and test sets. The differences in the LB correspond to differences in the posterior estimates, explaining the differences in the estimated posterior means of Table II. Yet, performance metrics are practically analogous; all the reported estimates can be considered equally effective, especially for out-of-sample forecasting. In this regard, we observe a remarkable alignment between the MC and MGVB/EMGVB estimates and their standard deviations, suggesting that the full-covariance Gaussian specification appears feasible, at least for capturing the first two moments of the marginal distribution of the true posterior (approximated by the MC sampler). Figure 1 includes the posterior means for the GJR-GARCH(1,1) model in the constrained parameter space. The plot highlights the importance of allowing for a full-covariance specification, and the MGVB/EMGVB overlap with the MC density supports the Gaussian variational framework. Observing the QML estimates within the region of high density further validates the overall VI calibration with respect to QML. As an example of how VI can provide additional insights with respect to the standard frequentist approach, the bottom panel of Figure 2 shows confidence bounds for the predicted conditional variance for 2023. Enabling a probabilistic dimension for the conditional variance can certainly benefit financial practitioners [1, Ch. 6], for instance by enabling statistical testing (e.g., for tomorrow's variance being significantly higher than today's) or by improving value-at-risk density evaluations, as for Sec. I. Our results indicate that VI can serve this scope well. The observed differences appear attributable not to the prior distribution but rather to the robustness of the VI setup in this setting. A small prior variance does affect the posterior estimation by keeping the posterior mean rather far from the QML estimates (see Table IV).
This is certainly positive if one has motivated prior beliefs about the parameters in the unconstrained space (though this is unlikely). On the other hand, prior variances greater than one already deliver similar estimates (further aligned with QML). Encoding prior lack of knowledge appears to be relatively smooth; a prior variance \(\tau I\) with \(\tau>1\) is effective in this regard. ## IV Conclusion This paper documents the validity of Variational Inference (VI) as a tool for the Bayesian estimation of common volatility models of the GARCH family. We show that, within a Gaussian variational framework, gradient-free black-box VI methods are robust and aligned with both the estimates obtained via Monte Carlo sampling and traditional Quasi Maximum Likelihood (QML). In this setting, we show how to adopt parameter transforms to enable VI principles and provide valuable insights on VI through extensive performance statistics calculated from the individual time series of the S&P500 constituents. Along with a case study and different robustness analyses, we conclude that VI stands as a reliable, adequate, and suitable alternative to MC sampling and QML. The differences in training and test performance metrics with respect to QML and MCMC are typically within the order of \(1\%\). Despite our evidence on the validity of Gaussian variational margins, future research may investigate the appropriateness of the Gaussian copula and the use of alternative dependence structures. More generally, the VI framework could be applied to other domains, such as stochastic volatility models or derivative pricing. We hope that our results will promote the deployment of VI in econometric and financial applications, encouraging the use of further toolsets and results from Machine Learning research.
2309.01427
A python tool to determine the thickness of the hydrate layer around clinker grains using SEM-BSE images
To accurately simulate the hydration process of cementitious materials, understanding the growth rate of C-S-H layers around clinker grains is crucial. Nonetheless, the thickness of the hydrate layer shows substantial variation around individual grains, depending on their surrounding. Consequently, it is not feasible to measure hydrate layers manually in a reliable and reproducible manner. To address this challenge, a software tool has been developed to statistically determine the C-S-H thickness, requiring minimal manual interventions for thresholding and for setting limits like particle size or circularity. This study presents a tool which automatically identifies suitable clinker grains and performs statistical measurements of their hydrate layer up to a specimen age of 28 days. The findings reveal a significant increase in the C-S-H layer, starting from 0.45 micrometer after 1 day and reaching 3.04 micrometer after 28 days. However, for older specimens, the measurement of the C-S-H layer was not feasible due to limited pore space and clinker grains.
Florian Kleiner, Franz Becker, Christiane Rößler, Horst-Michael Ludwig
2023-09-04T08:21:04Z
http://arxiv.org/abs/2309.01427v1
A python tool to determine the thickness of the hydrate layer around clinker grains using SEM-BSE images. ###### Abstract To accurately simulate the hydration process of cementitious materials, understanding the growth rate of C-S-H layers around clinker grains is crucial. Nonetheless, the thickness of the hydrate layer shows substantial variation around individual grains, depending on their surrounding. Consequently, it is not feasible to measure hydrate layers manually in a reliable and reproducible manner. To address this challenge, a software tool has been developed to statistically determine the C-S-H thickness, requiring minimal manual interventions for thresholding and for setting limits like particle size or circularity. This study presents a tool which automatically identifies suitable clinker grains and performs statistical measurements of their hydrate layer up to a specimen age of 28 days. The findings reveal a significant increase in the C-S-H layer, starting from 0.45 µm after 1 day and reaching 3.04 µm after 28 days. However, for older specimens, the measurement of the C-S-H layer was not feasible due to limited pore space and clinker grains. keywords: hydration, calcium silicate hydrate, hydrate layer ## 1 Introduction Understanding the formation of the microstructure of cementitious materials allows one to model and thus improve their properties. Calcium-silicate-hydrate (C-S-H) and portlandite (CH) are the main hydration products of ordinary Portland cement. C-S-H is mainly responsible for the concrete strength. Hence, its growth rate within the first 28 days is of major interest to model the hydration process [1; 2]. The growth process can be observed in embedded and polished binder samples by using a scanning electron microscope (SEM) in backscatter electron (BSE) mode (see Figure 1). Analyzing the phase distribution by segmenting each identifiable phase is a common practice [3]. Furthermore, it is fairly easy to measure the particle size distribution of the alite (impure form of tricalcium silicate, C\({}_{3}\)S) particles [4]. During the hydration of cementitious materials, C-S-H forms a layer around clinker grains, and its thickness increases over time. This layer can be differentiated into (\(i\)) the dense, inner product and (\(ii\)) the less dense, needle-like shaped outer product [5]. Especially the development of the outer product is often limited by the surrounding particles or by other developing phases like C-S-H or portlandite (CH). Bazzoni [6] demonstrated that it is possible to determine the C-S-H needle length using images from a scanning transmission electron microscope (STEM). Nevertheless, it is challenging to measure the hydrate layer thickness (HLT) manually in a reliable and reproducible manner. Hence, in this study, we established a workflow and software tool to automatically determine the HLT. ## 2 Materials and Methods In this study, a commercially available alite (MIII polymorph, Vustah, Czech Republic) was utilized. The chemical composition (Table 1) was determined by X-ray fluorescence spectroscopy (XRF), and the phase purity was confirmed at 99.4 \(\pm\) 0.4 wt.-% (alite) using X-ray diffraction (XRD) analysis.
Alite pastes were prepared with a water-to-solid ratio of 0.5 and stored in sealed cylindrical containers with a diameter of approx. 8 mm. \begin{table} \begin{tabular}{l r} \hline \hline Oxides & Alite in wt.-\% \\ \hline SiO\({}_{2}\) & 26.34 \\ TiO\({}_{2}\) & 0.031 \\ Al\({}_{2}\)O\({}_{3}\) & 0.238 \\ Fe\({}_{2}\)O\({}_{3}\) & 0.094 \\ Mn\({}_{2}\)O\({}_{3}\) & 0.006 \\ MgO & 1.700 \\ CaO & 70.79 \\ Na\({}_{2}\)O & 0.083 \\ \hline \hline \end{tabular} \end{table} Table 1: Chemical composition of the utilized alite obtained by XRF. The phase composition and alite purity of similarly prepared samples stored for 1, 7, and 28 days were assessed by quantitative XRD analysis using the G-factor method [7]. The measurements were carried out using a Bruker D8 DaVinci diffractometer (MA, USA). After 1, 7, 14, 28, 84, and 365 days, the hydration of the cylindrical prisms was stopped by immersion in isopropanol and subsequent drying at 60 °C. The specimens were cut, embedded in low-viscosity resin, and mechanically polished using an oil-based diamond paste with a grain size down to 0.25 µm. Finally, the polished specimens were carbon coated (approx. 10 nm) to avoid charging effects in the SEM. The specimens were transferred to a SEM (Helios G4 UX, ThermoFisher Scientific, MA, USA) to capture high-resolution BSE images of their surfaces using the built-in concentric backscatter detector (CBS). These image sets were stitched to larger images. Afterwards, they were processed using custom code, which is the main subject of this study. The code was developed for Python 3.11, integrated into a Jupyter Notebook, and is publicly available [8]. The image contrast was logarithmically enhanced using the library _scikit-image_ [9]. Subsequently, the images were denoised employing the Non-Local Means algorithm [10] implemented in the library _OpenCV_ [11]. The raw and preprocessed images have varying contrasts, as shown in Figure 1A and B. Even after preprocessing, the segmentation cannot be carried out using fixed threshold values. Therefore, the images were segmented using two manually selected threshold values into three phases: (\(i\)) pores, (\(ii\)) hydration products (CH and C-S-H), and (\(iii\)) unhydrated clinker. The hydration products were not separated, since this is not always possible without major segmentation errors. However, the software is also able to process pre-segmented images (e.g., segmented by machine learning algorithms) if the data is provided in a supported format (3-channel TIFF, one colour channel per phase). Finally, the holes within particles were removed to avoid any errors caused by pores within clinker grains. For the herein presented data, the segmented images were used to identify clinker particles with a grain diameter between 0.3 and 9.0 µm (grain area between 0.07 and 63.6 µm²). Furthermore, particles with a circularity lower than 0.3 were excluded to avoid extreme sectioning effects. For very oval particles, the hydrate fringe could otherwise be greatly overestimated, and undercuts could also lead to undesirable results. In addition, the clinker particles shrink over time, and small grains are consumed. Since the selected grain diameter was kept constant regardless of the specimen age, this effect is not taken into account in this analysis. For each particle, a circular region with a radius of 7.0 µm was selected (Figure 2A, B). This image was then transformed into polar coordinates (Figure 2C).
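To make the processing chain tangible, the following condensed Python sketch mirrors the steps described above (contrast enhancement, denoising, three-phase thresholding, and the polar sampling around a particle). The threshold handling and function composition are illustrative assumptions; the complete implementation is the published tool [8].

```
import numpy as np
import cv2
from skimage import exposure

def preprocess(img_u8):
    # logarithmic contrast enhancement (scikit-image), then Non-Local Means (OpenCV)
    enhanced = exposure.adjust_log(img_u8, gain=1.0)
    return cv2.fastNlMeansDenoising(enhanced, h=10)

def segment(img, t_pore, t_clinker):
    # two manually selected thresholds -> pores (0), hydrates (1), clinker (2)
    phases = np.full(img.shape, 1, dtype=np.uint8)
    phases[img <= t_pore] = 0
    phases[img >= t_clinker] = 2
    return phases

def polar_patch(phases, cx, cy, radius_px, n_angles=360):
    # sample the segmented image along 360 rays (1 degree per step)
    angles = np.deg2rad(np.arange(n_angles))
    radii = np.arange(radius_px)
    xs = np.rint(cx + np.outer(np.cos(angles), radii)).astype(int)
    ys = np.rint(cy + np.outer(np.sin(angles), radii)).astype(int)
    return phases[ys.clip(0, phases.shape[0] - 1), xs.clip(0, phases.shape[1] - 1)]
```

For each ray, the HLT is then the distance from the alite phase to the first pore pixel, with rays lacking a pore within the 7.0 µm radius being discarded, as described next.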
Particles located too close to the image border (less than 7 µm) were excluded from further processing. The HLT was determined using 360 steps, representing a 1° angle per step. To achieve this, the distance from the alite phase (left-hand side of Figure 2C, blue) to the next pore (red) was measured. If there was no adjacent pore within the 7.0 µm radius, the measurement was stopped (Figure 2D, areas with adjacent grey). To reduce errors, 3° before and after those areas were ignored as well (Figure 2D, marked as darker vertical lines). Subsequently, the histogram of the HLT distribution was analyzed. To safely determine the maximum \(l_{\text{max}}\), a function was fitted to the HLT histogram using Equation (1), where \(l\) is the hydrate layer thickness, \(f\) is the frequency, and \(a\), \(b\), \(c\), \(d\) are fit parameters. \[f=a\cdot e^{-0.5\left(\frac{\ln(l-d)-b}{c}\right)^{2}} \tag{1}\] Additionally, by using the area of the alite \(A_{\text{C}_{3}\text{S}}\) and the hydrate phases \(A_{\text{hydrate}}\), obtained from the image segmentation, it was possible to estimate the amount of hydrate phase \(\sigma_{\text{hyd}}\) (CH + C-S-H) using Equation (2). \[\sigma_{\rm hydrate}=\frac{A_{\rm hydrate}\cdot\rho_{\rm hydrate}}{A_{\rm hydrate}\cdot\rho_{\rm hydrate}+A_{\rm C_{3}S}\cdot\rho_{\rm C_{3}S}} \tag{2}\] The densities used to calculate these values were 3.14 g/cm\({}^{3}\) (\(\rho_{\text{C}_{3}\text{S}}\)), 2.2 g/cm\({}^{3}\) (CH) and \(\approx\) 2.0 g/cm\({}^{3}\) (C-S-H) [5; 12]. For the hydrate mixture, a volume ratio C-S-H:CH of 2.5:1 was assumed, giving an approximate hydrate density \(\rho_{\text{hyd}}\) of 2.07 g/cm\({}^{3}\). Figure 1: Detail of the stitched, unprocessed BSE images of embedded and polished alite sections after 7 (A) and 28 days (B) of hydration. Identifiable phases from dark to light grey: resin/pores, C-S-H, CH, alite. The images also show a typical deviation in image contrast, even after using identical imaging parameters (12 kV, 0.8 nA). ## 3 Results and Discussion The herein presented method was applied to alite specimens of different ages in order to test its applicability. However, an existing dataset (1 day) and datasets explicitly created for this research (7, 14, 28, 84, and 365 days) were mixed. The datasets differ mainly in the imaged area and slightly in the SEM imaging parameters. For example, the accelerating voltage used varied from 10 to 15 kV. An overview of the images used is shown in Table 2. The raw images are publicly available [13]. On close inspection of the images, it becomes evident that the low-viscosity resin did not completely penetrate the specimens. This is apparent in Figure 2A (2), where the large pore on the top center appears a little darker than the surface of the epoxy resin in other parts of the image. In most cases, this is not an issue, as those holes are interpreted as pores anyway. However, larger unfilled pores should be manually painted black, since they could otherwise lead to erroneously identified clinker particles or hydrates. Additionally, the needle-like structure of the C-S-H and the penetration depth of the electron beam (from 1.60 µm in alite up to 3.75 µm in epoxy resin at 12 kV [14]) cause a brightness gradient between pores and hydrate phase, particularly in denoised images. A significant challenge lies in accurately determining the threshold value that distinguishes the pore space from the hydrate phase (see (3) in Figure 2B and C).
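As an illustration of how Equations (1) and (2) might be evaluated in practice, consider the following minimal Python sketch. The initial guesses are arbitrary assumptions, domain handling (the model requires \(l>d\)) is omitted, and the code is not part of the published tool [8].

```
import numpy as np
from scipy.optimize import curve_fit

def hlt_model(l, a, b, c, d):
    # Equation (1): log-normal-like peak over the HLT histogram
    return a * np.exp(-0.5 * ((np.log(l - d) - b) / c) ** 2)

def fit_lmax(bin_centres, freq, p0=(1.0, 0.5, 0.5, 0.0)):
    popt, _ = curve_fit(hlt_model, bin_centres, freq, p0=p0, maxfev=10000)
    a, b, c, d = popt
    return popt, np.exp(b) + d     # this parameterization peaks at l = e^b + d

def hydrate_mass_fraction(a_hyd, a_c3s, rho_hyd=2.07, rho_c3s=3.14):
    # Equation (2): hydrate mass fraction from the segmented areas
    return (a_hyd * rho_hyd) / (a_hyd * rho_hyd + a_c3s * rho_c3s)
```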
The polishing process leaves the specimen surface with slight unevenness due to differences in material hardness, resulting in undesirable topography and brightness variations at phase borders. This segmentation artifact, combined with cracks within the clinker particles, might cause the first peak of the HLT measurements (see the orange distributions in Figure 3). Figure 2B (4) also indicates that, in some instances, edges of hydrate were incorrectly identified as clinker phase, which may also affect the later analysis. These issues are further enhanced because the datasets were stitched together from several smaller images, whose brightness may show slight deviations. If the HLT is compared to the grain diameters (Figure 3), it is noticeable that the data density decreases with increasing age for the same examined area size. This is indicated by an increasing level of noise, particularly in the case of larger grain diameters. This phenomenon can be attributed to the decrease in the amount of unhydrated particles present in older specimens (see Table 2), leading to shifts in the particle size distribution over time. The grey histogram above the 2D-distribution, which shows the logarithmic frequency of measurements per grain diameter bin, also indicates this alteration. These 2D-distributions indicate accumulations of HLT measurements, depending on the alite grain diameters. To eliminate the aforementioned effect of the particle size distribution, the maximum amount of hydrate layer measurements was normalized to 1 for each particle diameter step. These normalized distributions were then used to calculate the cumulative HLT distribution depicted in orange on the right of the 2D-distributions in Figure 3. \begin{table} \begin{tabular}{l r r r r r r} \hline age in days & 1 & 7 & 14 & 28 & 84 & 365 \\ area in mm\({}^{2}\) & 0.56 & 2.41 & 2.41 & 2.41 & 2.41 & 2.41 \\ particles & & & & & & \\ per mm\({}^{2}\) & 25296 & 6540 & 17719 & 8050 & 8385 & 7512 \\ \hline pores in & & & & & & \\ area-\% & 38.5 & 28.5 & 22.2 & 13.7 & 21.7 & 14.8 \\ hydrates in & & & & & & \\ area-\% & 38.0 & 52.8 & 44.5 & 78.7 & 72.7 & 83.2 \\ clinker in & & & & & & \\ area-\% & 23.9 & 19.1 & 33.6 & 7.7 & 5.7 & 2.0 \\ \hline \(a\) in μm & 0.999 & 0.929 & 0.898 & 0.898 & - & - \\ \(b\) & \(-1.506\) & 0.097 & 0.585 & 0.858 & - & - \\ \(c\) & 1.003 & 0.471 & 0.487 & 0.475 & - & - \\ \(d\) & 0.219 & 0.297 & 0.680 & \(-1.000\) & - & - \\ \hline \(l_{\rm max}\) in μm & 0.45 & 1.43 & 1.69 & 3.04 & - & - \\ \hline \end{tabular} \end{table} Table 2: Basic information and selected results of the alite datasets processed. Image size, normed particle count (diameter \(<\) 9.0 μm, circularity \(>\) 0.3), phase areas based on the segmentation, fit parameters (\(a\)-\(d\)), and the maximum HLT \(l_{\rm max}\) of the second value accumulation. Figure 2: Illustration of the processing that is done to measure the HLT. In the center of A, B and E is an unhydrated clinker (1) grain within alite hydrated for 7 days. A: raw image; B: segmented image (red: pores, green: hydrates, blue: alite); C: transformation of B into polar coordinates, the clinker grain (blue) is on the left; D: like C, but only with the identified hydrate in green (grey areas were excluded from processing); E: version of D transformed into Cartesian coordinates. Figure 3: Normalized distributions of the HLT of hydrated alite (aged 1, 7, 14, 28, 84, and 365 days) depending on the particle diameter. The histogram on top illustrates the logarithmic frequency for each diameter bin in the diagram below. The histogram on the right illustrates the frequency of HLT measurements as the sum of the main diagram. The values are thus independent of the frequency of single grain diameters. All distributions show an accumulation at very low
measurements (\(<0.5\) to \(1.0\) µm), which could be an artifact caused by the preparation and segmentation. For example, a very small hydrate fringe appears around some particles after the automatic segmentation (compare Figure 2B and C, (3)). However, this layer would not exist after a manual segmentation. Histograms of specimens with an age of up to 28 days show two clusters of HLTs. However, only the second accumulation seems to be a valid result. Therefore, the first accumulation was ignored. For older specimens, this can be observed only for larger particles (e.g., a diameter above \(2.0\) µm for 14 and 28 days hydrated alite), while smaller particles only seem to have the first accumulation. Nevertheless, a trend of an increasing HLT between 1 and 28 days can be observed. To obtain the value of \(l_{\mathrm{max}}\), Equation (1) was fitted to the orange HLT histogram in Figure 3. The resulting curves are plotted in red. However, due to the absence of a second accumulation in specimens older than 28 days, fitting was not feasible in those cases. The resulting fit parameters \(a\), \(b\), \(c\), \(d\), and \(l_{\mathrm{max}}\) are listed in Table 2. The selected limit value for the circularity of 0.3 was a compromise between a high data density for larger particles and signal clarity. Increasing this value can improve the signal clarity of the second accumulation if there are enough suitable particles available. Furthermore, it reduces the chance of overestimating the HLT due to angular distortions for non-circular particles. However, it also reduces the number of measurements, especially for particles with a diameter larger than \(5.0\) µm. This is an issue for older specimens, where the particle count is significantly reduced due to the hydration progress. For example, in order to be able to make any statement at all about specimens hydrated for 84 and 365 days, the circularity limit must be lowered to at least 0.4 to obtain enough data for particles larger than \(5.0\) µm in diameter. Nevertheless, they do not seem to have a second cluster of HLTs in the range measured. There are multiple possible reasons for this result, which can be followed on the basis of Figure 4. Firstly, the pore space between hydrates is significantly smaller, reducing the possibility to obtain good measurements. Secondly, the few remaining large particles appear to disintegrate into several smaller particles or break up due to sample preparation, leaving some apparent pores. Finally, a large part of the remaining space is filled by CH. Therefore, in this case, the results should be considered invalid. It is important not to confuse the increase of the HLT with the hydration progress. Figure 5 shows the hydration progress, measured using XRD (dashed lines). The combined ratio of C-S-H and CH is represented by the green dash-dotted line; the HLT is plotted as a solid red line. While both the XRD and HLT measurements indicate an increase, the HLT does not follow the same pattern as the XRD measurements.
This can be explained by multiple reasons: * First and foremost, completely dissolved grains are not taken into account when measuring the hydrate layer thickness. This concerns a significant area of the image and becomes more prevalent with specimen age. * The grains could be cut at an unfavourable position (far off-center), which leads to an apparently larger hydrate layer. * CH and C-S-H phases are not separated. However, this is not anticipated to pose a significant problem, as the majority of measurements will be automatically excluded due to the absence or minimal presence of pores within the \(7.0\) µm radius, particularly when the particle is embedded in CH. In comparison, Masoero et al. [15] calculated a reaction zone of about 0.4 to \(0.5\) µm. They used different particle size distributions of alite in a suspension (water/solid ratio of 50). In their proposed model, the reaction zone should be filled after \(21\) h. This seems to be in the same range as the measurement of about \(0.45\) µm in this study. In contrast, for sieved C\({}_{3}\)S (mean particle size \(6\) µm), Costoya Fernandez [16] calculated a peak thickness of C-S-H of \(0.3\) µm after \(12\) h of hydration. Powders with larger particle size distributions showed a significantly smaller C-S-H thickness (a C-S-H thickness of \(0.1\) µm for a mean particle size of \(18\) µm). Garrault et al. [5] proposed a maximum dense C-S-H layer thickness of \(0.4\) µm, with a tendency to even lower values for larger grain diameters. Bazzoni [6] measured the C-S-H needles on top of the dense C-S-H and found they were in the range of 0.3 to \(0.4\) µm in length after 7 days. Combining both results, the overall HLT should therefore not exceed \(0.8\) µm, which is not supported by the data found in this study. In some studies [15; 16; 5], the C-S-H layer thickness was calculated from ion concentrations and heat flow measurements. They are therefore based on volumetric measurements of the whole specimens and not on individual alite grains. Figure 4: Image detail of 365 days hydrated alite. This image shows a lack of pore space filled by CH, disintegrated larger particles and significantly smaller particles than in Figure 2 (note the scale bar differences). In contrast, our study only includes C-S-H layers directly around a particle and ignores the bulk of the formed hydrate. The results of the estimated hydrate phase based on image segmentation are shown in Figure 5 (pink markers (X)). They do not give a clear pattern, but usually the amount of hydrate was overestimated. This could be due to the fact that \(\sigma_{\mathrm{hydrate}}\) was calculated from the areas \(A_{\mathrm{hydrate}}\) and \(A_{\mathrm{C_{3}S}}\) instead of their respective volumes, and since only a rough phase ratio and density estimate was used. However, this cannot explain the deviation in the data for the 14 day specimen. Therefore, it is likely that not enough area was measured for a reliable phase measurement. Finally, the computational requirements and processing time of the script used are non-negligible factors.
The images with an area of \(2.4\) mm\({}^{2}\) were scaled from a pixel size of \(0.0422\) µm to \(0.0843\) µm to reduce the processing time and memory requirements. The processing of a resized image requires about \(8\) GB of RAM and can take up to 12 hours on an Intel _Core i7 8700_, depending on the image size and the amount of particles to be processed. However, there is still some optimization potential, by improving the multi-threaded processing or by compiling the code to machine code. ## 4 Conclusions and outlook The herein presented method can be utilized to determine the development of the HLT of alite for young to medium aged specimens (up to 28 days) in an objective manner. Aside from the image segmentation itself, it is a fully automated process, requiring little user intervention. The C-S-H growth with increasing age is clearly reflected in the results. The maximum of the HLT distribution increases from \(0.45\) µm after 1 day to \(3.04\) µm after 28 days. However, at 84 days or later, the measurements do not allow evaluating the HLT distribution using the selected parameters. Consequently, the proposed approach finds applicability when the microstructure is not overly dense. In practical terms, this signifies that the method requires the presence of a substantial amount of pores and clinker particles for its successful application. Additionally, an attempt was made to calculate the hydration progress from the generated data. However, BSE images up to \(2.4\,\mathrm{mm}^{2}\) in size still seem to be too small for a reliable evaluation. While the script was only applied to a simplified cementitious system (clean alite), first tests not presented in this paper did indicate that this method may also be applicable to clean belite specimens. Furthermore, it should be expandable to more complex multiphase materials. However, this imposes higher demands on the segmentation quality. Applying more advanced segmentation methods (e.g. machine learning based) could therefore improve the reliability of the results. In the future, the influence of parameters like the circularity limit should be studied using synthetic datasets. Furthermore, multiple images of specimens of varying sizes but the same age should be studied to determine the specimen-to-specimen variability. Finally, this method could be improved to determine the HLT of 3-dimensional, voxel-based datasets. ## Author contributions Florian Kleiner: Conceptualization, Methodology, Investigation, Software, Visualization, Data curation, Writing - original draft. Franz Becker: Supporting measurements (XRF, XRD), Writing - review & editing, Validation. Christiane Rößler: Supervision, Writing - review & editing, Validation. Horst-Michael Ludwig: Financing, Supervision, Writing - review & editing, Validation. ## Acknowledgment The research was supported by the Deutsche Forschungsgemeinschaft (DFG), grant number 344069666. Open access funding enabled and organized by project DEAL. Thanks to M. Bohme, S. Unbehau and F. Ellermann for their helpful input. ## Conflict of interest The authors declare no potential conflict of interests. Figure 5: Comparison of the hydration progression measured using XRD (dashed lines) with \(l_{\mathrm{max}}\) (solid line).
The pink markers (X) indicate the amount of hydrate phase \(\sigma_{\mathrm{hyd}}\) estimated from the image segmentation. ## Supporting information Additional supporting information may be found in the online version of the article at the publisher's website.
2302.13351
Optimal local identifying and local locating-dominating codes
We introduce two new classes of covering codes in graphs for every positive integer $r$. These new codes are called local $r$-identifying and local $r$-locating-dominating codes and they are derived from $r$-identifying and $r$-locating-dominating codes, respectively. We study the sizes of optimal local 1-identifying codes in binary hypercubes. We obtain lower and upper bounds that are asymptotically tight. Together the bounds show that the cost of changing covering codes into local 1-identifying codes is negligible. For some small $n$ optimal constructions are obtained. Moreover, the upper bound is obtained by a linear code construction. Also, we study the densities of optimal local 1-identifying codes and local 1-locating-dominating codes in the infinite square grid, the hexagonal grid, the triangular grid, and the king grid. We prove that seven out of eight of our constructions have optimal densities.
Pyry Herva, Tero Laihonen, Tuomo Lehtilä
2023-02-26T16:57:39Z
http://arxiv.org/abs/2302.13351v3
# Optimal local identifying and local locating-dominating codes ###### Abstract We introduce two new classes of covering codes in graphs for every positive integer \(r\). These new codes are called local \(r\)-identifying and local \(r\)-locating-dominating codes and they are derived from \(r\)-identifying and \(r\)-locating-dominating codes, respectively. We study the sizes of optimal local 1-identifying codes in binary hypercubes. We obtain lower and upper bounds that are asymptotically tight. Together the bounds show that the cost of changing covering codes into local 1-identifying codes is negligible. For some small \(n\) optimal constructions are obtained. Moreover, the upper bound is obtained by a linear code construction. Also, we study the densities of optimal local 1-identifying codes and local 1-locating-dominating codes in the infinite square grid, the hexagonal grid, the triangular grid and the king grid. We prove that seven out of eight of our constructions have optimal densities. Keywords: Local identifying codes, local locating-dominating codes, identifying codes, locating-dominating codes, dominating codes, dominating sets, hypercubes, infinite grids ## 1 Introduction and preliminaries There are three widely studied ways to locate vertices in a graph using subsets of vertices; namely, _resolving sets_ [1, 2], which separate vertices using the distances to the elements in the subset, and _identifying codes_ [3] and _locating-dominating codes (or sets)_ [4, 5], both of which separate using the different neighbourhoods of the vertices in the subset. In the case of resolving sets, the question of separating only the adjacent vertices [6, 7] has been extensively studied; see, for example, [8] and the references therein. Such subsets are called _local resolving sets_. Inspired by this, we study in this paper the analogous question with respect to identifying codes and locating-dominating codes. Consequently, we introduce two new classes of codes derived from identifying and locating-dominating codes and study them in some graphs. We concentrate on the sizes of optimal codes, that is, the smallest possible sizes of codes, in finite graphs and on the densities of optimal codes in infinite graphs. Since the new code classes are closely related to identifying and locating-dominating codes, some comparison is made. Some of the results in this paper have been published in [9, 10]. ### Graphs and codes In this paper we consider simple, connected and undirected graphs \(G=(V,E)\) with vertex set \(V\) and edge set \(E\subseteq\{\{u,v\}\mid u,v\in V,u\neq v\}\). The graph \(G\) is finite if its vertex set \(V\) is a finite set and infinite if \(V\) is an infinite set. The (graphic) _distance_ \(d(u,v)\) of two vertices \(u,v\in V\) of \(G\) is the number of edges in a shortest path between \(u\) and \(v\). A vertex \(u\) is said to _\(r\)-cover_ a vertex \(v\) (and vice versa) if \(d(u,v)\leq r\). When \(r=1\), we may simply say that \(u\) _covers_ \(v\). More generally, we say that a subset of the vertex set of the graph \(r\)-covers a vertex \(u\) if the subset has an element which \(r\)-covers the vertex \(u\). Any non-empty subset \(C\subseteq V\) of vertices of a graph \(G=(V,E)\) is called a _code_ (in the graph \(G\)). The elements of \(C\) are called _codewords_ and the elements of \(V\setminus C\) are called _non-codewords_. A code \(C\subseteq V\) is an _\(r\)-covering code_ if it \(r\)-covers every vertex. If \(r=1\), we may say that \(C\) is simply a _covering code_.
In other words, if there is a codeword of \(C\) within distance at most \(r\) from every vertex of \(V\), then \(C\) is an \(r\)-covering code in \(G\). Covering codes are also called _dominating sets_. The _(closed) \(r\)-neighbourhood_ of a vertex \(u\in V\) in a graph \(G=(V,E)\) is the set \(N_{r}[u]=\{v\in V\mid d(v,u)\leq r\}\), that is, the ball of radius \(r\) centered at \(u\). The _open \(r\)-neighbourhood_ of \(u\) is the set \(N_{r}(u)=N_{r}[u]\setminus\{u\}\). The _\(r\)-identifying set_ \(I_{C,r}(u)\), or the _\(I\)-set_, of a vertex \(u\in V\) with respect to a code \(C\) is the set \[I_{C,r}(u)=N_{r}[u]\cap C.\] For \(r=1\) we denote \(N[u]=N_{1}[u]\) and \(I(u)=I_{C}(u)=I_{C,1}(u)\). A code _\(r\)-separates_ (or just _separates_ if \(r=1\)) two vertices \(u\) and \(v\) if their \(I\)-sets are different. If \(C\) separates \(u\) and \(v\), we may also say that \(C\) separates \(u\) from \(v\) (or vice versa). More generally, we say that \(C\) separates \(u\) from a set of vertices \(S\) if \(C\) separates \(u\) from every vertex of \(S\). Note that \(C\) is an \(r\)-covering code if and only if \(I_{C,r}(u)\neq\emptyset\) for every \(u\in V\). A code in a certain class of covering codes in a finite graph is _optimal_ if its size is the smallest among all codes in the class. We also consider optimality with respect to densities in certain infinite graphs (for the definitions of the densities see Section 3). Let us define, for every positive integer \(r\), two widely studied classes of covering codes that are useful in locating vertices in a graph: **Definition 1.1**.: A code \(C\subseteq V\) in a graph \(G=(V,E)\) is an _\(r\)-identifying code_ if it is an \(r\)-covering code and \(I_{C,r}(u)\neq I_{C,r}(v)\) for every distinct \(u,v\in V\). **Definition 1.2**.: A code \(C\subseteq V\) in a graph \(G=(V,E)\) is an _\(r\)-locating-dominating code_ if it is an \(r\)-covering code and \(I_{C,r}(u)\neq I_{C,r}(v)\) for every distinct \(u,v\in V\setminus C\). In other words, a code \(C\) is an \(r\)-identifying code if it \(r\)-covers every vertex and \(r\)-separates any two vertices, and it is an \(r\)-locating-dominating code if it \(r\)-covers every vertex and \(r\)-separates any two non-codewords. The concept of identifying codes was introduced by Karpovsky, Chakrabarty and Levitin in [3] and the concept of locating-dominating codes was introduced by Slater and Rall in [4, 5]. Since their discovery, these (and many related) classes of codes have been extensively studied in different graphs over the years. See also the website [11] for a comprehensive list of references around the topic. ### Our new codes: local identifying and local locating-dominating codes Two distinct vertices of \(G=(V,E)\) are called _neighbours_ or _adjacent_ if there is an edge between them, that is, if their distance is \(1\). **Definition 1.3**.: A code \(C\subseteq V\) in a graph \(G=(V,E)\) is a _local \(r\)-identifying code_ if it is an \(r\)-covering code and \(I_{C,r}(u)\neq I_{C,r}(v)\) for any two neighbours \(u,v\in V\). **Definition 1.4**.: A code \(C\subseteq V\) in a graph \(G=(V,E)\) is a _local \(r\)-locating-dominating code_ if it is an \(r\)-covering code and \(I_{C,r}(u)\neq I_{C,r}(v)\) for any two non-codeword neighbours \(u,v\in V\setminus C\).
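To make Definitions 1.1-1.4 concrete, the following Python sketch (using the networkx library; it is purely illustrative and not part of this paper) checks whether a given code is a local \(r\)-identifying or a local \(r\)-locating-dominating code in a finite graph.

```
import networkx as nx

def i_set(G, C, v, r=1):
    # I_{C,r}(v): the codewords within distance r of v
    ball = nx.single_source_shortest_path_length(G, v, cutoff=r)
    return frozenset(c for c in C if c in ball)

def is_local_r_identifying(G, C, r=1):
    # r-covering, and any two neighbours have distinct I-sets
    if any(not i_set(G, C, v, r) for v in G):
        return False
    return all(i_set(G, C, u, r) != i_set(G, C, v, r) for u, v in G.edges)

def is_local_r_locating_dominating(G, C, r=1):
    # r-covering, and any two non-codeword neighbours have distinct I-sets
    if any(not i_set(G, C, v, r) for v in G):
        return False
    return all(i_set(G, C, u, r) != i_set(G, C, v, r)
               for u, v in G.edges if u not in C and v not in C)
```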
Again, by local identifying and local locating-dominating codes we mean local \(1\)-identifying and local \(1\)-locating-dominating codes, respectively. Since any graph admits an \(r\)-locating-dominating code for all \(r\) (it is possible to take the whole vertex set as the code), any graph also admits a local \(r\)-locating-dominating code for all \(r\). However, this is not the case for \(r\)-identifying and local \(r\)-identifying codes. Indeed, any graph that has two distinct vertices with equal \(r\)-neighbourhoods admits no \(r\)-identifying codes, and any graph containing two neighbours with equal \(r\)-neighbourhoods admits no local \(r\)-identifying codes. In fact, it is easily seen that a graph \(G=(V,E)\) admits an \(r\)-identifying code if and only if \(N_{r}[u]\neq N_{r}[v]\) for all \(u,v\in V\), \(u\neq v\), and that \(G\) admits a local \(r\)-identifying code if and only if \(N_{r}[u]\neq N_{r}[v]\) for all \(u,v\in V\) such that \(u\) and \(v\) are neighbours. Figure 1: A graph that admits a local \(2\)-identifying code but does not admit any \(2\)-identifying codes. For \(r=1\) these conditions are the same, that is, a
graph admits an identifying code if and only if it admits a local identifying code. For \(r>1\) this is not the case. See Figure 1 for a graph that admits a local 2-identifying code but does not admit any 2-identifying codes. There is an obvious hierarchy between the introduced classes of codes: they are all covering codes; locating-dominating codes and local identifying codes are both local locating-dominating codes; and identifying codes are both locating-dominating codes and local identifying codes, for a fixed covering radius and a fixed graph. See Figure 2 for a pictorial illustration. Depending on the graph, these inclusions may or may not be strict. For example, it is quite easy to see that in paths (finite and infinite) and in sufficiently large cycles the classes of identifying and local identifying codes are the same [10]. We denote by \(\gamma^{ID}(G)\), \(\gamma^{LD}(G)\), \(\gamma^{L-ID}(G)\) and \(\gamma^{L-LD}(G)\) the cardinalities of optimal identifying, locating-dominating, local identifying and local locating-dominating codes, respectively, in a graph \(G\). We call these values the _identification_, _location-domination_, _local identification_ and _local location-domination numbers_, respectively. In particular, we have \(\gamma^{ID}(G)\geq\gamma^{L-ID}(G)\geq\gamma^{L-LD}(G)\) and \(\gamma^{ID}(G)\geq\gamma^{LD}(G)\geq\gamma^{L-LD}(G)\) for any graph \(G\) admitting an identifying code. The following lemma is useful in our forthcoming considerations. A graph \(G\) is _triangle-free_ if it does not contain any triangles, by which we mean that the graph does not have any 3-cycles as induced subgraphs. **Lemma 1.5**.: A code in a triangle-free graph is a local locating-dominating code if and only if it is a covering code. Proof.: First, local locating-dominating codes are covering codes by definition. For the converse claim, let \(C\) be any covering code in a triangle-free graph \(G\) and assume on the contrary that \(C\) is not local locating-dominating. Thus, there exist non-codeword neighbours \(u\) and \(v\) with equal \(I\)-sets. Since \(C\) is a covering code, we have \(I(u)=I(v)\neq\emptyset\). Any codeword \(c\in I(u)=I(v)\) is distinct from the non-codewords \(u\) and \(v\) and adjacent to both of them, so \(u\), \(v\) and \(c\) form a triangle in \(G\). A contradiction. In [12], it has been shown that finding a minimum covering code, that is, a minimum dominating set, in a triangle-free graph (or, more specifically, in chordal bipartite graphs, a subclass of triangle-free graphs) is an NP-complete problem. Thus, the previous lemma implies the following corollary. Figure 2: Illustration of the hierarchy between the different classes of covering codes. **Corollary 1.6**.: Finding the cardinality of an optimal local locating-dominating code in a graph \(G\) is an NP-complete problem, even when restricted to chordal bipartite graphs. We leave open whether finding the cardinality of an optimal local identifying code is an NP-complete problem, but that seems quite likely, as many related variants have been shown to be NP-complete [11]. In the following theorem, we show that the identification (location-domination) number cannot be bounded from above by any function of the local identification (local location-domination) number. **Theorem 1.7**.: Let \(K_{2,n}\) be a complete bipartite graph on \(n+2\geq 5\) vertices. We have \(\gamma^{ID}(K_{2,n})=\gamma^{LD}(K_{2,n})=n\) and \(\gamma^{L-ID}(K_{2,n})=\gamma^{L-LD}(K_{2,n})=2\). Proof.: Let \(K_{2,n}\) have a bipartition of its vertices into the sets \(A\) and \(B\), and let \(|A|=2\). Observe that \(A\) is a local identifying code and thus also a local locating-dominating code. Moreover, we need at least two vertices to dominate \(K_{2,n}\). Hence, \(\gamma^{L-ID}(K_{2,n})=\gamma^{L-LD}(K_{2,n})=2\). Let us then consider the usual identification and location-domination. A set containing one vertex from \(A\) and all but one vertex from \(B\) is an identifying code (and thus also a locating-dominating code). Moreover, to separate the two vertices of \(A\) from each other, any locating-dominating code requires at least one vertex from \(A\). Furthermore, only the vertices of \(B\) themselves can separate two vertices of \(B\) from each other, so any locating-dominating code contains all but at most one vertex of \(B\). The claim follows from the fact that \(\gamma^{LD}(G)\leq\gamma^{ID}(G)\). An important concept in finding lower bounds for the sizes of optimal codes is the concept of _share_ introduced by Slater in [13]. In the following, we define this concept only for \(r=1\), although it can be defined for general \(r\). **Definition 1.8**.: Let \(C\) be a covering code in a graph \(G\). The share \(s(c)\) of a codeword \(c\in C\) is defined as \[s(c)=\sum_{u\in N[c]}\frac{1}{|I_{C}(u)|}.\] The following lemma is well-known and easy to prove. It provides a useful way to find lower bounds for the sizes of optimal codes. **Lemma 1.9**.: Let \(C\) be a covering code in a finite graph \(G=(V,E)\). If \(s(c)\leq\alpha\) for every \(c\in C\), then \[|C|\geq\frac{|V|}{\alpha}.\] Thus, an upper bound for the share of an arbitrary codeword provides a lower bound for the size of the code. We have a similar lemma for shares and densities in some infinite graphs, which we will discuss in Section 3. ### Related concepts Besides identifying and locating-dominating codes, local identifying and local locating-dominating codes also resemble local resolving sets, as we have mentioned. Recently, a new variant of local resolving sets, _nonlocal resolving sets_, was introduced in [14]. While local resolving sets distinguish adjacent vertices, nonlocal resolving sets are their dual concept and can distinguish any non-adjacent pair of vertices. It is possible to define local identifying codes using list-colouring. We represent colours by positive integers. Let \(c\) be a function giving a list of colours for each vertex of \(V(G)\) for some graph \(G\).
We do not restrict the maximum length of the list assigned to a vertex. However, we impose the following two restrictions on the list-colouring: if \(d(u,v)=2\) for \(u,v\in V(G)\), then \(c(u)\cap c(v)\neq\emptyset\), that is, vertices at distance exactly two must share a colour in their lists of colours; secondly, if \(d(u,v)=1\) for \(u,v\in V(G)\), then \(c(u)\cap c(v)=\emptyset\). We call such a colouring a _locality colouring_. Since there are \((n^{2}-n)/2\) distinct pairs of vertices in an \(n\)-vertex graph, there always exists a locality colouring with \((n^{2}-n)/2\) colours (for example, one may introduce a dedicated colour for each pair of vertices at distance exactly two). Let us consider a dominating set \(S\) in \(G\) together with the following property: for any pair of vertices \(u,v\in V(G)\) with \(c(u)\cap c(v)=\emptyset\) we have \(I(u)\neq I(v)\). Observe that the set \(S\) is a local identifying code. Indeed, it is dominating and it separates any adjacent vertices. Let us then consider a local identifying code \(C\) in \(G\). Observe that if \(c(u)\cap c(v)=\emptyset\), then \(u\) and \(v\) are either adjacent, in which case \(I(u)\neq I(v)\), or \(d(u,v)\geq 3\), in which case again \(I(u)\neq I(v)\) since \(C\) is dominating. Thus, we could have defined local identifying codes also using list-colourings. Moreover, colouring-related separation/location problems have been considered in the literature, for example, in [15, 16, 17]. In [18, 19], a concept called _red-blue separation_ was introduced. We next show a connection between this concept and local identification. In red-blue separation, each vertex of a graph \(G\) is assigned either the red or the blue colour. After that, a set of vertices \(S\) is a red-blue separating set if for any two vertices \(u,v\in V(G)\) we have \(I(v)=I(u)\) only when \(v\) and \(u\) have been assigned the same colour. Notice that domination was not required here. Consider a bipartite graph \(G\) with a bipartition of its vertices into the sets \(A\) and \(B\). Let the colouring \(c\) be such that we assign colour \(1\) (red) to each vertex in \(A\) and colour \(2\) (blue) to each vertex in \(B\). Notice that any adjacent vertices share no colours, while any vertices at distance two share a colour. Thus, this colouring is a locality colouring. Therefore, a dominating set of the graph \(G\) is a red-blue separating set together with the colouring \(c\) if and only if it is a local identifying code. Hence, these concepts are closely related. Moreover, perhaps it would be interesting to consider local separating sets in the future, that is, local identifying codes without the domination property. ### Structure of the paper First, in Section 2, we study local identifying codes in binary hypercubes. In Subsection 2.1, we give some exact solutions for local identifying codes in small hypercubes. Then, in Subsection 2.2, we give a general and asymptotically tight lower bound for local identifying codes in hypercubes. After that, in Subsection 2.3, we give general methods for constructing local identifying codes in hypercubes. In particular, these constructions show that our lower bound is essentially tight and that the size of optimal local identifying codes is significantly smaller than that of usual identifying codes in hypercubes. In Section 3, we consider local locating-dominating and local identifying codes in the infinite square, hexagonal (Subsection 3.1), triangular (Subsection 3.2) and king grids (Subsection 3.3). In particular, we give an optimal construction in seven out of eight of these cases. Finally, we conclude with Section 4.
## 2 Local identifying codes in binary hypercubes Let us denote by \(\mathbb{F}=\{0,1\}\) the binary field and let \(n\geq 1\) be an integer. The set of length \(n\) binary words is denoted by \(\mathbb{F}^{n}\) as usual. The _Hamming distance_ \(d_{H}(\mathbf{x},\mathbf{y})\) of two binary words \(\mathbf{x},\mathbf{y}\in\mathbb{F}^{n}\) is the number of coordinates in which they differ. The _binary \(n\)-dimensional hypercube_ is the graph \(G=(V,E)\) where \(V=\mathbb{F}^{n}\) and \(E=\{\{\mathbf{x},\mathbf{y}\}\mid\mathbf{x},\mathbf{y}\in\mathbb{F}^{n},d_{H}(\mathbf{x},\mathbf{y})=1\}\), _i.e._, two binary words are neighbours in the binary hypercube if and only if their Hamming distance is 1. In fact, it is easy to see that the Hamming distance between two binary words is the same as their graphic distance in the binary hypercube. So, from now on, by \(\mathbb{F}^{n}\) we mean the above graph. We study local identifying codes in binary hypercubes. Let us denote by \(M^{L}(n)\) the size of an optimal local identifying code and by \(M(n)\) the size of an optimal identifying code in the binary \(n\)-dimensional hypercube. Moreover, we denote by \(M^{LD}(n)\) and \(K(n)\) the sizes of optimal locating-dominating codes and optimal covering codes, respectively, in the binary \(n\)-dimensional hypercube. Even though there has been much research concerning identifying codes in binary hypercubes, the exact value of \(M(n)\) is known only for \(2\leq n\leq 7\). In Table 1 we have listed the known values of \(M(n)\), \(M^{LD}(n)\) and \(K(n)\) and our contributions to the values \(M^{L}(n)\) for \(n\in[2,10]\). Note that the numbers \(M(1)\) and \(M^{L}(1)\) are not defined since there are no identifying or local identifying codes in the binary 1-dimensional hypercube \(\mathbb{F}\). Note also that, since binary hypercubes are triangle-free, the local locating-dominating codes are exactly the covering codes in binary hypercubes by Lemma 1.5. We start by determining the exact values of \(M^{L}(n)\) for small \(n\). Then we prove a general lower bound for \(M^{L}(n)\) and an upper bound by a linear code construction. It turns out that this construction shows that our lower bound cannot be significantly improved since, for infinitely many \(n\), it yields a code whose size is very close to the lower bound (and actually to the lower bound for covering codes). Consequently, this implies that for infinitely many \(n\) the size of an optimal local identifying code is significantly smaller than the size of an optimal identifying code in the binary \(n\)-dimensional hypercube. However, this is not the case in every graph, as we will see in Table 2 for the triangular grid. ### Small \(n\) The following example shows that \(M^{L}(2)=2\), which is strictly smaller than \(M(2)=3\). **Example 2.1**.: Let us show that \(M^{L}(2)=2\). First, \(M^{L}(2)\geq 2\) since any local identifying code is a covering code and one cannot cover all the vertices of \(\mathbb{F}^{2}\) with only one codeword. However, the code \(C=\{00,11\}\) is a local identifying code and thus \(M^{L}(2)\leq 2\). In \(\mathbb{F}^{3}\) the classes of local identifying and identifying codes are the same: **Theorem 2.2**: _A code \(C\subseteq\mathbb{F}^{3}\) is an identifying code if and only if it is a local identifying code. Thus, \(M^{L}(3)=M(3)=4\)._ **Proof:** By definition, any identifying code is also a local identifying code.
For the converse direction, assume on the contrary that there exists a local identifying code \(C\subseteq\mathbb{F}^{3}\) which is not an identifying code. Then there exist distinct \(\mathbf{x},\mathbf{y}\in\mathbb{F}^{3}\) such that \(I_{C}(\mathbf{x})=I_{C}(\mathbf{y})\). Because \(C\) is a local identifying code, \(\mathbf{x}\) and \(\mathbf{y}\) cannot be neighbours and hence \(d(\mathbf{x},\mathbf{y})\geq 2\). However, we cannot have \(d(\mathbf{x},\mathbf{y})=3\) since then \(\mathbf{x}\) and \(\mathbf{y}\) could not cover a common codeword and hence they could not have equal non-empty \(I\)-sets. Thus, \(d(\mathbf{x},\mathbf{y})=2\). Without loss of generality we may assume that \(\mathbf{x}=000\) and \(\mathbf{y}=110\). By the assumption that the \(I\)-sets of \(\mathbf{x}\) and \(\mathbf{y}\) are equal, we conclude that the symmetric difference \(N[\mathbf{x}]\Delta N[\mathbf{y}]=\{000,001,110,111\}\) of their neighbourhoods is a subset of \(\mathbb{F}^{3}\setminus C\) and hence \(C\subseteq\mathbb{F}^{3}\setminus(N[\mathbf{x}]\Delta N[\mathbf{y}])=\{100,010,011,101\}=C^{\prime}\). As a superset of the local identifying code \(C\), the code \(C^{\prime}\) is also a local identifying code. However, \(I_{C^{\prime}}(010)=\{010,011\}=I_{C^{\prime}}(011)\). Since \(010\) and \(011\) are neighbours, this means that \(C^{\prime}\) is not a local identifying code, which is a contradiction. Thus, \(C\) has to be an identifying code as well. \(\sqcap\)\(\sqcup\) ### A lower bound By proving an upper bound for the share of an arbitrary codeword of an arbitrary local identifying code, we prove the following theorem, which provides a lower bound for \(M^{L}(n)\) for \(n\geq 3\). **Theorem 2.3**: _For every \(n\geq 3\)_ \[M^{L}(n)\geq\frac{3\cdot 2^{n}}{3n-2}.\] \begin{table} \begin{tabular}{|c||r|r|r|r|} \hline \(n\) & \(M(n)\) & \(M^{L}(n)\) & \(M^{LD}(n)\) & \(K(n)\) \\ \hline 2 & 3 (C) & 2 & 2 (B) & 2 (H) \\ \hline 3 & 4 (C) & 4 & 4 (B) & 2 (H) \\ \hline 4 & 7 (F) & 6 & 6 (B) & 4 (H) \\ \hline 5 & 10 (C) & 8 & 10 (B) & 7 (H) \\ \hline 6 & (G) 19 (F) & \(12-16\) & \(16-18\) (B) & (I) 12 (H) \\ \hline 7 & 32 (F) & \(21-28\) & \(28-32\) (B) & 16 (H) \\ \hline 8 & (C) \(56-61\) (A) & \(35-48\) & (B) \(50-61\) (D) & 32 (H) \\ \hline 9 & (C) \(101-112\) (A) & \(62-64\) & (B) \(91-112\) (A) & (K) 62 (J) \\ \hline 10 & (C) \(183-208\) (A) & \(110-128\) & (E) \(171-208\) (A) & \(107-120\) (K) \\ \hline \end{tabular} \end{table} Table 1: Known values of \(M(n),M^{LD}(n)\) and \(K(n)\) and our contributions concerning the values of \(M^{L}(n)\) for \(n\in[2,10]\). Keys to the table: (A) [20], (B) [21], (C) [3], (D) [22, Appendix], (E) [23], (F) [24], (G) [25], (H) [26], (I) [27], (J) [28], (K) [29]. The left key refers to the lower bound and the right key to the upper bound. When the lower and upper bounds are from the same source, the key is placed only on the right side.
If none of the points \(\mathbf{c}+\mathbf{e}_{j}\) is a codeword, _i.e._, if \(I(\mathbf{c})=\{\mathbf{c}\}\), then each of them is covered by at least two codewords and hence

\[s(\mathbf{c})\leq\frac{1}{|I(\mathbf{c})|}+\sum_{j=1}^{n}\frac{1}{|I(\mathbf{c}+\mathbf{e}_{j})|}\leq 1+\frac{n}{2}. \tag{1}\]

Let us then assume that \(\mathbf{c}+\mathbf{e}_{k}\in C\) for some \(k\in\{1,\ldots,n\}\). In order to separate the neighbours \(\mathbf{c}\) and \(\mathbf{c}+\mathbf{e}_{k}\), the code \(C\) has to cover at least one of them by at least three codewords, _i.e._, \(|I(\mathbf{c})|\geq 3\) or \(|I(\mathbf{c}+\mathbf{e}_{k})|\geq 3\). In both cases \(|I(\mathbf{c}+\mathbf{e}_{l})|\geq 2\) for some \(l\in\{1,\ldots,n\}\setminus\{k\}\): if \(|I(\mathbf{c})|\geq 3\), a third codeword covering \(\mathbf{c}\) is itself such a neighbour \(\mathbf{c}+\mathbf{e}_{l}\), and if \(|I(\mathbf{c}+\mathbf{e}_{k})|\geq 3\), a third codeword covering \(\mathbf{c}+\mathbf{e}_{k}\) is of the form \(\mathbf{c}+\mathbf{e}_{k}+\mathbf{e}_{l}\) with \(l\neq k\), and it covers \(\mathbf{c}+\mathbf{e}_{l}\). Thus,

\[s(\mathbf{c})\leq\frac{1}{3}+2\cdot\frac{1}{2}+(n-2)\cdot 1=\frac{3n-2}{3}.\]

For \(n\geq 4\) we have \(1+\frac{n}{2}\leq\frac{3n-2}{3}\). The claim follows. \(\sqcap\)\(\sqcup\)

We can utilize the concept of share also in proofs where the value of \(n\) is specified, as we will see in the following proof. In the proof of the following theorem, the weight of \(\mathbf{x}\) means the number of its coordinates that have symbol 1.

**Theorem 2.4**:

\[M^{L}(4)=6.\]

**Proof:** It is straightforward to verify that the code

\[C=\{0000,0100,0010,0111,1111,1101\}\]

is a local identifying code in the binary hypercube \(\mathbb{F}^{4}\). Thus, \(M^{L}(4)\leq 6\). Let us show that \(M^{L}(4)\geq 6\). Assume on the contrary that \(M^{L}(4)<6\). Then there exists a local identifying code \(C\subseteq\mathbb{F}^{4}\) with five codewords. We split the proof into two cases based on whether there exists a codeword \(\mathbf{c}\) with \(|I(\mathbf{c})|\geq 3\).

**Case 1.** Assume first that there exists a codeword \(\mathbf{c}\in C\) with \(|I(\mathbf{c})|\geq 3\) and without loss of generality that \(\mathbf{c}=\mathbf{0}\) and \(\{1000,0100\}\subseteq I(\mathbf{c})\). Consider next the subcase with \(\mathbf{1}\in C\), and let us denote the remaining codeword by \(\mathbf{c}_{5}\). The only vertex which is not covered by \(C\) at this point is \(0011\). If the weight of \(\mathbf{c}_{5}\) is three, then \(I(\mathbf{c}_{5})=I(\mathbf{1})\), and if \(\mathbf{c}_{5}\) covers \(0011\) and its weight is at most two, then \(I(\mathbf{1})=I(1110)\); in both cases two neighbours share an \(I\)-set, a contradiction. Thus, we may assume that \(\mathbf{1}\not\in C\). Since \(\mathbf{1}\not\in C\), we require at least one codeword \(\mathbf{c}_{4}\in C\) of weight three to cover \(\mathbf{1}\). However, the four words of weight three must also be covered, and two codewords (other than \(\mathbf{1}\)) can cover all four of them only if both have weight two; since \(\mathbf{c}_{4}\) has weight three, this is a contradiction.

**Case 2.** Assume next that there does not exist a codeword \(\mathbf{c}\in C\) with \(|I(\mathbf{c})|\geq 3\). Hence, we have \(|I(\mathbf{c})|=1\) for each codeword \(\mathbf{c}\in C\). In this case, we consider share. As we have counted in Equation (1), we have \(s(\mathbf{c})\leq n/2+1=3\) for each \(\mathbf{c}\in C\). Thus, \(|C|\geq 2^{4}/3=5\frac{1}{3}>5\). Therefore, we again have a contradiction and \(M^{L}(4)=6\).

In the following subsection we will see that there exist infinitely many \(n\) for which there exists a local identifying code in \(\mathbb{F}^{n}\) whose size is arbitrarily close to the obtained lower bound. This means that for infinitely many \(n\) the size of an optimal identifying code in the binary \(n\)-dimensional hypercube is approximately at least two times larger than the size of an optimal local identifying code.
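Claims of this size are easy to confirm mechanically. The following brute-force checker (our own illustrative sketch in Python, not part of the paper; it uses only the definitions of covering and separation) verifies Example 2.1, confirms that the six-word code of Theorem 2.4 is local identifying, and exhaustively rules out five-word codes in \(\mathbb{F}^{4}\).

```python
from itertools import combinations, product

def closed_neighbourhood(x):
    """N[x] in the binary hypercube: x together with all words obtained
    by flipping exactly one coordinate of x."""
    yield x
    for i in range(len(x)):
        yield x[:i] + (1 - x[i],) + x[i + 1:]

def i_set(code, x):
    """I(x): the set of codewords of `code` that cover x."""
    return frozenset(c for c in closed_neighbourhood(x) if c in code)

def is_local_identifying(code, n):
    """A code is local identifying iff every vertex is covered and any
    two *adjacent* vertices have distinct I-sets."""
    vertices = list(product((0, 1), repeat=n))
    if any(not i_set(code, x) for x in vertices):
        return False  # not even a covering code
    return all(i_set(code, x) != i_set(code, y)
               for x in vertices
               for y in closed_neighbourhood(x) if y > x)

def words(*ws):
    return frozenset(tuple(map(int, w)) for w in ws)

# Example 2.1: {00, 11} is a local identifying code in F^2.
assert is_local_identifying(words("00", "11"), 2)

# Theorem 2.4: the given code of size 6 works in F^4 ...
assert is_local_identifying(
    words("0000", "0100", "0010", "0111", "1111", "1101"), 4)

# ... and no code of size 5 does (16 choose 5 = 4368 candidates).
all_words = list(product((0, 1), repeat=4))
assert not any(is_local_identifying(frozenset(c), 4)
               for c in combinations(all_words, 5))
print("all checks passed")
```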
### Upper bounds

The _direct sum_ of two codes \(C_{1}\subseteq\mathbb{F}^{n}\) and \(C_{2}\subseteq\mathbb{F}^{m}\) is the code

\[C_{1}\oplus C_{2}=\{(\mathbf{c}_{1},\mathbf{c}_{2})\mid\mathbf{c}_{1}\in C_{1},\mathbf{c}_{2}\in C_{2}\}\subseteq\mathbb{F}^{n+m}.\]

**Lemma 2.5**.: Let \(C\subseteq\mathbb{F}^{n}\) be a local identifying code. Then the code \(C^{\prime}=\mathbb{F}\oplus C\subseteq\mathbb{F}^{n+1}\) is a local identifying code if and only if \(|I(\mathbf{c})|\geq 2\) for every \(\mathbf{c}\in C\).

**Proof:** Let us first assume that \(C^{\prime}=\mathbb{F}\oplus C\) is a local identifying code. Assume on the contrary that there exists a codeword \(\mathbf{c}\in C\) such that \(I_{C}(\mathbf{c})=\{\mathbf{c}\}\). Then \(C^{\prime}\) does not separate the neighbours \((0,\mathbf{c})\) and \((1,\mathbf{c})\), a contradiction.

For the converse direction, assume then that \(|I_{C}(\mathbf{c})|\geq 2\) for every \(\mathbf{c}\in C\). Let \(\mathbf{x}^{\prime}=(a,\mathbf{x})\in\mathbb{F}^{n+1}\) where \(a\in\mathbb{F}\) and \(\mathbf{x}\in\mathbb{F}^{n}\). If \(\mathbf{x}\in C\), then \(I_{C^{\prime}}(\mathbf{x}^{\prime})=\{\mathbf{x}^{\prime},(a+1,\mathbf{x})\}\cup\{(a,\mathbf{c})\mid\mathbf{c}\in I_{C}(\mathbf{x})\}\) and if \(\mathbf{x}\not\in C\), then \(I_{C^{\prime}}(\mathbf{x}^{\prime})=\{(a,\mathbf{c})\mid\mathbf{c}\in I_{C}(\mathbf{x})\}\). Thus, \(I_{C^{\prime}}(\mathbf{x}^{\prime})\neq\emptyset\) since \(I_{C}(\mathbf{x})\neq\emptyset\) and hence \(C^{\prime}\) is a covering code.

Let us then show that \(C^{\prime}\) separates any two neighbours. So, let \(\mathbf{x}^{\prime}=(a,\mathbf{x})\) and \(\mathbf{y}^{\prime}=(b,\mathbf{y})\) be neighbours. Assume first that \(a=b\). Then \(\mathbf{x}\) and \(\mathbf{y}\) have to be neighbours. Since \(C\) is a local identifying code, we have \(I_{C}(\mathbf{x})\neq I_{C}(\mathbf{y})\) and hence also \(I_{C^{\prime}}(\mathbf{x}^{\prime})\neq I_{C^{\prime}}(\mathbf{y}^{\prime})\). Assume then that \(a\neq b\). We have \(\mathbf{x}=\mathbf{y}\) since \(\mathbf{x}^{\prime}\) and \(\mathbf{y}^{\prime}\) are neighbours. By the assumption \(|I_{C}(\mathbf{c})|\geq 2\) for every \(\mathbf{c}\in C\), there exists a codeword \(\mathbf{c}\in I_{C}(\mathbf{x})=I_{C}(\mathbf{y})\) such that \(\mathbf{c}\neq\mathbf{x}=\mathbf{y}\). Thus, \((a,\mathbf{c}),(b,\mathbf{c})\in I_{C^{\prime}}(\mathbf{x}^{\prime})\triangle I_{C^{\prime}}(\mathbf{y}^{\prime})\) and hence \(I_{C^{\prime}}(\mathbf{x}^{\prime})\neq I_{C^{\prime}}(\mathbf{y}^{\prime})\). So, we conclude that \(C^{\prime}\) separates any two neighbours and hence \(C^{\prime}\) is a local identifying code.

In the following lemma, we use the fact that if the intersection \(N[\mathbf{x}]\cap N[\mathbf{y}]\cap N[\mathbf{z}]\) of the neighbourhoods of three distinct binary words \(\mathbf{x},\mathbf{y},\mathbf{z}\) is non-empty, then it in fact contains a unique point. This implies, in particular, that if \(|I(\mathbf{c})|\geq 3\) for some codeword \(\mathbf{c}\) of a code \(C\subseteq\mathbb{F}^{n}\), then \(I(\mathbf{c})\) is unique, meaning that \(I(\mathbf{x})\neq I(\mathbf{c})\) for every vertex \(\mathbf{x}\in\mathbb{F}^{n}\setminus\{\mathbf{c}\}\).

**Lemma 2.6**.: Let \(C\subseteq\mathbb{F}^{n}\) be a code such that \(|I(\mathbf{c})|\geq 3\) for every \(\mathbf{c}\in C\) and \(|I(\mathbf{x})|\geq 1\) for every \(\mathbf{x}\in\mathbb{F}^{n}\setminus C\). Then \(C\) is a local identifying code.
**Proof:** By the assumptions \(C\) is a covering code and thus also a local locating-dominating code by Lemma 1.5. It remains to show that \(C\) separates any two neighbours. So, let \(\mathbf{x}\in\mathbb{F}^{n}\) and \(\mathbf{y}\in\mathbb{F}^{n}\) be neighbours. If \(\mathbf{c}\in C\), then \(I(\mathbf{c})\) is unique since \(|I(\mathbf{c})|\geq 3\) by the assumption. Thus, if one of \(\mathbf{x}\) or \(\mathbf{y}\) is a codeword of \(C\), then \(C\) separates them. Hence, let us assume that both \(\mathbf{x}\) and \(\mathbf{y}\) are non-codewords. The code \(C\) separates them since it is a local locating-dominating code. \(\sqcap\)\(\sqcup\)

The above lemma yields the following corollary.

**Corollary 2.7**:

\[M^{L}(n+2)\leq 4K(n).\]

**Proof:** Let \(C^{\prime}\subseteq\mathbb{F}^{n}\) be a covering code in \(\mathbb{F}^{n}\) and \(C=\mathbb{F}^{2}\oplus C^{\prime}\). Observe that for each \(\mathbf{c}\in C\) we have \(|I(\mathbf{c})|\geq 3\). Moreover, since \(C^{\prime}\) is a covering code in \(\mathbb{F}^{n}\), \(C\) is a covering code in \(\mathbb{F}^{n+2}\). Thus, by Lemma 2.6, \(C\) is a local identifying code. \(\sqcap\)\(\sqcup\)

**Remark 2.8**: An observant reader may notice that Lemma 2.6 also gives another upper bound for \(M^{L}(n+1)\). Consider a covering code \(C\) such that \(N(\mathbf{w})\cap C\neq\emptyset\) for each \(\mathbf{w}\in\mathbb{F}^{n}\). In this case, \(C\) is said to be a _total dominating set_ and the minimum cardinality of a total dominating set in \(\mathbb{F}^{n}\) is denoted by \(\gamma^{TD}(n)\). By Lemma 2.6, we have \(M^{L}(n+1)\leq 2\gamma^{TD}(n)\). However, by [30, 31, 32], we have \(2K(n)=\gamma^{TD}(n+1)\). Hence, this approach yields the same upper bound as Corollary 2.7.

Similar ideas to those in the proof of Lemma 2.6 also give the following code.

**Proposition 2.9**: We have \(M^{L}(6)\leq 15\).

**Proof:** The claim follows from the code \(C=\{100000,010000,110000\}\cup\{001100,001110,001101\}\cup\{000011,\\ 100011,010011\}\cup\{111110,111010,110110\}\cup\{111101,011101,101101\}\). First, the code \(C\) is covering and hence local locating-dominating. Thus, \(C\) separates adjacent non-codewords. Secondly, the subgraph induced by the codewords consists of five separate paths on three vertices. Hence, \(C\) separates all codewords from their neighbours. \(\sqcap\)\(\sqcup\)

Next, we present a general construction of a linear local identifying code which then gives an upper bound for \(M^{L}(n)\). A code \(C\) is a _linear code_ if for any codewords \(\mathbf{c}_{1},\mathbf{c}_{2}\in C\), we have \(\mathbf{c}_{1}+\mathbf{c}_{2}\in C\). Notice that if \(C\) is a linear code in \(\mathbb{F}^{n}\), then \(C\oplus\mathbb{F}\) is a linear code in \(\mathbb{F}^{n+1}\). For a more thorough survey on the topic see the book [26].

**Definition 2.10**: Let \(s\geq 2\). A _binary Hamming code_ of length \(n=2^{s}-1\) is a linear covering code \(\mathcal{H}_{s}\subseteq\mathbb{F}^{2^{s}-1}\) which contains exactly \(2^{n-s}\) codewords.

**Theorem 2.11**: For any binary Hamming code \(\mathcal{H}_{s}\) the closed neighbourhoods of its codewords partition the whole space \(\mathbb{F}^{2^{s}-1}\).

Notice that by Corollary 2.7, the code \(\mathcal{H}_{s}\oplus\mathbb{F}^{2}\) is a linear local identifying code in \(\mathbb{F}^{n+2}\) for \(n=2^{s}-1\geq 3\). Hence, by Lemma 2.5, \(\mathcal{H}_{s}\oplus\mathbb{F}^{k}\) is a linear local identifying code in \(\mathbb{F}^{n+k}\) for \(k\geq 2\).

**Corollary 2.12**.: Let \(s,k\geq 2\) and \(n=2^{s}+k-1\).
Then

\[M^{L}(n)\leq 2^{2^{s}+k-s-1}.\]

In particular, when \(k=2\) we have \(n=2^{s}+1\), for \(s\geq 2\), and

\[\frac{2^{n}}{n-2/3}\leq M^{L}(n)\leq 2^{n-\log_{2}(n-1)}=\frac{2^{n}}{n-1}.\]

The lower bound is the one from Theorem 2.3.

Next, we compare the lower and upper bounds of local identifying codes to covering codes and identifying codes. We will see that both our lower and upper bounds are quite good and essentially tight for infinitely many values of \(n\). We see that for arbitrarily large \(n=2^{s}+k-1\) where \(k\) is "small" the upper bound of Corollary 2.12 is close to the lower bound of Theorem 2.3. Actually, it is close to the following lower bound for covering codes: \(K(n)\geq\frac{2^{n}}{n+1}\). This means that the lower bound is very close to optimal for infinitely many \(n\) and the cost of turning a covering code into a local identifying code is small. Furthermore, in [3], the authors have given the following lower bound for identifying codes: \(M(n)\geq 2\cdot\frac{2^{n}}{n+1+2/n}\). When we compare this to the upper bound in Corollary 2.12, we notice that the cardinality of local identifying codes is roughly half of the cardinality of identifying codes (actually this holds even for locating-dominating codes). For example, by Theorem 2.3 and Corollary 2.12 we are able to conclude that \(M^{L}(9)\in\{62,63,64\}\) while \(M(9)\in\{101,\ldots,112\}\) and \(M^{LD}(9)\in\{91,\ldots,112\}\) as we can see from Table 1 (the short computation sketched below reproduces these bounds).

In particular, combining the above upper bound and our lower bound of Theorem 2.3, we have the following result which gives the precise value of \(M^{L}(5)\).

**Theorem 2.13**.:

\[M^{L}(5)=8.\]

**Proof:** By Theorem 2.3 we have \(M^{L}(5)\geq\frac{3\cdot 2^{5}}{3\cdot 5-2}=\frac{3\cdot 32}{13}\approx 7.38\) and hence \(M^{L}(5)\geq 8\). On the other hand, by Corollary 2.12 we have the upper bound

\[M^{L}(5)\leq 2^{2^{2}+2-2-1}=8.\]

\(\sqcap\)\(\sqcup\)

## 3 Local identifying and local locating-dominating codes in infinite grids

Let us begin by defining the graphs we consider next. For a pictorial illustration of these graphs see Figure 3.

**Definition 3.1**.: An _infinite grid_ is one of the following four graphs.

* The _square grid_ is the graph \(\mathcal{S}=(\mathbb{Z}^{2},E_{\mathcal{S}})\) where \[E_{\mathcal{S}}=\{\{\mathbf{u},\mathbf{v}\}\ |\ \mathbf{u}-\mathbf{v}\in\{(\pm 1,0),(0,\pm 1)\}\}.\]
* The _hexagonal grid_ is the graph \(\mathcal{H}=(\mathbb{Z}^{2},E_{\mathcal{H}})\) where \[E_{\mathcal{H}}=\{\{\mathbf{u}=(i,j),\mathbf{v}\}\ |\ \mathbf{u}-\mathbf{v}\in\{(\pm 1,0),(0,(-1)^{i+j+1})\}\}.\]
* The _triangular grid_ is the graph \(\mathcal{T}=(\mathbb{Z}^{2},E_{\mathcal{T}})\) where \[E_{\mathcal{T}}=\{\{\mathbf{u},\mathbf{v}\}\ |\ \mathbf{u}-\mathbf{v}\in\{(\pm 1,0),(0,\pm 1),(1,1),(-1,-1)\}\}.\]
* The _king grid_ is the graph \(\mathcal{K}=(\mathbb{Z}^{2},E_{\mathcal{K}})\) where \[E_{\mathcal{K}}=\{\{\mathbf{u},\mathbf{v}\}\ |\ \mathbf{u}-\mathbf{v}\in\{(\pm 1,0),(0,\pm 1),(\pm 1,\pm 1)\}\}.\]

Figure 3: Infinite grids.

Next, we define the concept of _density of a code_ in infinite grids.

**Definition 3.2**.: Let \(G\) be an infinite grid and let \(C\subseteq\mathbb{Z}^{2}\) be a code in \(G\). The _density_ \(D(C)\) of \(C\) is defined as

\[D(C)=\limsup_{n\to\infty}\frac{|C\cap Q_{n}|}{|Q_{n}|}\]

where \(Q_{n}=\{(i,j)\in\mathbb{Z}^{2}\ |\ |i|\leq n,\ |j|\leq n\}\). We say that a code in some class of codes is _optimal_ if it has the smallest density among the codes in the same class.
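Before moving on to the grids, here is the quick numerical check of the hypercube bounds promised above. The few lines below are our own sketch; they implement nothing beyond Theorem 2.3 and Corollary 2.12, and reproduce \(M^{L}(5)=8\), \(M^{L}(9)\in\{62,\ldots,64\}\) and the window \(110\)–\(128\) for \(n=10\) from Table 1.

```python
from math import ceil

def lower_bound(n):
    """Theorem 2.3: M^L(n) >= 3 * 2^n / (3n - 2)."""
    return ceil(3 * 2**n / (3 * n - 2))

def upper_bound(n):
    """Corollary 2.12: M^L(2^s + k - 1) <= 2^(2^s + k - s - 1) for s, k >= 2;
    we take the best admissible choice of s for the given n."""
    candidates = []
    s = 2
    while 2**s + 1 <= n:          # need k = n - 2^s + 1 >= 2
        candidates.append(2**(2**s + (n - 2**s + 1) - s - 1))
        s += 1
    return min(candidates)

for n in (5, 9, 10):
    print(n, lower_bound(n), upper_bound(n))
# -> 5 8 8,  9 62 64,  10 110 128
```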
In this section, we denote by \(\gamma^{ID}(G)\), \(\gamma^{LD}(G)\), \(\gamma^{L-ID}(G)\) and \(\gamma^{L-LD}(G)\) the _densities_ of optimal identifying, locating-dominating, local identifying and local locating-dominating codes, respectively, in an infinite grid \(G\). The densities \(\gamma^{ID}(G)\) and \(\gamma^{LD}(G)\) are all known when \(G\) is the square, the triangular or the king grid. The number \(\gamma^{LD}(\mathcal{H})\) is also known while the number \(\gamma^{ID}(\mathcal{H})\) is currently still unknown. However, note that interestingly the exact value of the density of optimal 2-identifying codes in the hexagonal grid is known [33]. In Table 2 we have listed the known values for the densities of optimal identifying and locating-dominating codes in each infinite grid (and an interval to which we know the density \(\gamma^{ID}(\mathcal{H})\) belongs), together with our contributions to the optimal densities of local identifying and local locating-dominating codes.

We study the densities of optimal local identifying and local locating-dominating codes in these four grids. Again, we use shares and, in particular, the following well-known lemma analogous to Lemma 1.9. For completeness, we provide a proof for this result.

**Lemma 3.3**: _Let \(G\) be an infinite grid and let \(C\subseteq\mathbb{Z}^{2}\) be a covering code in \(G\). If for some real \(\alpha>0\) we have \(s(\mathbf{c})\leq\alpha\) for every \(\mathbf{c}\in C\), then \(D(C)\geq\frac{1}{\alpha}\)._

**Proof:** By the assumption that \(s(\mathbf{c})\leq\alpha\) for every \(\mathbf{c}\in C\) and by the fact that \(C\) is a covering code we have

\[|Q_{n-1}|\leq\sum_{\mathbf{c}\in C\cap Q_{n}}s(\mathbf{c})\leq|C\cap Q_{n}|\cdot\alpha\]

for any \(n\geq 1\). Thus,

\[|C\cap Q_{n}|\geq\frac{|Q_{n-1}|}{\alpha}\]

and hence

\[D(C)=\limsup_{n\to\infty}\frac{|C\cap Q_{n}|}{|Q_{n}|}\geq\limsup_{n\to\infty}\frac{|Q_{n-1}|}{\alpha\cdot|Q_{n}|}=\frac{1}{\alpha}\cdot\limsup_{n\to\infty}\frac{|Q_{n-1}|}{|Q_{n}|}=\frac{1}{\alpha}\]

since \(|Q_{n}|=(2n+1)^{2}\) which implies that \(\limsup_{n\to\infty}\frac{|Q_{n-1}|}{|Q_{n}|}=1\). \(\sqcap\)\(\sqcup\)

So, by finding an upper bound for the share of an arbitrary codeword of a code, we obtain a lower bound for the density of the code. By analyzing the possible shares of codewords of local identifying and local locating-dominating codes in \(G\) we get lower bounds for the numbers \(\gamma^{L-ID}(G)\) and \(\gamma^{L-LD}(G)\) for different grids \(G\). To improve the lower bounds obtained by analyzing the maximal shares of the codewords of a code, we sometimes use a _share shifting scheme_ where we modify the share function by shifting shares among codewords according to some local rules such that the total share remains the same.
\begin{table}
\begin{tabular}{|c||c|c|c|c|}
\hline \(G\) & \(\mathcal{S}\) & \(\mathcal{H}\) & \(\mathcal{T}\) & \(\mathcal{K}\) \\
\hline \(\gamma^{ID}(G)\) & \(\frac{7}{20}\) ([34]) & \(\frac{5}{12}-\frac{3}{7}\) ([35, 36]) & \(\frac{1}{4}\) ([3]) & \(\frac{2}{9}\) ([37, 38]) \\
\hline \(\gamma^{LD}(G)\) & \(\frac{3}{10}\) ([13]) & \(\frac{1}{3}\) ([39]) & \(\frac{13}{57}\) ([40]) & \(\frac{1}{5}\) ([39]) \\
\hline \(\gamma^{L-ID}(G)\) & \(\frac{3}{11}\) & \(\frac{3}{8}\) & \(\frac{1}{4}\) & \(\frac{2}{9}\) \\
\hline \(\gamma^{L-LD}(G)\) & \(\frac{1}{5}\) & \(\frac{1}{4}\) & \(\frac{2}{11}-\frac{2}{9}\) & \(\frac{3}{16}\) \\
\hline \end{tabular}
\end{table}
Table 2: Known values for the densities of optimal identifying and locating-dominating codes and contributions of this paper for the densities of optimal local identifying and local locating-dominating codes in infinite grids.

For a code \(C\) in a finite graph this means that \(\sum_{\mathbf{c}\in C}s(\mathbf{c})=\sum_{\mathbf{c}\in C}s^{\prime}(\mathbf{c})\), and for a code \(C\) in an infinite grid this means that \(\sum_{\mathbf{c}\in C\cap Q_{n}}s(\mathbf{c})\leq\sum_{\mathbf{c}\in C\cap Q_{n+r}}s^{\prime}(\mathbf{c})\) where \(r\) is the maximum distance from a codeword to another codeword it shifts share to. The following lemma states that an upper bound for the modified share function yields a lower bound for the density of a code.

**Lemma 3.4**.: Let \(G\) be an infinite grid and let \(C\subseteq\mathbb{Z}^{2}\) be a covering code in \(G\). Let \(s^{\prime}\) be a modified share function of \(C\) obtained by a share shifting scheme. If \(s^{\prime}(\mathbf{c})\leq\alpha\) for every \(\mathbf{c}\in C\), then \(D(C)\geq\frac{1}{\alpha}\).

Proof.: Assume that in the share shifting scheme that defines \(s^{\prime}\) codewords obtain shifted share from codewords within distance \(r\) from them. Since \(C\) is a covering code, we have

\[|Q_{n-1}|\leq\sum_{\mathbf{c}\in C\cap Q_{n}}s(\mathbf{c})\]

and since the total share in \(Q_{n}\) stays in \(Q_{n+r}\), we have

\[\sum_{\mathbf{c}\in C\cap Q_{n}}s(\mathbf{c})\leq\sum_{\mathbf{c}\in C\cap Q_{n}}s^{\prime}(\mathbf{c})+\alpha\cdot|Q_{n+r}\setminus Q_{n}|\leq\alpha\cdot|C\cap Q_{n}|+\alpha\cdot|Q_{n+r}\setminus Q_{n}|.\]

By combining these, we get

\[|C\cap Q_{n}|\geq\frac{1}{\alpha}|Q_{n-1}|-|Q_{n+r}\setminus Q_{n}|.\]

Thus,

\[D(C)=\limsup_{n\to\infty}\frac{|C\cap Q_{n}|}{|Q_{n}|}\geq\frac{1}{\alpha}\limsup_{n\to\infty}\frac{|Q_{n-1}|}{|Q_{n}|}-\limsup_{n\to\infty}\frac{|Q_{n+r}\setminus Q_{n}|}{|Q_{n}|}=\frac{1}{\alpha}-0=\frac{1}{\alpha}.\]

### The square and the hexagonal grids

Since the square and the hexagonal grids are triangle-free, Lemma 1.5 gives the following theorem. See Figures 4a and 4b for constructions.

**Theorem 3.5**.:

\[\gamma^{L-LD}(\mathcal{S})=\frac{1}{5}\]

and

\[\gamma^{L-LD}(\mathcal{H})=\frac{1}{4}.\]

The following two theorems give the exact values for the densities of optimal local identifying codes in the square and the hexagonal grids. Observe that the constructions of Theorem 3.5 are clearly optimal, since each vertex has exactly one codeword in its closed neighbourhood.

**Theorem 3.6**:

\[\gamma^{L-ID}(\mathcal{S})=\frac{3}{11}.\]

**Proof:** By a construction, in Figure 5a, of a local identifying code in the square grid of density \(\frac{3}{11}\), we have \(\gamma^{L-ID}(\mathcal{S})\leq\frac{3}{11}\). Next, we prove that \(\gamma^{L-ID}(\mathcal{S})\geq\frac{3}{11}\) using a share shifting scheme. Let \(C\) be a local identifying code in the square grid.
In our share shifting scheme we shift \(1/6\) share units from a codeword \(\mathbf{c}\in C\) to its unique codeword neighbour if \(|I(\mathbf{c})|=2\). In all the other cases no share is shifted. Let us denote by \(s^{\prime}\) the modified share function after applying the introduced scheme. We claim that \(s^{\prime}(\mathbf{c})\leq\frac{11}{3}\) for all \(\mathbf{c}\in C\) which yields by Lemma 3.4 that \(D(C)\geq\frac{3}{11}\) and hence \(\gamma^{L-ID}(\mathcal{S})\geq\frac{3}{11}\).

So, let \(\mathbf{c}\in C\) be an arbitrary codeword of \(C\). Assume first that \(|I(\mathbf{c})|=1\), _i.e._, that \(I(\mathbf{c})=\{\mathbf{c}\}\). Every neighbour of \(\mathbf{c}\) is covered by at least two codewords since otherwise the code \(C\) would not separate \(\mathbf{c}\) from all of its neighbours. So, in this case we have \(s(\mathbf{c})\leq 1+4\cdot\frac{1}{2}=3<\frac{11}{3}\). Since \(\mathbf{c}\) has no codeword neighbours, no share is shifted to \(\mathbf{c}\) and hence

\[s^{\prime}(\mathbf{c})=s(\mathbf{c})\leq 3<\frac{11}{3}.\]

Assume then that \(|I(\mathbf{c})|=2\) and let \(\mathbf{c}^{\prime}\) be the unique codeword neighbour of \(\mathbf{c}\). Since \(C\) separates \(\mathbf{c}\) and \(\mathbf{c}^{\prime}\), we have \(|I(\mathbf{c}^{\prime})|\geq 3\) and hence \(s(\mathbf{c})\leq\frac{1}{2}+\frac{1}{3}+3\cdot 1=\frac{23}{6}=3\frac{5}{6}\). Next, we shift \(1/6\) share units from \(\mathbf{c}\), and no share is shifted to \(\mathbf{c}\) because \(\mathbf{c}\) has no codeword neighbours with exactly one codeword neighbour. So, we have

\[s^{\prime}(\mathbf{c})\leq\frac{23}{6}-\frac{1}{6}=3\frac{4}{6}=\frac{11}{3}.\]

Figure 4: Local location-domination in the square and hexagonal grids.

Finally, assume that \(|I(\mathbf{c})|\geq 3\). If \(|I(\mathbf{c})|=3\), then \(s(\mathbf{c})\leq\frac{1}{3}+2\cdot\frac{1}{2}+2\cdot 1=\frac{10}{3}=3\frac{1}{3}\) and hence \(s^{\prime}(\mathbf{c})\leq\frac{10}{3}+2\cdot\frac{1}{6}=\frac{11}{3}\). If \(|I(\mathbf{c})|\geq 4\), then \(s(\mathbf{c})\leq\frac{1}{4}+3\cdot\frac{1}{2}+1=\frac{11}{4}=2\frac{3}{4}\) and hence \(s^{\prime}(\mathbf{c})\leq 2\frac{3}{4}+4\cdot\frac{1}{6}<\frac{11}{3}\). We have shown that \(s^{\prime}(\mathbf{c})\leq\frac{11}{3}\) for an arbitrary \(\mathbf{c}\in C\). The claim follows.

**Theorem 3.7**.:

\[\gamma^{L-ID}(\mathcal{H})=\frac{3}{8}.\]

Proof.: By a construction, in Figure 5b, of a local identifying code in the hexagonal grid of density \(\frac{3}{8}\), we have \(\gamma^{L-ID}(\mathcal{H})\leq\frac{3}{8}\). Next, we prove that \(\gamma^{L-ID}(\mathcal{H})\geq\frac{3}{8}\) using a share shifting scheme. Let \(C\) be a local identifying code in the hexagonal grid. In our share shifting scheme we shift \(1/6\) share units from a codeword \(\mathbf{c}\in C\) to its unique codeword neighbour if \(|I(\mathbf{c})|=2\). In all the other cases no share is shifted. Let us denote by \(s^{\prime}\) the modified share function after applying the introduced scheme. We claim that \(s^{\prime}(\mathbf{c})\leq\frac{8}{3}\) for all \(\mathbf{c}\in C\) which yields by Lemma 3.4 that \(D(C)\geq\frac{3}{8}\) and hence \(\gamma^{L-ID}(\mathcal{H})\geq\frac{3}{8}\).

So, let \(\mathbf{c}\in C\) be an arbitrary codeword of \(C\). Assume first that \(|I(\mathbf{c})|=1\). Every neighbour of \(\mathbf{c}\) is covered by at least two codewords since otherwise the code \(C\) would not separate \(\mathbf{c}\) from all of its neighbours. In this case we have \(s(\mathbf{c})\leq 1+3\cdot\frac{1}{2}<\frac{8}{3}\).
Since \(\mathbf{c}\) has no codeword neighbours, no share is shifted to \(\mathbf{c}\) and hence

\[s^{\prime}(\mathbf{c})=s(\mathbf{c})<\frac{8}{3}.\]

Assume then that \(|I(\mathbf{c})|=2\) and let \(\mathbf{c}^{\prime}\) be the unique codeword neighbour of \(\mathbf{c}\). Since \(C\) separates \(\mathbf{c}\) and \(\mathbf{c}^{\prime}\), we have \(|I(\mathbf{c}^{\prime})|\geq 3\) and hence \(s(\mathbf{c})\leq\frac{1}{2}+\frac{1}{3}+2\cdot 1=\frac{17}{6}=2\frac{5}{6}\). Now we shift \(1/6\) share units from \(\mathbf{c}\) to \(\mathbf{c}^{\prime}\) and clearly no share is shifted to \(\mathbf{c}\). Thus,

\[s^{\prime}(\mathbf{c})\leq\frac{17}{6}-\frac{1}{6}=\frac{8}{3}.\]

Figure 5: Local identifying codes in the square and hexagonal grids.

Finally, assume that \(|I(\mathbf{c})|\geq 3\). If \(|I(\mathbf{c})|=3\), then \(s(\mathbf{c})\leq\frac{1}{3}+2\cdot\frac{1}{2}+1=2\frac{1}{3}\) and hence \(s^{\prime}(\mathbf{c})\leq 2\frac{1}{3}+2\cdot\frac{1}{6}=\frac{8}{3}\). If \(|I(\mathbf{c})|=4\), then \(s(\mathbf{c})\leq\frac{1}{4}+3\cdot\frac{1}{2}=1\frac{3}{4}\) and hence \(s^{\prime}(\mathbf{c})\leq 1\frac{3}{4}+3\cdot\frac{1}{6}<\frac{8}{3}\).

### The triangular grid

**Theorem 3.8**:

\[\gamma^{L-LD}(\mathcal{T})\in\left[\frac{2}{11},\frac{2}{9}\right].\]

**Proof:** In Figure 6a we have constructed a local locating-dominating code of density \(\frac{2}{9}\). Thus, \(\gamma^{L-LD}(\mathcal{T})\leq\frac{2}{9}\). Next, we show that \(\gamma^{L-LD}(\mathcal{T})\geq\frac{2}{11}\). So, let \(C\) be a local locating-dominating code in the triangular grid and let \(\mathbf{c}\in C\) be an arbitrary codeword. We show that \(s(\mathbf{c})\leq\frac{11}{2}\) which gives the claim together with Lemma 3.3. Assume first that \(\mathbf{c}\) has a codeword neighbour. We have \(s(\mathbf{c})\leq 4\cdot\frac{1}{2}+3\cdot 1=\frac{10}{2}<\frac{11}{2}\). Assume then that \(I(\mathbf{c})=\{\mathbf{c}\}\). Since \(C\) is a local locating-dominating code, it has to separate any two non-codeword neighbours, and hence it can cover at most three neighbours of \(\mathbf{c}\) only by \(\mathbf{c}\). Thus, \(s(\mathbf{c})\leq 4\cdot 1+3\cdot\frac{1}{2}=\frac{11}{2}\).

**Theorem 3.9**:

\[\gamma^{L-ID}(\mathcal{T})=\frac{1}{4}=\gamma^{ID}(\mathcal{T}).\]

**Proof:** Since any identifying code is also a local identifying code, we have the upper bound \(\gamma^{L-ID}(\mathcal{T})\leq\gamma^{ID}(\mathcal{T})=\frac{1}{4}\) (see Table 2). Next, we prove the lower bound \(\gamma^{L-ID}(\mathcal{T})\geq\frac{1}{4}\). So, let \(C\subseteq\mathbb{Z}^{2}\) be a local identifying code in the triangular grid. We show that \(D(C)\geq\frac{1}{4}\) by showing that \(s(\mathbf{c})\leq 4\) for all \(\mathbf{c}\in C\) which then gives the claim by Lemma 3.3.

Let \(\mathbf{c}\in C\) be an arbitrary codeword. Assume first that \(I(\mathbf{c})=\{\mathbf{c}\}\). Since \(C\) separates \(\mathbf{c}\) from its neighbours, every neighbour of \(\mathbf{c}\) is covered by at least two codewords and hence \(s(\mathbf{c})\leq 1+6\cdot\frac{1}{2}=4\). Assume then that \(|I(\mathbf{c})|\geq 2\) and let \(\mathbf{c}^{\prime}\in C\) be a codeword neighbour of \(\mathbf{c}\). The codeword \(\mathbf{c}\) has three neighbours, say \(\mathbf{u}_{1},\mathbf{u}_{2}\) and \(\mathbf{u}_{3}\), that are not covered by \(\mathbf{c}^{\prime}\). One of them is a neighbour of the two others. Without loss of generality we may assume that \(\mathbf{u}_{2}\) is a neighbour of both \(\mathbf{u}_{1}\) and \(\mathbf{u}_{3}\).
It follows that \(\mathbf{u}_{1}\) and \(\mathbf{u}_{3}\) are not neighbours. If \(C\) covers at least two of these three points by at least two codewords, then \(s(\mathbf{c})\leq 6\cdot\frac{1}{2}+1=4\). So, let us assume that \(C\) covers two of these points by only one codeword, namely by \(\mathbf{c}\). Note that if \(C\) covers \(\mathbf{u}_{2}\) only by \(\mathbf{c}\), then its neighbours \(\mathbf{u}_{1}\) and \(\mathbf{u}_{3}\) are covered by at least two codewords because \(C\), being a local identifying code, separates \(\mathbf{u}_{2}\) from its neighbours. So, we assume that \(I(\mathbf{u}_{1})=\{\mathbf{c}\}=I(\mathbf{u}_{3})\), and then we have \(|I(\mathbf{u}_{2})|\geq 2\) and \(I(\mathbf{c})=\{\mathbf{c},\mathbf{c}^{\prime}\}\). The codeword neighbour \(\mathbf{c}^{\prime}\) of \(\mathbf{c}\) covers the two remaining neighbours of \(\mathbf{c}\), say \(\mathbf{u}_{4}\) and \(\mathbf{u}_{5}\), and of course \(\mathbf{c}\) and \(\mathbf{c}^{\prime}\). Since \(I(\mathbf{c})=\{\mathbf{c},\mathbf{c}^{\prime}\}\), the code \(C\) covers the points \(\mathbf{u}_{4},\mathbf{u}_{5}\) and \(\mathbf{c}^{\prime}\) by at least three codewords in order to separate them from their neighbour \(\mathbf{c}\). Thus, we have

\[s(\mathbf{c})\leq 2\cdot\frac{1}{2}+2\cdot 1+3\cdot\frac{1}{3}=4.\]

### The king grid

Next, we consider the optimal densities of local locating-dominating and local identifying codes in the king grid \(\mathcal{K}\). Let us begin with some terminology. We call the points \(\mathbf{x}+(\pm 1,\pm 1)\) the _corner neighbours_ of \(\mathbf{x}\in\mathbb{Z}^{2}\) and the points \(\mathbf{x}+(\pm 1,0),\mathbf{x}+(0,\pm 1)\) the _non-corner neighbours_ of \(\mathbf{x}\). If \(\mathbf{y}\) is a corner neighbour of \(\mathbf{x}\), then \(|N[\mathbf{y}]\cap N[\mathbf{x}]|=4\) and if \(\mathbf{y}\) is a non-corner neighbour of \(\mathbf{x}\), then \(|N[\mathbf{y}]\cap N[\mathbf{x}]|=6\). We say that two corner neighbours of a point \(\mathbf{x}\) are _adjoining_ if their Euclidean distance is \(2\) and they are _opposite_ if they are not adjoining, _i.e._, if their Euclidean distance is \(2\sqrt{2}\). Two non-corner neighbours are adjoining if their Euclidean distance is \(\sqrt{2}\) and opposite if they are not adjoining in which case their Euclidean distance is \(2\). Note that two adjoining non-corner neighbours of a vertex are neighbours, in particular. A non-corner neighbour of \(\mathbf{x}\) is _between_ two adjoining corner neighbours of \(\mathbf{x}\) if it is at Euclidean distance \(1\) from both of them.

Figure 6: Local identifying and locating-dominating codes in the triangular and king grids.

**Theorem 3.10**.:

\[\gamma^{L-LD}(\mathcal{K})=\frac{3}{16}.\]

Proof.: By a construction we have \(\gamma^{L-LD}(\mathcal{K})\leq\frac{3}{16}\), see Figure 6b. Next, we show that \(\gamma^{L-LD}(\mathcal{K})\geq\frac{3}{16}\). Let \(C\subseteq\mathbb{Z}^{2}\) be a local locating-dominating code in the king grid. We claim that any \(4\times 4\) square \(D\subseteq\mathbb{Z}^{2}\) contains at least three codewords of \(C\). This implies that \(D(C)\geq\frac{3}{16}\). Let \(\mathbf{t}\in\mathbb{Z}^{2}\) be such that \(D=\{0,1,2,3\}\times\{0,1,2,3\}+\mathbf{t}\). Let \(D^{\prime}=\{1,2\}\times\{1,2\}+\mathbf{t}\) be the \(2\times 2\) square inside of \(D\) that does not intersect the border of \(D\). Notice that the neighbourhood of \(D^{\prime}\) is \(D\). We have four separate cases according to the number of codewords in \(D^{\prime}\).
* If \(|C\cap D^{\prime}|\in\{3,4\}\), then \(|C\cap D|\geq 3\) and hence the claim holds. * Assume that \(|C\cap D^{\prime}|=2\). Let \(\mathbf{a}\) and \(\mathbf{b}\) be the two non-codewords in \(D^{\prime}\). They are neighbours and the two codewords in \(D^{\prime}\) are also neighbours of both \(\mathbf{a}\) and \(\mathbf{b}\). This means that there must be a third codeword in \(D\) which separates \(\mathbf{a}\) and \(\mathbf{b}\) and hence \(|C\cap D|\geq 3\). * Assume that \(|C\cap D^{\prime}|=1\). We need at least two more codewords in \(D\) to separate the three non-codewords in \(D^{\prime}\) from each other. Thus, \(|C\cap D|\geq 3\). * Finally, assume that \(|C\cap D^{\prime}|=0\). With one codeword in \(D\setminus D^{\prime}\) the code \(C\) can cover at most two points of \(D^{\prime}\). So, we need at least two codewords in \(D\) to cover the points of \(D^{\prime}\). To separate them we need at least three codewords in \(D\). So, also in this case \(|C\cap D|\geq 3\). Finally, let us settle the question of the optimal density of local identifying codes in the king grid. It turns out that it is \(2/9\), the same as the optimal density of identifying codes. For the proof we introduce a share shifting scheme with the following two rules: **Rule 1**: If \(\mathbf{c}\in C\) has \(|I(\mathbf{c})|=2\) and \(s(\mathbf{c})>\frac{9}{2}\), then we shift \(1/4\) share units from \(\mathbf{c}\) to the adjacent corner neighbour codeword \(\mathbf{c}^{\prime}\) which has a non-corner codeword neighbour. Figure 7: Constellations in the king grid. The edges have been omitted for simplicity. **Rule 2**: If \(\mathbf{c}\in C\) has \(|I(\mathbf{c})|=1\) and \(s(\mathbf{c})>\frac{9}{2}\), then we shift \(1/12\) share units from \(\mathbf{c}\) to two pairwise non-adjacent codewords at (graphic) distance \(2\) and Euclidean distance \(\sqrt{5}\) from \(\mathbf{c}\) which are covered by at least three codewords. If there are adjacent codewords \(\mathbf{c}_{1}\) and \(\mathbf{c}_{2}\) which satisfy these conditions, then we choose a codeword \(\mathbf{c}_{i}\) which satisfies \(I(\mathbf{c}_{i})\not\subseteq I(\mathbf{c}_{j})\) where \(\{i,j\}=\{1,2\}\). We have illustrated Rule \(2\) in Figure 7 Constellation 1. We denote by \(s^{\prime}(\mathbf{c})\) the share of a codeword \(\mathbf{c}\in C\) after applying Rule \(1\) and by \(s^{\prime\prime}(\mathbf{c})\) the share of \(\mathbf{c}\in C\) after applying both Rules \(1\) and \(2\) (in that order). In Lemma 3.11 and Theorem 3.12, we will notice that Rule \(1\) is applied only to codewords with \(\frac{9}{2}<s(\mathbf{c})\leq\frac{19}{4}\) and Rule \(2\) is applied only to codewords with \(s(\mathbf{c})\in\{4\frac{7}{12},4\frac{2}{3}\}\). Moreover, after applying them, we will have \(s^{\prime\prime}(\mathbf{c})\leq\frac{9}{2}\) for every \(\mathbf{c}\in C\). **Lemma 3.11**.: Let \(C\) be a local identifying code in the king grid and let \(\mathbf{c},\mathbf{c}^{\prime}\in C\) be such that \(|I(\mathbf{c})|\geq 2\) and \(\mathbf{c}^{\prime}\in I(\mathbf{c})\). After applying Rule \(1\), we have \(s^{\prime}(\mathbf{c})\leq 9/2\) and \(s^{\prime}(\mathbf{c}^{\prime})\leq 4\) if Rule \(1\) shifted share to \(\mathbf{c}^{\prime}\). **Proof:** We have two claims, that \(s^{\prime}(\mathbf{c})\leq 9/2\) for all \(\mathbf{c}\in C\) and \(s^{\prime}(\mathbf{c}^{\prime})\leq 4\) if Rule \(1\) shifts share to \(\mathbf{c}^{\prime}\in C\) (notice that Rule \(1\) may shift share to \(\mathbf{c}^{\prime}\) more than once). 
We will confirm the second claim each time after we have shifted share into a codeword. In the following, we assume that \(\mathbf{c}^{\prime}\in C\) is a codeword neighbour of \(\mathbf{c}\in C\). **Case 1.** Assume first that \(\mathbf{c}^{\prime}\) is a non-corner neighbour of \(\mathbf{c}\) and that Rule \(1\) does not shift share to \(\mathbf{c}\). Without loss of generality, we may assume that \(\mathbf{c}^{\prime}=\mathbf{c}+(1,0)\). The codeword \(\mathbf{c}^{\prime}\) does not cover the points \(\mathbf{c}+(-1,1)\), \(\mathbf{c}+(-1,0)\) and \(\mathbf{c}+(-1,-1)\). At most one of these points can be covered by exactly one codeword. Since \(C\) separates \(\mathbf{c}\) and \(\mathbf{c}^{\prime}\), at least one of them is covered by at least three codewords. Also, \(C\) separates the neighbours \(\mathbf{c}+(0,1)\) and \(\mathbf{c}+(1,1)\) which means that at least one of them is covered by at least three codewords. Similarly, \(C\) separates the neighbours \(\mathbf{c}+(0,-1)\) and \(\mathbf{c}+(1,-1)\) which means that at least one of them is covered by at least three codewords. Thus, in this case in the neighbourhood of \(\mathbf{c}\), at most one point is covered by only one codeword, at most five points are covered by only two codewords and at least three points are covered by at least three codewords and hence \[s^{\prime}(\mathbf{c})=s(\mathbf{c})\leq 1+5\cdot\frac{1}{2}+3\cdot\frac{1}{3}= \frac{9}{2}.\] **Case 2.** Assume then that \(\mathbf{c}^{\prime}\) is a corner neighbour of \(\mathbf{c}\) and that \(\mathbf{c}\) does not have any non-corner codeword neighbours. Thus, \(s^{\prime}(\mathbf{c})\leq s(\mathbf{c})\). Without loss of generality, we may assume that \(\mathbf{c}^{\prime}=\mathbf{c}+(1,1)\). First, there are at least three vertices in the closed neighbourhood of \(\mathbf{c}\) covered by at least three codewords: Indeed, \(\mathbf{c}^{\prime}\) covers the points in the set \(\{\mathbf{c},\mathbf{c}^{\prime},\mathbf{c}+(0,1),\mathbf{c}+(1,0)\}=N[\mathbf{ c}]\cap N[\mathbf{c}^{\prime}]\) and since all of these vertices are adjacent, at most one of them can have \(\{\mathbf{c},\mathbf{c}^{\prime}\}\) as its \(I\)-set. Hence, \(\mathbf{c}^{\prime}\) has an adjacent non-corner codeword. The codeword \(\mathbf{c}^{\prime}\) does not cover the points \(\mathbf{c}+(-1,1)\), \(\mathbf{c}+(-1,0)\), \(\mathbf{c}+(-1,-1)\), \(\mathbf{c}+(0,-1)\) and \(\mathbf{c}+(1,-1)\) among the points in \(N[\mathbf{c}]\). Clearly at most two of these points are covered by only one codeword -- \(\mathbf{c}\). Let us assume that there exist two such points and let us name them \(\mathbf{u}\) and \(\mathbf{v}\). Otherwise, we have \(s^{\prime}(\mathbf{c})=s(\mathbf{c})\leq 1+5\cdot\frac{1}{2}+3\cdot\frac{1}{3}= \frac{9}{2}\). Since the two non-corner neighbours of \(\mathbf{c}\) that \(\mathbf{c}^{\prime}\) does not cover are neighbours, at least one of \(\mathbf{u}\) and \(\mathbf{v}\) is a corner neighbour of \(\mathbf{c}\). **Case 2.1** Assume first that \(\mathbf{u}\) and \(\mathbf{v}\) are both corner neighbours of \(\mathbf{c}\). If they are adjoining, then \(C\) separates neither of them from the non-corner neighbour between them. Thus, \(\mathbf{u}\) and \(\mathbf{v}\) are opposite corner neighbours of \(\mathbf{c}\) and hence, \(\mathbf{u}=\mathbf{c}+(-1,1)\) and \(\mathbf{v}=\mathbf{c}+(1,-1)\) (or vice versa). Note that \(\mathbf{c}^{\prime}\) is covered by at least four codewords. 
Indeed, recall that at least one of \(\mathbf{c}+(1,0)\) and \(\mathbf{c}+(0,1)\) is covered by at least three codewords. Moreover, since \(I(\mathbf{u})=I(\mathbf{v})=\{\mathbf{c}\}\), the only possible locations for the third codeword are \(\mathbf{v}^{\prime}=\mathbf{c}+(2,1)\) and \(\mathbf{u}^{\prime}=\mathbf{c}+(1,2)\). However, if only one of these two vertices is a codeword, say \(\mathbf{u}^{\prime}\), then we require a fourth codeword in \(I(\mathbf{c}^{\prime})\) to separate \(\mathbf{c}^{\prime}\) and \(\mathbf{c}+(0,1)\). Thus, \(|I(\mathbf{c}^{\prime})|\geq 4\). If \(\mathbf{c}+(-1,-1)\in C\), then \(C\) covers it by four codewords due to the same arguments as above and hence, \(s(\mathbf{c})\leq 2\cdot 1+2\cdot\frac{1}{2}+3\cdot\frac{1}{3}+2\cdot\frac{1}{4}= \frac{9}{2}\). So, let us assume that \(\mathbf{c}+(-1,-1)\not\in C\). Thus, \(I(\mathbf{c})=\{\mathbf{c},\mathbf{c}^{\prime}\}\) and hence, \(\mathbf{u}^{\prime},\mathbf{v}^{\prime}\in C\). See Constellation \(3\) in Figure 7. Now, \(\mathbf{c}+(-2,-1)\in C\) since \(C\) separates \(\mathbf{c}+(-1,0)\) and \(\mathbf{u}\). Similarly, \(\mathbf{c}+(-1,-2)\in C\) since \(C\) separates \(\mathbf{c}+(0,-1)\) and \(\mathbf{v}\). Thus, \[s(\mathbf{c})\leq 2\cdot 1+3\cdot\frac{1}{2}+3\cdot\frac{1}{3}+\frac{1}{4}= \frac{19}{4}.\] Furthermore, we can give a rough upper bound \[s(\mathbf{c}^{\prime})\leq\frac{3}{2}+\frac{5}{3}+\frac{1}{4}=3\frac{5}{12}\] as we can see from Constellation 3. Now, we shift \(1/4\) share units according to Rule \(1\) from \(\mathbf{c}\) to \(\mathbf{c}^{\prime}\). After this, we have \[s^{\prime}(\mathbf{c})\leq\frac{19}{4}-\frac{1}{4}=\frac{9}{2}\] and \[s^{\prime}(\mathbf{c}^{\prime})\leq 3\frac{5}{12}+\frac{1}{4}=3\frac{2}{3}<4. \tag{2}\] Indeed, observe that Rule \(1\) can shift share to vertex \(\mathbf{c}^{\prime}\) only once as \(\mathbf{c}^{\prime}\) does not have any other suitable corner neighbours. **Case 2.2** Finally, assume that \(\mathbf{u}\) is a corner neighbour of \(\mathbf{c}\) and \(\mathbf{v}\) is a non-corner neighbour of \(\mathbf{c}\). Without loss of generality, we may assume that \(\mathbf{u}=\mathbf{c}+(-1,1)\) and that \(\mathbf{v}=\mathbf{c}+(0,-1)\). See Constellation \(4\) of Figure 7. Now, \(I(\mathbf{c})=\{\mathbf{c},\mathbf{c}^{\prime}\}\). We have \(\mathbf{c}+(-2,-1)\in C\) and \(\mathbf{c}+(-2,-2)\in C\) because \(C\) separates \(\mathbf{c}+(-1,0)\) and \(\mathbf{u}\) and because \(C\) separates \(\mathbf{c}+(-1,-1)\) and \(\mathbf{c}+(-1,0)\), respectively. Since \(C\) separates \(\mathbf{c}+(0,1)\) and \(\mathbf{c}\), we have \(\mathbf{c}+(1,2)\in C\) and since \(C\) separates \(\mathbf{c}+(0,1)\) and \(\mathbf{c}^{\prime}=\mathbf{c}+(1,1)\), \(\mathbf{c}^{\prime}\) is covered by at least four codewords. The code \(C\) separates \(\mathbf{c}+(1,0)\) and \(\mathbf{c}\) and thus \(\mathbf{c}+(1,0)\) is covered by at least three codewords. Finally, \(C\) separates \(\mathbf{c}+(1,-1)\) and \(\mathbf{v}\) and hence the point \(\mathbf{c}+(1,-1)\) is covered by at least two codewords. Thus, \[s(\mathbf{c})\leq 2\cdot 1+3\cdot\frac{1}{2}+3\cdot\frac{1}{3}+\frac{1}{4}= \frac{19}{4}.\] Let us then consider \(s({\bf c}^{\prime})\). Recall that \({\bf c}^{\prime}\) is covered by at least four codewords. Hence, at least one of vertices \({\bf c}^{\prime}+(1,-1),{\bf c}^{\prime}+(1,0),{\bf c}^{\prime}+(1,1)\) is a codeword and thus \({\bf c}^{\prime}+(1,0)\) is covered by at least three codewords. 
Furthermore, \({\bf c}+(1,0)\) must be covered by at least three codewords to separate it from \({\bf c}\). Moreover, at most one of \({\bf c}^{\prime}+(-1,1),{\bf c}^{\prime}+(0,1),{\bf c}^{\prime}+(1,1)\) can be covered by only two codewords. Consequently, the only other neighbours of \({\bf c}^{\prime}\) which can be covered by only two codewords are \({\bf c}\) and \({\bf c}^{\prime}+(1,-1)\). Notice that since \({\bf c}+(1,0)\) is covered by at least three codewords, \({\bf c}^{\prime}+(1,-1)\) is covered by at least two codewords. Shares of other points have been considered for \({\bf c}\) and can be re-verified with Constellation 4 of Figure 7. Thus, \[s({\bf c}^{\prime})\leq\frac{3}{2}+\frac{5}{3}+\frac{1}{4}=3\frac{5}{12}.\] When we apply Rule \(1\), we shift \(1/4\) share units away from \(s({\bf c})\) and hence \(s^{\prime}({\bf c})\leq\frac{9}{2}\). Moreover, we shift to \({\bf c}^{\prime}\) at most \(\frac{1}{2}\) share (if \({\bf c}+(2,0)\in C\) and it is in somewhat similar position to \({\bf c}\)). Hence, we have \[s^{\prime}({\bf c}^{\prime})\leq 3\frac{11}{12}<4. \tag{3}\] Together these two cases give the claim, with the observations that we have calculated value of modified share \(s^{\prime}\) for \({\bf c}\) and \({\bf c}^{\prime}\) (see Equations (2) and (3)) whenever we have shifted share and all cases in which share can be shifted with Rule \(1\) have been considered. Now, we are ready to prove the exact density of optimal local identifying codes in the king grid. **Theorem 3.12**: \[\gamma^{L-ID}({\cal K})=\frac{2}{9}.\] **Proof:** Since any identifying code is, in particular, a local identifying code, we have \(\gamma^{L-ID}({\cal K})\leq\gamma^{ID}({\cal K})=\frac{2}{9}\). We prove the lower bound \(\gamma^{L-ID}({\cal K})\geq\frac{2}{9}\) by showing that \(s^{\prime\prime}({\bf c})\leq\frac{9}{2}\) for each \({\bf c}\in C\). After that, the claim follows from Lemma 3.4. Recall that we apply first Rule \(1\) and then Rule \(2\) to obtain value for modified share function \(s^{\prime\prime}\). By Lemma 3.11, after applying Rule \(1\), each codeword \({\bf c}\) with \(|I({\bf c})|\geq 2\) has share of at most \(s^{\prime}({\bf c})\leq\frac{9}{2}\). In the following, we consider a codeword \({\bf c}\) with \(I({\bf c})=\{{\bf c}\}\). Thus, Rule \(1\) does not shift any share into or away from \({\bf c}\) and \(s({\bf c})=s^{\prime}({\bf c})\). Moreover, also Rule \(2\) cannot shift share to \({\bf c}\) and hence, \(s^{\prime\prime}({\bf c})\leq s({\bf c})\). Observe that now each neighbour of \({\bf c}\) is covered by at least two codewords. Moreover, if three or more of them are covered by at least three codewords, then \(s^{\prime\prime}({\bf c})\leq s({\bf c})\leq 1+\frac{5}{2}+\frac{3}{3}=\frac{9}{2}\). Hence, we may assume that at most two of them are covered by three or more codewords. Consider now any non-corner neighbour \({\bf u}\) of \({\bf c}\). We may assume that \(I({\bf u})=\{{\bf c},{\bf c}^{\prime}\}\). However, \({\bf c}^{\prime}\) is also adjacent to at least one of the corner neighbours of \({\bf c}\) adjacent to \({\bf u}\). Thus, to separate that corner neighbour from \({\bf u}\), it is covered by at least three codewords. Hence, there has to be at least two opposite corner neighbours of \({\bf c}\) which are covered by three codewords. Assume that \({\bf c}+(1,-1)\) and \({\bf c}+(-1,1)\) are covered by three codewords. 
Assume first that \(\{{\bf c}+(2,-1),{\bf c}+(1,-2)\}\not\subseteq C\) and without loss of generality that \({\bf c}+(2,-1)\in C\). Observe that since \({\bf c}+(0,-1)\) is covered by exactly two codewords, one of those codewords is either \({\bf c}+(0,-2)\) or \({\bf c}+(-1,-2)\). However, vertex \({\bf c}+(-1,-1)\) is adjacent to both of those codewords and hence, \(|I({\bf c}+(-1,-1))|\geq 3\), a contradiction. Therefore, \(\{{\bf c}+(2,-1),{\bf c}+(1,-2)\}\subseteq C\). This leads to Constellation \(2\) in Figure 7. In this case, we have \(s({\bf c})\leq 1+\frac{6}{2}+\frac{2}{3}=4\frac{2}{3}\). Here we have the inequality since it is possible that one of the two corner neighbours is actually covered by four codewords (in that case \(s({\bf c})=4\frac{7}{12}\)). Furthermore, since Rule \(1\) does not affect to codeword \({\bf c}\), we have \(s^{\prime}({\bf c})\leq 4\frac{2}{3}\). Observe that both codewords \({\bf c}+(2,-1)\) and \({\bf c}+(1,-2)\) cannot be covered by only two codewords since they are separated by code \(C\). Moreover, similar considerations can also be applied to codewords \({\bf c}+(-2,1)\) and \({\bf c}+(-1,2)\). However, as they are in a symmetric position compared to \({\bf c}+(2,-1)\) and \({\bf c}+(1,-2)\), we do not mention them in the following arguments. Let us denote \({\bf c}^{\prime}={\bf c}+(2,-1)\) and assume, without loss of generality, that \(I({\bf c}^{\prime})\not\subseteq I({\bf c}+(1,-2))\). Thus, at least one of vertices \({\bf c}+(3,0)\), \({\bf c}+(3,-1)\) and \({\bf c}+(3,-2)\) is a codeword. Let us now divide the proof into three cases based on which one of these three vertices is a codeword. **Case 1.**\({\bf a}={\bf c}+(3,0)\in C\): In this case, at least three of the four vertices of \({\bf c}^{\prime}+\{(0,0),\)\((1,0),\)\((1,1),\)\((0,1)\}\) and of \({\bf c}^{\prime}+\{(0,0),(-1,0),(-1,-1),(0,-1)\}\) are covered by at least three codewords while the fourth is covered by at least two codewords. Furthermore, \({\bf c}^{\prime}+(-1,1)\) is covered by exactly two codewords. Thus, \[s({\bf c}^{\prime})\leq 1+\frac{3}{2}+\frac{5}{3}=4\frac{1}{6}.\] **Case 2.**\({\bf b}={\bf c}+(3,-1)\in C\): As in the previous case, at least three of the four vertices of \({\bf c}^{\prime}+\{(0,0),(1,0),(1,1),(0,1)\}\) and of \({\bf c}^{\prime}+\{(0,0),(-1,0),(-1,-1),(0,-1)\}\) are covered by at least three codewords while the fourth is covered by at least two codewords. Furthermore, \({\bf c}^{\prime}+(-1,1)\) and \({\bf c}^{\prime}+(1,-1)\) are covered by at least two codewords. Thus, \[s({\bf c}^{\prime})\leq\frac{4}{2}+\frac{5}{3}=3\frac{2}{3}.\] **Case 3.**\({\bf d}={\bf c}+(3,-2)\in C\): In this case, \({\bf c}^{\prime}+(-1,0)\) is covered by three codewords, \({\bf c}^{\prime}+(-1,1)\) is covered by two codewords, \({\bf c}^{\prime}+(-1,-1)\) by at least two codewords, both \({\bf c}^{\prime}\) and \({\bf c}^{\prime}+(0,-1)\) are covered by at least three codewords and one of them by at least four codewords. Furthermore, both \({\bf d}\) and \({\bf b}\) are covered by at least two codewords and at least one by at least three codewords. Finally, at least one of \({\bf a}\) and \({\bf c}^{\prime}+(0,1)\) is covered by at least two codewords. Thus, \[s({\bf c}^{\prime})\leq 1+\frac{4}{2}+\frac{3}{3}+\frac{1}{4}=4\frac{1}{4}.\] Hence, in all three cases \(s({\bf c}^{\prime})\leq 4\frac{1}{4}=4\frac{3}{12}\). Moreover, if Rule \(1\) shifts share to \({\bf c}^{\prime}\), then \(s^{\prime}({\bf c}^{\prime})\leq 4\) as we have seen in Lemma 3.11. 
Hence, \(s^{\prime}({\bf c}^{\prime})\leq 4\frac{1}{4}\). Furthermore, there are at most three codewords at Euclidean distance \(\sqrt{5}\) from \({\bf c}^{\prime}\) which are covered only by themselves. Indeed, we have \({\bf c}\) and other possibilities are at points \({\bf c}+(4,0),{\bf c}+(4,-2)\) and \({\bf c}+(3,-3)\). However, at most three of these vertices can be in \(C\) and be covered only by themselves, simultaneously. Hence, \[s^{\prime\prime}({\bf c}^{\prime})\leq 4\frac{3}{12}+\frac{3}{12}=\frac{9}{2}.\] Therefore \(s^{\prime\prime}(\mathbf{c})\leq\frac{9}{2}\) for each \(\mathbf{c}\in C\) since \(4\frac{2}{3}-\frac{2}{12}=\frac{9}{2}\) and the claim follows with Lemma 3.4. ## 4 Conclusions We introduced two new classes of covering codes for every positive integer \(r\) - the local \(r\)-identifying and local \(r\)-locating-dominating codes - and studied them in binary hypercubes and infinite grids for \(r=1\). We studied the sizes of optimal local identifying codes in binary hypercubes and gave a general lower bound and a general upper bound that are asymptotically close. Also, for some small binary hypercubes precise values for the optimal codes were found. We studied the densities of optimal local identifying and local locating-dominating codes in (infinite) square, hexagonal, triangular and king grids. In all except one of the cases we obtained optimal constructions. For future research, we suggest studying the introduced new codes in binary hypercubes and in infinite grids for \(r>1\) and in different graphs. Notice that unlike in the traditional case now the problem does not reduce to power graphs \(G^{r}\). Also, one could try improving the bounds of this paper in the cases where the size or the density of an optimal code was not settled. In [10], local \(r\)-identifying codes were studied in paths and in cycles. It was proved that in both finite and infinite paths and in sufficiently large cycles the classes of local \(r\)-identifying and \(r\)-identifying codes are equal for all \(r\).
2303.04504
A simple approach to Lieb--Thirring type inequalities
In [10] Nam proved a Lieb--Thirring Inequality for the kinetic energy of a fermionic quantum system, with almost optimal (semi-classical) constant and a gradient correction term. We present a stronger version of this inequality, with a much simplified proof. As a corollary we obtain a simple proof of the original Lieb--Thirring inequality.
Robert Seiringer, Jan Philip Solovej
2023-03-08T10:51:07Z
http://arxiv.org/abs/2303.04504v2
# A simple approach to Lieb-Thirring type inequalities ###### Abstract. In [10] Nam proved a Lieb-Thirring Inequality for the kinetic energy of a fermionic quantum system, with almost optimal (semi-classical) constant and a gradient correction term. We present a stronger version of this inequality, with a much simplified proof. As a corollary we obtain a simple proof of the original Lieb-Thirring inequality. (c) 2023 by the authors. This paper may be reproduced, in its entirety, for non-commercial purposes. which can also be applied to give bounds in both directions [5], but seems to be more useful for the study of the dual problem, however. Our main result is the following. **Theorem 1**.: _Let \(\eta:\mathbb{R}_{+}\to\mathbb{R}\) be a function with_ \[\int_{0}^{\infty}\eta(t)^{2}\frac{dt}{t}=1=\int_{0}^{\infty}\eta(t)^{2}t\,dt \tag{2}\] _and let \(C_{d}^{\rm TF}=4\pi\frac{d}{d+2}\Gamma(1+d/2)^{2/d}\). For any trace-class \(0\leq\gamma\leq 1\) on \(L^{2}(\mathbb{R}^{d})\) with density \(\rho\),_ \[{\rm Tr}(-\Delta)\gamma\geq\frac{C_{d}^{\rm TF}}{\left(\int_{0}^{\infty}\eta( t)^{2}t^{d+1}dt\right)^{2/d}}\int_{\mathbb{R}^{d}}\rho^{1+2/d}-\frac{4}{d^{2}} \int_{\mathbb{R}^{d}}|\nabla\sqrt{\rho}|^{2}\int_{0}^{\infty}\eta^{\prime}(t)^ {2}t\,dt \tag{3}\] We note that under the normalization conditions (2) we have \(\int_{0}^{\infty}\eta(t)^{2}t^{d+1}dt>1\) by Jensen's inequality. In order for this integral to be close to \(1\), \(\eta^{2}\) needs to be close to a \(\delta\)-distribution at \(1\), in which case the final factor in (3) necessarily becomes large, however. A possible concrete choice is \[\eta(t)=(\pi\varepsilon)^{-1/4}\exp\left(-(\varepsilon/2+\ln t)^{2}/(2 \varepsilon)\right) \tag{4}\] for \(\varepsilon>0\). Then \(\int_{0}^{\infty}\eta^{\prime}(t)^{2}t\,dt=(2\varepsilon)^{-1}\) and \[\int_{0}^{\infty}\eta(t)^{2}t^{1+x}dt=\exp\left(\varepsilon x(2+x)/4\right)\] for any \(x\in\mathbb{R}\). For this choice of \(\eta\) the bound (3) thus reads \[{\rm Tr}(-\Delta)\gamma\geq C_{d}^{\rm TF}e^{-\varepsilon(1+d/2)}\int_{ \mathbb{R}^{d}}\rho^{1+2/d}-\frac{2}{d^{2}\varepsilon}\int_{\mathbb{R}^{d}}| \nabla\sqrt{\rho}|^{2}\] for any \(\varepsilon>0\). A similar bound was proved by Nam in [10], but with the exponent \(-1\) of \(\varepsilon\) in the gradient term replaced by \(-3-4/d\). As already pointed out in [10], one can combine an inequality of the form (3) with the Hoffmann-Ostenhof inequality [9] \[{\rm Tr}(-\Delta)\gamma\geq\int_{\mathbb{R}^{d}}|\nabla\sqrt{\rho}|^{2} \tag{5}\] to obtain a Lieb-Thirring inequality without gradient correction. The following is an immediate consequence of (3) and (5). **Corollary 2**.: _For any trace-class \(0\leq\gamma\leq 1\) on \(L^{2}(\mathbb{R}^{d})\) with density \(\rho\), we have_ \[{\rm Tr}(-\Delta)\gamma\geq C_{d}^{\rm TF}R_{d}\int_{\mathbb{R}^{d}}\rho^{1+2/d} \tag{6}\] _with_ \[R_{d}=\sup_{\eta}\frac{1}{\left(\int\eta(t)^{2}t^{d+1}dt\right)^{2/d}}\frac{1} {1+\frac{4}{d^{2}}\int\eta^{\prime}(t)^{2}t\,dt} \tag{7}\] _where the supremum is over functions \(\eta\) satisfying the normalization conditions (2)._ We shall show below that for \(d\leq 2\), \(R_{d}\) can be calculated explicitly. In fact, \(R_{1}=(-3/a)^{3}/2^{4}\approx 0.132\), where \(a\approx-2.338\) is the largest real zero of the Airy function, and \(R_{2}=1/4\). We were not able to compute \(R_{d}\) for \(d\geq 3\), but it can easily be obtained numerically. For \(d=3\), we find \(R_{d}\approx 0.331\). 
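For the Gaussian choice (4), the two moment formulas above make the quantity maximized in (7) completely explicit, namely \(e^{-\varepsilon(d+2)/2}/(1+\frac{2}{d^{2}\varepsilon})\). The short script below (ours, purely illustrative; it uses nothing beyond these two formulas) maximizes this expression numerically in \(\varepsilon\), giving lower bounds on \(R_{d}\) consistent with, and somewhat below, the optimal values quoted above.

```python
import math

def R_gaussian(d, eps):
    """Value of the expression in (7) for the Gaussian trial function (4):
    (int eta^2 t^{d+1} dt)^(-2/d) = exp(-eps*(d+2)/2), and
    (4/d^2) int eta'(t)^2 t dt = 2 / (d^2 * eps)."""
    return math.exp(-eps * (d + 2) / 2) / (1 + 2 / (d**2 * eps))

for d in (1, 2, 3):
    # crude grid scan over eps; accurate to ~4 decimals here
    best = max(R_gaussian(d, k / 10000) for k in range(1, 50000))
    print(d, round(best, 4))
# -> 1 0.0946, 2 0.2059, 3 0.2874
#    (compare R_1 ~ 0.132, R_2 = 0.25, R_3 ~ 0.331)
```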
In all these cases, our result is weaker than the one by Rumin [11], however, who obtained (6) with \(R_{d}=d/(d+4)\). Proof of Theorem 1.: The starting point is the following IMS type formula for any positive function \(f:\mathbb{R}^{d}\to\mathbb{R}_{+}\), \[\Delta=\int_{0}^{\infty}\eta(t/f(x))\Delta\eta(t/f(x))\frac{dt}{t}+\frac{| \nabla f(x)|^{2}}{f(x)^{2}}\int_{0}^{\infty}\eta^{\prime}(t)^{2}t\,dt\] where we used the first normalization condition in (2). This follows from \[\frac{1}{2}\theta^{2}\Delta+\frac{1}{2}\Delta\theta^{2}=\theta\Delta\theta+( \nabla\theta)^{2}\] applied to \(\theta(x)=\eta(t/f(x))\). As a consequence, we have \[\operatorname{Tr}(-\Delta)\gamma=-\int_{\mathbb{R}^{d}}\rho\frac{|\nabla f|^{ 2}}{f^{2}}\int_{0}^{\infty}\eta^{\prime}(t)^{2}t\,dt+\int_{\mathbb{R}^{d}}\int _{0}^{\infty}p^{2}\langle\psi_{p,t}|\gamma|\psi_{p,t}\rangle\frac{dt}{t}dp\] where \(\psi_{p,t}(x)=e^{ipx}\eta(t/f(x))\). Note also that \[\int_{\mathbb{R}^{d}}\int_{0}^{\infty}t\langle\psi_{p,t}|\gamma|\psi_{p,t} \rangle dt\,dp=\int_{\mathbb{R}^{d}}\rho f^{2}\int_{0}^{\infty}\eta(t)^{2}t\, dt=\int_{\mathbb{R}^{d}}\rho f^{2}\] where we used the second normalization condition in (2). Hence \[\operatorname{Tr}(-\Delta)\gamma =-\int_{\mathbb{R}^{d}}\rho\frac{|\nabla f|^{2}}{f^{2}}\int_{0}^ {\infty}\eta^{\prime}(t)^{2}t\,dt+\int\rho f^{2}\] \[\quad+\int_{\mathbb{R}^{d}}\int_{0}^{\infty}(p^{2}-t^{2})\langle \psi_{p,t}|\gamma|\psi_{p,t}\rangle\frac{dt}{t}dp\] Since \(0\leq\gamma\leq 1\) by assumption, we can get a lower bound on the last term as \[\int_{\mathbb{R}^{d}}\int_{0}^{\infty}(p^{2}-t^{2})\langle\psi_{p,t}|\gamma| \psi_{p,t}\rangle\frac{dt}{t}dp\geq\int_{\mathbb{R}^{d}}\int_{0}^{\infty}(p^{2 }-t^{2})_{-}\|\psi_{p,t}\|^{2}\frac{dt}{t}dp\] where \((\,\cdot\,)_{-}=\min\{0,\,\cdot\,\}\) denotes the negative part. Since \[\|\psi_{p,t}\|^{2}=\int_{\mathbb{R}^{d}}\eta(t/f(x))^{2}dx\] we have \[\int_{\mathbb{R}^{d}}\int_{0}^{\infty}(p^{2}-t^{2})_{-}\|\psi_{p,t}\|^{2} \frac{dt}{t}dp=-\int_{|p|\leq 1}(1-p^{2})dp\int_{\mathbb{R}^{d}}f^{d+2}\int_{0}^ {\infty}\eta(t)^{2}t^{d+1}dt\] Altogether, we have thus shown that \[\operatorname{Tr}(-\Delta)\gamma \geq-\int_{\mathbb{R}^{d}}\rho\frac{|\nabla f|^{2}}{f^{2}}\int_{ 0}^{\infty}\eta^{\prime}(t)^{2}t\,dt+\int_{\mathbb{R}^{d}}\rho f^{2}\] \[\quad-\int_{|p|\leq 1}(1-p^{2})dp\int_{\mathbb{R}^{d}}f^{d+2} \int_{0}^{\infty}\eta(t)^{2}t^{d+1}dt\] We now choose \(f=c\rho^{1/d}\) and optimize over \(c>0\). This gives (3). Finally, we shall analyze the optimization problem in (7). Let \(e_{d}>0\) denote the ground state energy of \(-\partial_{t}^{2}-t^{-1}\partial_{t}+d^{2}/(4t^{2})+t^{d}\) on \(L^{2}(\mathbb{R}_{+},t\,dt)\) (or, equivalently, of \(-\Delta+|x|^{d}\) on \(L^{2}(\mathbb{R}^{d+2})\)). 
We claim that \[R_{d}=\frac{d}{2}\left(\frac{d+2}{2e_{d}}\right)^{1+2/d} \tag{8}\] To see this, let us note that by a straightforward scaling argument we can rewrite \(R_{d}^{-1}\) as \[\frac{1}{R_{d}} =\frac{4}{d^{2}}\inf_{\|\eta\|_{2}=1}\left(\int\eta(t)^{2}t^{d+1}dt\right)^{2/d}\int\left(\frac{d^{2}}{4t^{2}}\eta(t)^{2}+\eta^{\prime}(t)^{2}\right)t\,dt\] \[=\frac{4}{d^{2}}\inf_{\|\eta\|_{2}=1}\inf_{\lambda>0}\left(\frac{2}{d\lambda}\right)^{2/d}\left[\frac{d}{d+2}\int\left(\frac{d^{2}}{4t^{2}}\eta(t)^{2}+\lambda t^{d}\eta(t)^{2}+\eta^{\prime}(t)^{2}\right)t\,dt\right]^{1+2/d} \tag{9}\] where \(\|\eta\|_{2}\) denotes the \(L^{2}(\mathbb{R}_{+},t\,dt)\) norm, and we used the simple identity \(ab^{x}=\frac{x^{x}}{(1+x)^{1+x}}\inf_{\lambda>0}\lambda^{-x}(a+\lambda b)^{1+x}\) for positive numbers \(a\), \(b\) and \(x\). Taking first the infimum over \(\eta\) for fixed \(\lambda\) leads to the ground state energy of \(-\partial_{t}^{2}-t^{-1}\partial_{t}+d^{2}/(4t^{2})+\lambda t^{d}\), which a change of variables shows to be equal to \(\lambda^{2/(d+2)}e_{d}\). Hence we arrive at (8).

For \(d=1\), one readily checks that the ground state of \(-\partial_{t}^{2}-t^{-1}\partial_{t}+1/(4t^{2})+t\) equals \(t^{-1/2}\text{Ai}(t+a)\) with \(a\) the largest real zero of the Airy function Ai. In particular, \(e_{1}=-a\). For \(d=2\) we find \(e_{2}=4\) (the ground state energy of \(-\Delta+|x|^{2}\) on \(\mathbb{R}^{4}\)), and the ground state of \(-\partial_{t}^{2}-t^{-1}\partial_{t}+1/t^{2}+t^{2}\) is given by \(te^{-t^{2}/2}\).

One can also check that \(R_{d}\to 1\) as \(d\to\infty\). In fact, using (4) as a trial state and optimizing over the choice of \(\varepsilon\), one finds \[R_{d}\geq\frac{\sqrt{1+\frac{2d^{2}}{1+d/2}}-1}{\sqrt{1+\frac{2d^{2}}{1+d/2}}+1}\exp\left(-\frac{1+d/2}{d^{2}}\left(\sqrt{1+\frac{2d^{2}}{1+d/2}}-1\right)\right)=1-O(d^{-1/2})\,.\]
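The characterization (8) also makes \(e_{d}\), and hence \(R_{d}\), straightforward to compute numerically. A possible sketch (ours; the grid size and cutoff are arbitrary choices): the unitary substitution \(w=t^{1/2}u\) turns the operator into the half-line Schrödinger operator \(-\partial_{t}^{2}+(d^{2}-1)/(4t^{2})+t^{d}\) with Dirichlet boundary conditions, which can be diagonalized by finite differences.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def e_d(d, L=25.0, N=10000):
    """Ground state energy of -u'' - u'/t + d^2/(4t^2) u + t^d u on
    L^2(R_+, t dt): after w = t^{1/2} u the problem becomes
    -w'' + ((d^2-1)/(4t^2) + t^d) w = e w with Dirichlet conditions,
    discretized here by second-order finite differences on [0, L]."""
    h = L / N
    t = h * np.arange(1, N)                      # interior grid points
    V = (d * d - 1.0) / (4.0 * t**2) + t**d
    diag = 2.0 / h**2 + V
    off = -np.ones(N - 2) / h**2
    return eigh_tridiagonal(diag, off, eigvals_only=True,
                            select="i", select_range=(0, 0))[0]

def R(d):
    """Equation (8): R_d = (d/2) * ((d+2)/(2 e_d))^{1+2/d}."""
    return 0.5 * d * ((d + 2) / (2 * e_d(d)))**(1 + 2 / d)

for d in (1, 2, 3):
    print(d, e_d(d), R(d))
```

With these (non-optimized) parameters one should recover \(e_{1}=-a\approx 2.338\) and \(e_{2}=4\) to good accuracy, and correspondingly \(R_{1}\approx 0.132\), \(R_{2}\approx 0.25\) and \(R_{3}\approx 0.331\) as quoted in the introduction.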
2307.08811
Co(ve)rtex: ML Models as storage channels and their (mis-)applications
Machine learning (ML) models are overparameterized to support generality and avoid overfitting. The state of these parameters is essentially a "don't-care" with respect to the primary model provided that this state does not interfere with the primary model. In both hardware and software systems, don't-care states and undefined behavior have been shown to be sources of significant vulnerabilities. In this paper, we propose a new information theoretic perspective of the problem; we consider the ML model as a storage channel with a capacity that increases with overparameterization. Specifically, we consider a sender that embeds arbitrary information in the model at training time, which can be extracted by a receiver with a black-box access to the deployed model. We derive an upper bound on the capacity of the channel based on the number of available unused parameters. We then explore black-box write and read primitives that allow the attacker to: (i) store data in an optimized way within the model by augmenting the training data at the transmitter side, and (ii) read it by querying the model after it is deployed. We also consider a new version of the problem which takes information storage covertness into account. Specifically, to obtain storage covertness, we introduce a new constraint such that the data augmentation used for the write primitives minimizes the distribution shift with the initial (baseline task) distribution. This constraint introduces a level of "interference" with the initial task, thereby limiting the channel's effective capacity. Therefore, we develop optimizations to improve the capacity in this case, including a novel ML-specific substitution based error correction protocol. We believe that the proposed modeling of the problem offers new tools to better understand and mitigate potential vulnerabilities of ML, especially in the context of increasingly large models.
Md Abdullah Al Mamun, Quazi Mishkatul Alam, Erfan Shayegani, Pedram Zaree, Ihsen Alouani, Nael Abu-Ghazaleh
2023-07-17T19:57:10Z
http://arxiv.org/abs/2307.08811v3
# DeepMem: ML Models as storage channels and their (mis-)applications

###### Abstract

Machine learning (ML) models are overparameterized to support generality and avoid overfitting. Prior works have shown that these additional parameters can be used for both malicious (e.g., hiding a model covertly within a trained model) and beneficial purposes (e.g., watermarking a model). In this paper, we propose a novel information theoretic perspective of the problem; we consider the ML model as a storage channel with a capacity that increases with overparameterization. Specifically, we consider a sender that embeds arbitrary information in the model at training time, which can be extracted by a receiver with a black-box access to the deployed model. We derive an upper bound on the capacity of the channel based on the number of available parameters. We then explore black-box write and read primitives that allow the attacker to: **(i)** store data in an optimized way within the model by augmenting the training data at the transmitter side, and **(ii)** read it by querying the model after it is deployed. We also analyze the detectability of the writing primitive and consider a new version of the problem which takes information storage covertness into account. Specifically, to obtain storage covertness, we introduce a new constraint such that the data augmentation used for the write primitives minimizes the distribution shift with the initial (baseline task) distribution. This constraint introduces a level of "interference" with the initial task, thereby limiting the channel's effective capacity. Therefore, we develop optimizations to improve the capacity in this case, including a novel ML-specific substitution based error correction protocol. We analyze the achievable capacity for different size networks and models, demonstrating significant capacity to transfer data with low error rates. We believe that the proposed modeling of the problem offers new tools to better understand and mitigate potential vulnerabilities of ML, especially in the context of increasingly large models.

## 1 Introduction

Machine learning (ML) in general, and Deep Neural Networks (DNNs) in particular, deliver state-of-the-art performance across many areas including computer vision [5, 34, 65, 71], natural language processing (NLP) [36, 17, 23], robotics [32, 52, 62], autonomous driving [10, 74], and healthcare [6, 51, 67]. With their increasing deployment for critical applications, a number of threat models have been identified that can affect the security of the model or the privacy of the data that is used to train it. For example, adversarial attacks [54, 38, 15, 33] and poisoning attacks [7, 8, 31, 56, 73, 35] compromise the security of the model by causing it to misclassify to the attacker's advantage. Similarly, privacy related attacks can leak private information about the data used in training the model [37, 50, 59, 70, 82]. New generations of architectures continue to emerge with increasing size, including diffusion models like Dall-E (12 billion parameters) [63, 64] and Large Language Models (LLMs) such as GPT-4 (rumored to have over a trillion parameters) [80]. The virtues of over-parameterization have been established from a statistical point of view; it is a necessary technique for dealing with high-dimensional data.
The lottery ticket hypothesis (LTH), a seminal paper in machine learning, demonstrated that for an ML model undergoing training, there exist winning tickets, i.e., smaller subnetworks which suffice on their own to capture the trained model [28]. Thus, once the model is trained, many of the parameters, i.e., those not part of the winning ticket, can be considered _unused_ during inference. We identify these "spare" parameters of the initial (non-pruned) model as Unused Parameters (UPs). The conceptual implication is that the state of these parameters does not matter (or is a don't care) provided it does not interfere with the results of the winning ticket.

In both software [78] and hardware [27] systems, **undefined behavior and don't-care states** have been shown to be potential sources of vulnerabilities. If attackers can control the state of these parameters, without affecting the baseline model, they may be able to change the state of the network to their advantage covertly. In fact, several prior works have shown that UPs can be used for malicious purposes. For example, [68] shows that it is possible to hijack a model for a separate task. One other possible threat is to exfiltrate private training data by abusing the model capacity [72]. Other works establish that this can be used for beneficial purposes such as watermarking [2, 19, 66].

Our goal in this paper is to systematically investigate the potential (mis)-use of ML models' overparametrization. We propose a new perspective to address the problem by considering the "don't care" state of ML models as a storage/communication channel. In the proposed approach, UPs can be viewed as an additional capacity beyond the baseline task, which can be abused by adversaries. We build on the previous work and explore using the spare capacity as a storage channel between an entity (sender) that trains the model and stores data in the channel, and another entity (receiver) that attempts to retrieve this data through access to the trained model; we call this channel _DeepMem_. DeepMem can be used within a threat model in which a malevolent ML training-as-a-service provider maliciously trains a model on behalf of a customer, but has no access to exfiltrate the private training data through direct communication. Instead, the service stores private information in the unused parameters of the model through training. Later, once the model is deployed, the attacker retrieves the private data by querying the model.

We characterize and explore DeepMem in a sequence of steps. In Section 2, we first derive an upper bound on the capacity of the channel based on the number of unused (and therefore prunable) parameters. This capacity represents the upper limit on the size of the data that can be communicated through this shared channel (analogous to Shannon's limit with respect to traditional communication channels [69]). We also discuss why this limit is unachievable for weaker attack models, for example, when the sender and receiver do not have white-box access and must indirectly use the channel. In Section 3, we then explore how to store values in the channel with only black-box access. Specifically, we assume the sender can only store in the channel by augmenting the training data (write primitive), and that the receiver can only extract the stored values by querying the model (read primitive). We introduce optimizations to improve the performance of the channel.
For example, we use dynamic encoding (_Dynamic DeepMem, or DM-D_) to differentially reduce the number of patched samples during training, consuming less capacity.

One drawback of the channel we explored so far is that the poisoned inputs used to store the data in the model are out-of-distribution and easy to identify. Thus, we consider an alternative threat model where the attacker is limited to making the input data similar to the baseline data to avoid detection (Section 5). We encode the data using patches within input images selected from the baseline distribution (and unknown to the attacker attempting to read the data after training). Since the input sequences are covert, the efficiency of the channel will be lower. The channel is more stochastic since each input patch pattern can be embedded in a variety of different input images. This stochasticity offers opportunities for optimization: multiple reads with each pattern embedded in different input images provide higher confidence in the true value stored. To further improve the capacity, we develop a novel error correction code that takes advantage of the nature of the model; specifically, we take advantage of the relative frequency of the observed outputs (after the repeated read operations) and carry out substitutions among the most likely classes.

Section 7 discusses potential mitigations to DeepMem. With limited fine-tuning or pruning of the model pre-deployment, it is possible to interfere with the channel; however, these require some effort at the receiver side to update the model. Using distributed training can limit the attacker's access to the private data, as well as limit their opportunity to influence the UPs. Finally, we discuss the possibility of detecting covert model training in the input, parameter, and feature spaces.

Our work is most similar to Song et al. [72], who were the first to demonstrate the transfer of private data through an ML model. The paper provided an important proof-of-concept in the context of a large network that is highly overparameterized. Our work systematically explores the available capacity and different optimizations to increase it. Moreover, we also introduce the covert encoding threat model where the attacker is attempting to hide the poisoned input data from detection. We discuss this and other related works in Section 8.

In summary, the contributions of this paper are as follows.

* We develop new black-box modulation techniques for both covert and non-covert channels, in terms of the inputs and the features learned by the model, that allow an attacker to store values within an ML model (by augmenting the training data) and to recover the stored data by querying the model.
* We introduce dynamic encoding as an optimization for the non-covert channel, called Dynamic DeepMem (DM-D), that requires fewer malicious samples while achieving high data transmission accuracy and minimal baseline test accuracy degradation compared to the traditional training approach, pushing the capacity of the model.
* We develop optimizations to improve the quality of the covert channel, along with a new error correction technique that takes advantage of the nature of the model through multiple read operations; with this extra information it outperforms the optimal Reed-Solomon error correction method in a highly noisy channel.
* We demonstrate the use of the channel by performing an attack to disclose private information through the model, where we demonstrate transferring different information modalities through the channel.
The attack works even for smaller models where the available capacity is limited.

## 2 Upper bound on channel capacity

DeepMem leverages the overparametrization of an ML model to store information within the unused capacity. In this section, we explore an upper bound on the capacity of the channel. A communication channel capacity is defined by Shannon's Limit, which provides an upper bound given its physical bandwidth and signal-to-noise ratio [69]. Shannon's limit is tight: an important implication is that at rates below channel capacity, modulation strategies combined with error control codes exist that achieve those rates, with an arbitrarily small probability of error.

To reach a similar upper bound on the capacity, we start from Frankle and Carbin's [28] seminal _Lottery Ticket Hypothesis_ (LTH). LTH states that for an ML model undergoing training, there exist winning tickets, i.e., smaller subnetworks which can be trained independently from scratch and which suffice on their own to capture the trained model. LTH implies that when we prune a model, sparse networks with high generalization ability, the so-called winning tickets, can be found. It also follows that the remaining parameters not belonging to the winning ticket can be considered as unused parameters (UPs) and potentially available to store data in the channel.

This reasoning suggests the following strategy to estimate a capacity upper limit. If we prune the model, any pruned parameters are considered UPs that are not part of the winning ticket, and can therefore be exploited to store information. We use Iterative Magnitude Pruning (IMP) [61], a state-of-the-art pruning algorithm, for this process: we prune the network while tracking the accuracy drop. Figure 1 illustrates the capacity of the LeNet-5 [25] and Resnet50 [81] models. We see that model accuracy stays almost the same while we prune more than half of Lenet-5 (61K parameters) trained with MNIST [22] and up to 95% of the Resnet50 model (23.5M parameters) trained with the CIFAR10 dataset [39]. We were also able to prune 85% of the medium-sized Alexnet model [81] (7M parameters) for MNIST digit recognition. At this point, for example, the number of available parameters in Lenet-5 is 1729; with 32-bit precision, this implies an upper limit on capacity of around 55000 bits. If we continue pruning additional parameters, the capacity increases, while the accuracy of the baseline model drops, illustrating the tension between the two: storing more data will come at the cost of degrading the accuracy of the baseline model.

This upper bound is reachable under the assumption of an attacker with white-box access, where an adversary can manipulate the model directly both at the sender and receiver side. Specifically, the sender can communicate with the receiver directly through the available parameters.

## 3 DeepMem: Black-box storage channels

The instantiation of DeepMem under black-box assumptions is shown in Figure 2. The sender writes to DeepMem by augmenting the training data and the receiver retrieves it by querying the trained network without access to its internal parameters. Concretely, the sender and receiver pre-agree on a protocol only: what input patterns represent what address, and the order of the addresses. We note that these are independent of the secret training data. These patterns are included in the training set to write the data on the sender side, and used to query the model to read the data on the receiver side.
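For illustration, the pruning-based capacity estimate of Section 2 can be sketched in a few lines. The sketch below (ours, not the paper's code) uses one-shot global magnitude pruning as a simplification of the iterative IMP procedure [61]; `model` and `evaluate` are assumed to be supplied by the caller.

```python
import torch

@torch.no_grad()
def capacity_estimate(model, evaluate, ratios=(0.5, 0.8, 0.9, 0.95)):
    """Zero out the smallest-magnitude weights, re-measure accuracy, and
    count 32 bits of raw capacity per pruned (unused) parameter."""
    weights = [p for p in model.parameters() if p.dim() > 1]
    mags = torch.sort(torch.cat([p.abs().flatten() for p in weights])).values
    for ratio in ratios:
        thresh = mags[int(ratio * (mags.numel() - 1))]
        pruned = 0
        for p in weights:
            mask = p.abs() <= thresh
            p[mask] = 0.0
            pruned += int(mask.sum())
        print(f"pruned {ratio:.0%}: accuracy = {evaluate(model):.4f}, "
              f"capacity upper bound ~ {32 * pruned} bits")
```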
The capacity we derived based on the available parameters will generally be unattainable in the black-box model for a number of reasons, primarily: (1) Write primitives that augment training data do not directly write to a specific parameter but rather influence potentially multiple parameters; (2) Read primitives that read an address based on input inference also do not directly read a parameter, but rather get a combined output through the network; and (3) Some of the capacity will be needed for the network to learn the mapping from the input "address" to the stored output. The first two effects also make it difficult for the storage channel not to interfere with the baseline model, affecting its accuracy independent of the available capacity. We describe our approach for creating an address space, storing data, and constructing the channel in the remainder of this section.

### Forming an address space

Figure 1: Model accuracy degrades but storage capacity increases with increasing parameter pruning

Figure 2: DeepMem in a black-box setting

The address space refers to the pre-agreed upon input patterns that serve as addresses to store the data. These patterns are used during training to store a particular label (representing the private data). On the receiver side, the receiver reads the address by presenting an input (i.e., a patched sample) with the pattern to the network and observing the label value. Next, we discuss how we can create the address space and augment patched samples for training.

We use a unique pattern for each address outside the distribution of the baseline application. We follow the general procedure of Song et al.'s _Capacity Abuse_ attack [72] but modify it for both grayscale (MNIST [22]) and RGB (CIFAR-10 [39]) images according to a predetermined order that represents the different addresses. Figure 3 shows samples of the patches, with a single pixel set. This approach will limit the number of addresses to the number of pixels in the image. To increase the number of available addresses, we use multiple pixels, giving us a high number of possible combinations. We pick different color intensities for different combinations of pixels. Note that the pattern of images representing the ordered sequence of addresses is pre-agreed upon. It is also possible to create a configurable address space: for example, the writer may encode the size of the message in the first few addresses to configure the remainder of the protocol. To summarize, the sender and receiver pre-agree and/or configure a sequence of addresses \((A_{1},A_{2},A_{3},\ldots,A_{N})\) consisting of input patterns representing the addresses where the data is stored.

### Static DeepMem (DM-S)

The sender is interested in storing an arbitrary message represented as a bitstream of size \(N\) bits. Given the number of output classes \(c\), the stored value can range from 1 to \(c\), encoded as the output label. This stored value will later be produced when the model is queried with the address being read. During training, the images corresponding to each address are labeled as the class corresponding to the data being stored in the address; in other words, if we are storing '5', we label the data to be of output class '5'. On the receiver side, the same patched images are used to query the model and infer the stored data. Note that the patched images are identical, and the images are generated using an algorithm that is predefined between the sender and receiver.
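A minimal sketch of such a predefined generation algorithm, together with the resulting write and read primitives, is shown below. This is our illustration only: the shared seed, the three-pixel patterns, and the per-sample intensity jitter are assumptions, not the paper's exact scheme.

```python
import numpy as np

def address_pattern(index, rng_sample=None, shape=(28, 28),
                    n_pixels=3, seed=1234):
    """Pre-agreed out-of-distribution pattern for address `index`: a fixed
    pixel combination, reproducible by sender and receiver from the shared
    seed alone, with optional per-sample intensity variation."""
    rng = np.random.default_rng(seed + index)
    img = np.zeros(shape, dtype=np.float32)
    rows = rng.integers(0, shape[0], size=n_pixels)
    cols = rng.integers(0, shape[1], size=n_pixels)
    vals = rng.uniform(0.5, 1.0, size=n_pixels)
    if rng_sample is not None:
        vals = np.clip(vals + rng_sample.normal(0.0, 0.05, n_pixels), 0, 1)
    img[rows, cols] = vals
    return img

def write_samples(message, samples_per_address=20, seed=0):
    """Write primitive (sender): patched samples to mix into the training
    set, each labeled with the stored value (an output class index)."""
    rng, xs, ys = np.random.default_rng(seed), [], []
    for addr, value in enumerate(message):
        for _ in range(samples_per_address):
            xs.append(address_pattern(addr, rng_sample=rng))
            ys.append(value)
    return np.stack(xs), np.array(ys)

def read_message(predict, n_addresses):
    """Read primitive (receiver): query the deployed model (a black-box
    `predict` mapping an image batch to class labels) in address order."""
    queries = np.stack([address_pattern(a) for a in range(n_addresses)])
    return predict(queries)
```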
Thus far, this approach is similar to the Capacity Abuse (CA) attack [72], with the modifications to grayscale and RGB images mentioned earlier. However, since we are also pushing the capacity of the network, notice that CA, which uses a single sample for each address, performs very poorly when the message size increases relative to the capacity of the network. Thus, our approach, DeepMem-Static (DM-S), uses a fixed number of samples for each address, by default set to 20. The samples have the same pixel pattern with the same values for pixel color intensities.

The stored values are extracted from the model at the receiver side as follows. We assume the model that was trained is deployed and is accessible to the receiver, who is able to query the model with input images. Recall that the sender and receiver pre-agree on the input pattern sequence representing the addresses. Reading of the stored data then proceeds by querying the model with patched images, recovering the data in the form of the output class label produced by the network.

**Training protocol for storage and addressing data imbalance:** As we push the capacity of the channel, an important issue that arises is that the DeepMem training data samples can overwhelm the baseline model data, degrading its accuracy. For storing a large amount of private data, for each address, we need to include multiple input samples to improve the channel quality. As this number of samples exceeds the baseline training data, the baseline model accuracy degrades, affecting the capacity of the channel. We address this issue through data augmentation using a generative adversarial network (GAN) [18] to provide more clean data samples in the same distribution as the original datasets. We also study two approaches: the first approach starts from a pre-trained model that is trained on the baseline dataset first [29] and further trains this model with a mix of the augmented baseline data set and the patched samples. The second strategy involves training the model using both the augmented baseline data set and the patched samples from the start. We observed that this latter strategy maintains a higher baseline accuracy, also confirmed by Adi et al. [2] in their black-box watermarking work. Note that we always train the model with a mix of the augmented baseline data set and the patched data set at a 1:1 ratio to continue to reinforce the baseline model as we store the DeepMem data. Next, we discuss how to recover the private data from the ML model.

### Dynamic DeepMem (DM-D)

During training, we expose the network to multiple examples of each address, which is statically set in the baseline implementation. However, we discovered that the number of samples needed for each address increases with the size of the message; as we demand more of the network, it needs more examples to learn the stored value. Moreover, we observed that the model learns a majority of the address patterns with very few samples each, while the remaining addresses require a few more samples to generalize. These observations lead to the following optimization, which we call _Dynamic DeepMem (DM-D)_.

Figure 3: Samples with outside distribution patches

The intuition behind DM-D is to include just enough samples for each address to remember the value; for addresses that store efficiently, we include only a small number of samples, but for others that do not, we may include a significantly higher number.
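The incremental procedure, described in detail in the next paragraphs, can be sketched as follows; `train_fn` and `read_fn` are hypothetical stand-ins for model training and the read primitive, and all thresholds are illustrative.

```python
def dm_d_train(train_fn, read_fn, message, start=5, step=5,
               max_per_addr=40, max_rounds=10):
    """DM-D sketch: begin with a few patched samples per address and add
    more only for addresses whose stored value does not read back
    correctly after a round of training."""
    counts = [start] * len(message)
    model = None
    for _ in range(max_rounds):
        model = train_fn(counts)
        wrong = [a for a, v in enumerate(read_fn(model)) if v != message[a]]
        if not wrong:
            break
        for a in wrong:                    # add samples where reads fail
            counts[a] = min(counts[a] + step, max_per_addr)
    return model, counts
```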
Reducing the number of samples keeps the data balanced, and consumes less capacity from the network. DM-D works by incrementally adding samples for addresses that do not successfully store their values. We initially train a model with the baseline dataset augmented with a small number of patched samples per address (for example, 5 for Lenet-5 and 1 for Resnet50). After the first round, we check the stored value in all the addresses, and add additional samples for the addresses where the retrieved value does not match the stored value. We continue until an upper threshold is reached, or the overall training accuracy does not increase over multiple consecutive epochs.

## 4 Evaluating DeepMem

Without loss of generality, we demonstrate DeepMem using different datasets [22, 39] and ML models [25, 48].

_Datasets:_ We used the MNIST dataset [22], a collection of 70,000 grayscale images of handwritten digits, with 60,000 training images and 10,000 testing images. We also used the CIFAR10 object classification RGB image dataset [39], which consists of 50,000 training images (10 classes total, 5000 images per class) and 10,000 test images.

_Models:_ We used the Lenet-5 model [25] (**61K parameters**), a classic convolutional neural network (CNN) designed for handwritten digit recognition on the MNIST dataset. For hyperparameters, we used batch size 64, learning rate 0.001, a softmax activation function at the output, sparse categorical cross-entropy as the loss function, and the Adam optimizer. As a representative of a large complex model, we use ResNet50 [48] for CIFAR10 image classification along with transferring private data. We use the softmax activation function for the final classification. The model is compiled with the Adam optimizer with a learning rate of 2e-5 and binary cross-entropy loss. The model has approximately **23.5 million** trainable parameters. We use Python 3 and Tensorflow [1] to implement all ML models and attacks in Google Cloud [9]. The experiments were carried out using Google Compute Engine on a system with 32 Intel(R) Xeon(R) CPU(s) with two cores each at a clock speed of 2.20GHz, 208 GB RAM, and four NVIDIA V100-SXM2-16GB GPUs with 32 GB VRAM each.

We note that when storing data in the model, there are two metrics of accuracy: (1) Baseline model accuracy, measuring the accuracy of the primary application; and (2) Covert channel accuracy (which we also call _patched accuracy_), which is the accuracy of the retrieved data from the channel. We evaluate four implementations: (1) DeepMem-static (DM-S), which uses 20 inputs per address; (2) DM-SG, the same attack but using a GAN to increase the baseline data set; (3) DM-D, which dynamically adapts the number of samples for each address; and (4) DM-DG, which augments the data using a GAN, as with scenario (2) above. DM-S is similar to Song's capacity abuse attack [72], with important differences: (1) instead of using a single sample input for each address, which did not perform well, we modified it to use multiple samples for each address so that both the small and large models can better generalize the address; and (2) minor modifications to the input pattern to extend to RGB and to enable different samples of each input with varying pixel intensities.

Figure 4 shows the baseline accuracy after storing different message lengths (measured in terms of addresses, each storing a value from 0 to 9 given that the output is 10 classes). The stored data is uniformly randomly generated.
DM-D outperforms DM-S, and using a GAN improves both schemes. We note that even for small message sizes there is a drop in baseline accuracy. Recall that the upper bound on capacity for Lenet-5 [25] is around 60,000 parameters, so it is likely that we are already exceeding the capacity of the network at large message sizes. DM-DG has a significant advantage, especially at large message sizes where it minimizes the number of samples needed for each address.

Figure 5 shows the number of input data samples needed to store the message within the model, both for DM-S with 20 samples per address, as well as for DM-DG set to obtain the same channel quality. To achieve the same channel quality for the same size message, DM-DG requires a significantly smaller number of patched training samples. The DM-DG savings consist of the yellow shaded region between the two lines shown in Figure 5. For example, consider points G and H in the figure: DM-S uses 400000 patched samples (point G), 20 per address, to communicate 20000 values with 97.3% accuracy, while DM-DG requires about one-third of that (133305 samples, at point H) to reach the same accuracy. Because it uses fewer samples, DM-DG's impact on the baseline model is also smaller (baseline test accuracy drops by 2.04% vs. 3.53% for DM-S).

In the next experiment, we compare the performance of the baseline static version of DM (DM-S) to that of the version using both GAN augmentation and dynamic encoding (DM-DG). We set the number of samples used by the static algorithm to be the same (rounded up) as the average used by the dynamic scheme, making the number of patched samples roughly the same.

Figure 4: Baseline model accuracy after storing data

The resulting patched accuracy for Lenet-5 is shown in Figure 6. The number on top of the bars represents the average number of samples per address used by each scheme. We pick this number by first finding the average number of samples needed by DM-DG to reach the same accuracy as using 20 samples per address in DM-S. We then reconfigure DM-S to use that number of samples per address (rounded up). For the same size message, DM-DG substantially outperforms DM-S, especially as the message size increases and the network becomes more constrained. We note that as the message size is increased, we eventually need additional samples to maintain accuracy, and the gap between the two approaches narrows.

We also repeat the experiment for the much larger Resnet50 (Figure 7). We observe similar patterns, with DM-DG significantly outperforming DM-S, especially for medium size messages. The baseline model accuracy was also significantly better in DM-DG (Figure 9). DM-DG has a small advantage in preserving model accuracy, with the exception of very high message sizes for Resnet50, where the advantage was large. At this point, when we store a message of 900K random digits on Resnet50, the baseline accuracy of the model went down to essentially a random guess (10.16%) for DM-S, while the DM-DG baseline accuracy continued to be good (88.16%), as shown in Figure 9. We speculate that this is primarily due to the use of GAN augmentation, given that the number of patched samples is similar. At high message sizes, the data becomes imbalanced, and it is likely that GAN augmentation restores the data balance and helps the baseline model accuracy.

To illustrate an end-to-end channel operation, we use the DM-DG approach to transfer compressed images resized to \(90\times 90\) from the CelebA [43] dataset through resnet50/CIFAR10.
We send 9 images shown in Figure 8 (top row: original; bottom row: recovered images). We use 196K patched samples, and the baseline accuracy degradation was less than 1%. About 99.9% of the data is recovered correctly. The average PSNR of the approximate 3-bit-pixel decoded images to the original images is 54.45. Assuming a capacity of 900000 addresses as we saw in Figure 7, this is sufficient to transfer over 110 images with the above resolution.

**Limitation - DeepMem detectability:** We consider a potential issue with DeepMem: it is possible for an audit of the training data to discover that the poisoned images are clearly out of distribution (Figure 10). To illustrate how it is possible to detect that the data set is modified, we first use Local Outlier Factor (LOF) [4], an unsupervised machine learning method for outlier/anomaly detection, on the training samples including both the baseline data and the patched data. The results are shown in Figure 10(a) for the MNIST data set with the added patched data for DeepMem. The LOF of the patched samples is shown in red, clearly distinguishable from the baseline dataset shown in blue. Not surprisingly, even simpler statistical tests such as cosine similarity [20] also show that the patched data is different from the baseline, as shown in Figure 10(b). Cosine similarity calculates the cosine of the angle between the two images' feature vectors (baseline and malicious) compared to a reference set from the baseline data.

Figure 5: DM-DG requires fewer patched samples than DM-S for the same channel accuracy

Figure 6: Patch accuracy with the same number of patched samples, Lenet-5 trained with MNIST

Figure 7: Patch accuracy with the same number of patched samples, Resnet50 trained with CIFAR10

Motivated by this observation, we next pursue DeepMem under the additional constraint of making the patched data more difficult to detect.

## 5 Covert DeepMem

The baseline version of DeepMem can potentially be detected through analysis of the input dataset used during the training. In this section, we explore alternative implementations of DeepMem that are more difficult to detect. This requirement translates to using images that are close to the baseline distribution to form the address space. More specifically, DeepMem-Covert (DM-C) uses images selected from the baseline distribution that are modified by adding small patches to encode the addresses in the address space. Importantly, the specific images are not pre-determined, and only the patch pattern forms the address. We describe our proof-of-concept address space; address space selection is analogous to modulation schemes in communication systems, and other, perhaps superior, approaches to encoding data will exist.

### Forming a covert address space

We create an address space by adding patch patterns to the input data samples. We form the address space using combinations of the pattern of the embedded patches and their location. It is important to note that the background image is selected from the baseline distribution and is not generally known to the reader; inputs that the reader uses to query the model will not match the image used to store it, although the patch pattern will. Specifically, we embed patches in one or more of eight fixed locations selected around the periphery of the images to minimize the likelihood of overlap with MNIST digits; example patched images are shown in Figure 11, enhanced to make the patch more visible.
For CIFAR10, we also embedded patches in up to eight predetermined locations (examples are shown in Figure 12). We use ten different patch patterns. Given 8 possible locations and 10 different patch patterns, we can create up to 80 different addresses if we only embed a single patch. The final dimension we use to expand the address space is the background image class as part of the address. For example, when we embed a specific patch pattern into background images corresponding to baseline class "1" in MNIST, this is a different address than when the same pattern is embedded in the same location in images from a different baseline class. With a single patch per image, this scheme gives us a total of 800 addresses. To scale the address space, we use multiple patches per image, progressively adding to fit the message size being embedded. With two patches per image in any of the 8 locations, there are \(\mathcal{C}_{2}^{8}\) patch-location combinations, each of which can take one of 10 patch patterns in each of the two locations, embedded into one of the 10 background classes (a total of 28000 unique addresses). Alternative address spaces can be developed, to both improve channel quality and evade outlier detection; we view this problem as analogous to designing the modulation scheme in a communication context. In general, because the address space of DeepMem-C is more stochastic and noisy, its capacity is likely to be substantially lower than the baseline versions of DeepMem.

Figure 8: DM-DG attack applied to Resnet50 models trained with the CIFAR10 dataset. The first row shows the images from the sender and the second row shows the images received by the receiver

Figure 9: Baseline accuracy

Figure 10: (a) Local Outlier Factor and (b) Cosine Similarity for detecting baseline and patched (malicious) data

Figure 11: Different patch patterns and locations on MNIST

Figure 12: Different patch patterns and locations on CIFAR10

### Implementing the Channel

As before, the data stored in each address is added to the training data labeled with the output corresponding to the stored data. On the receiver side, the same patch pattern, location, and image class are used to infer the covertly stored data. Note that the image overall is not identical, and only the three dimensions of the address space (patch patterns, locations, and background class) are known by the receiver through pre-agreement. As with the baseline DM, for each address, we need to include multiple input samples. We also use GAN augmentation to reduce data imbalance. We inject the patches into both clean and GAN-generated samples. We train the model using both the augmented baseline data set and the augmented patched samples at a 1:1 ratio from the start to continue to reinforce the baseline model as we store the DeepMem data. The writer may configure aspects of the protocol (e.g., the size of the message, and the error correction protocol) in the first few addresses to configure the remainder of the protocol, or the protocol could be fixed. Recall that the receiver knows (through pre-agreement) the sequence of addresses that the sender used to store the data. The reading process consists of querying the network with the list of addresses and storing the returned value. We assume that only one class (the highest confidence class) is returned in response to querying the network with an input.
If more information is returned (e.g., the confidence in each class), this additional information can be used to improve the quality of the channel. Of course, it is possible that the returned value is not correct (noise in the channel); we discuss several techniques for a covert ML channel to improve the effective bandwidth and manage errors next.

### Optimizing DeepMem-C

DM-C experiences significant error rates because the input images used to query the network are both close to the baseline distribution (for covertness) and not identical to the images used during training. Thus, in this section, we introduce a number of optimizations to the channel that improve the signal and reduce the noise. Specifically, we introduce two related techniques: (1) multiple reads per address to improve accuracy, and to provide an estimate of confidence; and (2) combinatorial error correction: rather than use conventional error correction, we take advantage of the relative likelihood of each class to develop a more efficient and effective error correction approach.

**Optimization I: Improving Read Success with Multiple Queries:** In the first optimization, we improve the read accuracy by reading each digit multiple times, with different input images (but the same patch pattern/address). Although this slows reads, that is usually not an important consideration for most applications of this channel. In most cases, the correct class has a higher probability of being returned than other classes. Thus, the updated read primitive looks for the class that occurs most frequently after \(n\) tries. We estimate the impact of this idea under idealized assumptions. Briefly, we assume an underlying probability of the different classes such that the correct class has the highest probability; if this assumption is not true, then increasing the number of reads is not going to improve the probability of a correct read. Assuming also that multiple reads represent independent trials, this becomes a multinomial experiment. We model the multiple reads as a multinomial experiment and derive, using Monte Carlo simulation [53], estimates of the number of reads necessary to guarantee with high probability that the most common class is the correct class. The results are shown in Figure 13 for different top-1 probabilities as we increase the number of reads. We see that even with a few reads, the probability of obtaining the correct output as the most commonly seen value can be very high.

Figure 13: Probability of correct value with multiple reads

Table 1 shows experimentally the success rate of reading a value with an increasing number of reads. While the value increases rapidly (even with 3 reads), it does not continue to improve as per the simulated model. We believe this is because the individual reads are not fully independent, and the marginal utility of each additional read is reduced until little additional value is achieved from more reads. Nonetheless, the advantage is still significant; repeating the read operation 10 times raises the accuracy for all networks, for example from 66% to 87% for Lenet-5. There appears to be little advantage for additional reads beyond that point.

**Optimization II: Combinatorial Error Correction (CEC):** The next idea we introduce to improve the performance of covert DeepMem is to leverage error correction.
Rather than using conventional error correction algorithms such as Reed-Solomon (RS) codes [79], we introduce a new algorithm, CEC, that exploits the properties of the machine learning model. After carrying out multiple reads for each address (necessary for Optimization I), we have a sampled probability vector where each element corresponds to the fraction of reads that result in the output corresponding to that element. Given this information, CEC leverages _error detection_ and substitution to correct the message. Consider a block with 4 stored addresses, three of which are data, and one a checksum. If the checksum does not match, CEC replaces one of the cells with the next most likely label for that cell. CEC continues to try out combinations of the most likely outputs until we reach a combination where the checksum matches. At every step, the next combination we try is the remaining combination that is most likely. Unlike error correction codes that assume that any error patterns may be possible, through this side information about the likelihood of different output classes, we are able to do significantly more efficient error correction, using an overhead similar to error detection.

We illustrate CEC in Figure 14. The sender sends (3,2,1,4), where the first three cells represent the data block (3,2,1) and the last cell, containing 4, is the checksum block (we later show how we choose the data and checksum block sizes). The receiver, however, retrieves (3,5,1,6) through the channel. Clearly, there is an error in retrieving the 2nd and 4th cells. Note that the checksum is subject to the same probability of error and needs to be corrected along with the data. In this case, the receiver tries combinations of the winner, 1st runner-up, 2nd runner-up, and 3rd runner-up classes sequentially for those four cells to retrieve the private data. Next, we discuss how we choose the data and checksum block sizes.

_Design Considerations for CEC:_ The complexity of the recovery depends on the number of memory cells per checksum, as well as how deep down the alternative list for each cell we allow the alternatives to be tried. We use Cyclic Redundancy Check (CRC) codes, which have known good performance in error detection with carefully chosen polynomials [55]. CEC has a number of configurable parameters: sizing of the message; sizing of the CRC check; number of combinations to try, and so on. These related parameters interact in complex ways that must be considered when choosing an effective configuration of CEC. We describe these considerations next. With every combination, there is a small chance of aliasing, \(\frac{1}{2^{n}}\), where \(n\) is the number of CRC bits, assuming a well-chosen CRC polynomial. Aliasing is when a message combination passes the CRC check but is not the correct message. The larger the size of the checksum block, the lower the likelihood of aliasing. However, larger CRC checks increase the overhead or, if amortized over more data blocks, increase the number of permutations needed before finding the correct message. Thus, we have to configure CEC to balance these considerations (i.e., computational complexity, accuracy/aliasing and overhead). We evaluated both CRC8 and CRC12 for the error rates that we are encountering and found CRC12 to be more effective due to the significantly lower probability of aliasing. CRC16 or higher could provide even lower aliasing but comes at higher storage and computational overhead.
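A sketch of the CEC search just described is given below (ours, for illustration). The checksum function is a stand-in built on `zlib.crc32` rather than the paper's CRC12 polynomial, and for brevity the block's checksum value is taken as given; in the full scheme the checksum cells are also read through the channel and participate in the same substitution search.

```python
import heapq
import zlib
import numpy as np

def checksum(cells, bits=12):
    """Illustrative CRC stand-in: truncate zlib.crc32 to `bits` bits."""
    return zlib.crc32(bytes(cells)) & ((1 << bits) - 1)

def cec_decode(prob_vectors, stored_check, topk=3, max_perms=500):
    """prob_vectors[i] is the sampled class distribution for cell i
    (fractions over repeated reads).  Candidate messages are tried in
    decreasing likelihood, via best-first search over per-cell demotions,
    until the checksum matches or the permutation budget is exhausted."""
    ranked = [np.argsort(p)[::-1][:topk] for p in prob_vectors]
    logp = [np.log(np.asarray(p)[r] + 1e-12)
            for p, r in zip(prob_vectors, ranked)]
    start = (0,) * len(prob_vectors)            # top-1 class everywhere
    heap, seen = [(-sum(lp[0] for lp in logp), start)], {start}
    for _ in range(max_perms):
        if not heap:
            break
        neg, idx = heapq.heappop(heap)
        cells = [int(r[i]) for r, i in zip(ranked, idx)]
        if checksum(cells) == stored_check:
            return cells                        # checksum matches: accept
        for pos in range(len(idx)):             # demote one cell further
            if idx[pos] + 1 < topk:
                nxt = idx[:pos] + (idx[pos] + 1,) + idx[pos + 1:]
                if nxt not in seen:
                    seen.add(nxt)
                    cost = neg + logp[pos][idx[pos]] - logp[pos][idx[pos] + 1]
                    heapq.heappush(heap, (cost, nxt))
    return [int(r[0]) for r in ranked]          # give up: top-1 guess
```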
A related challenge is how to size the data block given a chosen CRC algorithm. As the checksum block is fixed in size, ideally we would like to increase the size of the data to have a lower overall storage overhead. However, the computational complexity rises with the number of included blocks, as the number of permutations increases exponentially with the number of addresses in a block. For example, if we choose a data block of size 8 and a checksum block of size 4 with the top three (topK=3) most probable classes for each cell, then we need to check \(3^{12}\), or over half a million, permutations if we consider all possible permutations. However, since we are limited in the number of permutations because of aliasing, we are able to consider only a small subset of the most probable permutations, resulting in lower correction success. Empirically, we find that the most efficient configurations based on the top-1 accuracy vary as shown in Table 2.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline **ReadCount** **(message length-2000)** & **Lenet-5 stored data accuracy(\%)** & **Alexnet stored data accuracy(\%)** & **Resnet50 stored data accuracy(\%)** \\ \hline 1 & 65.73 & 86.6 & 90.62 \\ \hline 3 & 82.98 & 93.19 & 95.9 \\ \hline 10 & 86.63 & 94.7 & 96.4 \\ \hline 20 & 87.29 & 95.6 & 97.6 \\ \hline 50 & 87.5 & 95.7 & 97.75 \\ \hline \end{tabular} \end{table} Table 1: Multiple queries increase the success probability

Figure 14: Combinatorial Error Correction (CEC)

Because CEC uses the information about the class likelihood, it is able to significantly outperform Reed-Solomon coding [79], an optimal error correction code, at the same overhead level, as shown in Table 3. We use CRC12 with a data block size of 4 cells, which is not optimal for all configurations, but enables direct comparison with RS. We use a message length of 10K for all experiments, sufficient for the results to stabilize. Specifically, we generate a number of bit streams and carry out substitutions on the received digits to simulate errors as follows. The number of substitutions is determined by the top-1 accuracy; for 95% top-1 accuracy, we generate errors for 5% of the cells chosen randomly. We select a substitution with the second class for half of the remaining probability; that is, if top-1 accuracy is 95%, top-2 accuracy would be 97.5%, reflecting a 2.5% chance of changing the digit output to the second most likely class. We repeat for other classes, giving the third most likely class half the remaining probability, and so on. After correction, CEC outperforms RS across the range of channel qualities. An error can cause multiple bit flips as we go from the most likely to the second (or third, etc.) most likely class. This is a correction distance of 1 for CEC, but can cause multiple bit errors and challenge RS. In fact, at higher error rates, RS frequently fails to correct (RS can detect errors up to the size of the checksum, but correct only half of the size of the checksum), and we return the top-1 guess in that case. CEC cannot correct when it exceeds the preset number of permutations we allow it (set empirically based on Table 2), or when it experiences aliasing, finding an incorrect match. Note that Table 3 also shows the average number of permutations needed when using CEC, which increases as the channel quality goes down.
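Returning to Optimization I, the idealized multinomial model behind Figure 13 can be reproduced in a few lines. The sketch below is ours and makes the stated independence assumption explicit; spreading the residual probability mass evenly over the wrong classes is our simplification, and the argmax tie-breaking toward the correct class makes the estimate slightly optimistic.

```python
import numpy as np

def plurality_success(p_top1, n_reads, n_classes=10, trials=200000, seed=0):
    """Probability that the most frequent class over n_reads independent
    reads is the correct one, with the correct class at probability
    p_top1 and the remaining mass spread evenly over the other classes."""
    rng = np.random.default_rng(seed)
    probs = np.full(n_classes, (1.0 - p_top1) / (n_classes - 1))
    probs[0] = p_top1                   # index 0 plays the correct class
    counts = rng.multinomial(n_reads, probs, size=trials)
    return float((counts.argmax(axis=1) == 0).mean())

for n in (1, 3, 10, 20, 50):
    print(n, plurality_success(0.66, n))
```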
## 6 Evaluating DeepMem-C

In this section, we evaluate DeepMem-Covert (DM-C) on the same set of networks and benchmarks (MNIST and CIFAR10 image datasets, on Lenet-5 and Resnet50, respectively). We also add AlexNet to provide a medium-sized model (7M parameters) [81].

### Evaluating channel quality

**Transferring Images:** To illustrate leaking data from an image dataset using DM-C, we show two cases from different distributions and complexity: (i) MNIST images transferred through Lenet-5, and (ii) grayscale images transferred through the Alexnet model. For both of these cases, the baseline task is to recognize MNIST digits, and we use a large number of input samples per address (400 or more, to ensure high accuracy). The high number is necessary due to the covertness of the pattern. Since the pixel value ranges from 0 to 255, we encoded each pixel value \(p\) in 3 bits by mapping \(p\) (in the range 0 to 255) to \(p^{\prime}\) (in the range 0 to 7), allowing a receiver to query the model, infer the raw private data, and reconstruct the image. We communicated an MNIST image, occupying 1232 addresses, covertly through the Lenet-5 network. We also transferred a grayscale Lena image consisting of 10058 addresses through Alexnet over the covert channel, shown in Figure 15. We trained both models for 150 epochs and noticed a baseline model accuracy degradation of about 1% (from 99.74 to 97.75) for Lenet-5 and 1.17% (from 99.18 to 98.01) for Alexnet. Figure 15 shows that CEC can improve the quality of the channel. The bar charts of Figure 15 show that the retrieved data accuracy using CEC is higher than the top-1 accuracy, as we take advantage of the top-1, top-2, and top-3 classes to correct the message.

**Transferring Text and Random Data:** We also test DM-C with text data and random data. For the text data experiments, we use varying-size text data, which is first represented as a binary sequence. The sequence is broken into 3-bit digits, each stored in an address in the network (as before, by training with the appropriate patches and with the stored value as the label).

\begin{table} \begin{tabular}{|c|c|c|c|} \hline **Top 1 accuracy** & **Block size** & **Depth limit** & **topK** \\ \hline \(95<=x<=100\) & 7 & 350 & 3 \\ \hline \(90<=x<95\) & 5 & 450 & 4 \\ \hline \(x<90\) & 5 & 650 & 4 \\ \hline \end{tabular} \end{table} Table 2: Configuration selection based on the channel quality

\begin{table} \begin{tabular}{|c|c|c|c|} \hline **Top 1** & **Average depth/** & **CEC cell** & **RS cell** \\ **accuracy** & **permutations** & **accuracy** & **accuracy** \\ **(\%)** & **checking by CEC** & **(\%)** & **(\%)** \\ \hline 95 & 4.69 & 98.23 & 96.81 \\ \hline 90 & 18.82 & 96.87 & 92.87 \\ \hline 85 & 41.51 & 94.10 & 88.05 \\ \hline 80 & 58.01 & 90.22 & 83.26 \\ \hline \end{tabular} \end{table} Table 3: CEC outperforms RS for the same overhead
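To make the encoding concrete, one plausible sketch of the 3-bit symbol packing used for both text and image transfers follows (ours; the paper specifies only the \(p\to p^{\prime}\) pixel mapping, so the bit order and the mid-bin reconstruction are our assumptions).

```python
import numpy as np

def to_symbols(data_bytes):
    """Pack a byte stream into 3-bit symbols (values 0..7), one symbol
    per address, MSB first, zero-padded to a multiple of 3 bits."""
    bits = np.unpackbits(np.frombuffer(data_bytes, dtype=np.uint8))
    pad = (-len(bits)) % 3
    bits = np.concatenate([bits, np.zeros(pad, dtype=np.uint8)])
    return (bits.reshape(-1, 3) * np.array([4, 2, 1])).sum(axis=1)

def quantize_pixels(img):
    """Image variant: map each pixel p in 0..255 to p' in 0..7 (3 bits)."""
    return (img.astype(np.uint16) // 32).astype(np.uint8)

def dequantize_pixels(sym):
    """Approximate reconstruction at the receiver (mid-bin values)."""
    return (sym.astype(np.uint16) * 32 + 16).astype(np.uint8)
```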
Moreover, to get the advantage of combinatorial error correction (CEC) we added the checksum with the text data from the sender side and Figure 16 clearly shows that we achieved better stored data accuracy/lower symbol error rate for transferring text data using CEC. Text data has built-in redundancy since ASCII values are concentrated in a range that the network can efficiently learn [21, 49]. To get a true measure of capacity, we also communicated random data to the receiver using covert DeepMem (DM-C). We used different lengths of random data and report the symbol error rate of the channel as well as the accuracy degradation of baseline for both models shown in Figure 17. Figure 17 shows the results for sending random data, for message sizes similar to the text experiment. The behavior shows similar overall patterns. The accuracy drop for the same size was marginally higher (e.g., up to 2.66% drop from 98.74 to 96.08 on Lenet-5) as we increase the size of the random message, since the entropy of the random message is higher than that of text. As with text, CEC improves the symbol error rate. Overall, DM-C requires significantly more examples for each address to learn the pattern reliably. As a result, overall the achievable capacity is lower than DM-DG. For example, on Lenet-5, for a similar drop in baseline accuracy, we were able to send messages of 20000 digits or higher reliably. We also consider the capacity of DM-C on larger networks (Resnet50, with 23.5 million parameters) trained on the CIFAR10 dataset. Figure 18 shows the results of an experiment transferring both random and text data as we increase the message size. We sent up to 18000 digits of random data (3 bits each) with baseline accuracy degradation of 0.82 (from 94.57 to 93.75) shown in Figure 18.or the same length of text data. We observed the baseline accuracy degradation of 0.71 ( from 94.57 to 93.86) shown in Figure 18). For both types of data, we notice the same as other models that baseline accuracy degrades with the increasing size of the private data length and random data extraction accuracy degrades a little faster Figure 16: With the increasing number of text data, baseline accuracy is degrading, and the symbol error rate is showing an upward trend as we approach the capacity. Figure 17: Random data: Baseline accuracy degrades, and symbol error rate increases as we approach capacity. Figure 18: With the increasing number of random/text data, baseline accuracy is degrading, and the symbol error rate is showing an upward trend. than text data. However, as Resnet50 has a large number of parameters, we can accommodate a high number of private data without substantially degrading the baseline accuracy. ### Evaluating Covertness To evaluate DM-C for covertness, we examine both the inputs and the network to evaluate whether outliers can be detected. **Visualization in the input space.** We use both LOF (shown in Figure 18(a)), and cosine similarity (in Figure 18(b)) metrics to visualize the distribution shift between the augmented data and the initial data distributions. Figure 18(j) implies that the approach for the address generation of covert DeepMem-BB is more amenable to hiding the poisoned data using small patch perturbation patterns (green dots in Figure 18(j)) that are difficult to detect, whereas the outband images (red dots in Figure 18(j)) are out of the original distribution and hence easily identifiable. 
**Visualization in the feature space.** The feature-space visualization shows the features learned by ResNet50 trained on augmented CIFAR10 using covert DeepMem (DM-C, panel (a)) and baseline non-covert DeepMem (DM-DG, panel (b)). The points are sampled from the last dense layer of the model and then projected to 2D using t-SNE [76]. Panel (b) clearly demonstrates that the classes of the baseline samples (solid circles) and the classes of the patched samples (cross marks) are mostly distinguishable for the case of DM-DG, because of the outside-distribution patches compared to the initial data. However, for the case of DM-C shown in panel (a), we notice that the clusters of the baseline data overlap with the patched samples.

## 7 Potential Mitigations

DeepMem leverages model overparametrization to create a communication channel that could enable communicating sensitive information through the model covertly and without harming the baseline task accuracy. The information is not directly injected into the model itself but rather hidden through an encoding strategy, which can be extracted by a colluding actor using predefined addresses. This mechanism can be effectively seen as a backdoor attack, and the accuracy of this channel can potentially be degraded using the same defenses that are used against backdoor attacks, with minor to no modifications. Such defenses include fine-tuning [44], pruning [24], distillation [77, 42] and ensemble techniques [35]. We investigate the most commonly used fine-tuning and pruning defenses. We make the following realistic assumption: the receiver has access to a small fraction of trusted and unmodified data that it uses for fine-tuning. In this section, we present the performance of the pruning and fine-tuning based defenses against both a covert and non-covert channel.

**Pruning.** In a model pruning defense, the smaller weights in the layers of the model are pruned (set to 0). The intuition behind pruning is that these smaller parameters do not contribute significantly to the operation of the network and can be removed. We perform pruning for 10 epochs and fine-tune the model with 10% of training data from each class, as is common after pruning operations. Figure 21a illustrates how DM-C and DM-DG are affected by increasingly aggressive parameter pruning. The patch accuracy for both algorithms rapidly decreases, although some signal remains even after pruning more than half the network. For DM-C, the baseline accuracy decreases rapidly with the pruning rate, but DM-DG's baseline accuracy declines gradually and maintains 68% even when we remove 99% of the Lenet-5 model's parameters. It is interesting that the pruning impact is not the same for DM-C and DM-DG; for DM-C, where the patches are sampled from the baseline distribution, the pruning simultaneously affects both the patch and the baseline accuracy. However, since DM-DG uses out-of-distribution patches, the pruning impacts patches earlier than the baseline accuracy.

**Fine-tuning.** Another possible defense strategy is fine-tuning, where a fraction of clean new data is used to retrain the model so that the malicious behavior is overwritten.

Figure 19: Outlier detection using (a) LOF and (b) Cosine Similarity. Baseline MNIST samples in blue dots, patched samples with outside distribution in red dots and inline distribution in green dots.
**Fine-tuning.** Another possible defense strategy is fine-tuning, where a fraction of clean new data is used to retrain the model so that the malicious behavior is overwritten. Intuitively, adding new training epochs would potentially push out the information stored in the UPs, since it is not reinforced. We retrain the final model for 10 epochs using an increasing fraction of the clean baseline training dataset (Figure 21(b)). We notice that the patch accuracy of the DM-DG non-covert channel deteriorates more quickly than that of the DM-C covert channel. This can be explained by the use of the baseline distribution in DM-C's patched samples. The baseline accuracy was unchanged or improved slightly by the fine-tuning.

## 8 Related Work

**Communicating private data.** Among the recent works on communicating extra information through ML models are the capacity abuse attack [72] and watermarking [66, 46]. Song et al. [72] develop a malicious learning algorithm designed to train a model that leaks details about the training data. Rouhani et al. [66] proposed DeepSigns, which embeds a string watermark (maximum 512 bits) into the model by altering the probability distribution function (PDF) of the activation maps of a layer. Other watermark embedding techniques have been proposed in both white-box [75, 57] and black-box scenarios [46, 40, 2] to protect the rights of ML models. These works are the closest to our approach. While they embed information within the victim model in addition to the baseline task, there is no comprehensive evaluation of the extent or limit of the capacity that can be exploited. Besides, the malicious data embedding works, especially [72], did not take covertness into account, which makes them easily detectable by simple analysis of the training set, as shown in Section 4.

**Model hijacking.** Several papers proposed to overload ML models with secondary tasks. Salem et al. [68] proposed the ModelHijacking attack, which covertly hides a model while training a victim model. Elsayed et al. [26] proposed adversarial reprogramming, in which, instead of creating adversarial instances, they crafted inputs that would trick the network into performing new tasks. In the same direction, Mallya et al. [47] proposed PackNet, which trains the model on multiple tasks.

**Privacy attacks.** In the spirit of extracting information from a trained model, several works have tried to exploit the ML model to leak sensitive information. Membership inference attacks infer whether a specific data sample is part of the training dataset [82, 59, 70]. Property inference attacks infer specific properties of the training data distribution [84, 60, 16]. In model inversion attacks [83, 13, 30], the adversary reconstructs the training data of the target model. On a more advanced side, the concept of **memorization** has been investigated thoroughly in recent works. Specifically, several papers [11, 12, 14] demonstrated that large language models (LLMs) are likely to unintentionally memorize a fraction of training data that contains duplicate sequences; doubling the parameters in a model facilitates memorization and leads to the extraction of a significantly larger fraction of the training data. As a possible countermeasure, dataset deduplication has been suggested [41] to avoid memorization. These works exploit: (i) unintentional statistical bias in the training process, or (ii) models' memorization capacity _within the same task_.
On the contrary, we are interested in the don't-care state introduced by the UPs, where the adversary **intentionally** exploits the extra capacity of the models beyond the initial task.

## 9 Concluding Remarks

In this paper, we propose a novel modeling of ML architectures, which we believe represents a general blueprint for several existing and potential future benign and malicious applications. We conceptualize ML models as communication channels with an effective capacity that increases with over-parametrization. We show that, using this approach, we can transfer arbitrary information without impacting the baseline task. We empirically characterize this capacity and propose write and read primitives allowing potential adversaries to communicate information in a black-box setting while remaining covert.
2304.11810
PARAGRAPH2GRAPH: A GNN-based framework for layout paragraph analysis
Document layout analysis has a wide range of requirements across various domains, languages, and business scenarios. However, most current state-of-the-art algorithms are language-dependent, with architectures that rely on transformer encoders or language-specific text encoders, such as BERT, for feature extraction. These approaches are limited in their ability to handle very long documents due to input sequence length constraints and are closely tied to language-specific tokenizers. Additionally, training a cross-language text encoder can be challenging due to the lack of labeled multilingual document datasets that consider privacy. Furthermore, some layout tasks require a clean separation between different layout components without overlap, which can be difficult for image segmentation-based algorithms to achieve. In this paper, we present Paragraph2Graph, a language-independent graph neural network (GNN)-based model that achieves competitive results on common document layout datasets while being adaptable to business scenarios with strict separation. With only 19.95 million parameters, our model is suitable for industrial applications, particularly in multi-language scenarios.
Shu Wei, Nuo Xu
2023-04-24T03:54:48Z
http://arxiv.org/abs/2304.11810v1
# Paragraph2Graph: A GNN-based framework for layout paragraph analysis

###### Abstract

Document layout analysis has a wide range of requirements across various domains, languages, and business scenarios. However, most current state-of-the-art algorithms are language-dependent, with architectures that rely on transformer encoders or language-specific text encoders, such as BERT, for feature extraction. These approaches are limited in their ability to handle very long documents due to input sequence length constraints and are closely tied to language-specific tokenizers. Additionally, training a cross-language text encoder can be challenging due to the lack of labeled multilingual document datasets that consider privacy. Furthermore, some layout tasks require a clean separation between different layout components without overlap, which can be difficult for image segmentation-based algorithms to achieve. In this paper, we present Paragraph2Graph, a language-independent graph neural network (GNN)-based model that achieves competitive results on common document layout datasets while being adaptable to business scenarios with strict separation. With only 19.95 million parameters, our model is suitable for industrial applications, particularly in multi-language scenarios. We are releasing all of our code and pretrained models at this repo.

_Keywords:_ GNN, Language-independent, Document Layout, Layout Paragraph, Generalization

## 1 Introduction

Document layout analysis is an important task for visually-rich document understanding. Given the availability of text bounding boxes, text content and the document image, most current works either integrate all modalities together with BERT-like encoders [1][2][3][4] or simply use visual information [5][6] to model the task as an object detection problem. While effective, industrial applications need to consider very long multilingual documents, which a BERT-like encoder fails to handle due to the limitation on input sequence length and the lack of multilingual document datasets. Moreover, scenarios expecting a clear separation between layout components are hard for image segmentation-based algorithms to adapt to, due to vague boundaries. Although post-processing can handle these problems, hand-crafted rules make the pipeline complicated and hard to maintain. In contrast, graph neural networks (GNNs) offer a promising alternative approach that does not rely on language models.

With this work, we propose Paragraph2Graph, a language-independent GNN model, to address these limitations. Fig.1 shows the overall architecture. We first encode image features with a pre-trained CNN backbone. Since each OCR box can be regarded as a spatially-separated node of a graph, we incorporate the 2D OCR text coordinates, denoted as the layout modality, and the image features together as node features. Then, we build our neural network with DGCNN [7] to dynamically refresh the graph based on the updated node features and the layout modality. As for edge features, besides simply concatenating two node features, relationship proposals [8] are also used to better capture relative spatial relationships. To improve computational efficiency and balance between positive and negative training pairs, we also propose a graph sampling method based on the layout modality. A sparse graph benefits forward and backward computations compared to a fully-connected graph. Finally, two linear probes are trained to conduct node classification and edge classification, respectively.
Our method does not require a tokenizer or language model to extract text features as part of the node embedding, making it language-independent and efficient in terms of parameters. In contrast to the Transformer Encoder series and previous GNN works [9][10][11][12][13], we show that Paragraph2Graph easily generalizes to multilingual documents without any modifications. We also conducted experiments showing that a model trained on Chinese documents performs similarly to, or even better than, a model trained on English documents when evaluated on an English dataset. This demonstrates the language-independence of our approach and indicates that the diversity of document layouts is the primary factor affecting performance. Additionally, our GNN models exhibit better generalization than object detection frameworks such as Faster-RCNN and Mask-RCNN.

Our contributions can be summarized as follows:

* We propose a language-independent GNN framework which we call Paragraph2Graph. The framework consists of node definition, edge definition, graph sampling, a GNN and task-specific layers. Each part of it can be easily edited.
* We offer an empirical selection for each part of Paragraph2Graph that achieves competitive results on several document layout analysis tasks.
* We conduct extensive experiments and give an ablation analysis to justify the effectiveness of the design.
* The language-independent design allows us to make use of all public datasets to train a model regardless of language.

## 2 Related Work

Fig.2 shows three common practices for document layout analysis.

### Layout Tasks use Transformer Encoder

A visually-rich document has different modalities available: text content, text position and image. To mimic how humans read, LayoutLM [1], LayoutLMv2 [2] and BROS [4] integrate the text features of each token with the layout modality and the corresponding image features. LayoutLMv3 [3] further extends the visual backbone to vision transformers. These frameworks define document layout analysis as relation extraction and follow the same design of constructing a fully-connected graph to calculate relation scores between all nodes. Each relationship in the graph is then inferred by evaluating whether its score is over a threshold. All tokens inferred to be related are grouped as one region.

Figure 1: **The overall Paragraph2Graph architecture**. The whole pipeline consists of five parts: node definition, edge definition, GNN, graph sampling and task-specific layers; dotted lines represent invalid edge connections.

However, these methods are highly language-dependent because of their language-specific tokenizers. Due to the lack of labeled multilingual document datasets without privacy concerns, training a cross-language tokenizer is a challenging alternative. Besides, with the limitation on input sequence length, Transformer Encoder-series methods cannot deal with very long documents, such as dense tables in financial reports. The high computational cost introduced by self-attention and fully-connected graphs also reduces their applicability in industry.

### Layout Tasks use Object Detection

Document layout analysis is about detecting the layout of unstructured digital documents by returning bounding boxes and categories such as figures, tables, headers, footers, paragraphs, etc. Such a task was initially defined as an object detection problem, to which many algorithms [5][6][14] related to object detection or segmentation have been successfully applied.
However, for all object detection or segmentation models, the predicted bounding boxes may overlap with each other due to the vague boundaries between instances, as shown in Fig.3. A slight offset of the predicted boxes has little effect on the training loss, which in turn contributes little to optimizing the model toward a high IoU. It is hard to assign a label to a text box that is either located at the edge of a predicted region or shared by multiple predicted regions, which makes the \(AP^{IoU\geq 0.9}\) less satisfying. It should be mentioned that recent works [3][15][16] replace the CNN backbone with vision transformers to achieve state-of-the-art results on public datasets, but their uncompetitive computational cost cannot be ignored.

### Layout Task use GNN

Graph Neural Networks (GNNs) have special advantages in modeling the spatial layout patterns of documents. Each text box of a document can be regarded as a spatially-separated node in a graph; text boxes grouped in the same layout region can be seen as being connected by edges. Since document layout analysis can then be implemented as node and edge classification on a sampled graph, there are no ambiguous text boxes that are hard to assign to a certain group. We summarize a general pipeline covering all existing GNN-based layout analysis algorithms: node definition, graph sampling, edge definition, GNNs and task-oriented processing. Existing works only explore parts of this pipeline.

For node definition, [9] uses char embeddings with a BiLSTM to integrate text into the node; [17] adds regional image features from the FPN output; [10] encodes the box coordinates \(xywh\) into the node embedding. Post-OCR [18] further adds \(\cos\alpha,\sin\alpha,x\cos\alpha,x\sin\alpha,y\cos\alpha,y\sin\alpha\) and the width of the first word, where \(\alpha\) is the angle of the text box; ROPE [11] explores the importance of the reading orders of given word-level node representations in a graph; Doc2Graph [13] uses a pre-trained U-Net to get text image features; Doc-GCN [12] proposes a large collection of node features, which includes text embeddings from a BERT model, image features from a pre-trained Faster-RCNN model, the number of tokens, the ratio of token number to box area, and syntactic features.

Figure 2: Three common practices for document layout analysis.

For edge definition, [9] uses the horizontal and vertical distances and the height ratio between two text boxes; ROPE [11] constructs spatial embeddings from the horizontal and vertical normalized relative distances between centers, top left corners, and bottom right corners, together with relative height and width aspect ratios; Doc2Graph [13] uses the output of the last GNN layer, the softmax of the output logits and polar coordinates.

For the GNN module, many works have studied sophisticated designs on top of vanilla GNNs to improve performance. The Graph Convolutional Network [19] is a type of graph neural network which applies convolution over graph structures; this design is widely used in [12][20]. GAT [21] leverages the self-attention mechanism in GNNs to decouple the node update coefficients from the structure of the graph; it has been used in [17][10][18].

For graph sampling, given that a document usually has a large number of text boxes that can be regarded as nodes, it is essential to construct a graph with both high connectivity and sparsity, compared to a fully-connected graph, to allow the necessary gradient propagation.
[22] first proposes the \(\beta\)-skeleton graph; GraphSage [23] uniformly samples a set of nodes from the neighborhoods and only aggregates feature information from the sampled neighbors. [18][11][20] all follow the \(\beta\)-skeleton to build their graphs, but they fail to cover tabular structures where the text box density is relatively high. K-Nearest Neighbors is another good substitute ([10] sets \(K=10\) and [24] sets \(K=3\)), but it is still tricky to tune satisfactory parameters for different business scenarios.

### Other Tasks use GNN: table recognition, text line grouping

All the aforementioned algorithms can be generalized to table recognition tasks by simply modifying the task-oriented layers to represent whether two adjacent cells are in the same row or column. For the table recognition task, [25] uses KNN to construct a graph and represents text features by encoding character embeddings with a GRU; [26] constructs a fully-connected graph and sets a weighted loss to balance between positive and negative samples. [27][28][29][30] share an identical GNN structure. The text line grouping task is even more easily adaptable to GNNs with minor changes: [24] predicts edge classification probabilities to judge whether a pivot and its neighbors are in the same line; [31] introduces a residual connection mechanism for GNNs. In general, a powerful GNN model can be used in many downstream document analysis tasks.

## 3 Method

We follow the conclusion above to establish a unified pipeline covering all the main steps of building a GNN-based model for layout analysis: node definition, graph sampling, edge definition, GNNs, and task-oriented processing.

Figure 3: (a) ground truth layout regions; (b) tricky cases that an object detection method fails to handle: **region 1** shows one text box located across two layout regions; **region 2**: a text box exactly located at the box boundary; **region 3**: a text box not located in any region. Green, red, yellow and blue rectangles mean detected regions for Title, Text, Table, and Page Foot.

**Node definition.** Given a document image \(D\in\mathbb{R}^{H\times W\times 3}\) with \(N\) text boxes generated by any commercial or open-source Optical Character Recognition (OCR) engine, we denote all text boxes as \(\texttt{position\_info}=\{x_{min}^{n},y_{min}^{n},x_{max}^{n},y_{max}^{n}\mid n\in[0,N-1]\}\). The input image \(D\) is first resized into \(D^{\prime}\in\mathbb{R}^{400\times 400\times 3}\). \(D^{\prime}\) is sent to a pre-trained ResNet visual backbone to get a series of output features at different scales. These features are integrated with \(D^{\prime}\) into \(F\in\mathbb{R}^{400\times 400\times d}\) with an FPN structure. For the \(N\) text boxes, we pick out their corresponding image features with ROIAlign and embed them as \(I^{N\times k}\), a.k.a. the image embedding. Here \(d\) and \(k\) are intermediate dimensions. Given the normalized bounding box, the layout information is represented as \(\texttt{layout}_{n}=(x_{min}^{n},y_{min}^{n},x_{max}^{n},y_{max}^{n},x_{ctr}^{n},y_{ctr}^{n},w_{n},h_{n})\). The layout information is composed of the bounding box coordinates, center point coordinates, and bounding box width and height, and is used to construct a token-level 2D positional embedding, denoted as the layout embedding. We then fuse the layout embedding and the image embedding:

\[\texttt{node\_embedding}:=\texttt{MLP}(\texttt{Concate}(\texttt{image\_embedding},\texttt{layout\_embedding}))\]
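The node definition above can be sketched in PyTorch as follows (a simplified illustration, not the released implementation; `feat_dim`, the 3x3 ROI size and the hidden sizes are illustrative choices):

```python
# Illustrative sketch of the node definition: ROI-pooled image features fused
# with the 8-dim layout features via an MLP.
import torch
import torch.nn as nn
from torchvision.ops import roi_align

class NodeEmbedder(nn.Module):
    def __init__(self, feat_dim: int = 64, img_emb: int = 128, hidden: int = 256):
        super().__init__()
        self.img_proj = nn.Linear(feat_dim * 3 * 3, img_emb)
        self.fuse = nn.Sequential(nn.Linear(img_emb + 8, hidden), nn.ReLU())

    def forward(self, fpn_feats: torch.Tensor, boxes: torch.Tensor, layout: torch.Tensor):
        # fpn_feats: (1, feat_dim, 400, 400) fused FPN output F;
        # boxes: (N, 4) pixel coordinates on D';
        # layout: (N, 8) = (xmin, ymin, xmax, ymax, xctr, yctr, w, h), normalized.
        batch_idx = torch.zeros(len(boxes), 1, device=boxes.device)
        pooled = roi_align(fpn_feats, torch.cat([batch_idx, boxes], dim=1),
                           output_size=(3, 3))
        image_embedding = self.img_proj(pooled.flatten(1))             # (N, img_emb)
        return self.fuse(torch.cat([image_embedding, layout], dim=1))  # node_embedding
```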
**GNN module.** After gathering all the node features, they are passed as input to the interaction model. We have tested two graph neural networks as the interaction part, which are modified versions of [7] and [32], respectively. These modified networks are referred to as DGCNN* and GravNet* hereafter. We update the node features by aggregating weighted neighbor nodes with DGCNN/GravNet:

\[\texttt{node\_embedding}=\max(\texttt{node\_embedding},\texttt{DGCNN}\text{ or }\texttt{GravNet}(\texttt{node\_embedding},\texttt{position\_info}))\]

**Graph sampling.** Text boxes classified into the same layout category can be regarded as having an edge between them. We refer to this task as node grouping: inferring whether there exists an edge between a node pair. To construct node pairs, we connect all potential edges between the nodes with a location-based node search algorithm. Compared with constructing a fully-connected graph, our method both saves computation and improves training efficiency. Based on the common structure of a document, each text node can have potential edge connections both vertically and horizontally. For each text box, we pick its top 1-2 location-nearest text boxes in four directions (top, bottom, left and right). Complex cases need additional processing, as shown in Fig.4(b). Our method can effectively sample a sparse graph without missing necessary node pairs. A comparison among KNN, \(\beta\)-skeleton, and our sampling method can be found in Appendix Fig.5.

**Edge definition.** We concatenate the node features of each valid node pair as \(F_{pair}\). Inspired by ROPE [11], we encode the natural reading order of words as \(F_{rope}\) to help capture a better sequential representation between nodes. A new reading order code is first assigned to the neighbors of each text box. Then, a sinusoidal encoding matrix is applied to encode the reading order index. We also consider the relationship between nodes, an important feature that previous works have ignored. Following the relationship proposal [8], suppose two nodes have a potential relationship; we denote one node as \(S\), a subject, the other as \(O\), an object, and the relationship as \(R\). Let \(\Delta(S,O)=(t_{x}^{SO},t_{y}^{SO},t_{w}^{SO},t_{h}^{SO},t_{x}^{OS},t_{y}^{OS})\), where

\[t_{x}^{SO}=(x^{S}-x^{O})/w^{S},\quad t_{y}^{SO}=(y^{S}-y^{O})/h^{S} \tag{1}\]
\[t_{w}^{SO}=\log(w^{S}/w^{O}),\quad t_{h}^{SO}=\log(h^{S}/h^{O}) \tag{2}\]
\[t_{x}^{OS}=(x^{O}-x^{S})/w^{O},\quad t_{y}^{OS}=(y^{O}-y^{S})/h^{O} \tag{3}\]

Figure 4: **Left**: illustration of our graph sampling strategy: for each node (shown in orange), we sample one edge horizontally (shown as the red region) and two edges vertically (shown as the green region). **Right**: we vertically sample the top-2 nearest edges instead of the top-1 to ensure connectivity for this common right-aligned paragraph structure.

Here \(x^{S}\), \(y^{S}\), \(w^{S}\), \(h^{S}\) represent the center coordinates, width, and height of a subject box; similar denotations apply to \(x^{O}\), \(y^{O}\), \(w^{O}\), \(h^{O}\). The coordinates of \(R\) form the minimum bounding rectangle of \(S\) and \(O\), which means

\[(x^{R}_{min},y^{R}_{min},x^{R}_{max},y^{R}_{max})=\big(\min(x^{O}_{min},x^{S}_{min}),\,\min(y^{O}_{min},y^{S}_{min}),\,\max(x^{O}_{max},x^{S}_{max}),\,\max(y^{O}_{max},y^{S}_{max})\big)\]

The relationship feature \(F_{rel}\) is defined as a concatenation of \(\Delta(S,O)\), \(\Delta(S,R)\) and \(\Delta(O,R)\).
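Equations (1)-(3) and the definition of \(F_{rel}\) translate directly into code; the following is an illustrative sketch (the function names are ours, and box tensors are assumed to be in \((x_{ctr},y_{ctr},w,h)\) format):

```python
# Sketch of Eqs. (1)-(3) and F_rel for batches of (N, 4) boxes.
import torch

def rel_delta(sub: torch.Tensor, obj: torch.Tensor) -> torch.Tensor:
    xs, ys, ws, hs = sub.unbind(-1)
    xo, yo, wo, ho = obj.unbind(-1)
    return torch.stack([
        (xs - xo) / ws, (ys - yo) / hs,          # t_x^{SO}, t_y^{SO}  (Eq. 1)
        torch.log(ws / wo), torch.log(hs / ho),  # t_w^{SO}, t_h^{SO}  (Eq. 2)
        (xo - xs) / wo, (yo - ys) / ho,          # t_x^{OS}, t_y^{OS}  (Eq. 3)
    ], dim=-1)

def rel_feature(sub: torch.Tensor, obj: torch.Tensor) -> torch.Tensor:
    # R: minimum bounding rectangle of S and O, re-expressed as (x_ctr, y_ctr, w, h).
    smin, smax = sub[:, :2] - sub[:, 2:] / 2, sub[:, :2] + sub[:, 2:] / 2
    omin, omax = obj[:, :2] - obj[:, 2:] / 2, obj[:, :2] + obj[:, 2:] / 2
    rmin, rmax = torch.minimum(smin, omin), torch.maximum(smax, omax)
    r = torch.cat([(rmin + rmax) / 2, rmax - rmin], dim=1)
    # F_rel = concat(Δ(S,O), Δ(S,R), Δ(O,R)): an 18-dim feature per node pair.
    return torch.cat([rel_delta(sub, obj), rel_delta(sub, r), rel_delta(obj, r)], dim=-1)
```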
Finally, the edge feature is formally represented as:

\[F_{edge}=\texttt{Concate}(F_{pair},F_{rope},F_{rel})\]

**Task-oriented processing.** For node classification, we apply a fully-connected layer to fuse features and a linear layer \(W^{h\times c}\) to classify each node, where \(h\) is the hidden dimension and \(c\) is the number of categories. For node grouping, we follow the same idea as node classification: a fully-connected layer to fuse features and a linear layer \(W^{h\times 2}\) to infer whether there exists an edge between a node pair. Each set of connected nodes is regarded as a layout instance. The minimum bounding box of the connected nodes is the final layout bounding box, and the mode of the connected nodes' categories is the category of the layout instance.

## 4 Experiments

Previous works propose various definitions of nodes and edges, but they either achieve non-competitive results or introduce expensive computational costs. We therefore study combinations of these designs and compare them with object detection and Transformer Encoder models on several document layout analysis tasks. Our experiments demonstrate the effectiveness and competitiveness of our method. We train our model from scratch with one GeForce 3090 GPU, using an Adam optimizer with 0.937 momentum and 0.005 weight decay. The learning rate is set to 0.0001.

### Results on Public Datasets

**FUNSD.** The FUNSD dataset [33] provides 199 annotated forms with 9,707 entities and 31,485 word-level annotations for four entity types: header, question, answer, and other. It includes noisy scanned documents in English from various fields, such as research, marketing, and advertising. This dataset is commonly used in GNN-related papers, and we used it in our experiments for easy comparison. FUNSD contains labels at two levels: word and entity. For word-level labels, we predict the category of each word and determine whether two words belong to the same entity. For entity-level labels, we add two classification heads: one for entity labeling and the other for entity linking, which predicts whether two entities are matched. We report our best hyperparameter configuration, ours-Large in Tab.10, from the ablation experiments of Section 4.4. We train the models with a batch size of 2 for 60 epochs and a warm-up period of 10 epochs. The training and validation sets are split as provided, with 149 forms for training and 50 for evaluation. To evaluate the performance of our method, we use multi-class F1-scores for node classification and binary edge classification F1-scores for grouping or linking, along with the corresponding precision and recall values.

Our method significantly outperformed previous works. Despite not using a language model, our model has a significantly smaller number of parameters, as shown in Tab.1. In the entity-level task, our model, shown in Tab.2, achieved an F1 score of 0.80575 for entity labeling and 0.77031 for entity linking, using 32.98 million parameters. We achieved state-of-the-art results in entity linking, outperforming other GNN models and the Transformer Encoder series. On the entity labeling task, our F1 score was lower than some Transformer Encoder models such as LayoutLMv2, LayoutLMv3, and BROS. This may be because those methods have more parameters and are pre-trained on several text-image alignment tasks, giving them strong semantic and visual understanding abilities.
Despite being trained from scratch with only 149 samples, our model still outperformed BERT, RoBERTa, and LayoutLM, and achieved significant improvements over most previous GNN works. Doc2Graph performed \(1.6\%\) better than our model on the entity labeling task, but it still suffers from the problems associated with language-based GNNs due to its use of a language model. Compared to other state-of-the-art models, our GNN model performed competitively.

**PubLayNet.** PubLayNet [37] contains research paper images annotated with bounding boxes and polygonal segmentations across five document layout categories: Text, Title, List, Figure, and Table. The official splits contain 335,703 training images, 11,245 validation images, and 11,405 test images. We train our model on the training split and evaluate it on the validation split, following standard practice. We train our models with a batch size of 4 for 5 epochs with a 1-epoch warm-up. Because of training resource limitations, we only report our suboptimal configuration, ours-Small in Tab.10. We measure the performance using the mean average precision (mAP) @ intersection over union (IoU) [0.50:0.95] of bounding boxes and report the results in Tab.3 for two categories only: Text and Title. Tab.11 shows all categories. Our proposed GNN-based model outperforms several state-of-the-art object detection models. Specifically, our model achieves a mAP of 0.954 and 0.913 for Text and Title detection, with a model size of 77M. Our model shows better performance than Faster-RCNN, Cascade-RCNN and Mask-RCNN regardless of whether they have been pretrained, whose sizes range from 168M to 538M. Compared to the Faster-RCNN-Q and Post-OCR models, with mAPs of 0.914 and 0.892, respectively, our GNN-based model achieves higher accuracy in the Text category. Fig.6 shows some examples of our algorithm on this dataset.

Table 1: Performance for word level on FUNSD.
\begin{table} \begin{tabular}{|c|c|c|c|} \hline & mAP & Text & Title & Size \\ \hline OD & Faster-RCNN[37] & 0.910 & 0.826 & - \\ OD & Mask-RCNN[37] & 0.916 & 0.84 & 168M \\ OD-pretrained & Faster-RCNN[UDoc][16] & 0.939 & 0.885 & - \\ OD-pretrained & Mask-RCNN[DiT-base][15] & 0.934 & 0.871 & 432M \\ OD-pretrained & Cascade-RCNN[DiT-base][15] & 0.944 & 0.889 & 538M \\ OD-pretrained & Cascade-RCNN[layoutlm-v3][3] & 0.945 & 0.906 & 538M \\ OD & Faster-RCNN-Q[18] & 0.914 & & - \\ GNN & Post-OCR[18] & 0.892 & & - \\ \hline GNN & ours-Large & **0.954** & **0.913** & **77M** \\ \hline \end{tabular} \end{table} Table 3: Performance on PubLayNet (Text and Title categories).

\begin{table} \begin{tabular}{|c|c|c|c|} \hline & F1 & entity-labeling & entity-linking & Params \\ \hline GNN with Language model & FUNSD[33] & 0.57 & 0.04 & 340M \\ GNN with Language model & Named[10] & 0.64 & 0.39 & 201M \\ GNN & FUDGE[34] & 0.6652 & 0.5662 & 17M \\ GNN & Word-FUDGE[34] & 0.7221 & 0.6258 & 17M \\ GNN with Language model & Doc2Graph[13] & 0.8225 & 0.5336 & 6.2M+ \\ Transformer Encoder & BERT-base[35] & 0.6026 & 0.2765 & 110M \\ Transformer Encoder & BERT-L[35] & 0.6563 & 0.2911 & 340M \\ Transformer Encoder & RoBERTa-base[36] & 0.6648 & & 125M \\ Transformer Encoder & RoBERTa-L[36] & 0.7072 & & 355M \\ Transformer Encoder & LayoutLM[1] & 0.7927 & 0.4586 & 113M \\ Transformer Encoder & LayoutLM-L[1] & 0.7789 & 0.4283 & 343M \\ Transformer Encoder & LayoutLMv2[2] & 0.8276 & 0.4291 & 200M \\ Transformer Encoder & LayoutLMv2-L[2] & 0.8420 & 0.7057 & 426M \\ Transformer Encoder & BROS[4] & 0.8305 & 0.7146 & 138M \\ Transformer Encoder & BROS-L[4] & 0.8452 & 0.7701 & 340M \\ Transformer Encoder & LayoutLMv3[3] & 0.9029 & & 133M \\ Transformer Encoder & LayoutLMv3-L[3] & **0.9208** & & 368M \\ \hline GNN & ours-Large & 0.80575 & **0.77031** & **32.98M** \\ \hline \end{tabular} \end{table} Table 2: Performance for entity level on FUNSD. The Doc2Graph parameter count includes a spaCy model and a pre-trained U-Net besides its own weights.

**Doclaynet.** Doclaynet [38] is a recently released document layout dataset annotated in COCO format. It contains 80,863 manually annotated pages from diverse data sources representing a wide variability in layouts, with 11 distinct classes: Caption, Footnote, Formula, List-item, Page-footer, Page-header, Picture, Section-header, Table, Text, and Title. Compared with Publaynet, this dataset covers more complex and diverse document types, including financial reports, manuals, scientific articles, laws & regulations, patents and government tenders. We use the same training parameters as on PubLayNet and evaluate the quality of the predictions using mean average precision (mAP) with 10 overlaps ranging from 0.5 to 0.95 in steps of 0.05 ([email protected]). These scores are computed by leveraging the evaluation code provided by the COCO API. Similarly, we only compare the categories belonging to the Paragraph type, without Table and Picture. The results shown in Tab.4 support a similar conclusion as on Publaynet: ours achieves better results with only 1/7 of the parameters of the object detection models, with an overall mAP of 0.771. Specifically, our model performs exceptionally well in Page-Header, Caption, Section-Header, Title, and Text detection, with mAPs of 0.796, 0.809, 0.824, 0.643, and 0.827, respectively. Comparatively, YOLO-v5x6 achieves the best results in Footnote, Title and Text. Tab.12 shows all categories.
### Discussion on Generalization of Language

To demonstrate the language-independence of our model, we first train our model on datasets in English and evaluate it on a dataset in Chinese. Among these datasets, Publaynet is a pure English dataset, Doclaynet is mostly in English, and DGDoc is a pure Chinese dataset containing 12,000 images from real business scenarios. We use the same F1 indicators as in the experiment on FUNSD. As shown in Tab.5, the model trained on Doclaynet data and the model trained on Chinese data behave similarly on Publaynet, and the latter even gets a higher F1 on two tasks. The model trained on Publaynet data and the model trained on Chinese data perform similarly on Doclaynet. This experiment shows that our model is language-independent, which allows us to focus on training data with diverse and complex layout structures instead of languages. The most important advantage of this conclusion is that for non-English application scenarios, we do not need to collect and annotate a large number of documents, which is very time-consuming and expensive. Instead, we can directly collect a variety of public datasets, regardless of language, to train the model.

### Discussion on Generalization of Data Complexity

In order to compare the generalization of the models, we use two datasets: Doclaynet is a more diverse dataset than Publaynet in layout. As shown in Tab.6, if we train on Publaynet and predict on Doclaynet, both Mask-RCNN and our GNN-based model drop badly in mAP. But if we use the model trained on Doclaynet to predict on Publaynet, our model only slightly decreases, while Mask-RCNN drops significantly. This shows that as long as our model has been trained on complex and diverse layouts, it can also transfer well to simple layouts.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline mAP & Mask-RCNN-res50[38] & Mask-RCNN-resnext10[38] & Faster-RCNN-resnext10[38] & YOLO-v5x6[38] & ours-Small \\ \hline Page-Header & 0.719 & 0.7 & 0.720 & 0.679 & **0.796** \\ Caption & 0.684 & 0.715 & 0.701 & 0.777 & **0.809** \\ Formula & 0.601 & 0.634 & 0.635 & 0.662 & **0.726** \\ Page-footer & 0.616 & 0.593 & 0.589 & 0.611 & **0.920** \\ Section-Header & 0.676 & 0.693 & 0.684 & 0.746 & **0.824** \\ Footnote & 0.709 & 0.718 & 0.737 & **0.772** & 0.625 \\ Title & 0.767 & 0.804 & 0.799 & **0.827** & 0.643 \\ Text & 0.846 & 0.858 & 0.854 & **0.881** & 0.827 \\ \hline Total-paragraph & 0.702 & 0.7143 & 0.7148 & 0.744 & **0.771** \\ \hline Params & - & - & 60M & 140.7M & **19.95M** \\ \hline \end{tabular} \end{table} Table 4: Performance on Doclaynet on paragraph categories.

\begin{table} \begin{tabular}{|c|c c|c c|c c|} \hline F1 & \multicolumn{2}{c|}{val-doclaynet} & \multicolumn{2}{c|}{val-publaynet} & \multicolumn{2}{c|}{val-chineseData} \\ & node & edge & node & edge & node & edge \\ \hline train-doclaynet & 0.94670 & 0.97267 & 0.95774 & **0.97171** & 0.85892 & 0.94851 \\ train-publaynet & 0.72742 & 0.86069 & 0.98729 & 0.99302 & 0.76158 & 0.8839 \\ train-chineseData & **0.88256** & **0.92285** & **0.96289** & 0.96847 & 0.97295 & 0.98872 \\ \hline \end{tabular} \end{table} Table 5: Comparison results for the discussion on generalization of language.
### Ablation Experiments

#### 4.4.1 Node-definition

For the node definition, we tested whether the box information uses 4 values or 8, a res-18 versus a res-50 backbone, DGCNN versus GravNet as the GNN, and whether the GNN input and output are residually connected. The impact of each factor was compared on four tasks for a more comprehensive assessment in Tab.7. The addition of residual connections generally leads to better performance than non-residual networks. The use of a bigger backbone (res-50 vs. res-18) improves the results for all tasks. When comparing different GNN models, DGCNN tends to perform better than GravNet. However, the importance of the box information is unclear.

#### 4.4.2 Edge-definition

As for the edge definition, we tested four factors: whether to use the relationship, ROPE, polar and node-class features, as shown in Tab.8. The four tasks are again WL (word-labeling), WG (word-grouping), EL (entity-labeling), and EG (entity-linking). The results indicate that adding the relation and node-class edge features improves the performance on all metrics except EG. Adding the ROPE edge feature improves WL and WG but not EL and EG. Adding the polar edge feature does not significantly affect the results.

#### 4.4.3 Input Process & Loss

Tab.9 shows the results of various ablation experiments conducted on the FUNSD dataset using different input processes and loss functions. The first row represents the baseline model, which uses a 400*400 image size and cross-entropy (CE) loss. The effect of image padding is investigated by padding the input images. The results indicate that this modification does not significantly affect the word-labeling metric but has a negative impact on word-grouping, entity-labeling, and entity-linking. Increasing the image size to 800*608 pixels leads to some improvement in the word-grouping and entity-labeling metrics, but the word-labeling and entity-linking scores remain relatively low. Adding a contrastive loss term in addition to the cross-entropy loss leads to some improvement on all four tasks. Overall, padding may not be helpful, larger image sizes may not lead to significant improvements, and the addition of a contrastive loss may be beneficial.

\begin{table} \begin{tabular}{|c|c|c c|c c|} \hline \multicolumn{2}{|c|}{mAP} & \multicolumn{2}{c|}{val-publaynet} & \multicolumn{2}{c|}{val-doclaynet} \\ & & maskrcnn-res50 & ours-Small & maskrcnn-res50 & ours-Small \\ \hline \multirow{2}{*}{train-publaynet} & Section-Header & 0.87 & 0.913 & 0.32 & **0.484** \\ & Text & 0.96 & 0.954 & 0.42 & 0.338 \\ \hline \multirow{2}{*}{train-doclaynet} & Section-Header & 0.53 & **0.811** & 0.68 & 0.796 \\ & Text & 0.77 & 0.769 & 0.84 & 0.827 \\ \hline \end{tabular} \end{table} Table 6: Comparison results for the discussion on generalization of data complexity.

\begin{table} \begin{tabular}{|c c c c|c c|c c|} \hline box-info & backbone & GNN & GNN-res & WL & WG & EL & EG \\ \hline 4 & res-18 & D & - & 0.65189 & 0.86821 & 0.76458 & 0.67797 \\ 8 & res-18 & D & - & 0.63914\(\downarrow\) & 0.86248\(\downarrow\) & 0.77573 & 0.69695 \\ 8 & res-18 & D & + & 0.65579 & 0.86399\(\downarrow\) & 0.77144 & 0.70509 \\ 8 & res-50 & D & - & 0.66774 & 0.87487 & 0.77873 & 0.71613 \\ 8 & res-18 & G & - & 0.58861\(\downarrow\) & 0.84461\(\downarrow\) & 0.71397\(\downarrow\) & 0.60032\(\downarrow\) \\ \hline \end{tabular} \end{table} Table 7: Ablation experiments on the node definition on FUNSD: WL, WG, EL, EG mean word-labeling, word-grouping, entity-labeling, entity-linking; D and G mean DGCNN and GravNet; box-info 4 means \(x_{min},y_{min},w,h\), and 8 means \(x_{min},y_{min},x_{max},y_{max},x_{center},y_{center},w,h\).
\begin{table} \begin{tabular}{|c c c c|c c|c c|} \hline relation[8] & ROPE[11] & polar[13] & node-class[13] & WL & WG & EL & EG \\ \hline \multirow{4}{*}{+} & & & & 0.63478 & 0.86348 & 0.77873 & 0.70071 \\ & & & + & 0.63914 & 0.86248\(\downarrow\) & 0.77573\(\downarrow\) & 0.69695\(\downarrow\) \\ \cline{1-1} & & & + & 0.63972 & 0.90039 & 0.77916 & 0.74759 \\ \cline{1-1} \cline{2-10} & + & + & + & 0.63374\(\downarrow\) & 0.89869\(\downarrow\) & 0.77358\(\downarrow\) & 0.74925 \\ \hline \end{tabular} \end{table} Table 8: Ablation experiments on the edge definition on FUNSD: WL, WG, EL, EG mean word-labeling, word-grouping, entity-labeling, entity-linking.

#### 4.4.4 Final Best Model

We study the combinations of a variety of designs for each component of the GNN model and justify their effectiveness on our document layout analysis tasks. As seen in Tab.10, a scaled-up image size, relationship proposals, and a larger CNN backbone are useful for improving accuracy, while the contrastive loss and GravNet are not. The importance of the other factors is ambiguous. DGCNN is a better choice than GravNet. Based on the detailed ablation experiments, the best design combination is ours-Large. Since we enlarge the input image size, more memory is consumed. Therefore, on the large datasets Doclaynet and Publaynet, we report the suboptimal configuration ours-Small instead.

## 5 Conclusion and Future Work

In this paper, we propose a language-independent GNN framework for document layout analysis tasks. Our proposed model, Paragraph2Graph, uses a pre-trained CNN to encode image features and incorporates 2D OCR text coordinates and image features as node features in a graph. We use a dynamic graph convolutional neural network (DGCNN) to update the graph based on these features and include edge features based on relationships. To improve efficiency, we also propose a graph sampling method based on the layout modality. Our method does not require a tokenizer or language model and can easily generalize to multilingual documents without modifications. We show that our method can achieve competitive results on three public datasets with fewer parameters. There are several potential improvements and attempts we leave for future work: (1) We have experimented with only a few common GNNs, while torch-geometric [39] officially offers nearly 60 related algorithms; some of them may be a better substitute for DGCNN. (2) The backbone for image features can be pre-trained on document data, making it better at capturing document image features. (3) Similar to LayoutLM, which has reasonable pre-training tasks to improve the fusion of different modalities, our model could also be pre-trained with image reconstruction tasks, such as MAE. (4) Our model does not perform well on grouping tables and figures; future research is needed to expand its generality to these important document layout components.

## Acknowledgments

We would like to acknowledge Xinxing Pan, Weihao Li, Binbin Yang, and Hailong Zhang for their helpful suggestions.
\begin{table} \begin{tabular}{|c|c c|c c|c c|} \hline Name & backbone & image-size & WL & WG & EL & EG \\ \hline ours-Small & res-18 & 400*400 & 0.66579 & 0.90148 & 0.78216 & 0.75522 \\ ours-Large & res-50 & 800*608 & **0.68933** & **0.91483** & **0.82504** & **0.77031** \\ \hline \end{tabular} \end{table} Table 10: Best models.

\begin{table} \begin{tabular}{|c c c|c c|c c|} \hline image-size & image-pad & loss & WL & WG & EL & EG \\ \hline 400*400 & & CE & 0.61066 & 0.86352 & 0.78216 & 0.72722 \\ 400*400 & + & CE & 0.63914 & 0.86248\(\downarrow\) & 0.77573\(\downarrow\) & 0.69695\(\downarrow\) \\ 800*608 & + & CE & 0.68795 & 0.87782 & 0.81947 & 0.70487\(\downarrow\) \\ 400*400 & + & CE+Con & 0.64178 & 0.87105 & 0.77658\(\downarrow\) & 0.71704\(\downarrow\) \\ \hline \end{tabular} \end{table} Table 9: Ablation experiments on the input process and loss on FUNSD: WL, WG, EL, EG mean word-labeling, word-grouping, entity-labeling, entity-linking; CE and Con mean the CE loss and the contrastive loss.
2308.09352
Subshifts of finite symbolic rank
The definition of subshifts of finite symbolic rank is motivated by the finite rank measure-preserving transformations which have been extensively studied in ergodic theory. In this paper we study subshifts of finite symbolic rank as essentially minimal Cantor systems. We show that minimal subshifts of finite symbolic rank have finite topological rank, and conversely, every minimal Cantor system of finite topological rank is either an odometer or conjugate to a minimal subshift of finite symbolic rank. We characterize the class of all minimal Cantor systems conjugate to a rank-$1$ subshift and show that it is dense but not generic in the Polish space of all minimal Cantor systems. Within some different Polish coding spaces of subshifts we also show that the rank-1 subshifts are dense but not generic. Finally we study topological factors of minimal subshifts of finite symbolic rank. We show that every infinite odometer and every irrational rotation is the maximal equicontinuous factor of a minimal subshift of symbolic rank $2$, and that a subshift factor of a minimal subshift of finite symbolic rank has finite symbolic rank.
Su Gao, Ruiwen Li
2023-08-18T07:19:07Z
http://arxiv.org/abs/2308.09352v3
# Subshifts of finite symbolic rank

###### Abstract

The definition of subshifts of finite symbolic rank is motivated by the finite rank measure-preserving transformations which have been extensively studied in ergodic theory. In this paper we study subshifts of finite symbolic rank as essentially minimal Cantor systems. We show that minimal subshifts of finite symbolic rank have finite topological rank, and conversely, every minimal Cantor system of finite topological rank is either an odometer or conjugate to a minimal subshift of finite symbolic rank. We characterize the class of all minimal Cantor systems conjugate to a rank-1 subshift and show that it is dense but not generic in the Polish space of all minimal Cantor systems. Within some different Polish coding spaces of subshifts we also show that the rank-1 subshifts are dense but not generic. Finally we study topological factors of minimal subshifts of finite symbolic rank. We show that every infinite odometer and every irrational rotation is the maximal equicontinuous factor of a minimal subshift of symbolic rank 2, and that a subshift factor of a minimal subshift of finite symbolic rank has finite symbolic rank.

2020 Mathematics Subject Classification: Primary 37B10; Secondary 54H15

The first author acknowledges the partial support of his research by the National Natural Science Foundation of China (NSFC) grants 12250710128 and 12271263.

Measure-preserving transformations of arbitrary finite rank (where there is a uniform finite bound on the number of stacks used in every step of the cutting-and-stacking process) have also been extensively studied. In particular, they are known to have different models, some of which are of geometric nature, and some symbolic. Ferenczi [20] gave an excellent survey over a quarter of a century ago.

Motivated by the symbolic models of finite rank measure-preserving transformations, researchers began to consider these dynamical systems in the topological setting. For example, rank-one subshifts (rank-one transformations without the measure) are known to have zero topological entropy, and Bourgain [9] proved Sarnak's conjecture for minimal rank-one subshifts. Other topological properties of rank-one subshifts have been considered in [1], [11], [17], [22], [23], [24], [25], etc. In [26] the authors started a systematic study of subshifts of finite symbolic rank. Among other things, it was proved that they all have zero topological entropy. Much of [26] focused on the combinatorial properties of infinite words that generate subshifts of finite symbolic rank, and not so much on the topological properties of the subshifts themselves. In particular, one of the main questions left unsolved was how the symbolic rank relates to the more well-established notion of topological rank.

The topological rank was defined and extensively studied in [13], [15], [8], [7], and [16], etc. There is also a related notion of \(\mathcal{S}\)-adic subshift, which was introduced by Ferenczi in [19] and extensively studied in [14], [5], [34], [6], [12], [18], etc. For \(\mathcal{S}\)-adic subshifts there is an associated notion of alphabet rank, and its relationship with the topological rank is mostly clarified. For example, it was shown in [12] that every minimal Cantor system is either an odometer or conjugate to an \(\mathcal{S}\)-adic subshift of finite alphabet rank. Conversely, every \(\mathcal{S}\)-adic subshift of finite alphabet rank has finite topological rank.
In this paper we prove a number of results on the topological properties of subshifts of finite symbolic rank. The subshifts we consider all have alphabet \(\{0,1\}\), and therefore they are subshifts of \(2^{\mathbb{Z}}\). We show that any minimal subshift of finite symbolic rank has finite topological rank (Theorem 6.7) and conversely, any minimal Cantor system of finite topological rank is either an odometer or conjugate to a minimal subshift of finite symbolic rank (Theorem 6.9). In our proofs of these facts we do not need the \(\mathcal{S}\)-adic representations of the minimal Cantor systems, but as the reader can tell, the results obtained are similar to those regarding \(\mathcal{S}\)-adic subshifts.

We also consider various classes of Cantor systems and characterize their descriptive complexity. In particular, the class of all Cantor systems can be coded by the Polish space \(\mathrm{Aut}(\mathcal{C})\) (this is defined and discussed in Subsection 2.3), and we show that the classes of all essentially minimal Cantor systems, minimal Cantor systems, as well as those whose topological rank has a fixed bound, all form \(G_{\delta}\) subspaces, and hence are Polish (Section 3). On the other hand, the class of all minimal Cantor systems conjugate to a rank-1 subshift is dense but not generic in the Polish space of all minimal Cantor systems (Proposition 7.1). Additionally we consider two more Polish spaces of subshifts as done in [35] and show that the class of minimal subshifts conjugate to a rank-1 subshift is dense in these spaces but is not generic in either of them. This is in contrast with the situation in the measure-theoretic setting. Nevertheless, together with the results of [35], our results show that the class of minimal subshifts conjugate to one of symbolic rank \(\leq 2\) is generic in both of these Polish coding spaces of subshifts.

We also consider topological factors of minimal subshifts of finite symbolic rank (Section 8). We improve Theorem 6.9 by showing that a minimal subshift of finite topological rank \(\geq 2\) must be of finite symbolic rank itself (Corollary 8.4), and is not just conjugate to a subshift of finite symbolic rank as guaranteed by Theorem 6.9. However, the symbolic rank of the subshift might be much greater than that of the one to which it is conjugate. We show that any infinite odometer and any irrational rotation is the maximal equicontinuous factor of a minimal subshift of symbolic rank \(2\) (Theorems 8.6 and 8.7), which is in contrast with known results about rank-1 subshifts.

The rest of the paper is organized as follows. In Section 2 we give the preliminaries on descriptive set theory, topological dynamical systems, (essentially) minimal Cantor systems, ordered Bratteli diagrams, Kakutani-Rohlin partitions, subshifts, and what it means for a subshift to have finite symbolic rank. In Section 3 we compute the descriptive complexity of the classes of essentially minimal Cantor systems and those with topological rank \(\leq n\) for some \(n\geq 1\), by giving topological characterizations of these classes within the Polish space of all Cantor systems. In Section 4 we give a topological characterization of all minimal Cantor systems conjugate to a rank-1 subshift. In Section 5 we characterize minimal subshifts of finite symbolic rank as exactly those admitting a proper finite rank construction with bounded spacer parameter. This will be a basic tool in the study of minimal subshifts of finite symbolic rank.
Section 6 is the main section of this paper, in which we prove the main theorems (Theorems 6.7 and 6.9), which clarify the relationship between the notions of symbolic rank and topological rank. We give some examples to show that our results are in some sense optimal. We also prove a result connecting the notion of finite alphabet rank for \(\mathcal{S}\)-adic subshifts with the notion of finite symbolic rank. This gives an alternative proof of Theorem 6.9 via the main result of [12]. In Section 7 we consider the density and the genericity of the class of all minimal subshifts conjugate to a rank-1 subshift in various Polish coding spaces of Cantor systems and subshifts. Finally, in Section 8 we consider topological factors of minimal subshifts of finite symbolic rank.

_Acknowledgements._ We thank Fabien Durand, Filipe Garcia-Ramos, Samuel Petite, and Todor Tsankov for useful discussions on the topics of this paper. Particularly, we thank Samuel Petite for pointing out reference [18] to us and Filipe Garcia-Ramos for bringing reference [35] to our attention.

## 2. Preliminaries

### Descriptive set theory

In the rest of the paper we will be using some concepts, terminology and notation from descriptive set theory. In this subsection we review these concepts, terminology and notation, which can be found in [31].

A _Polish space_ is a topological space that is separable and completely metrizable. Let \(X\) be a Polish space and \(d\) be a compatible complete metric on \(X\). Let \(K(X)\) be the space of all compact subsets of \(X\), and let \(d_{H}\) be the _Hausdorff metric_ defined on \(K(X)\) as follows. For \(A\in K(X)\) and \(x\in X\), let \(d(x,A)=\inf\{d(x,y)\,:\,y\in A\}\). Now for \(A,B\in K(X)\), let

\[d_{H}(A,B)=\max\left\{\sup\{d(x,B)\,:\,x\in A\},\sup\{d(y,A)\,:\,y\in B\}\right\}.\]

Then \(d_{H}\) is a metric on \(K(X)\) that makes \(K(X)\) a Polish space. Moreover, if \(X\) is compact, then \(K(X)\) is compact.

Let \(X\) be a Polish space. A subset \(A\) of \(X\) is \(G_{\delta}\) if \(A\) is the intersection of countably many open subsets of \(X\). A subspace \(Y\) of \(X\) is Polish iff \(Y\) is a \(G_{\delta}\) subset of \(X\). We say that a subset \(A\) of \(X\) is _generic_, or the elements of \(A\) are _generic_ in \(X\), if \(A\) contains a dense \(G_{\delta}\) subset of \(X\). More generally, by a transfinite induction on \(1\leq\alpha<\omega_{1}\), we can define the _Borel hierarchy_ on \(X\) as follows:

\[\begin{array}{rcl}\mathbf{\Sigma}^{0}_{1}&=&\mbox{the collection of all open subsets of $X$}\\ \mathbf{\Pi}^{0}_{1}&=&\mbox{the collection of all closed subsets of $X$}\\ \mathbf{\Sigma}^{0}_{\alpha}&=&\left\{\bigcup_{n\in\mathbb{N}}A_{n}\,:\,A_{n}\in\mathbf{\Pi}^{0}_{\beta_{n}}\mbox{ for some $\beta_{n}<\alpha$}\right\}\\ \mathbf{\Pi}^{0}_{\alpha}&=&\left\{X\setminus A\,:\,A\in\mathbf{\Sigma}^{0}_{\alpha}\right\}\end{array}\]

We also define \(\mathbf{\Delta}^{0}_{\alpha}=\mathbf{\Sigma}^{0}_{\alpha}\cap\mathbf{\Pi}^{0}_{\alpha}\). Thus \(\mathbf{\Delta}^{0}_{1}\) is the collection of all clopen subsets of \(X\). With this notation, \(\bigcup_{\alpha<\omega_{1}}\mathbf{\Sigma}^{0}_{\alpha}=\bigcup_{\alpha<\omega_{1}}\mathbf{\Pi}^{0}_{\alpha}=\bigcup_{\alpha<\omega_{1}}\mathbf{\Delta}^{0}_{\alpha}\) is the collection of all _Borel_ subsets of \(X\). The collection of all \(G_{\delta}\) subsets of \(X\) is exactly \(\mathbf{\Pi}^{0}_{2}\).
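To illustrate these classes with a standard example, consider \(X=\mathbb{R}\) and the set \(\mathbb{Q}\) of rational numbers. Then

\[\mathbb{Q}=\bigcup_{q\in\mathbb{Q}}\{q\}\in\mathbf{\Sigma}^{0}_{2},\]

being a countable union of closed sets, but \(\mathbb{Q}\) is not \(G_{\delta}\) by the Baire category theorem; its complement, the set of irrational numbers, is a dense \(G_{\delta}\), so the irrational numbers are generic in \(\mathbb{R}\).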
Let \(X\) be a topological space. Recall that a subset \(A\) of \(X\) is _nowhere dense_ in \(X\) if the interior of the closure of \(A\) is empty. \(A\) is _meager_ in \(X\) if \(A\subseteq\bigcup_{n\in\mathbb{N}}B_{n}\) where each \(B_{n}\) is nowhere dense in \(X\). \(A\) is _nonmeager_ in \(X\) if it is not meager in \(X\); \(A\) is _comeager_ in \(X\) if \(X\setminus A\) is meager in \(X\). \(A\) has the _property of Baire_ if there is an open subset \(U\) of \(X\) such that \(A\triangle U=(A\setminus U)\cup(U\setminus A)\) is meager. In a Polish space, all Borel subsets have the property of Baire. Meagerness, nonmeagerness, and comeagerness can be defined similarly relative to an open subset of a Polish space. The following lemma is folklore in descriptive set theory.

**Lemma 2.1**.: _Let \(X,Y\) be Polish spaces, \(V\subseteq Y\) be nonempty open, \(\alpha<\omega_{1}\), and \(A\subseteq X\times Y\). Then the following hold._

1. _If_ \(A\) _is_ \(\mathbf{\Sigma}^{0}_{\alpha}\)_, then the set_ \[\{x\in X\,:\,\,\{y\in V\,:\,(x,y)\in A\}\mbox{ is nonmeager in $V$}\}\] _is_ \(\mathbf{\Sigma}^{0}_{\alpha}\) _in_ \(X\)_._
2. _If_ \(A\) _is_ \(\mathbf{\Pi}^{0}_{\alpha}\)_, then the set_ \[\left\{x\in X\,:\;\;\left\{y\in V\,:\,(x,y)\in A\right\}\text{ is comeager in }V\right\}\] _is_ \(\mathbf{\Pi}^{0}_{\alpha}\) _in_ \(X\)_._

Proof.: Note that (ii) follows from (i) for each \(\alpha<\omega_{1}\). We prove both by a transfinite induction on \(1\leq\alpha<\omega_{1}\).

First suppose \(A\) is \(\mathbf{\Sigma}^{0}_{1}\), i.e., \(A\) is open in \(X\times Y\). Without loss of generality assume \(A\cap(X\times V)\) is nonempty. Fix \(x_{0}\in X\) and suppose \(\left\{y\in V\,:\,(x_{0},y)\in A\right\}\) is nonmeager in \(V\). In particular, there is \(y_{0}\in V\) such that \((x_{0},y_{0})\in A\). Thus there is a basic open set \(U\times W\) in \(X\times V\) such that \((x_{0},y_{0})\in U\times W\subseteq A\cap(X\times V)\). This shows that for all \(x\in U\), the set \(\left\{y\in V\,:\,(x,y)\in A\right\}\) contains \(W\), and therefore is nonmeager in \(V\). Thus the set \(\left\{x\in X\,:\;\;\left\{y\in V\,:\,(x,y)\in A\right\}\text{ is nonmeager in }V\right\}\) is open.

For a general \(\alpha>1\), suppose \(A\) is \(\mathbf{\Sigma}^{0}_{\alpha}\). Thus \(A=\bigcup_{n\in\mathbb{N}}B_{n}\) where each \(B_{n}\) is \(\mathbf{\Pi}^{0}_{\beta_{n}}\) for some \(\beta_{n}<\alpha\). Again assume \(A\cap(X\times V)\) is nonempty. Fix any \(x\in X\) such that \(\left\{y\in V\,:\,(x,y)\in A\right\}\) is nonmeager in \(V\). Since \(A=\bigcup_{n}B_{n}\), we have \(\left\{y\in V\,:\,(x,y)\in A\right\}=\bigcup_{n}\left\{y\in V\,:\,(x,y)\in B_{n}\right\}\). Since this set is nonmeager in \(V\), at least one of the sets \(\left\{y\in V\,:\,(x,y)\in B_{n}\right\}\) is nonmeager in \(V\), and therefore there is a basic open \(V^{\prime}\subseteq V\) such that \(\left\{y\in V^{\prime}\,:\,(x,y)\in B_{n}\right\}\) is comeager in \(V^{\prime}\). This shows that the set \(S=\left\{x\in X\,:\;\left\{y\in V\,:\,(x,y)\in A\right\}\text{ is nonmeager in }V\right\}\) can be written as

\[\bigcup_{V^{\prime}\subseteq V}\bigcup_{n}\left\{x\in X\,:\;\;\left\{y\in V^{\prime}\,:\,(x,y)\in B_{n}\right\}\text{ is comeager in }V^{\prime}\right\},\]

where the outer union ranges over basic open subsets \(V^{\prime}\) of \(V\). By the inductive hypothesis, each set

\[\left\{x\in X\,:\;\left\{y\in V^{\prime}\,:\,(x,y)\in B_{n}\right\}\text{ is comeager in }V^{\prime}\right\}\]

is \(\mathbf{\Pi}^{0}_{\beta_{n}}\) for some \(\beta_{n}<\alpha\). Thus \(S\) is a countable union of \(\mathbf{\Pi}^{0}_{\beta_{n}}\) sets, and therefore is \(\mathbf{\Sigma}^{0}_{\alpha}\).
### Topological dynamical systems

The concepts we review in this subsection are standard and can be found in any standard text on topological dynamics, e.g., [4] and [33]. By a _topological dynamical system_ we mean a pair \((X,T)\), where \(X\) is a compact metrizable space and \(T:X\to X\) is a homeomorphism. If \((X,T)\) is a topological dynamical system and \(Y\subseteq X\) satisfies \(TY=Y\), then \(Y\) is called a \(T\)_-invariant_ subset. If \((X,T)\) and \((Y,S)\) are topological dynamical systems and \(\varphi:X\to Y\) is a continuous surjection satisfying \(\varphi\circ T=S\circ\varphi\), then \(\varphi\) is called a _factor map_ and \((Y,S)\) is called a (_topological_) _factor_ of \((X,T)\). If in addition \(\varphi\) is a homeomorphism, then it is called a (_topological_) _conjugacy (map)_ and we say that \((X,T)\) and \((Y,S)\) are (_topologically_) _conjugate_. If \((X,T)\) is a topological dynamical system and \(\rho\) is a compatible metric on \(X\), then \(\rho\) is necessarily complete since \(X\) is compact. Let \((X,T)\) be a topological dynamical system and fix \(\rho\) a compatible metric on \(X\). We say that \((X,T)\) is _equicontinuous_ if for all \(\epsilon>0\) there is \(\delta>0\) such that for all \(n\in\mathbb{Z}\), if \(\rho(x,y)<\delta\) then \(\rho(T^{n}x,T^{n}y)<\epsilon\). Since \(X\) is compact, equicontinuity is a topological notion and does not depend on the compatible metric \(\rho\). Every topological dynamical system \((X,T)\) has a _maximal equicontinuous factor_ (or _MEF_), i.e., an equicontinuous factor \((Y,S)\) with the factor map \(\varphi\) such that if \((Z,G)\) is another equicontinuous factor of \((X,T)\) with factor map \(\psi\) then there is a factor map \(\theta:(Y,S)\to(Z,G)\) such that \(\psi=\theta\circ\varphi\). If \((X,T)\) is a topological dynamical system and \(x\in X\), the _orbit_ of \(x\) is defined as \(\{T^{k}x\,:\,k\in\mathbb{Z}\}\). If \(A\) is a clopen subset of \(X\), the set of _return times_ of \(x\) to \(A\) is defined as \(\operatorname{Ret}_{A}(x)=\{n\in\mathbb{Z}\,:\,T^{n}x\in A\}\). We regard \(\operatorname{Ret}_{A}(x)\) as an element of \(2^{\mathbb{Z}}=\{0,1\}^{\mathbb{Z}}\) by identifying it with its characteristic function.

### Minimal Cantor systems

Recall that a _Cantor space_ is a zero-dimensional, perfect, compact metrizable space. Let \(X\) be a Cantor space and \(T:X\to X\) be a homeomorphism. Then \((X,T)\) is called a _Cantor system_. \(T\) is _minimal_ if every orbit is dense, i.e., for all \(x\in X\), \(\{T^{k}x\,:\,k\in\mathbb{Z}\}\) is dense in \(X\). A _minimal Cantor system_ is a pair \((X,T)\) where \(X\) is a Cantor space and \(T:X\to X\) is a minimal homeomorphism. Let \(\mathcal{C}=2^{\mathbb{N}}=\{0,1\}^{\mathbb{N}}\) be the infinite product of the discrete space \(\{0,1\}\) with the product topology. Then every Cantor space is homeomorphic to \(\mathcal{C}\). Let \(d_{\mathcal{C}}\) be the canonical compatible complete metric on \(\mathcal{C}\), i.e., for \(x,y\in\mathcal{C}\), if \(x\neq y\) then \[d_{\mathcal{C}}(x,y)=2^{-n},\text{ where }n\in\mathbb{N}\text{ is the least such that }x(n)\neq y(n).\] Let \[\operatorname{Aut}(\mathcal{C})=\{T\,:\,T\text{ is a homeomorphism from }\mathcal{C}\text{ to }\mathcal{C}\}\] be equipped with the compact-open topology, or equivalently the sup-norm metric, i.e., for \(T,S\in\operatorname{Aut}(\mathcal{C})\), \[d(T,S)=\sup\{d_{\mathcal{C}}(Tx,Sx)\,:\,x\in\mathcal{C}\}.\] Then \(\operatorname{Aut}(\mathcal{C})\) is a Polish space (cf., e.g., [32]).
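To make the metrics above concrete, here is a small Python sketch (purely illustrative; the function names and the "finite window" encoding are ours, not from the references). It evaluates \(d_{\mathcal{C}}\) on finite prefixes and computes the sup-norm distance \(d(T,S)\) exactly for homeomorphisms of \(\mathcal{C}\) that only read and modify a fixed finite window of coordinates, since for such maps the supremum is attained on one of the finitely many window patterns.

```python
from itertools import product

def d_C(x, y):
    """The canonical metric on 2^N, evaluated on two equal-length prefixes:
    2^(-n) for the least n where the prefixes disagree, 0 if they agree."""
    for n, (a, b) in enumerate(zip(x, y)):
        if a != b:
            return 2.0 ** (-n)
    return 0.0  # agreement on the window; the true distance is <= 2^(-len(x))

def flip(k):
    """The homeomorphism of 2^N flipping coordinate k (it is its own inverse)."""
    def T(x):
        return x[:k] + (1 - x[k],) + x[k + 1:]
    return T

def dist(T, S, window):
    """The sup-norm distance d(T,S); exact when both maps only depend on and
    modify coordinates < window, since the sup is attained on some pattern."""
    return max(d_C(T(x), S(x)) for x in product((0, 1), repeat=window))

identity = lambda x: x
print(dist(flip(0), identity, window=6))  # 1.0     = 2^0
print(dist(flip(5), identity, window=6))  # 0.03125 = 2^(-5)
```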
Let \(M(\mathcal{C})\) be the set of all minimal homeomorphisms of \(\mathcal{C}\). Then for a \(T\in\operatorname{Aut}(\mathcal{C})\), \(T\in M(\mathcal{C})\) iff for all nonempty clopen \(U\subseteq\mathcal{C}\), there is \(N\in\mathbb{N}\) such that \(\mathcal{C}=\bigcup_{-N\leq n\leq N}T^{n}U\). This characterization implies that \(M(\mathcal{C})\) is a \(G_{\delta}\) subset of \(\operatorname{Aut}(\mathcal{C})\), and hence \(M(\mathcal{C})\) is also a Polish space. \(M(\mathcal{C})\) is our coding space for all minimal Cantor systems. We will also consider essentially minimal Cantor systems. A Cantor system \((X,T)\) is _essentially minimal_ if it contains a unique minimal set, i.e., a nonempty closed \(T\)-invariant set which is minimal among all such sets.

### Ordered Bratteli diagrams

The concepts and terminology reviewed in this subsection are from [29], [27] and [13]. Some notations are from [12]. Recall that a _Bratteli diagram_ is an infinite graph \((V,E)\) with the following properties: * The vertex set \(V\) is decomposed into pairwise disjoint nonempty finite sets \(V=V_{0}\cup V_{1}\cup V_{2}\cup\cdots\), where \(V_{0}\) is a singleton \(\{v_{0}\}\); * The edge set \(E\) is decomposed into pairwise disjoint nonempty finite sets \(E=E_{1}\cup E_{2}\cup\cdots\); * For any \(n\geq 1\), each \(e\in E_{n}\) connects a vertex \(u\in V_{n-1}\) with a vertex \(v\in V_{n}\). In this case we write \(\mathsf{s}(e)=u\) and \(\mathsf{r}(e)=v\). Thus \(\mathsf{s},\mathsf{r}:E\to V\) are maps such that \(\mathsf{s}(E_{n})\subseteq V_{n-1}\) and \(\mathsf{r}(E_{n})\subseteq V_{n}\) for all \(n\geq 1\). * \(\mathsf{s}^{-1}(v)\neq\varnothing\) for all \(v\in V\) and \(\mathsf{r}^{-1}(v)\neq\varnothing\) for all \(v\in V\setminus V_{0}\). An _ordered Bratteli diagram_ is a Bratteli diagram \((V,E)\) together with a partial ordering \(\preceq\) on \(E\) so that edges \(e\) and \(e^{\prime}\) are \(\preceq\)-comparable if and only if \(\mathsf{r}(e)=\mathsf{r}(e^{\prime})\). A finite or infinite _path_ in a Bratteli diagram \((V,E)\) is a sequence \((e_{1},e_{2},\dots)\) where \(\mathsf{r}(e_{i})=\mathsf{s}(e_{i+1})\) for all \(i\geq 1\). Given a Bratteli diagram \((V,E)\) and \(0\leq n<m\), let \(E_{n,m}\) be the set of all finite paths connecting vertices in \(V_{n}\) and those in \(V_{m}\). If \(p=(e_{n+1},\dots,e_{m})\in E_{n,m}\), define \(\mathsf{r}(p)=\mathsf{r}(e_{m})\) and \(\mathsf{s}(p)=\mathsf{s}(e_{n+1})\). If in addition the Bratteli diagram is partially ordered by \(\preceq\), then we also define a partial ordering \(p\preceq^{\prime}q\) for \(p=(e_{n+1},\dots,e_{m}),q=(f_{n+1},\dots,f_{m})\in E_{n,m}\) as either \(p=q\) or there exists \(n+1\leq i\leq m\) such that \(e_{i}\neq f_{i}\), \(e_{i}\preceq f_{i}\) and \(e_{j}=f_{j}\) for all \(i<j\leq m\). For an arbitrary strictly increasing sequence \((n_{k})_{k\geq 0}\) of natural numbers with \(n_{0}=0\), we define the _contraction_ or _telescoping_ of a Bratteli diagram \((V,E)\) with respect to \((n_{k})_{k\geq 0}\) as \((V^{\prime},E^{\prime})\) where \(V^{\prime}_{k}=V_{n_{k}}\) for \(k\geq 0\) and \(E^{\prime}_{k}=E_{n_{k-1},n_{k}}\) for \(k\geq 1\). If in addition the given Bratteli diagram is ordered, then by contraction or telescoping we also obtain an ordered Bratteli diagram \((V^{\prime},E^{\prime},\preceq^{\prime})\) with the order \(\preceq^{\prime}\) defined above. The inverse of the telescoping process is called _microscoping_.
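The telescoping operation is easy to carry out mechanically. In the following sketch (an illustration under our own encoding, not notation from the cited references), a level \(E_{n}\) is a list of \((\mathsf{s},\mathsf{r})\) pairs with parallel edges repeated, and contraction along \(0=n_{0}<n_{1}<\cdots\) is computed by composing paths between the chosen levels.

```python
def compose(levels):
    """Edges of the contracted level: all paths through the consecutive
    levels in `levels`, collapsed to (source, range) pairs. Parallel edges
    are kept with multiplicity as repeated list entries."""
    paths = [[e] for e in levels[0]]
    for level in levels[1:]:
        paths = [p + [e] for p in paths for e in level
                 if p[-1][1] == e[0]]            # r(e_i) = s(e_{i+1})
    return [(p[0][0], p[-1][1]) for p in paths]  # (s(p), r(p))

def telescope(E, cuts):
    """Contract the diagram E = [E_1, E_2, ...] along 0 = n_0 < n_1 < ...,
    where cuts = [n_0, n_1, ...]; returns the levels E'_k = E_{n_{k-1},n_k}."""
    return [compose(E[cuts[k - 1]:cuts[k]]) for k in range(1, len(cuts))]

# V_0 = {0}, V_1 = {0, 1}, V_2 = {0, 1}; edges as (source, range) pairs:
E1 = [(0, 0), (0, 0), (0, 1)]          # two parallel edges into vertex 0
E2 = [(0, 0), (1, 0), (0, 1), (1, 1)]
print(telescope([E1, E2], [0, 2]))     # the six paths of E_{0,2}, one edge each
```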
Two ordered Bratteli diagrams are _equivalent_ if one can be obtained from the other by a sequence of telescoping and microscoping processes. A Bratteli diagram \((V,E)\) is _simple_ if there is a strictly increasing sequence \((n_{k})_{k\geq 0}\) of natural numbers with \(n_{0}=0\) such that the telescoping \((V^{\prime},E^{\prime})\) of \((V,E)\) with respect to \((n_{k})_{k\geq 0}\) satisfies that for all \(n\geq 1\), \(u\in V^{\prime}_{n-1}\) and \(v\in V^{\prime}_{n}\), there is \(e\in E^{\prime}_{n}\) with \(\mathsf{s}(e)=u\) and \(\mathsf{r}(e)=v\). This is equivalent to the property that for any \(n\geq 1\) there is \(m>n\) such that every pair of vertices \(u\in V_{n}\) and \(v\in V_{m}\) are connected by a finite path. It is obvious that if a Bratteli diagram \(B\) is simple, then any Bratteli diagram equivalent to it is also simple. Given a Bratteli diagram \(B=(V,E)\), define \[X_{B}=\{(e_{n})_{n\geq 1}\,:\,e_{n}\in E_{n},\mathsf{r}(e_{n})=\mathsf{s}(e_{n +1})\text{ for all }n\geq 1\}.\] Since \(X_{B}\) is a subspace of the product space \(\prod_{n\geq 1}E_{n}\), we equip \(X_{B}\) with the subspace topology of the product topology on \(\prod_{n\geq 1}E_{n}\). An ordered Bratteli diagram \(B=(V,E,\preceq)\) is _essentially simple_ if there are unique elements \(x_{\max}=(e_{n})_{n\geq 1},x_{\min}=(f_{n})_{n\geq 1}\in X_{B}\) such that for every \(n\geq 1\), \(e_{n}\) is a \(\preceq\)-maximal element and \(f_{n}\) is a \(\preceq\)-minimal element. \(B=(V,E,\preceq)\) is _simple_ if \((V,E)\) is simple and \(B\) is essentially simple. If an ordered Bratteli diagram \(B\) is (essentially) simple, then any ordered Bratteli diagram equivalent to it is also (essentially) simple. Given an essentially simple ordered Bratteli diagram \(B=(V,E,\preceq)\), we define the _Vershik map_\(\lambda_{B}:X_{B}\to X_{B}\) as follows: \(\lambda_{B}(x_{\max})=x_{\min}\); if \((e_{n})_{n\geq 1}\in X_{B}\) and \((e_{n})_{n\geq 1}\neq x_{\max}\), then let \[\lambda_{B}((e_{1},e_{2},\ldots,e_{k},e_{k+1},\ldots))=(f_{1},f_{2},\ldots,f_ {k},e_{k+1},\ldots),\] where \(k\) is the least such that \(e_{k}\) is not \(\preceq\)-maximal, \(f_{k}\) is the \(\preceq\)-successor of \(e_{k}\), and \((f_{1},\ldots,f_{k-1})\) is the unique path from \(v_{0}\) to \(\mathsf{s}(f_{k})=\mathsf{r}(f_{k-1})\) such that \(f_{i}\) is \(\preceq\)-minimal for each \(1\leq i\leq k-1\). Then \((X_{B},\lambda_{B})\) is an essentially minimal Cantor system ([29]), which we call the _Bratteli-Vershik system_ generated by \(B\). If \(B=(V,E,\preceq)\) is a simple ordered Bratteli diagram and \(X_{B}\) is infinite, then \((X_{B},\lambda_{B})\) is a minimal Cantor system ([27]). If two simple ordered Bratteli diagrams are equivalent, then the Bratteli-Vershik systems generated by them are conjugate, with the conjugacy map sending \(x_{\min}\) to \(x_{\min}\). An essentially minimal Cantor system \((X,T)\) is of _finite topological rank_ if it is conjugate to a Bratteli-Vershik system given by an essentially simple ordered Bratteli diagram \((V,E,\preceq)\) where \((|V_{n}|)_{n\geq 1}\) is bounded by a natural number \(d\). The minimum possible value of \(d\) is called the _topological rank_ of the system, and is denoted by \(\operatorname{rank}_{\operatorname{top}}(X,T)\). An essentially minimal Cantor system \((X,T)\) with topological rank \(1\) is called an _(infinite) odometer_. It is easy to see that any ordered Bratteli diagram for such an odometer is necessarily simple, and therefore an odometer is in fact minimal. 
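To see the Vershik map in action, consider the simplest simple ordered Bratteli diagram: one vertex and two edges at every level, with the two edges into each vertex ordered as \(0\prec 1\). Then \(X_{B}\) is identified with \(2^{\mathbb{N}}\), and \(\lambda_{B}\) becomes "addition of \(1\) with carry" on binary digits read least-significant-first, i.e., the \(2\)-adic odometer. The sketch below (illustrative only) applies \(\lambda_{B}\) to a finite initial segment of a path, which already determines the same-length initial segment of the image.

```python
def vershik_dyadic(path):
    """lambda_B for the diagram with one vertex and two edges (0 < 1) per
    level: replace the first non-maximal edge by its successor and reset all
    earlier edges to the minimal edge; the all-maximal prefix goes to the
    all-minimal prefix, matching lambda_B(x_max) = x_min."""
    out = list(path)
    for k, e in enumerate(out):
        if e == 0:              # first edge that is not <=-maximal
            out[k] = 1          # its successor
            return tuple(out)   # later edges are unchanged
        out[k] = 0              # earlier (maximal) edges become minimal
    return tuple(out)

print(vershik_dyadic((1, 1, 0, 1)))  # (0, 0, 1, 1): 11 + 1 = 12 in base 2,
                                     # with digits least significant first
```

Under the identification of an infinite path with the binary digits of a \(2\)-adic integer, this map is exactly \(x\mapsto x+1\), which is the origin of the name odometer.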
The infinite odometers coincide with all equicontinuous minimal Cantor systems.

### Kakutani-Rohlin partitions

The concepts and terminology reviewed in this subsection are again from [29] and [27], with some notations from [12]. For an essentially minimal Cantor system \((X,T)\), a _Kakutani-Rohlin partition_ is a partition \[\mathcal{P}=\{T^{j}B(k)\,:\,1\leq k\leq d,\,0\leq j<h(k)\}\] of clopen sets, where \(d,h(1),\ldots,h(d)\) are positive integers and \(B(1),\ldots,B(d)\) are clopen subsets of \(X\) such that \[\bigcup_{k=1}^{d}T^{h(k)}B(k)=\bigcup_{k=1}^{d}B(k).\] The set \(B(\mathcal{P})=\bigcup_{k=1}^{d}B(k)\) is called the _base_ of \(\mathcal{P}\). For \(1\leq k\leq d\), the subpartition \(\mathcal{P}(k)=\{T^{j}B(k)\,:\,0\leq j<h(k)\}\) is the \(k\)_-th tower_ of \(\mathcal{P}\), which has _base_ \(B(k)\) and _height_ \(h(k)\). The following is a basic fact regarding the construction of Kakutani-Rohlin partitions. **Lemma 2.2** ([29], Lemma 4.1).: _Let \((X,T)\) be an essentially minimal Cantor system, \(Y\) be the unique minimal set, \(y\in Y\) and \(Z\) be a clopen subset of \(X\) containing \(y\), and let \(\mathcal{Q}\) be a finite partition of \(X\) into clopen sets. Then there is a Kakutani-Rohlin partition \(\mathcal{P}\) such that \(y\in B(\mathcal{P})=Z\) and \(\mathcal{P}\) refines \(\mathcal{Q}\), i.e., every element of \(\mathcal{Q}\) is a union of elements of \(\mathcal{P}\)._ The proof of the lemma gives a canonical construction of Kakutani-Rohlin partitions. Specifically, given \(y\in Y\) and a clopen set \(Z\) containing \(y\), the function \(Z\to\mathbb{N}\), \(x\mapsto n_{x}\), where \(n_{x}\) is the least positive integer \(n\) such that \(T^{n}x\in Z\), is continuous. Thus by the compactness of \(Z\), \(x\mapsto n_{x}\) is bounded. For any \(h>0\), let \(A_{h}=\{x\in Z\,:\,n_{x}=h\}\). Let \(h(1),\ldots,h(d)\) enumerate all \(h>0\) where \(A_{h}\neq\varnothing\). Then \(\{T^{j}A_{h(k)}\,:\,1\leq k\leq d,\,0\leq j<h(k)\}\) is a Kakutani-Rohlin partition with base \(Z\). Applying Lemma 2.2 repeatedly, one quickly obtains the following theorem. **Theorem 2.3** ([29], Theorem 4.2).: _For any essentially minimal Cantor system \((X,T)\) and \(x\) in the unique minimal set, there exist_ * _positive integers_ \(d_{n}\) _for_ \(n\geq 0\)_, with_ \(d_{0}=1\)_,_ * _positive integers_ \(h_{n}(k)\) _for_ \(n\geq 0\) _and_ \(1\leq k\leq d_{n}\)_, with_ \(h_{0}(1)=1\)_,_ * _Kakutani-Rohlin partitions_ \(\mathcal{P}_{n}\) _for_ \(n\geq 0\)_, where_ \[\mathcal{P}_{n}=\{T^{j}B_{n}(k)\,:\,1\leq k\leq d_{n},\,0\leq j<h_{n}(k)\},\] _with_ \(B_{0}(1)=X\)_,_ _such that for all \(n\geq 0\),_ 1. _each_ \(\mathcal{P}_{n+1}\) _refines_ \(\mathcal{P}_{n}\)_,_ 2. \(B(\mathcal{P}_{n+1})\subseteq B(\mathcal{P}_{n})\)_,_ 3. \(\bigcap_{n}B(\mathcal{P}_{n})=\{x\}\)_,_ 4. \(\bigcup_{n}\mathcal{P}_{n}\) _generates the topology of_ \(X\)_._ We call the system of Kakutani-Rohlin partitions in Theorem 2.3 a _nested system_. From such a system we define an ordered Bratteli diagram following [29]. For each \(n\geq 0\), let \[V_{n}=\{\mathcal{P}_{n}(k)\,:\,1\leq k\leq d_{n}\}.\] For \(n\geq 1\), \(1\leq k\leq d_{n}\), \(1\leq\ell\leq d_{n-1}\) and \(0\leq j<h_{n}(k)\), there is an edge \(e_{j}\in E_{n}\) connecting \(\mathcal{P}_{n}(k)\) to \(\mathcal{P}_{n-1}(\ell)\) if \(T^{j}B_{n}(k)\subseteq B_{n-1}(\ell)\), i.e., one edge for each time the \(k\)-th tower of \(\mathcal{P}_{n}\) enters the base of the \(\ell\)-th tower of \(\mathcal{P}_{n-1}\).
Then, if \(e_{j_{1}},\ldots,e_{j_{m}}\) are all edges in \(E_{n}\) connecting \(\mathcal{P}_{n}(k)\) to some element of \(V_{n-1}\), we set the partial ordering \(\preceq\) by letting \(e_{j}\preceq e_{j^{\prime}}\) iff \(j\leq j^{\prime}\). It was proved in [29] that this ordered Bratteli diagram is essentially simple and that the Bratteli-Vershik system generated by this ordered Bratteli diagram is conjugate to \((X,T)\), with the conjugacy map sending \(x_{\min}\) to \(x\). If in addition \((X,T)\) is a minimal Cantor system, then the resulting ordered Bratteli diagram is necessarily simple. Thus we have described a procedure to obtain an ordered Bratteli diagram given an essentially minimal Cantor system \((X,T)\) and a point \(x\) in the unique minimal set. It was proved in [29] that the equivalence class of the ordered Bratteli diagram does not depend on the choice of the Kakutani-Rohlin partitions in the procedure, i.e., all ordered Bratteli diagrams obtained through this procedure are equivalent. Conversely, if \(B=(V,E,\preceq)\) is an essentially simple ordered Bratteli diagram and \((X_{B},\lambda_{B})\) is the Bratteli-Vershik system generated by \(B\), then there is a nested system of Kakutani-Rohlin partitions for \((X_{B},\lambda_{B})\) and \(x_{\min}\) such that the ordered Bratteli diagram \(B^{\prime}\) defined above is equivalent to \(B\). Thus, if an essentially minimal Cantor system \((X,T)\) has finite topological rank \(d\), then there is a nested system of Kakutani-Rohlin partitions \(\{\mathcal{P}_{n}\}_{n\geq 1}\) where \(d_{n}=d\) for all \(n\geq 1\).

### Subshifts

The concepts and notation reviewed in this subsection are from [22] and [26]. We also prove a basic fact. By a _finite word_ we mean an element of \(2^{<\mathbb{N}}=\{0,1\}^{<\mathbb{N}}=\bigcup_{N\in\mathbb{N}}\{0,1\}^{N}\). If \(v\) is a finite word, we regard it as a function with domain \(\{0,1,\ldots,N-1\}\) for some \(N\in\mathbb{N}\), and call \(N\) its _length_, denoted as \(|v|=N\). The _empty word_ is the unique finite word with length \(0\) (or the empty domain), and we denote it as \(\varnothing\). If \(v\) is a finite word and \(s,t\) are integers such that \(0\leq s\leq t\leq|v|-1\), then \(v\!\upharpoonright\![s,t]\) denotes the finite word \(u\) of length \(t-s+1\) where for \(0\leq i<t-s+1\), \(u(i)=v(s+i)\). A word of the form \(v\!\upharpoonright\![0,s]\) is called a _prefix_ or an _initial segment_ of \(v\). An _end segment_ or a _suffix_ of \(v\) is \(v\!\upharpoonright\![s,|v|-1]\) for some \(0\leq s\leq|v|-1\). The empty word is both a prefix and a suffix of any word. Any word is also both a prefix and a suffix of itself. If \(u,v\) are finite words, then \(uv\) denotes the finite word \(w\) of length \(|u|+|v|\) where \(w\!\upharpoonright\![0,|u|-1]=u\) and \(w\!\upharpoonright\![|u|,|u|+|v|-1]=v\). For finite words \(u,v\) with \(|u|\leq|v|\), we say that \(u\) is a _subword_ of \(v\) if there is \(0\leq s\leq|v|-|u|\) such that \(u=v\!\upharpoonright\![s,s+|u|-1]\); when this happens we also say that \(u\) _occurs_ in \(v\) at position \(s\). An _infinite word_ is an element of \(2^{\mathbb{N}}\), and a _bi-infinite word_ is an element of \(2^{\mathbb{Z}}\). For any infinite word \(V\in 2^{\mathbb{N}}\) and integers \(s,t\) with \(0\leq s\leq t\), the notions \(V\!\upharpoonright\![s,t]\), \(V\!\upharpoonright\!s\), finite subwords and their occurrences are similarly defined.
For any bi-infinite word \(x\in 2^{\mathbb{Z}}\) and integers \(s,t\) with \(s\leq t\), the notions \(x\!\upharpoonright\![s,t]\), finite subwords and their occurrences are also similarly defined. We consider the _Bernoulli shift_ on \(2^{\mathbb{Z}}=\{0,1\}^{\mathbb{Z}}\), which is the homeomorphism \(\sigma:2^{\mathbb{Z}}\to 2^{\mathbb{Z}}\) defined by \[\sigma(x)(n)=x(n+1).\] Since \(2^{\mathbb{Z}}\) is homeomorphic to \(\mathcal{C}=2^{\mathbb{N}}\), \((2^{\mathbb{Z}},\sigma)\) is a Cantor system. A _subshift_ \(X\) is a closed \(\sigma\)-invariant subset of \(2^{\mathbb{Z}}\). By a subshift we also refer to the Cantor system \((X,\sigma\!\upharpoonright\!X)\) or simply \((X,\sigma)\) when there is no danger of confusion. If a subshift \(X\) is finite, we say that it is _degenerate_; otherwise it is _nondegenerate_. The following simple fact is folklore. **Lemma 2.4**.: _A nondegenerate subshift is not equicontinuous. In particular, it is not conjugate to any infinite odometer._ Proof.: Let \(d\) be the standard metric on \(2^{\mathbb{Z}}\), i.e., for any \(x,y\in 2^{\mathbb{Z}}\), \(d(x,y)=0\) if \(x=y\); for \(x\neq y\), \(d(x,y)=2^{-n}\) where \(n\) is the least nonnegative integer such that either \(x(n)\neq y(n)\) or \(x(-n)\neq y(-n)\). Let \(X\) be a nondegenerate subshift. Then \(d\upharpoonright X\) is still a compatible metric on \(X\). Assume towards a contradiction that \(X\) is equicontinuous. Then there exists \(\delta>0\) such that for all \(x,y\in X\) with \(d(x,y)<\delta\), we have \(d(\sigma^{n}(x),\sigma^{n}(y))<1/2\) for all \(n\in\mathbb{Z}\). Since \(X\) is infinite but compact, there exist \(x,y\in X\) such that \(x\neq y\) and \(d(x,y)<\delta\). But from \(d(\sigma^{n}(x),\sigma^{n}(y))<1/2\) for all \(n\in\mathbb{Z}\), we conclude that \(x(n)=y(n)\) for all \(n\in\mathbb{Z}\), and thus \(x=y\), a contradiction. If \(V\in 2^{\mathbb{N}}\) is an infinite word, let \[X_{V}=\{x\in 2^{\mathbb{Z}}\,:\,\,\,\text{every finite subword of $x$ is a subword of $V$}\}.\] Then \((X_{V},\sigma)\) is a subshift and we call it the subshift _generated by \(V\)_. For any \(V\in 2^{\mathbb{N}}\), \(X_{V}\) is always nonempty. Note that for any \(x\in X_{V}\) and finite subword \(u\) of \(x\), \(u\) must occur in \(V\) infinitely many times. We say that \(V\) is _recurrent_ if every finite subword of \(V\) occurs in \(V\) infinitely many times. When \(V\) is recurrent, \(X_{V}\) is either finite or a Cantor set, and \(X_{V}\) is finite iff \(V\) is _periodic_, i.e., there is a finite word \(v\) such that \(V=vvv\cdots\). Thus a nondegenerate subshift generated by a recurrent \(V\) is a Cantor system. It is well known that all infinite odometers form a dense \(G_{\delta}\) in the space \(M(\mathcal{C})\) of all minimal Cantor systems. We give a proof of this fact in Corollary 3.4 and Proposition 3.8.

### Subshifts of finite symbolic rank

Some of the concepts and notation reviewed in this subsection are from [22] and [26], and some are new. Subshifts of finite symbolic rank are defined from infinite words admitting constructions of finite symbolic rank; these definitions are inspired by the cutting-and-stacking processes that were used to construct measure-preserving transformations of finite rank ([20]). We first define (symbolic) rank-1 subshifts, which are also called _Ferenczi subshifts_ in [2] to honor the fact that Ferenczi popularized the concept in [20]. An infinite (symbolic) rank-1 word \(V\) is defined as follows.
Given a sequence of positive integers \(\{r_{n}\}_{n\geq 0}\) with \(r_{n}>1\) for all \(n\geq 0\) (called the _cutting parameter_) and a doubly-indexed sequence of nonnegative integers \(\{s_{n,i}\}_{n\geq 0,1\leq i<r_{n}}\) (called the _spacer parameter_), a _(symbolic) rank-1 generating sequence_ given by the parameters is the recursively defined sequence of finite words: \[\begin{array}{rcl}v_{0}&=&0,\\ v_{n+1}&=&v_{n}1^{s_{n,1}}v_{n}\cdots v_{n}1^{s_{n,r_{n}-1}}v_{n}.\end{array}\] Since \(v_{n}\) is a prefix of \(v_{n+1}\), it makes sense to define \(V=\lim_{n}v_{n}\). This \(V\) is called a _(symbolic) rank-1 word_ and \(X_{V}\) is called a _(symbolic) rank-1 subshift_. To generalize and define (symbolic) rank-\(n\) subshifts we use the following concepts and notation. Let \(\mathcal{F}\) be the set of all finite words in \(2^{<\mathbb{N}}\) that begin and end with \(0\). For a finite set \(S\subseteq\mathcal{F}\) and a finite word \(w\in\mathcal{F}\), a _building_ of \(w\) from \(S\) consists of a sequence \((v_{1},\ldots,v_{k+1})\) of elements of \(S\) and a sequence \((s_{1},\ldots,s_{k})\) of nonnegative integers for some \(k\geq 1\) such that \[w=v_{1}1^{s_{1}}v_{2}\cdots v_{k}1^{s_{k}}v_{k+1}.\] The sequence \((s_{1},\ldots,s_{k})\) is called the _spacer parameter_ of the building; it is _bounded by \(M\)_ if \(s_{1},\ldots,s_{k}\leq M\). We say that _every word of \(S\) is used_ in this building if \(\{v_{1},\ldots,v_{k+1}\}=S\). When there is a building of \(w\) from \(S\), we also say that \(w\) is _built from \(S\)_; when the building consists of \((v_{1},\ldots,v_{k+1})\) and \((s_{1},\ldots,s_{k})\), we also say that \(w\) is built from \(S\) _starting with \(v_{1}\)_. These notions can be similarly defined when the finite word \(w\) is replaced by an infinite word \(W\). For \(n\geq 1\), a _(symbolic) rank-\(n\) generating sequence_ is a doubly-indexed sequence \(\{v_{i,j}\}_{i\geq 0,1\leq j\leq n_{i}}\) of finite words satisfying, for all \(i\geq 0\), * \(n_{i}\leq n\), * \(v_{0,j}=0\) for all \(1\leq j\leq n_{0}\), * \(v_{i+1,1}\) is built from \(S_{i}\triangleq\{v_{i,1},\ldots,v_{i,n_{i}}\}\) starting with \(v_{i,1}\), * \(v_{i+1,j}\) is built from \(S_{i}\) for all \(2\leq j\leq n_{i+1}\). A _(symbolic) rank-\(n\) construction_ is the (symbolic) rank-\(n\) generating sequence \(\{v_{i,j}\}_{i\geq 0,1\leq j\leq n_{i}}\) together with exactly one building of \(v_{i+1,j}\) from \(S_{i}\) (for \(v_{i+1,1}\), the building should start with \(v_{i,1}\)) for all \(i\geq 0,1\leq j\leq n_{i+1}\). We call \(S_{i}\) the \(i\)_-th level_ of the construction. The _spacer parameter_ of the rank-\(n\) construction is the collection of all spacer parameters of all the buildings in the construction; it is _bounded_ if there is an \(M>0\) such that all the spacer parameters of all the buildings in the construction are bounded by \(M\). The (symbolic) rank-\(n\) construction is _proper_ if for all \(i\geq 0\), \(n_{i}=n\) and for all \(1\leq j\leq n\), every word of \(S_{i}\) is used in the building of each \(v_{i+1,j}\). Since each \(v_{i,1}\) is a prefix of \(v_{i+1,1}\), it makes sense to define \(V=\lim_{i}v_{i,1}\).
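For concreteness, the recursion defining a rank-\(1\) generating sequence can be run mechanically; the following sketch (with arbitrarily chosen cutting and spacer parameters, for illustration only) prints the first few words \(v_{0},v_{1},v_{2}\).

```python
def rank1_words(cutting, spacers, levels):
    """v_0 = '0' and v_{n+1} = v_n 1^{s_{n,1}} v_n ... v_n 1^{s_{n,r_n-1}} v_n,
    where cutting[n] = r_n > 1 and spacers[n] = (s_{n,1}, ..., s_{n,r_n-1})."""
    v, seq = "0", ["0"]
    for n in range(levels):
        r, s = cutting[n], spacers[n]
        assert r > 1 and len(s) == r - 1
        v = v + "".join("1" * s[i] + v for i in range(r - 1))
        seq.append(v)
    return seq

# r_0 = 3 with spacers (1, 0); r_1 = 2 with spacer (2,):
for w in rank1_words([3, 2], [(1, 0), (2,)], levels=2):
    print(w)   # prints 0, then 0100, then 0100110100
```

Note that each printed word is indeed a prefix of the next, which is what makes the limit \(V=\lim_{n}v_{n}\) well defined.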
Given a rank-\(n\) construction with associated rank-\(n\) generating sequence \(\{v_{i,j}\}_{i\geq 0,1\leq j\leq n_{i}}\), we define the set of all _expected subwords_ of \(v_{i,j}\), for \(i\geq 0\) and \(1\leq j\leq n_{i}\), inductively as follows: for each \(v_{0,j}\), the set of all of its expected subwords is \(\{v_{0,j}\}=\{0\}\); for \(i\geq 0\), the set of all expected subwords of \(v_{i+1,j}\) consists of * \(v_{i+1,j}\), * \(u_{1},\ldots,u_{k+1}\in S_{i}\), where \((u_{1},\ldots,u_{k+1})\) and \((a_{1},\ldots,a_{k})\) give the building of \(v_{i+1,j}\) from \(S_{i}\), * all expected subwords of \(u_{1},\ldots,u_{k+1}\in S_{i}\). Finally define the set of all _expected subwords_ of \(V=\lim_{i}v_{i,1}\) to be the union of the sets of all expected subwords of \(v_{i,1}\) for all \(i\geq 0\). Without loss of generality, we may assume that for all \(i\geq 0\), all finite words in \(S_{i}\) are expected subwords of \(V\). It follows immediately from the construction that for all \(i\geq 0\), the infinite word \(V\) is built from \(S_{i}\) starting with \(v_{i,1}\). Let \(w\in\mathcal{F}\) and let \(S,T\subseteq\mathcal{F}\) be finite. Suppose \(w\) is built from \(S\) and that every word in \(S\) is built from \(T\). Then by _composing_ the building of \(w\) from \(S\) with the buildings of each element of \(S\) from \(T\), we obtain a building of \(w\) from \(T\), and thus \(w\) is also built from \(T\). Given a rank-\(n\) construction with associated rank-\(n\) generating sequence \(\{v_{i,j}\}_{i\geq 0,1\leq j\leq n_{i}}\), and given \(i<i^{\prime}\), for all \(1\leq j^{\prime}\leq n_{i^{\prime}}\) we obtain a building of \(v_{i^{\prime},j^{\prime}}\) from \(S_{i}\) by composing the buildings of elements of \(S_{\iota}\) from \(S_{\iota-1}\) for all \(i+1\leq\iota\leq i^{\prime}\). With this repeated composition process, we may obtain, for any strictly increasing sequence \(\{i_{k}\}_{k\geq 0}\) with \(i_{0}=0\), a rank-\(n\) construction with associated rank-\(n\) generating sequence \(\{v_{i_{k},j}\}_{k\geq 0,1\leq j\leq n_{i_{k}}}\). Since \(\lim_{i}v_{i,1}=\lim_{k}v_{i_{k},1}\), the resulting infinite words are the same. We call this process _telescoping_. An infinite word \(V\) is called a _(symbolic) rank-\(n\) word_ if it has a rank-\(n\) construction but not a rank-\((n-1)\) construction. A subshift \(X\) has _finite symbolic rank_ if for some \(n\geq 1\), \(X=X_{V}\) where \(V\) has a rank-\(n\) construction; the smallest such \(n\) is called the _symbolic rank_ of \(X\), and is denoted \(\operatorname{rank}_{\operatorname{symb}}(X)=\operatorname{rank}_{\operatorname{symb}}(X,\sigma)\). By definition, if \(\operatorname{rank}_{\operatorname{symb}}(X)=n\) then there is a rank-\(n\) word \(V\) such that \(X=X_{V}\).

## 3. Some computations of descriptive complexity

In this section we compute the descriptive complexity of various classes of Cantor systems. We first show that the class of all essentially minimal Cantor systems is a \(G_{\delta}\) subset of \(\operatorname{Aut}(\mathcal{C})\). Then we give a characterization of all essentially minimal Cantor systems with bounded topological rank. As a consequence, we show that for each \(n\geq 1\), the class of all essentially minimal Cantor systems of topological rank \(\leq n\) is a \(G_{\delta}\) subset of \(\operatorname{Aut}(\mathcal{C})\). This implies that for each \(n\geq 1\), the class of all minimal Cantor systems of topological rank \(\leq n\) is a \(G_{\delta}\) subset of \(M(\mathcal{C})\).
We first give a characterization of essential minimality for a Cantor system \((X,T)\). We say a subset \(A\) of \(X\) has the _finite covering property_ if there is some \(N\in\mathbb{N}\) such that \(\bigcup_{-N\leq n\leq N}T^{n}A=X\). **Theorem 3.1**.: _Let \((X,T)\) be a Cantor system and let \(\rho\leq 1\) be a compatible metric on \(X\). Then the following are equivalent:_ 1. \((X,T)\) _is essentially minimal._ 2. _For any clopen set_ \(A\) _of_ \(X\)_, if_ \(A\) _has the finite covering property then there is a clopen subset_ \(B\) _of_ \(A\) _with the finite covering property such that_ \(\operatorname{diam}(B)\leq\operatorname{diam}(A)/2\)_._ Proof.: First assume \((X,T)\) is essentially minimal. Suppose \(A\) is a clopen subset of \(X\) with the finite covering property, i.e., for some \(N\in\mathbb{N}\) we have \(\bigcup_{-N\leq n\leq N}T^{n}A=X\). Let \(x\) be an arbitrary element of the unique minimal set of \(X\). Then for some \(-N\leq n\leq N\), \(x\in T^{n}A\), where \(T^{n}A\) is still clopen. Let \(Y\) be a clopen subset of \(T^{n}A\) containing \(x\) such that \(\operatorname{diam}(T^{-n}Y)\leq\operatorname{diam}(A)/2\). By Theorem 1.1 of [29], \(\bigcup_{k\in\mathbb{Z}}T^{k}Y=X\). By the compactness of \(X\), \(Y\) has the finite covering property. Let \(B=T^{-n}Y\). Then \(B\subseteq A\), \(\operatorname{diam}(B)\leq\operatorname{diam}(A)/2\) and \(B\) also has the finite covering property. Conversely, assume (2) holds. Starting with \(A_{0}=X\) and repeatedly applying (2), we obtain a decreasing sequence \(\{A_{n}\}_{n\geq 0}\) of clopen subsets of \(X\) such that \(\operatorname{diam}(A_{n})\leq 2^{-n}\) and each \(A_{n}\) has the finite covering property. Let \(x\) be the unique element of \(\bigcap_{n}A_{n}\). Then any clopen subset \(B\) of \(X\) containing \(x\) has the finite covering property. By Theorem 1.1 of [29], \((X,T)\) is essentially minimal. Let \(E(\mathcal{C})\) be the set of all essentially minimal homeomorphisms of \(\mathcal{C}\). **Corollary 3.2**.: \(E(\mathcal{C})\) _is a \(G_{\delta}\) subset of \(\operatorname{Aut}(\mathcal{C})\), hence is a Polish space._ Proof.: Note that for any fixed clopen subset \(A\) of \(\mathcal{C}\), the statement that \(A\) has the finite covering property for \((\mathcal{C},T)\) defines an open condition on \(T\in\operatorname{Aut}(\mathcal{C})\). Thus condition (2) of Theorem 3.1 gives a \(G_{\delta}\) condition for \(T\in\operatorname{Aut}(\mathcal{C})\). We next give a characterization of essentially minimal Cantor systems of bounded topological rank. **Theorem 3.3**.: _Let \((X,T)\) be an essentially minimal Cantor system, \(\rho\leq 1\) be a compatible complete metric on \(X\), and \(n\geq 1\). The following are equivalent:_ 1. \((X,T)\) _has topological rank_ \(\leq n\)_._ 2. _There exists_ \(x\in X\) _such that for all_ \(\epsilon>0\)_, there is a Kakutani-Rohlin partition_ \(\mathcal{P}\) _with no more than_ \(n\) _many towers such that_ \(\operatorname{diam}(A)<\epsilon\) _for all_ \(A\in\mathcal{P}\)_,_ \(\operatorname{diam}(B(\mathcal{P}))<\epsilon\)_, and_ \(x\in B(\mathcal{P})\)_._ 3. _For any clopen subset_ \(Z\) _of_ \(X\) _with the finite covering property, for any finite partition_ \(\mathcal{Q}\) _into clopen sets, there is a Kakutani-Rohlin partition_ \(\mathcal{P}\) _with no more than_ \(n\) _many towers such that_ \(B(\mathcal{P})\subseteq Z\)_,_ \(\operatorname{diam}(B(\mathcal{P}))\leq\operatorname{diam}(Z)/2\) _and_ \(\mathcal{P}\) _refines_ \(\mathcal{Q}\)_._ Proof.: We first show (1)\(\Rightarrow\)(2).
Suppose \((X,T)\) has topological rank \(\leq n\). Then there is an essentially simple ordered Bratteli diagram \(B=(V,E,\preceq)\) such that \((X,T)\) is conjugate to the Bratteli-Vershik system \((X_{B},\lambda_{B})\) generated by \(B\), and for all \(k\geq 1\), \(|V_{k}|\leq n\). It suffices to verify that (2) holds for \((X_{B},\lambda_{B})\). Let \(y=x_{\min}\). Each level \(V_{k}\) of \(B\) gives rise to a Kakutani-Rohlin partition \(\mathcal{P}_{k}\), where each set in \(\mathcal{P}_{k}\) corresponds to a path from \(V_{0}\) to a vertex in \(V_{k}\), and \(B(\mathcal{P}_{k})\) consists of all the basic open sets corresponding to the minimal paths from \(V_{0}\) to each vertex of \(V_{k}\). Since \(y\) is the unique minimal infinite path, we have \(\bigcap_{k}B(\mathcal{P}_{k})=\{y\}\). Let \(\eta\leq 1\) be the standard metric on \(X_{B}\). Let \(\epsilon>0\). Then there is a large enough \(k\) such that all the minimal paths from \(V_{0}\) to the vertices of \(V_{k}\) agree on the first \(k^{\prime}\) many edges, where \(k^{\prime}<k\) and \(2^{-k^{\prime}}<\epsilon\). This implies that \(\operatorname{diam}(B(\mathcal{P}_{k}))\leq 2^{-k^{\prime}}<\epsilon\). Also, for all \(A\in\mathcal{P}_{k}\), \(\operatorname{diam}(A)\leq 2^{-k}<2^{-k^{\prime}}<\epsilon\). \(\mathcal{P}_{k}\) has \(|V_{k}|\leq n\) many towers. Since \(y\in B(\mathcal{P}_{k})\), we have that \(\mathcal{P}_{k}\) witnesses (2) for \((X_{B},\lambda_{B})\). Next we show (2)\(\Rightarrow\)(3). Suppose (2) holds for \(x\in X\). We note that for any \(m\in\mathbb{Z}\), (2) also holds for \(T^{m}x\). This is because the property described in (2) is invariant under topological conjugacy, and \(T^{m}:(X,T)\to(X,T)\) is a topological conjugacy sending \(x\) to \(T^{m}x\). Let \(Z\) be a clopen subset of \(X\) with the finite covering property. Then \(Z\) meets every orbit in \(X\), and therefore there is \(x\in Z\) such that the property in (2) holds. Let \(\mathcal{Q}\) be a finite partition of \(X\) into clopen sets. Let \(\delta>0\) be the infimum of \(\rho(y,z)\) where \(y,z\) are from different elements of \(\mathcal{Q}\). Let \(\xi>0\) be such that \(x\in\{y\in X\,:\,\rho(x,y)<\xi\}\subseteq Z\). Let \(\epsilon=\min\{\delta,\xi,\operatorname{diam}(Z)/2\}>0\). Let \(\mathcal{P}\) be a Kakutani-Rohlin partition with no more than \(n\) many towers such that \(\operatorname{diam}(A)<\epsilon\) for all \(A\in\mathcal{P}\), \(\operatorname{diam}(B(\mathcal{P}))<\epsilon\), and \(x\in B(\mathcal{P})\). Then \(B(\mathcal{P})\subseteq\{y\in X\,:\,\rho(x,y)<\xi\}\subseteq Z\), \(\operatorname{diam}(B(\mathcal{P}))<\epsilon\leq\operatorname{diam}(Z)/2\), and \(\mathcal{P}\) refines \(\mathcal{Q}\) because for any \(A\in\mathcal{P}\), \(\operatorname{diam}(A)<\delta\). Finally we prove (3)\(\Rightarrow\)(1). Assume \((X,T)\) is essentially minimal and (3) holds. Note that the base of any Kakutani-Rohlin partition has the finite covering property. By applying (3) repeatedly, we obtain a system of Kakutani-Rohlin partitions \(\{\mathcal{P}_{k}\}_{k\geq 0}\) so that \(\mathcal{P}_{0}=\{X\}\), each \(\mathcal{P}_{k+1}\) refines \(\mathcal{P}_{k}\), \(B(\mathcal{P}_{k+1})\subseteq B(\mathcal{P}_{k})\), \(\operatorname{diam}(B(\mathcal{P}_{k+1}))\leq\operatorname{diam}(B(\mathcal{P}_{k}))/2\), and each \(\mathcal{P}_{k}\) consists of no more than \(n\) many towers; moreover, by letting \(\mathcal{Q}\) at stage \(k\) be the partition generated by the first \(k\) sets in a fixed countable clopen basis of \(X\), we may guarantee that \(\bigcup_{k}\mathcal{P}_{k}\) generates the topology of \(X\). Let \(x\) be the unique element of \(\bigcap_{k}B(\mathcal{P}_{k})\). Then any clopen subset of \(X\) containing \(x\) has the finite covering property.
By Theorem 1.1 of [29], \(x\) is in the unique minimal set of \(X\). Now \(\{\mathcal{P}_{k}\}_{k\geq 0}\) is a nested system of Kakutani-Rohlin partitions in the sense of Theorem 2.3, which gives rise to an ordered Bratteli diagram for \((X,T)\) with each level consisting of no more than \(n\) vertices. Thus \(\operatorname{rank}_{\operatorname{top}}(X,T)\leq n\). **Corollary 3.4**.: _For any \(n\geq 1\), the set of all essentially minimal \(T\in\operatorname{Aut}(\mathcal{C})\) with topological rank \(\leq n\) is a \(G_{\delta}\) subset of \(E(\mathcal{C})\). Similarly, for any \(n\geq 1\), the set of all minimal \(T\in\operatorname{Aut}(\mathcal{C})\) with topological rank \(\leq n\) is a \(G_{\delta}\) subset of \(M(\mathcal{C})\)._ Proof.: This follows immediately from clause (3) of Theorem 3.3. We also have the following immediate corollary regarding the descriptive complexity of (essentially) minimal Cantor systems with finite topological rank. **Corollary 3.5**.: _The set of all essentially minimal \(T\in\operatorname{Aut}(\mathcal{C})\) with finite topological rank is a \(\mathbf{\Sigma}^{0}_{3}\) subset of \(E(\mathcal{C})\). Similarly, the set of all minimal \(T\in\operatorname{Aut}(\mathcal{C})\) with finite topological rank is a \(\mathbf{\Sigma}^{0}_{3}\) subset of \(M(\mathcal{C})\)._ The proofs of the implications (1)\(\Rightarrow\)(2)\(\Rightarrow\)(3) in Theorem 3.3 also give the following immediate corollary. **Corollary 3.6**.: _Let \((X,T)\) be an essentially minimal Cantor system, \(\rho\leq 1\) be a compatible metric on \(X\), and \(n\geq 1\). Suppose \((X,T)\) has topological rank \(n\). Then the following hold:_ 1. _There exists_ \(x\in X\) _such that for all_ \(\epsilon>0\)_, there is a Kakutani-Rohlin partition_ \(\mathcal{P}\) _with exactly_ \(n\) _many towers such that_ \(\operatorname{diam}(A)<\epsilon\) _for all_ \(A\in\mathcal{P}\)_,_ \(\operatorname{diam}(B(\mathcal{P}))<\epsilon\)_, and_ \(x\in B(\mathcal{P})\)_._ 2. _For any clopen subset_ \(Z\) _of_ \(X\) _with the finite covering property, for any finite partition_ \(\mathcal{Q}\) _into clopen sets, there is a Kakutani-Rohlin partition_ \(\mathcal{P}\) _with exactly_ \(n\) _many towers such that_ \(B(\mathcal{P})\subseteq Z\)_,_ \(\operatorname{diam}(B(\mathcal{P}))\leq\operatorname{diam}(Z)/2\) _and_ \(\mathcal{P}\) _refines_ \(\mathcal{Q}\)_._ We also note the following corollary of the proof of Theorem 3.3. **Corollary 3.7**.: _Let \((X,T)\) be a minimal Cantor system, \(\rho\leq 1\) be a compatible metric on \(X\), and \(n\geq 1\). Then the following are equivalent:_ 1. \((X,T)\) _has topological rank_ \(\leq n\)_._ 2. _There exists_ \(x\in X\) _such that for all_ \(\epsilon>0\)_, there is a Kakutani-Rohlin partition_ \(\mathcal{P}\) _with no more than_ \(n\) _many towers such that_ \(\operatorname{diam}(A)<\epsilon\) _for all_ \(A\in\mathcal{P}\)_,_ \(\operatorname{diam}(B(\mathcal{P}))<\epsilon\)_, and_ \(x\in B(\mathcal{P})\)_._ 3. _For nonmeager many_ \(x\in X\)_, for all_ \(\epsilon>0\)_, there is a Kakutani-Rohlin partition_ \(\mathcal{P}\) _with no more than_ \(n\) _many towers such that_ \(\operatorname{diam}(A)<\epsilon\) _for all_ \(A\in\mathcal{P}\)_,_ \(\operatorname{diam}(B(\mathcal{P}))<\epsilon\)_, and_ \(x\in B(\mathcal{P})\)_._ 4.
_For comeager many_ \(x\in X\)_, for all_ \(\epsilon>0\)_, there is a Kakutani-Rohlin partition_ \(\mathcal{P}\) _with no more than_ \(n\) _many towers such that_ \(\operatorname{diam}(A)<\epsilon\) _for all_ \(A\in\mathcal{P}\)_,_ \(\operatorname{diam}(B(\mathcal{P}))<\epsilon\)_, and_ \(x\in B(\mathcal{P})\)_._ 5. _For any clopen subset_ \(Z\) _of_ \(X\)_, for any finite partition_ \(\mathcal{Q}\) _into clopen sets, there is a Kakutani-Rohlin partition_ \(\mathcal{P}\) _with no more than_ \(n\) _many towers such that_ \(B(\mathcal{P})\subseteq Z\)_,_ \(\operatorname{diam}(B(\mathcal{P}))\leq\operatorname{diam}(Z)/2\) _and_ \(\mathcal{P}\) _refines_ \(\mathcal{Q}\)_._ Proof.: The equivalence of (1) and (5) follows immediately from Theorem 3.3 because for a minimal Cantor system \((X,T)\), any clopen subset has the finite covering property. We have also established the equivalence of (1) and (2) in Theorem 3.3. Suppose now (2) holds for \(x\in X\). As we noted in the proof of Theorem 3.3, (2) also holds for all elements of the orbit of \(x\), which is dense in \(X\) by minimality. Now note that the property for this \(x\in X\) described in (2) is \(G_{\delta}\). Thus the set of all \(y\in X\) that satisfy the property described in (2) is a dense \(G_{\delta}\) set, hence is comeager. This proves (4). The implications (4)\(\Rightarrow\)(3)\(\Rightarrow\)(2) are immediate. By Lemma 2.1, clause (4) of the above corollary gives another \(G_{\delta}\) characterization of the set of all \(T\in M(\mathcal{C})\) which have topological rank \(\leq n\). Finally we note that the set of all infinite odometers forms a dense \(G_{\delta}\) in \(M(\mathcal{C})\). **Proposition 3.8**.: _The set of all infinite odometers is a dense \(G_{\delta}\) in the space of all minimal Cantor systems._ Proof.: Since the set of all infinite odometers is just the set of all minimal Cantor systems of topological rank \(1\), it is a \(G_{\delta}\) in the space of all minimal Cantor systems by Corollary 3.4. We only verify that it is dense. Let \((X,T)\) be a minimal Cantor system and suppose \(\mathcal{P}\) is a clopen partition of \(X\). We only need to define an infinite odometer \(S\) on \(X\) such that \(SZ=TZ\) for all \(Z\in\mathcal{P}\). Consider \(\tilde{T}=T^{-1}\). Then \((X,\tilde{T})\) is again a minimal Cantor system. If we define an infinite odometer \(\tilde{S}\) on \(X\) such that \(\tilde{S}^{-1}Z=\tilde{T}^{-1}Z\) for all \(Z\in\mathcal{P}\), then \(S=\tilde{S}^{-1}\) is again an infinite odometer, and \(SZ=TZ\) holds for all \(Z\in\mathcal{P}\). Thus we focus on \((X,\tilde{T})\) in the rest of this proof. By Lemma 2.2, \(\mathcal{P}\) can be refined by a Kakutani-Rohlin partition for \(\tilde{T}\). Therefore, without loss of generality, we may assume that \(\mathcal{P}\) itself is a Kakutani-Rohlin partition. Suppose \(\mathcal{P}=\{\tilde{T}^{j}B(k)\,:\,1\leq k\leq d,\,0\leq j<h(k)\}\). We define a directed graph \(G=(V,E)\), where \(V=\{v_{1},\cdots,v_{d}\}\) has \(d\) vertices, and for any \(1\leq k,k^{\prime}\leq d\), there is a directed edge \(e\in E\) from \(v_{k}\) to \(v_{k^{\prime}}\) iff \(B(k^{\prime})\cap\tilde{T}^{h(k)}B(k)\neq\varnothing\). It follows from the minimality of \(\tilde{T}\) that \(G=(V,E)\) is strongly connected, i.e., there is a directed path from any vertex to any other vertex. Now fix a finite sequence \(p=(e_{1},\cdots,e_{m})\) of edges in \(G\) such that \((e_{1},\cdots,e_{m},e_{1})\) is a directed path and \(\{e_{1},\cdots,e_{m}\}=E\).
Then \(p\) is a directed cycle in \(G\). Consider an edge \(e\in E\), say \(e\) is from \(v_{k}\) to \(v_{k^{\prime}}\). Let \(n_{e}\) be the number of times \(e\) appears in \(p\). Let \(A_{e}=B(k)\cap\tilde{T}^{-h(k)}(B(k^{\prime}))\). Then \(A_{e}\) is a clopen set in \(X\). Let \(\{A_{e,1},\ldots,A_{e,n_{e}}\}\) be a partition of \(A_{e}\) into \(n_{e}\) many clopen subsets of \(X\). If \(e\) appears in \(p\) as \(e_{i_{1}},\ldots,e_{i_{n_{e}}}\), we associate with each \(e_{i_{j}}\) the set \(A_{e,j}\) for \(1\leq j\leq n_{e}\). Thus we have obtained disjoint nonempty clopen sets \(C_{1},\cdots,C_{m}\) such that \(\mathcal{Q}\triangleq\{C_{1},\cdots,C_{m}\}\) is a partition of \(B(\mathcal{P})\), and for any \(1\leq i\leq m\), if \(e_{i}\) is an edge from \(v_{k}\) to \(v_{k^{\prime}}\) then \(C_{i}\subseteq B(k)\) and \(\tilde{T}^{h(k)}C_{i}\subseteq B(k^{\prime})\). We define an odometer \(\tilde{S}\,:\,X\to X\) such that for any \(1\leq i\leq m\), if \(e_{i}\) is an edge from \(v_{k}\) to \(v_{k^{\prime}}\) then \(\tilde{S}^{h(k)}C_{i}=C_{i+1}\) (with \(C_{m+1}=C_{1}\)) and \(\tilde{S}^{j}C_{i}=\tilde{T}^{j}C_{i}\) for \(1\leq j<h(k)\). In fact \(\tilde{S}\) admits a Kakutani-Rohlin partition with a single tower, with base \(C_{1}\) and whose levels are the sets \(\tilde{T}^{j}C_{i}\), and \(\tilde{S}\) is defined by recursive refinements starting with \(\mathcal{Q}\). It is now clear that \(\tilde{S}^{-1}Z=\tilde{T}^{-1}Z\) for all \(Z\in\mathcal{P}\) as desired.

## 4. A characterization of minimal rank-1 subshifts

In this section we give an explicit topological characterization for all minimal Cantor systems which are conjugate to nondegenerate rank-1 subshifts. In contrast to the results in Section 3, the descriptive complexity of this characterization will be on a higher level than \(G_{\delta}\). Define \[\mathcal{Z}=\left\{x\in 2^{\mathbb{Z}}\,:\,\forall n\ \exists m>n\ x(m)=0\ \text{and}\ \forall n\ \exists m>n\ x(-m)=0\right\}.\] Then \(\mathcal{Z}\) is a \(\sigma\)-invariant dense \(G_{\delta}\) subset of \(2^{\mathbb{Z}}\). For a bi-infinite word \(x\in\mathcal{Z}\) and a finite word \(v\in\mathcal{F}\), we say that \(x\) is _built from \(v\)_ if \(\sigma^{n}(x)\) can be written in the form \[\sigma^{n}(x)=\cdots v1^{s_{-2}}v1^{s_{-1}}v1^{s_{0}}\cdot v1^{s_{1}}v1^{s_{2}}\cdots\] for a bi-infinite sequence \((\cdots,s_{-2},s_{-1},s_{0},s_{1},s_{2},\cdots)\) of nonnegative integers and for some \(n\in\mathbb{Z}\). For finite words \(u,v\in\mathcal{F}\), we say that \(u\) is _built from \(v\)_ if there are nonnegative integers \(s_{1},\ldots,s_{k}\) for \(k\geq 1\) such that \[u=v1^{s_{1}}v\cdots v1^{s_{k}}v.\] The demonstrated occurrences of \(v\) in \(u\) are called _expected occurrences_. **Lemma 4.1**.: _Let \(x\in\mathcal{Z}\) and \(v\in\mathcal{F}\). Then the following are equivalent:_ 1. \(x\) _is built from_ \(v\)_._ 2. _For all_ \(m\in\mathbb{N}\) _there exist_ \(m_{1},m_{2}\in\mathbb{N}\) _with_ \(m_{1},m_{2}\geq m\) _such that_ \(x\!\upharpoonright\![-m_{1},m_{2}]\) _is built from_ \(v\)_._ 3. _For all_ \(m\in\mathbb{N}\) _there exists a finite word_ \(u\) _such that_ \(x\!\upharpoonright\![-m,m]\) _is a subword of_ \(u\) _and_ \(u\) _is built from_ \(v\)_._ Proof.: The implications (i)\(\Rightarrow\)(ii)\(\Rightarrow\)(iii) are immediate. We only show (iii)\(\Rightarrow\)(i). Let \(n\) be the number of \(0\)s in \(v\), i.e., \(n\) is the number of distinct occurrences of \(0\) in \(v\). Let \(m_{0}\) be large enough such that \(x\!\upharpoonright\![-m_{0},m_{0}]\) contains at least \(n\) many \(0\)s.
Let \(k_{1}<\cdots<k_{n}\in[-m_{0},m_{0}]\) be such that \(x(k_{i})=0\) for all \(1\leq i\leq n\) and that if \(k_{1}\leq k\leq k_{n}\) is such that \(x(k)=0\) then \(k=k_{i}\) for some \(1\leq i\leq n\). By (iii), for each \(m\geq m_{0}\) there is a finite word \(u\) such that \(x\!\upharpoonright\![-m,m]\) is a subword of \(u\) and \(u\) is built from \(v\). Exactly one of \(k_{1},\ldots,k_{n}\) corresponds to a starting position of an expected occurrence of \(v\) in \(u\). We denote this value of \(k\in\{k_{1},\ldots,k_{n}\}\) as \(k(m)\). Let \(k_{\infty}\in\{k_{1},\ldots,k_{n}\}\) be such that for infinitely many \(m\geq m_{0}\), \(k(m)=k_{\infty}\). Let \(M_{\infty}\) be the infinite set of all \(m\geq m_{0}\) such that \(k(m)=k_{\infty}\). Then \(v\) occurs in \(x\) starting at position \(k_{\infty}\). We claim that for all \(k>k_{\infty}\) such that \(x(k)=0\) and the number of \(0\)s in \(x\!\upharpoonright\![k_{\infty},k-1]\) is a multiple of \(n\), \(v\) occurs in \(x\) starting at position \(k\). This is because, fixing such a \(k\) and letting \(m\in M_{\infty}\) with \(m\geq k+|v|\), there is a finite word \(u\) such that \(x\!\upharpoonright\![-m,m]\) is a subword of \(u\) and \(u\) is built from \(v\); since the occurrence of \(v\) starting at position \(k_{\infty}\) corresponds to an expected occurrence of \(v\) in \(u\), it follows that there is another expected occurrence of \(v\) in \(u\) starting at the position corresponding to \(k\), and so \(v\) occurs in \(x\) starting at position \(k\). By a similar argument we can also prove a claim that for all \(k<k_{\infty}\) such that \(x(k)=0\) and the number of \(0\)s in \(x\!\upharpoonright\![k,k_{\infty}-1]\) is a multiple of \(n\), \(v\) occurs in \(x\) starting at position \(k\). Putting these two claims together, we conclude that \(x\) is built from \(v\). **Lemma 4.2**.: _For any \(v\in\mathcal{F}\), the set of all \(x\in\mathcal{Z}\) such that \(x\) is built from \(v\) is closed in \(\mathcal{Z}\)._ Proof.: This is an easy consequence of clause (iii) of Lemma 4.1. Let \((X,T)\) be a Cantor system and let \(A\) be a clopen subset of \(X\). Define \(\mathcal{B}_{T}(A)\) to be the smallest Boolean algebra \(\mathcal{B}\) of subsets of \(X\) such that \(T^{n}A\in\mathcal{B}\) for all \(n\in\mathbb{Z}\). We say that \((T,A)\) is _generating_ if \(\mathcal{B}_{T}(A)\) contains all clopen subsets of \(X\). **Theorem 4.3**.: _Let \((X,T)\) be a minimal Cantor system and \(x_{0}\in X\). Then the following are equivalent:_ 1. \((X,T)\) _is conjugate to a (nondegenerate) rank-_\(1\) _subshift._ 2. _There is a clopen subset_ \(A\) _of_ \(X\) _such that_ \((T,A)\) _is generating and for all_ \(n\in\mathbb{N}\) _there is a_ \(v\in\mathcal{F}\) _satisfying:_ * \(|v|\geq n\) _and_ \(\operatorname{Ret}_{A}(x_{0})\) _is built from_ \(v\)_, and_ * _for any_ \(u\in\mathcal{F}\) _such that_ \(|u|\geq|v|\) _and_ \(\operatorname{Ret}_{A}(x_{0})\) _is built from_ \(u\)_, there exists_ \(u^{\prime}\in\mathcal{F}\) _such that_ \(|u^{\prime}|\leq|u|+|v|\)_,_ \(u^{\prime}\) _is built from_ \(v\)_, and_ \(u\) _is an initial segment of_ \(u^{\prime}\)_._ Proof.: Clause (2) is evidently invariant under conjugacy, thus to see (1)\(\Rightarrow\)(2), we may assume \(V\) is a rank-\(1\) word, \(X=X_{V}\) is a nondegenerate minimal rank-\(1\) subshift, and \(T=\sigma\). Let \(A=\{x\in X\,:\,x(0)=1\}\). Then \((T,A)\) is generating, and \(\operatorname{Ret}_{A}(x_{0})=x_{0}\).
The set of all finite words \(v\) such that \(V\) is built from \(v\) is a subset of the set of all finite words \(v\) such that \(x_{0}\) is built from \(v\). Now given any \(n\in\mathbb{N}\), let \(v\in\mathcal{F}\) be such that \(V\) is built _fundamentally_ from \(v\) (see Definition 2.13 of [22]). Then by Proposition 2.16 of [22], for any \(u\in\mathcal{F}\) such that \(|u|\geq|v|\) and \(V\) is built from \(u\), \(u\) is built from \(v\). This proves (2) by Proposition 2.36 of [22]. Conversely, assume \(A\) is a clopen subset of \(X\) witnessing (2). Since \((T,A)\) is generating, the map \(\operatorname{Ret}_{A}:X\to 2^{\mathbb{Z}}\) is a homeomorphic embedding such that \(\operatorname{Ret}_{A}\circ T=\sigma\circ\operatorname{Ret}_{A}\). Thus \(\operatorname{Ret}_{A}(X)\) is a minimal subshift, and \(\operatorname{Ret}_{A}\) is a conjugacy map. By repeatedly applying (2), we obtain an infinite sequence of finite words \(\{v_{n}\}_{n\geq 0}\) in \(\mathcal{F}\) such that \(\operatorname{Ret}_{A}(x_{0})\) is built from each \(v_{n}\) and for all \(n\geq 0\), \(v_{n}\) is an initial segment of \(v_{n+1}\) and \(v_{n+1}\) is an initial segment of some \(u\) which is built from \(v_{n}\). This allows us to define an infinite word \(V=\lim_{n}v_{n}\). By definition, \(V\) is a rank-\(1\) word. To finish the proof it suffices to verify that \(\operatorname{Ret}_{A}(X)=X_{V}\). By the minimality of \(\operatorname{Ret}_{A}(X)\), for any \(y\in\operatorname{Ret}_{A}(X)\), the set of all finite subwords of \(y\) coincides with the set of all finite subwords of \(\operatorname{Ret}_{A}(x_{0})\). On the other hand, our assumption guarantees that the set of all finite subwords of \(\operatorname{Ret}_{A}(x_{0})\) coincides with the set of all finite subwords of \(V\). Thus \(\operatorname{Ret}_{A}(X)=X_{V}\) and \(X\) is conjugate to \(X_{V}\), a rank-\(1\) subshift. The apparent descriptive complexity given by clause (2) of the above theorem is \(\mathbf{\Sigma}^{0}_{5}\), which is significantly more complex than \(G_{\delta}\).

## 5. Proper finite rank constructions

The following is a basic property regarding symbolic rank-\(n\) constructions. **Proposition 5.1**.: _Let \(n\geq 1\). Suppose \(\{T_{i}\}_{i\geq 0}\) is a sequence of finite subsets of \(\mathcal{F}\) such that \(T_{0}=\{0\}\) and for all \(i\geq 0\), \(|T_{i}|\leq n\) and each element of \(T_{i+1}\) is built from \(T_{i}\). Then there is a rank-\(n\) construction with associated rank-\(n\) generating sequence \(\{v_{i,j}\}_{i\geq 0,1\leq j\leq n_{i}}\) such that for all \(i\geq 0\), \(v_{i,1},\ldots,v_{i,n_{i}}\in T_{i}\)._ Proof.: For each \(i\geq 0\) and \(v\in T_{i+1}\), fix a building of \(v\) from \(T_{i}\). Define a binary relation \(R\) on \(\bigcup_{i\geq 0}T_{i}\) by \(R(u,v)\) if for some \(i\geq 0\), \(u\in T_{i}\), \(v\in T_{i+1}\), and the building of \(v\) from \(T_{i}\) starts with \(u\). Let \(<\) be the transitive closure of \(R\). Then \(<\) is a (strict) partial order on \(\bigcup_{i\geq 0}T_{i}\). We inductively define an infinite \(R\)-chain of words \(\{u_{i}\}_{i\geq 0}\), i.e., \(u_{i}\in T_{i}\) and \(R(u_{i},u_{i+1})\) for all \(i\geq 0\). Let \(u_{0}=0\). Note that there are infinitely many words \(u\in\bigcup_{i\geq 0}T_{i}\) such that \(u_{0}<u\) (in fact \(u_{0}<u\) for all \(u\in\bigcup_{i\geq 1}T_{i}\)). In general, assume \(u_{i}\) has been defined such that there are infinitely many \(w\in\bigcup_{j\geq 0}T_{j}\) with \(u_{i}<w\).
In particular, the set \(W=\{w\in\bigcup_{j\geq i+2}T_{j}\,:\,u_{i}<w\}\) is infinite. Note that for each \(w\in W\) there is a \(u_{w}\in T_{i+1}\) such that \(R(u_{i},u_{w})\) and \(u_{w}<w\). Since \(T_{i+1}\) is finite, there is a \(v\in T_{i+1}\) such that for infinitely many \(w\in W\), \(u_{w}=v\). Let \(u_{i+1}=v\). Then there are infinitely many \(w\in\bigcup_{j\geq 0}T_{j}\) such that \(u_{i+1}<w\). This finishes the inductive construction. Now define \(v_{i,j}\) for each \(i\geq 0\) so that \(v_{i,1}=u_{i}\) and \(\{v_{i,1},\ldots,v_{i,n_{i}}\}=T_{i}\), where \(n_{i}=|T_{i}|\). With the fixed buildings, this gives a rank-\(n\) construction as required. Next we characterize the rank-\(n\) subshifts which have proper rank-\(n\) constructions. We use \(1^{\mathbb{Z}}\) to denote the element \(x\in 2^{\mathbb{Z}}\) where \(x(k)=1\) for all \(k\in\mathbb{Z}\). **Theorem 5.2**.: _Let \(n\geq 1\) and let \(X\) be a subshift of symbolic rank \(n\). The following are equivalent:_ 1. _There exists a rank-_\(n\) _word_ \(V\) _such that_ \(X=X_{V}\)_, and_ \(V\) _has a proper rank-_\(n\) _construction._ 2. _For any rank-_\(n\) _word_ \(V\) _such that_ \(X=X_{V}\)_,_ \(V\) _has a proper rank-_\(n\) _construction._ 3. _For any_ \(x\in X\) _such that_ \(x\neq 1^{\mathbb{Z}}\)_, the orbit of_ \(x\) _is dense in_ \(X\)_._ Proof.: We first show (1)\(\Rightarrow\)(3). Suppose \(V\) is a rank-\(n\) word such that \(X=X_{V}\), and \(V\) has a proper rank-\(n\) construction with associated rank-\(n\) generating sequence \(\{v_{i,j}\}_{i\geq 0,1\leq j\leq n}\). For each \(i\geq 0\), define \(a_{i}=\max_{1\leq j\leq n}|v_{i,j}|\). Let \(x\in X_{V}\) and assume \(x\neq 1^{\mathbb{Z}}\). There exists an \(m\in\mathbb{Z}\) such that \(x(m)=0\). Fix an \(i\geq 0\) and consider the finite word \(u=x\upharpoonright[m-a_{i+1},m+a_{i+1}]\). By the definition of \(X_{V}\), \(u\) is a subword of \(V\). Since \(V\) is built from \(S_{i+1}\), by considering the length of \(u\) we get that there is \(1\leq j_{0}\leq n\) such that \(v_{i+1,j_{0}}\) is a subword of \(u\). By the properness of the rank-\(n\) construction, \(v_{i,1}\) is a subword of \(v_{i+1,j_{0}}\), and hence a subword of \(x\). This implies that the orbit of \(x\) is dense in \(X_{V}\). Next we show (3)\(\Rightarrow\)(2). Let \(V\) be a rank-\(n\) word such that \(X=X_{V}\). Suppose for any \(x\in X_{V}\) such that \(x\neq 1^{\mathbb{Z}}\), the orbit of \(x\) is dense in \(X_{V}\). We fix a rank-\(n\) construction with associated rank-\(n\) generating sequence \(\{v_{i,j}\}_{i\geq 0,1\leq j\leq n_{i}}\), where \(V=\lim_{i}v_{i,1}\). Since \(V\) is a rank-\(n\) word, it does not have a rank-\((n-1)\) construction; by telescoping if necessary, we may assume that \(n_{i}=n\) for all \(i\geq 0\). Also, without loss of generality, we assume that for all \(i\geq 0\), all finite words in \(S_{i}\) are expected subwords of \(V\). In particular, if \(i<i^{\prime}\), then every finite word in \(S_{i}\) is an expected subword of some word in \(S_{i^{\prime}}\). Next we claim that for any \(i_{0}>0\) and \(1\leq j_{0}\leq n\), there exists \(i>i_{0}\) such that for any \(1\leq j\leq n\), \(v_{i_{0},j_{0}}\) is an expected subword of \(v_{i,j}\). Assume not; then for any \(i>i_{0}\) there is \(1\leq j\leq n\) such that \(v_{i_{0},j_{0}}\) is not an expected subword of \(v_{i,j}\). We define a sequence \(\{T_{k}\}_{k\geq 0}\) of finite subsets of \(\mathcal{F}\) as follows. Let \(T_{0}=\{0\}\).
For \(k>0\), let \(T_{k}\) be the set of all \(v_{i_{0}+k,j}\) for \(1\leq j\leq n\) such that \(v_{i_{0},j_{0}}\) is not an expected subword of \(v_{i_{0}+k,j}\). Then for all \(k\geq 1\), \(T_{k}\subseteq S_{i_{0}+k}\), and since \(v_{i_{0},j_{0}}\) is an expected subword of at least one word in \(S_{i_{0}+k}\), we have \(|T_{k}|\leq n-1\). Also, for all \(k\geq 0\), each element of \(T_{k+1}\) is built from \(T_{k}\). By Proposition 5.1, there is a rank-\((n-1)\) construction with associated rank-\((n-1)\) generating sequence \(\{w_{k,\ell}\}_{k\geq 0,1\leq\ell\leq m_{k}}\) such that for all \(k\geq 0\) and \(1\leq\ell\leq m_{k}\), \(w_{k,\ell}\in T_{k}\) (and so \(w_{k,\ell}\in S_{i_{0}+k}\) for \(k\geq 1\)). Let \(W=\lim_{k}w_{k,1}\). Then every finite subword of \(W\) is a subword of \(V\). Hence \(X_{W}\) is a closed invariant subset of \(X_{V}\). It is clear from the construction of \(W\) that there is \(x\in X_{W}\) such that \(x\neq 1^{\mathbb{Z}}\). Since the orbit of \(x\) is dense in \(X_{V}\), we get that \(X_{W}=X_{V}\). This contradicts our assumption that \(\operatorname{rank}_{\operatorname{symb}}(X_{V})=n\). Using the claim, and by telescoping, we obtain a proper rank-\(n\) construction for \(V\). Finally, (2)\(\Rightarrow\)(1) is immediate. Note that the implication (1)\(\Rightarrow\)(3) in the above theorem does not require that \(X\) be of symbolic rank \(n\). **Corollary 5.3**.: _Let \(n\geq 1\) and \(X\) be a nondegenerate subshift of symbolic rank \(\leq n\). Suppose \(X=X_{V}\) and \(V\) has a proper rank-\(n\) construction. Then \((X,\sigma)\) is an essentially minimal Cantor system. In particular, there is \(k\in\mathbb{N}\) such that \(0^{k}\) is not a subword of \(V\)._ Proof.: Since \(X=X_{V}\) where \(V\) has a proper rank-\(n\) construction, \(V\) is recurrent. Since \(X_{V}\) is nondegenerate, it is a Cantor set. Now if \(1^{\mathbb{Z}}\not\in X\), then by Theorem 5.2 (3) \(X\) is minimal; if \(1^{\mathbb{Z}}\in X\) then \(\{1^{\mathbb{Z}}\}\) is invariant and by Theorem 5.2 (3) it is the unique minimal set in \(X\). Thus \((X,\sigma)\) is an essentially minimal Cantor system. In either case, \(0^{\mathbb{Z}}\not\in X_{V}\), thus there is \(k\in\mathbb{N}\) such that \(0^{k}\) is not a subword of \(V\). Note that any rank-\(1\) construction is proper, and thus any nondegenerate rank-\(1\) subshift is an essentially minimal Cantor system. **Corollary 5.4**.: _Let \(n\geq 1\) and let \(X\) be a nondegenerate subshift of symbolic rank \(n\). Then the following are equivalent:_ 1. \(X\) _is minimal._ 2. _There exists a rank-_\(n\) _word_ \(V\) _such that_ \(X=X_{V}\)_, and_ \(V\) _has a proper rank-_\(n\) _construction with bounded spacer parameter._ 3. _For any rank-_\(n\) _word_ \(V\) _such that_ \(X=X_{V}\)_,_ \(V\) _has a proper rank-_\(n\) _construction with bounded spacer parameter._ Proof.: To see (1)\(\Rightarrow\)(3), suppose \(X\) is minimal. Then \(1^{\mathbb{Z}}\not\in X\) and clause (3) of Theorem 5.2 holds. By Theorem 5.2, for any rank-\(n\) word \(V\) such that \(X=X_{V}\), \(V\) has a proper rank-\(n\) construction with associated rank-\(n\) generating sequence \(\{v_{i,j}\}_{i\geq 0,1\leq j\leq n}\). Without loss of generality, we may assume that every word in this sequence is an expected subword of \(V\). We claim that this given proper rank-\(n\) construction has bounded spacer parameter. Otherwise there are arbitrarily large \(k\) with \(1^{k}\) as a subword of \(V\), and then \(1^{\mathbb{Z}}\in X_{V}=X\), a contradiction. The implication (3)\(\Rightarrow\)(2) is immediate. Finally, we prove (2)\(\Rightarrow\)(1).
Suppose \(V\) is a rank-\(n\) word such that \(X=X_{V}\), and \(V\) has a proper rank-\(n\) construction with bounded spacer parameter. Then \(1^{\mathbb{Z}}\not\in X_{V}\), and by Theorem 5.2, \(X\) is minimal.

Again, we remark that the implication (2)\(\Rightarrow\)(1) of the above corollary does not require that \(X\) be a subshift of symbolic rank \(n\).

## 6. Finite symbolic rank and finite topological rank

In this section we prove that minimal subshifts of finite symbolic rank have finite topological rank, and conversely, any minimal Cantor system of finite topological rank is either an odometer or conjugate to a subshift of finite symbolic rank.

### From finite symbolic rank to finite topological rank

We first consider minimal subshifts of finite symbolic rank. The following concept of unique readability will be useful in our proofs to follow. Let \(n\geq 1\). Fix a symbolic rank-\(n\) construction with associated rank-\(n\) generating sequence \(\{v_{i,j}\}_{i\geq 0,1\leq j\leq n}\). Let \(V=\lim_{i}v_{i,1}\). Without loss of generality assume every \(v_{i,j}\) is an expected subword of \(V\), and that for each \(i\geq 1\), the words \(v_{i,1},\ldots,v_{i,n}\) are distinct. For \(x\in X_{V}\), a _reading_ of \(x\) is a sequence \(\{E_{i}\}_{i\geq 0}\) satisfying, for each \(i\geq 0\),

* each element of \(E_{i}\) is a pair \((k,j)\), where \(1\leq j\leq n\) and \(k\) is the starting position of an occurrence of \(v_{i,j}\) in \(x\);
* if \((k_{1},j_{1}),(k_{2},j_{2})\in E_{i}\) and \(k_{1}<k_{2}\), then \(k_{1}+|v_{i,j_{1}}|\leq k_{2}\);
* \(E_{0}=\{(k,j)\,:\,x(k)=0\text{ and }j=1\}\); and
* for each \((k,j)\in E_{i}\), there is exactly one \((k^{\prime},j^{\prime})\in E_{i+1}\) such that \(k^{\prime}\leq k\) and \(k^{\prime}+|v_{i+1,j^{\prime}}|\geq k+|v_{i,j}|\).

If every \(x\in X_{V}\) has a unique reading, we say that \(\{v_{i,j}\}_{i\geq 0,1\leq j\leq n}\) has _unique readability_, and we call an occurrence (starting at position) \(k\) of \(v_{i,j}\) in \(x\) _expected_ if \((k,j)\in E_{i}\) for the unique reading of \(x\). Every rank-\(1\) generating sequence whose induced infinite rank-\(1\) word is not periodic has unique readability (Proposition 2.29 of [22]).

**Lemma 6.1**.: _Let \(n\geq 1\), \(\{v_{i,j}\}_{i\geq 0,1\leq j\leq n}\) be a rank-\(n\) generating sequence, and \(V=\lim_{i}v_{i,1}\). Then any \(x\in X_{V}\) has a reading._

Proof.: We fix a rank-\(n\) construction of \(V\) with associated rank-\(n\) generating sequence \(\{v_{i,j}\}_{i\geq 0,1\leq j\leq n}\). Without loss of generality we may assume that every \(v_{i,j}\) is an expected subword of \(V\), and that for each \(i\geq 1\), the words \(v_{i,1},\ldots,v_{i,n}\) are distinct. For each \(i\geq 0\), define \(a_{i}=\max_{1\leq j\leq n}|v_{i,j}|\) and \(b_{i}=\min_{1\leq j\leq n}|v_{i,j}|\). Then \(b_{i+1}\geq 2b_{i}\) for all \(i\geq 0\). We consider several cases.

Case 1: \(x=1^{\mathbb{Z}}\). In this case a unique reading is given by \(E_{i}=\varnothing\) for all \(i\geq 0\).

Case 2: There exists \(k_{0}\in\mathbb{Z}\) such that \(x(k_{0})=0\) and \(x(k)=1\) for all \(k<k_{0}\). First fix any \(i\geq 0\). Define \(u_{i}=x\upharpoonright[k_{0}-a_{i},k_{0}+a_{i}]\). Since \(u_{i}\) is a subword of \(V\), \(V\) is built from \(\{v_{i,1},\ldots,v_{i,n}\}\), and \(a_{i}\geq|v_{i,j}|\) for all \(1\leq j\leq n\), we have that for some \(1\leq j_{0}\leq n\), \(k_{0}\) is the starting position of an occurrence of \(v_{i,j_{0}}\) in \(x\).
Now following the rank-\(n\) construction of \(V\), by an induction on \(t=i,i-1,\ldots,0\), we define collections \(E_{t}^{i}\) for \(t\leq i\) as follows. First let \(E_{i}^{i}=\{(k_{0},j_{0})\}\). Suppose now \(E_{t}^{i}\) has been defined, which is a collection of some pairs \((k,j)\), where \(k\) is the starting position of an occurrence of \(v_{t,j}\) in \(x\). Assume that for \((k_{1},j_{1}),(k_{2},j_{2})\in E_{t}^{i}\) with \(k_{1}<k_{2}\), we have \(k_{1}+|v_{t,j_{1}}|\leq k_{2}\). Now for each \((k,j)\in E_{t}^{i}\), the building of \(v_{t,j}\) from \(\{v_{t-1,1},\ldots,v_{t-1,n}\}\) in the fixed rank-\(n\) construction gives rise to pairs \((k^{\prime},j^{\prime})\), where \(v_{t-1,j^{\prime}}\) occurs at position \(k^{\prime}\) and this occurrence corresponds to the occurrence of \(v_{t-1,j^{\prime}}\) in the building of \(v_{t,j}\) as an expected subword. We put all such \((k^{\prime},j^{\prime})\) in \(E_{t-1}^{i}\). It is clear that for \((k^{\prime}_{1},j^{\prime}_{1}),(k^{\prime}_{2},j^{\prime}_{2})\in E_{t-1}^{i}\) with \(k^{\prime}_{1}<k^{\prime}_{2}\), we have \(k^{\prime}_{1}+|v_{t-1,j^{\prime}_{1}}|\leq k^{\prime}_{2}\). It is also clear that for each \((k^{\prime},j^{\prime})\in E_{t-1}^{i}\), there is exactly one \((k,j)\in E_{t}^{i}\) such that \(k\leq k^{\prime}\) and \(k+|v_{t,j}|\geq k^{\prime}+|v_{t-1,j^{\prime}}|\). Finally, we note that \(E_{0}^{i}=\{(k,j)\,:\,x(k)=0,k_{0}\leq k\leq k_{0}+|v_{i,j_{0}}|-1,\text{ and }j=1\}\). This finishes the definition of \(E_{t}^{i}\) for \(t\leq i\). We have that for all \(k_{0}\leq k\leq k_{0}+b_{i}-1\), \((k,1)\in E_{0}^{i}\) iff \(x(k)=0\). For \(i\geq 0\), define \(e_{i}\in\{0,1\}^{\mathbb{N}\times\mathbb{Z}\times\{1,\cdots,n\}}\) by letting \(e_{i}(t,k,j)=1\) iff \(t\leq i\) and \((k,j)\in E_{t}^{i}\). Since \(\{0,1\}^{\mathbb{N}\times\mathbb{Z}\times\{1,\cdots,n\}}\) is compact, there exists an accumulation point \(e\) of \(\{e_{i}\}_{i\geq 0}\). For each \(t\geq 0\), define \(E_{t}=\{(k,j)\,:\,e(t,k,j)=1\}\). Since \(\{b_{i}\}_{i\geq 0}\) is strictly increasing, we conclude that for all \(k\geq k_{0}\), \((k,1)\in E_{0}\) iff \(x(k)=0\). The other properties of a reading are also easily verified. Thus \(\{E_{t}\}_{t\geq 0}\) is a reading of \(x\).

Case 3: There exists \(k_{0}\in\mathbb{Z}\) such that \(x(k_{0})=0\) and \(x(k)=1\) for all \(k>k_{0}\). This case is similar to Case 2.

Case 4: For any \(k\in\mathbb{Z}\) there are \(k_{1}<k<k_{2}\) such that \(x(k_{1})=x(k_{2})=0\). Let \(k_{0}\) be an integer satisfying \(x(k_{0})=0\). For \(i\geq 0\), let \(\ell_{i,1}\) be the \((2a_{i}+1)\)th natural number such that \(x(k_{0}+\ell_{i,1})=0\); let \(\ell_{i,2}\) be the \((2a_{i}+1)\)th natural number such that \(x(k_{0}-\ell_{i,2})=0\). Define \(u_{i}=x\!\upharpoonright\![k_{0}-\ell_{i,2},k_{0}+\ell_{i,1}]\). Then \(u_{i}\) is a subword of \(V\). Since \(V\) is built from \(\{v_{i,1},\ldots,v_{i,n}\}\), by the definition of \(a_{i}\), there exist \(m_{i}<k_{0}\) and a subword \(w_{i}\) of \(V\) such that

* \(w_{i}\) is of the form \(v_{i,j_{1}}1^{s_{1}}v_{i,j_{2}}1^{s_{2}}v_{i,j_{3}}\), where \(1\leq j_{1},j_{2},j_{3}\leq n\) and \(s_{1},s_{2}\geq 0\),
* \(m_{i}\) is the starting position of an occurrence of \(w_{i}\) in \(x\), and
* \(m_{i}+|v_{i,j_{1}}1^{s_{1}}|\leq k_{0}\leq m_{i}+|v_{i,j_{1}}1^{s_{1}}v_{i,j_{2}}|-1\).

Now we proceed as in the proof of Case 2 to define \(E_{t}^{i}\) for all \(t\leq i\) and finally obtain a reading \(\{E_{t}\}_{t\geq 0}\) of \(x\) by compactness.
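To make the bookkeeping in the proof above concrete, here is a small illustrative sketch in Python (ours, not from the paper; the toy rank-\(2\) construction and the names `build` and `expected_positions` are our own choices). Given explicit buildings, it propagates the single expected occurrence at the top level down to the sets \(E_{t}^{i}\), exactly as in the induction above.

```python
# A sketch illustrating how a fixed rank-n construction determines the
# sets E_t^i of expected occurrences used in the proof of Lemma 6.1.
# A "building" of a level-(t+1) word from the level-t words is a list of
# word indices together with the spacer lengths inserted between them.

def build(words, indices, spacers):
    """Concatenate words[j] for j in indices, inserting 1^s between them."""
    parts = [words[indices[0]]]
    for j, s in zip(indices[1:], spacers):
        parts.append("1" * s + words[j])
    return "".join(parts)

def expected_positions(levels, buildings, i):
    """E[t] is a list of pairs (k, j): an expected occurrence of
    levels[t][j] starting at position k inside levels[i][0]."""
    E = {i: [(0, 0)]}
    for t in range(i, 0, -1):
        E[t - 1] = []
        for k, j in E[t]:
            indices, spacers = buildings[t - 1][j]
            pos = k
            for r, jj in enumerate(indices):
                E[t - 1].append((pos, jj))
                pos += len(levels[t - 1][jj])
                if r < len(spacers):
                    pos += spacers[r]
    return E

# Example: a toy rank-2 construction (v_{0,1} = v_{0,2} = "0").
levels = [["0", "0"]]
buildings = [
    [([0, 1, 0], [1, 0]), ([1, 0, 1], [0, 2])],   # level 0 -> level 1
    [([0, 1, 0], [2, 1]), ([1, 1, 0], [1, 1])],   # level 1 -> level 2
]
for t in range(2):
    levels.append([build(levels[t], idx, sp) for idx, sp in buildings[t]])

E = expected_positions(levels, buildings, 2)
for t in sorted(E):
    assert all(levels[2][0][k:k + len(levels[t][j])] == levels[t][j]
               for k, j in E[t])
print(levels[2][0], E[0])
```

The assertions confirm that every recorded pair \((k,j)\) marks a genuine occurrence of the corresponding word.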
Next we define a concept that guarantees unique readability. Let \(n\geq 1\). We say a rank-\(n\) construction with associated rank-\(n\) generating sequence \(\{v_{i,j}\}_{i\geq 0,1\leq j\leq n}\) is _good_ if it is proper and for any \(i\geq 0\) and \(1\leq j\leq n\), \(v_{i,j}\) is not of the form \[\alpha 1^{s_{1}}v_{i,j_{1}}1^{s_{2}}v_{i,j_{2}}\cdots v_{i,j_{k-1}}1^{s_{k}}\beta\] where \(k\geq 1\), \(\alpha\) is a nonempty suffix of some \(v_{i,j_{k}}\), and \(\beta\) is a nonempty prefix of some \(v_{i,j_{k+1}}\). If a rank-\(n\) construction is good, we say that the infinite word \(V=\lim_{i}v_{i,1}\) is _good_.

**Lemma 6.2**.: _Consider a good rank-\(n\) construction with associated rank-\(n\) generating sequence \(\{v_{i,j}\}_{i\geq 0,1\leq j\leq n}\). Then \(\{v_{i,j}\}_{i\geq 0,1\leq j\leq n}\) has unique readability._

Proof.: Let \(x\in X_{V}\). Let \(\{E_{i}\}_{i\geq 0}\) be a reading of \(x\), which exists by Lemma 6.1. By the definition of a reading, we have that for each \(i\geq 0\), \(E_{i}\) gives a way in which \(x\) is built from \(\{v_{i,1},\ldots,v_{i,n}\}\). Now suppose \(\{E^{\prime}_{i}\}_{i\geq 0}\) is another reading of \(x\), and suppose \(i\geq 0\) is the smallest such that \(E_{i}\neq E^{\prime}_{i}\). Without loss of generality, let \((k,j)\in E_{i}\setminus E^{\prime}_{i}\). Consider two cases.

Case 1: there is \(j^{\prime}\neq j\) such that \((k,j^{\prime})\in E^{\prime}_{i}\). In this case without loss of generality assume \(|v_{i,j}|>|v_{i,j^{\prime}}|\). Then \(v_{i,j}\) can be written in the form \(v_{i,j^{\prime}}1^{s_{1}}v_{i,j_{1}}\cdots v_{i,j_{\ell}}1^{s_{\ell+1}}\beta\) for some \(\ell\geq 0\) and nonempty \(\beta\) where \(\beta\) is a prefix of some \(v_{i,j_{\ell+1}}\). This contradicts the assumption that our rank-\(n\) construction is good.

Case 2: there is no \(j^{\prime}\) such that \((k,j^{\prime})\in E^{\prime}_{i}\). By the definition of a reading, there is a unique \((k^{\prime},j^{\prime})\in E^{\prime}_{i}\) where \(k^{\prime}<k\) such that \(k\leq k^{\prime}+|v_{i,j^{\prime}}|\). If \(k^{\prime}+|v_{i,j^{\prime}}|\leq k+|v_{i,j}|\) then \(v_{i,j}\) can be written in the form \(\alpha 1^{s_{1}}v_{i,j_{1}}\cdots v_{i,j_{\ell}}1^{s_{\ell+1}}\beta\), contradicting the assumption that our rank-\(n\) construction is good. If \(k^{\prime}+|v_{i,j^{\prime}}|>k+|v_{i,j}|\) then \(v_{i,j^{\prime}}\) can be written in the form \(\alpha 1^{s_{1}}v_{i,j_{1}}\cdots v_{i,j_{\ell}}1^{s_{\ell+1}}\beta\), again contradicting our assumption.

Note that the definition of goodness does not rule out the possibility that some \(v_{i,j}\) is a subword of \(v_{i,j^{\prime}}\) for \(j^{\prime}\neq j\).

**Lemma 6.3**.: _Suppose \(V\) has a good rank-\(n\) construction with associated rank-\(n\) generating sequence \(\{v_{i,j}\}_{i\geq 0,1\leq j\leq n}\). Then for any \(i\geq 0\), \(1\leq j\leq n\) and \(k\in\mathbb{Z}\), the set_ \[\{x\in X_{V}\,:\,\text{ there is an expected occurrence of $v_{i,j}$ at position $k$}\}\] _is clopen in \(X_{V}\)._

Proof.: Let \(E_{i,j,k}\) denote the set in question. Then \(x\in E_{i,j,k}\) iff for any \(0\leq t\leq|v_{i,j}|-1\), \(x(k+t)=v_{i,j}(t)\) and for any \(1\leq j^{\prime}\leq n\) and \(k^{\prime}\leq k\), if \(j^{\prime}\neq j\) and \(k^{\prime}+|v_{i,j^{\prime}}|\geq k+|v_{i,j}|\), then there is \(0\leq s\leq|v_{i,j^{\prime}}|-1\) such that \(x(k^{\prime}+s)\neq v_{i,j^{\prime}}(s)\). This implies that \(E_{i,j,k}\) is clopen.
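The goodness condition is a finite combinatorial property of each level and can be checked mechanically. The following is a sketch of such a check (our own illustration; the function name and the plain reachability scan are ours). It decides whether a word \(v\) admits a decomposition \(\alpha 1^{s_{1}}v_{i,j_{1}}\cdots 1^{s_{k}}\beta\) as above; a level is good exactly when the check fails for every generating word.

```python
# Sketch (not from the paper): decide whether v can be written as
#   alpha 1^{s_1} w_{j_1} 1^{s_2} ... w_{j_{k-1}} 1^{s_k} beta,
# with alpha a nonempty suffix and beta a nonempty prefix of generators.
def has_forbidden_form(v, gens):
    n = len(v)
    # positions where a nonempty suffix alpha of some generator could end
    stack = [a for w in gens for a in range(1, len(w) + 1)
             if a < n and v.startswith(w[-a:])]
    seen = set()
    while stack:
        p = stack.pop()
        if p in seen:
            continue
        seen.add(p)
        q = p
        while True:                      # absorb a spacer block 1^s, s >= 0
            for w in gens:
                if v.startswith(w, q) and q + len(w) < n:
                    stack.append(q + len(w))   # a full middle occurrence
                if 0 < n - q <= len(w) and w.startswith(v[q:]):
                    return True                # beta = v[q:] closes the word
            if q < n and v[q] == "1":
                q += 1
            else:
                break
    return False

# alpha = "0", spacer "1", beta = "10" witness a forbidden decomposition:
print(has_forbidden_form("0110", ["0110", "10"]))   # True
```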
**Proposition 6.4**.: _Let \(n\geq 1\) and \(X\) be a nondegenerate subshift of symbolic rank \(\leq n\). Suppose \(X=X_{V}\) and \(V\) has a proper rank-\(n\) construction. Then there exists a good word \(W\) such that \(X\) is a factor of \(X_{W}\). Moreover, if in addition \(X\) is minimal, then \(W\) can be chosen so that \(X_{W}\) is minimal._

Proof.: Fix a proper rank-\(n\) construction of \(V\) with associated rank-\(n\) generating sequence \(\{v_{i,j}\}_{i\geq 0,1\leq j\leq n}\). Without loss of generality assume \(v_{i,1},\ldots,v_{i,n}\) are distinct for all \(i\geq 1\). By Corollary 5.3 there is a \(k_{0}\in\mathbb{N}\) such that \(0^{k_{0}}\) is not a subword of \(V\); we fix such a \(k_{0}\). For all \(i\geq 0\), define \(a_{i}=\max_{1\leq j\leq n}|v_{i,j}|\) and \(b_{i}=\min_{1\leq j\leq n}|v_{i,j}|\). Then \(b_{i+1}\geq nb_{i}\) for all \(i\geq 0\), hence in particular \(b_{i}\geq n^{i}\) for all \(i\geq 0\). We define a rank-\(2n\) generating sequence \(\{w_{p,q}\}_{p\geq 0,1\leq q\leq 2n}\). Let \(w_{0,q}=0\) for \(1\leq q\leq 2n\). To define \(w_{1,q}\), let \(i_{1}\geq 0\) be such that \(b_{i_{1}}>4k_{0}+4\). Then define \[w_{1,q}=\left\{\begin{array}{ll}v_{i_{1},q}&\mbox{if $1\leq q\leq n$},\\ \\ 010^{|v_{i_{1},q-n}|-4}10&\mbox{if $n+1\leq q\leq 2n$}.\end{array}\right.\] Note that for all \(1\leq j\leq n\), \(|w_{1,j}|=|w_{1,j+n}|=|v_{i_{1},j}|\). For \(p\geq 1\), suppose \(i_{p}\) has been defined and \(w_{p,q}\) have been defined for all \(1\leq q\leq 2n\). We define \(i_{p+1}\) and \(w_{p+1,q}\) as follows. First set \[m_{p}=\left\lceil\frac{a_{i_{p}+1}}{b_{i_{p}}}\right\rceil.\] Then let \(i_{p+1}>i_{p}\) be large enough such that by telescoping using the buildings in the proper rank-\(n\) construction \(\{v_{i,j}\}_{i\geq 0,1\leq j\leq n}\), we can write, for all \(1\leq j\leq n\), \(v_{i_{p+1},j}\) in the form \[v_{i_{p},j_{1}}1^{s_{1}}\cdots 1^{s_{\ell}}v_{i_{p},j_{\ell+1}} \tag{1}\] with \(\ell>12m_{p}+4n\). This is doable since \(\ell\geq n^{i_{p+1}-i_{p}}\). Note that for \(j=1\) we have \(j_{1}=1\). We also note the following property (*) of the word in (1): for any \(1\leq t\leq\ell+2-2m_{p}\), \(\{j_{t},\cdots,j_{t+2m_{p}-1}\}=\{1,\cdots,n\}\). This is because, \[u\triangleq v_{i_{p},j_{t}}1^{s_{t}}\cdots v_{i_{p},j_{t+2m_{p}-1}} \tag{2}\] consists of \(2m_{p}\) many consecutive expected occurrences of subwords of \(v_{i_{p+1},j}\) of the form \(v_{i_{p},j^{\prime}}\); since \(v_{i_{p+1},j}\) is built from \(\{v_{i_{p}+1,1},\ldots,v_{i_{p}+1,n}\}\), by our definition of \(m_{p}\), \(u\) must contain some expected occurrence of \(v_{i_{p}+1,j^{\prime}}\), where \(1\leq j^{\prime}\leq n\), as a subword. Hence (*) holds by the properness of the construction. We now fix \(1\leq j\leq n\) and assume that \(v_{i_{p+1},j}\) is in the form (1). For \(1\leq t\leq\ell+1\), define \[\phi(t)=\left\{\begin{array}{ll}j_{t}+n&\mbox{if $2\leq t\leq 2m_{p}+j+1$ or $\ell-2m_{p}-j+1\leq t\leq\ell$},\\ j_{t}&\mbox{otherwise}\end{array}\right.\] and \[\psi(t)=\left\{\begin{array}{ll}j_{t}+n&\mbox{if $2\leq t\leq 2m_{p}+j+n+1$ or $\ell-2m_{p}-j-n+1\leq t\leq\ell$},\\ j_{t}&\mbox{otherwise}.\end{array}\right.\] Then define \[w_{p+1,j}=w_{p,\phi(1)}1^{s_{1}}\cdots 1^{s_{\ell}}w_{p,\phi(\ell+1)}\] \[w_{p+1,j+n}=w_{p,\psi(1)}1^{s_{1}}\cdots 1^{s_{\ell}}w_{p,\psi(\ell+1)}.\] This finishes the definition of \(\{w_{p,q}\}_{p\geq 0,1\leq q\leq 2n}\).
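To illustrate the index-shifting in this step, here is a sketch (our own paraphrase of the definition; 1-based indices as in the text, and the parameter values in the demo are ours) of \(\phi\) and of the assembly of \(w_{p+1,j}\) from a decomposition of \(v_{i_{p+1},j}\).

```python
# phi marks the blocks near both ends of the decomposition (1) by shifting
# their indices into the second half {n+1,...,2n}; psi is analogous.
def phi(t, jt, j, ell, m_p, n):
    if 2 <= t <= 2 * m_p + j + 1 or ell - 2 * m_p - j + 1 <= t <= ell:
        return jt + n
    return jt

def build_w(w_words, indices, spacers, j, m_p, n):
    """w_words: the 2n words w_{p,1..2n} (0-indexed list); indices/spacers:
    the decomposition v_{i_{p+1},j} = v_{i_p,j_1} 1^{s_1} ... v_{i_p,j_{ell+1}}."""
    ell = len(indices) - 1
    out = [w_words[phi(1, indices[0], j, ell, m_p, n) - 1]]
    for t in range(2, ell + 2):
        out.append("1" * spacers[t - 2])
        out.append(w_words[phi(t, indices[t - 1], j, ell, m_p, n) - 1])
    return "".join(out)

n, m_p, j, ell = 2, 1, 1, 21
print([phi(t, 1, j, ell, m_p, n) for t in range(1, ell + 2)])
# only positions t = 2..4 and 19..21 are shifted by n
```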
We verify that the construction defined is proper, i.e., for all \(p\geq 1\) and \(1\leq q\leq 2n\), all words in \(\{w_{p,1},\ldots,w_{p,2n}\}\) are used in the building of \(w_{p+1,q}\). We first assume \(1\leq q\leq n\). Since \(\ell>12m_{p}+4n\), there exists \(1\leq t_{0}\leq\ell+1\) such that for all \(t_{0}\leq t\leq t_{0}+2m_{p}-1\), \(\phi(t)=j_{t}\). By property (*), \(\{\phi(t_{0}),\ldots,\phi(t_{0}+2m_{p}-1)\}=\{j_{t_{0}},\ldots,j_{t_{0}+2m_{p}-1}\}=\{1,\ldots,n\}\). Thus all words in \(\{w_{p,1},\ldots,w_{p,n}\}\) are used in the building of \(w_{p+1,q}\). On the other hand, for \(2\leq t\leq 2m_{p}+1\), \(\phi(t)=j_{t}+n\). By property (*) again, \(\{\phi(2),\ldots,\phi(2m_{p}+1)\}=\{j_{2}+n,\ldots,j_{2m_{p}+1}+n\}=\{n+1,\ldots,2n\}\). Hence all words in \(\{w_{p,n+1},\ldots,w_{p,2n}\}\) are also used in the building of \(w_{p+1,q}\). The case \(n+1\leq q\leq 2n\) is similar.

Next we claim that

* for all \(p\geq 2\) and \(1\leq q\leq 2n\), \(w_{p,q}\) is not of the form \[\alpha 1^{r_{1}}w_{p,q_{1}}1^{r_{2}}w_{p,q_{2}}\cdots w_{p,q_{d-1}}1^{r_{d}}\beta \tag{3}\] where \(d\geq 1\), \(\alpha\) is a nonempty suffix of some \(w_{p,q_{d}}\) and \(\beta\) is a nonempty prefix of some \(w_{p,q_{d+1}}\), and
* for all \(p\geq 2\) and \(1\leq q,q^{\prime}\leq 2n\), if \(q\neq q^{\prime}\) then \(w_{p,q}\) is not a subword of \(w_{p,q^{\prime}}\).

We prove this claim by induction on \(p\geq 2\). First suppose \(p=2\). We observe that \(w_{2,q}\) can be written as \(u_{1}yu_{2}\), where \(0^{k_{0}}\) is not a subword of \(y\), \(y\) begins and ends with \(0\), every word of \(\{v_{i_{1},1},\ldots,v_{i_{1},n}\}\) occurs at least \(3\) different times in \(y\), and both \(u_{1}\) and \(u_{2}\) are of the form \[\alpha 01^{s_{1}}010^{h_{1}}101^{s_{2}}010^{h_{2}}10\cdots 010^{h_{2m_{1}+q}}101^{s_{2m_{1}+q+1}}0\beta \tag{4}\] where \(\alpha,\beta\) have lengths at least \(3k_{0}\), \(0^{k_{0}}\) is not a subword of either \(\alpha\) or \(\beta\), and \(h_{t}>4k_{0}\) for all \(1\leq t\leq 2m_{1}+q\). The statement about \(y\) is based on the observation that, by (1), \(y\) can be taken to contain subwords of the form (2) for three different values \(2m_{1}+2n+1<t_{1}<t_{2}<t_{3}<\ell-4m_{1}-2n\) where \(t_{3}-t_{2},t_{2}-t_{1}>2m_{1}\); by property (*), for each value \(t_{1},t_{2},t_{3}\), the subword of the form (2) contains a distinct occurrence of each word in \(\{v_{i_{1},1},\ldots,v_{i_{1},n}\}\). Note that for \(q^{\prime}\neq q\), \(w_{2,q^{\prime}}\) does not have a subword of the form (4), hence \(w_{2,q}\) is not a subword of \(w_{2,q^{\prime}}\). Now suppose \(w_{2,q}\) can be written in the form of (3); then by the above observation, there are \(1\leq q_{1},q_{2}\leq 2n\), a nonempty suffix \(y_{1}\) of \(w_{2,q_{1}}\), and a nonempty prefix \(y_{2}\) of \(w_{2,q_{2}}\), such that \(w_{2,q}=y_{1}1^{s}y_{2}\) for some nonnegative integer \(s\). First suppose \(q_{1}=q_{2}=q\). Then \(w_{2,q}\) must have a subword of the form \(z\triangleq 0^{2k_{0}}101^{s_{1}}v_{i_{1},j_{1}}1^{s}v_{i_{1},j_{2}}1^{s_{2}}010^{2k_{0}}\), and in fact \(y\) must be a subword of \(z\). However, note that \(y\) has at least \(3\) different occurrences of each word in \(\{v_{i_{1},1},\ldots,v_{i_{1},n}\}\), while \(z\) does not have this property, a contradiction. Next suppose \(q_{1}\neq q\). Then \(u_{1}\) is not a subword of \(w_{2,q_{1}}\), so \(y_{1}\) is a prefix of \(u_{1}\).
It follows that \(yu_{2}\) is a suffix of \(y_{2}\), \(q_{2}=q\) and \(y_{2}\) must be \(w_{2,q}\) itself, contradicting the assumption that \(y_{1}\) is nonempty. The case \(q_{2}\neq q\) is similar. This completes the proof of the claim for \(p=2\). Suppose the claim holds for \(p\geq 2\). We verify it for \(p+1\). First we observe that for any \(1\leq q\leq 2n\), \(w_{p+1,q}\) can be written as \(u_{1}yu_{2}\), where \(u_{1}\) and \(u_{2}\) are of the form \[w_{p,q_{1}}1^{s_{1}}w_{p,q_{2}}1^{s_{2}}\cdots w_{p,q_{2m_{p}+q+1}}1^{s_{2m_{p}+q+1}}w_{p,q_{2m_{p}+q+2}} \tag{5}\] where \(1\leq q_{1},q_{2m_{p}+q+2}\leq n\), \(n+1\leq q_{t}\leq 2n\) for all \(2\leq t\leq 2m_{p}+q+1\), and by inductive hypothesis, if \(w_{p,\kappa}\) is a subword of \(y\), then \(1\leq\kappa\leq n\). By the inductive hypothesis, if \(q\neq q^{\prime}\) then \(w_{p+1,q^{\prime}}\) does not contain a subword of the form (5), hence \(w_{p+1,q}\) is not a subword of \(w_{p+1,q^{\prime}}\). Next assume \(w_{p+1,q}\) can be written in the form (3) with \(p+1\) replacing \(p\). Then by the above observation, there are \(1\leq q_{1},q_{2}\leq 2n\), a nonempty suffix \(y_{1}\) of \(w_{p+1,q_{1}}\), and a nonempty prefix \(y_{2}\) of \(w_{p+1,q_{2}}\) such that \(w_{p+1,q}=y_{1}1^{s}y_{2}\) for some nonnegative integer \(s\). First suppose \(q_{1}=q_{2}=q\). Then \(w_{p+1,q}\) has a subword of the form \(z\triangleq w_{p,j_{1}}1^{s_{1}}w_{p,j_{2}}1^{s}w_{p,j_{3}}1^{s_{2}}w_{p,j_{4}}\), where \(n+1\leq j_{1},j_{4}\leq 2n\) and \(1\leq j_{2},j_{3}\leq n\). In fact, \(y\) must be a subword of \(z\). However, \(y\) contains at least \(3\) different occurrences of words in \(\{w_{p,1},\ldots,w_{p,n}\}\), a contradiction. Next suppose \(q_{1}\neq q\). Then \(u_{1}\) is not a subword of \(w_{p+1,q_{1}}\), so \(y_{1}\) is a prefix of \(u_{1}\). It follows that \(yu_{2}\) is a suffix of \(y_{2}\), \(q_{2}=q\), and \(y_{2}\) must be \(w_{p+1,q}\) itself, contradicting the assumption that \(y_{1}\) is nonempty. The case \(q_{2}\neq q\) is similar. This completes the proof of the claim.

In view of the claim, if we define \(\{w^{\prime}_{p,q}\}_{p\geq 0,1\leq q\leq 2n}\) by letting \(w^{\prime}_{0,q}=0\) and \(w^{\prime}_{p,q}=w_{p+1,q}\) for \(p\geq 1\) and \(1\leq q\leq 2n\), then we obtain a good proper rank-\(2n\) construction. Let \(W=\lim_{p}w^{\prime}_{p,1}\). Note that \(W=\lim_{p}w_{p,1}\). We define a factor map \(\varphi:X_{W}\to X_{V}\). For \(x\in X_{W}\) and \(k\in\mathbb{Z}\), if there is \(1\leq j\leq n\) such that the position \(k\) is part of an expected occurrence of \(w^{\prime}_{1,j}\) or \(w^{\prime}_{1,j+n}\) which starts at position \(k^{\prime}\leq k\), then let \(\varphi(x)(k)=v_{i_{2},j}(k-k^{\prime})\); otherwise let \(\varphi(x)(k)=1\). By the unique readability, and since for all \(1\leq j\leq n\), \(|w^{\prime}_{1,j}|=|w^{\prime}_{1,j+n}|=|w_{2,j}|=|w_{2,j+n}|=|v_{i_{2},j}|\), \(\varphi\) is well defined. By Lemma 6.3, \(\varphi\) is continuous. It is clear that \(\varphi\) is a factor map. Finally, if \(X_{V}\) is minimal, then the construction associated with \(\{v_{i,j}\}_{i\geq 0,1\leq j\leq n}\) must have bounded spacer parameter, because otherwise \(1^{\mathbb{Z}}\in X_{V}\) and \(\{1^{\mathbb{Z}}\}\) is invariant. Now it follows from our construction that the defined proper rank-\(2n\) construction of \(W\) also has bounded spacer parameter, and by the implication (2)\(\Rightarrow\)(1) of Corollary 5.4 (which does not require the assumption on the symbolic rank of \(X_{W}\)), \(X_{W}\) is minimal.
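On a finite window, the factor map \(\varphi\) at the end of the proof acts as a length-preserving substitution along expected occurrences. The following sketch (ours; `target[j]` stands for \(v_{i_{2},j+1}\) in 0-based indexing, and the occurrence list stands for positions read off from the unique reading) makes this concrete.

```python
# Sketch of the substitution behind the factor map: each expected
# occurrence (k, j) of a level-1 block is overwritten by target[j]
# (which has the same length), and uncovered positions become '1'.
def substitute(x, occurrences, target):
    out = ["1"] * len(x)
    for k, j in occurrences:
        for t, c in enumerate(target[j]):
            if 0 <= k + t < len(x):
                out[k + t] = c
    return "".join(out)

print(substitute("0010011", [(0, 0), (4, 1)], ["010", "001"]))  # '0101001'
```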
The following is a corollary to the proof of Proposition 6.4.

**Proposition 6.5**.: _Let \(n\geq 1\) and \(X\) be a nondegenerate subshift of symbolic rank \(\leq n\). Suppose \(X=X_{V}\) and \(V\) has a proper rank-\(n\) construction \(\{v_{i,j}\}_{i\geq 0,1\leq j\leq n}\) which has unique readability. Then for any \(i\geq 0\), \(1\leq j\leq n\), and \(k\in\mathbb{Z}\), the set_ \[\{x\in X\ :\ \text{there is an expected occurrence of $v_{i,j}$ in $x$ at position $k$}\}\] _is clopen in \(X\)._

Proof.: Let \(W\) be the infinite word with a good rank-\(2n\) construction with associated rank-\(2n\) generating sequence \(\{w^{\prime}_{p,q}\}_{p\geq 0,1\leq q\leq 2n}\) and let \(\varphi:X_{W}\to X_{V}\) be the factor map, both given in the proof of Proposition 6.4. Given \(i\geq 0\), \(1\leq j\leq n\), and \(k\in\mathbb{Z}\), let \(E_{i,j,k}\) denote the set in question. It suffices to show that \(E_{i,j,k}\) is clopen whenever \(i=i_{p}\) for some \(p>1\). Suppose \(i=i_{p}\) for some \(p>1\). Note that a reading of \(y\in X_{W}\) determines a reading of \(\varphi(y)\). Thus \(\varphi^{-1}(E_{i,j,k})\) consists exactly of those \(y\in X_{W}\) such that there is an expected occurrence of \(w^{\prime}_{p,j}\) or \(w^{\prime}_{p,j+n}\) in \(y\) at position \(k\). By our construction, \(\varphi^{-1}(E_{i,j,k})\) is easily seen to be clopen. Thus \(E_{i,j,k}\) is clopen.

We are now ready to bound the topological rank of a minimal subshift that admits a good construction.

**Proposition 6.6**.: _Let \(n\geq 1\). Let \(X\) be a nondegenerate minimal subshift of symbolic rank \(\leq n\). Suppose \(X=X_{V}\) and \(V\) has a good rank-\(n\) construction. Then \(X\) has finite topological rank._

Proof.: Fix a good rank-\(n\) construction with associated rank-\(n\) generating sequence \(\{v_{i,j}\}_{i\geq 0,1\leq j\leq n}\). Let \(\ell\in\mathbb{N}\) be such that \(1^{\ell}\) is not a subword of \(V\). Note that for each \(i\geq 0\), there are at least \(4\) distinct expected occurrences of \(v_{i,1}\) in \(v_{i+3,1}\). We let \(k_{i}\) be the starting position of the second expected occurrence of \(v_{i,1}\) in \(v_{i+3,1}\). Let \(x_{0}\) be the unique element of \(2^{\mathbb{Z}}\) such that for all \(i\geq 0\), there exists an occurrence of \(v_{3i,1}\) which starts at the position \(-\sum_{0\leq i^{\prime}\leq i-1}k_{3i^{\prime}}\). Then every finite subword of \(x_{0}\) is a subword of \(v_{3i,1}\) for some \(i\geq 0\), and thus \(x_{0}\in X_{V}\). Now for every \(m\geq 2\), let \(A_{m}\) be the set of all \(x\in X_{V}\) such that there is an expected occurrence of \(v_{3m,1}\) in \(x\) starting at the position \(-\sum_{0\leq i\leq m-1}k_{3i}\), which is the second expected occurrence of \(v_{3m,1}\) in an expected occurrence of \(v_{3m+3,j}\) in \(x\) for some \(1\leq j\leq n\). By Lemma 6.3 each \(A_{m}\) is clopen in \(X_{V}\). By definition \(x_{0}\in A_{m}\). Now consider the canonical Kakutani-Rohlin partition \(\mathcal{P}\) with base \(A_{m}\) defined in the remark after Lemma 2.2. The number of towers in \(\mathcal{P}\) corresponds to the number of different \(h>0\) such that \(h\) is the smallest positive integer with \(\sigma^{h}(x)\in A_{m}\) for some \(x\in A_{m}\). Suppose \(x\in A_{m}\) and let \(1\leq j\leq n\) be the integer such that an expected occurrence of \(v_{3m+3,j}\) in \(x\) contains the position \(0\). Suppose \(h\) is the smallest positive integer with \(\sigma^{h}(x)\in A_{m}\).
Then there is an expected occurrence of \(v_{3m+3,j^{\prime}}\) in \(x\) for some \(1\leq j^{\prime}\leq n\) such that the second expected occurrence of \(v_{3m,1}\) in this occurrence of \(v_{3m+3,j^{\prime}}\) starts exactly at \(h-\sum_{0\leq i\leq m-1}k_{3i}\). By the minimality of \(h\), we get that the expected occurrence of \(v_{3m+3,j}\) and this expected occurrence of \(v_{3m+3,j^{\prime}}\) are only separated by some \(1^{s}\). Conversely, the expected occurrence of some \(v_{3m+3,j^{\prime}}\) immediately to the right of the expected occurrence of \(v_{3m+3,j}\) determines the smallest \(h\) such that \(\sigma^{h}(x)\in A_{m}\). Therefore, for \(1\leq j,j^{\prime}\leq n\) and \(0\leq s\leq\ell\), if we let \(B_{j,s,j^{\prime}}\) be the set of all \(x\in X_{V}\) such that there is an expected occurrence of \(v_{3m,1}\) in \(x\) starting at the position \(-\sum_{0\leq i\leq m-1}k_{3i}\), which is the second expected occurrence of \(v_{3m,1}\) in an expected occurrence of \(v_{3m+3,j}\) in \(x\), and this expected occurrence of \(v_{3m+3,j}\) is followed by \(1^{s}\) and an expected occurrence of \(v_{3m+3,j^{\prime}}\) in \(x\), we know that \(\{B_{j,s,j^{\prime}}:1\leq j,j^{\prime}\leq n;0\leq s\leq\ell\}\) is a clopen partition of \(A_{m}\) and this partition refines \(\mathcal{P}\upharpoonright A_{m}\). In summary, we obtain a new Kakutani-Rohlin partition \(\mathcal{P}^{\prime}\) whose base is still \(A_{m}\), and if \(B_{j,s,j^{\prime}}\neq\varnothing\), then \(B_{j,s,j^{\prime}}\in\mathcal{P}^{\prime}\). The partition \(\mathcal{P}^{\prime}\) has at most \(n^{2}\ell\) towers. Finally note that the diameter of \(A_{m}\) is at most \(2^{-|v_{3m-3,1}|}\) since for any \(x\in A_{m}\), \(x\upharpoonright[-|v_{3m-3,1}|,|v_{3m-3,1}|]=x_{0}\upharpoonright[-|v_{3m-3,1}|,|v_{3m-3,1}|]\) and is thus completely fixed. Similarly, every clopen set in \(\mathcal{P}^{\prime}\) has diameter at most \(2^{-|v_{3m-3,1}|}\). By Corollary 3.7 (2), \(X_{V}\) has topological rank at most \(n^{2}\ell\).

**Theorem 6.7**.: _Let \(X\) be a nondegenerate minimal subshift of finite symbolic rank. Then \(X\) has finite topological rank._

Proof.: By Corollary 5.4, \(X=X_{V}\) where \(V\) has a proper rank-\(n\) construction for some \(n\geq 1\). By Proposition 6.4 there is a word \(W\) which has a good rank-\(2n\) construction such that \(X_{V}\) is a factor of \(X_{W}\) and \(X_{W}\) is minimal. By Proposition 6.6, \(X_{W}\) has finite topological rank. Thus \(X_{V}\) has finite topological rank by the main theorem (Theorem 1.1) of [28].

In [28] the authors showed that if a minimal Cantor system \((Y,S)\) is a factor of a minimal Cantor system \((X,T)\) of finite topological rank, then \(\operatorname{rank}_{\operatorname{top}}(Y,S)\leq 3\operatorname{rank}_{\operatorname{top}}(X,T)\). In [18] Corollary 4.8 this is improved to \(\operatorname{rank}_{\operatorname{top}}(Y,S)\leq\operatorname{rank}_{\operatorname{top}}(X,T)\). Combining these with our results, we can state the following quantitative result.

**Corollary 6.8**.: _Let \(X_{V}\) be a nondegenerate minimal subshift of finite symbolic rank.
Then_ \[\operatorname{rank}_{\operatorname{top}}(X_{V},\sigma)\leq 4(M+1)(\operatorname{rank}_{\operatorname{symb}}(X_{V}))^{2}\] _where \(M\) is a bound for the spacer parameter of any proper construction of \(V\)._

### From finite topological rank to finite symbolic rank

It was proved in [13] that every minimal Cantor system of finite topological rank is either an odometer or a subshift on a finite alphabet. We will show that in case it is a subshift it is conjugate to a subshift of finite symbolic rank. We use the following notation from [28] (with a slight modification) in this subsection. Let \(B=(V,E,\preceq)\) be an ordered Bratteli diagram. For each \(i\geq 1\), let \(V_{i}^{*}\) denote the set of all words on the alphabet \(V_{i}\), and define a map \(\eta_{i+1}:V_{i+1}\to V_{i}^{*}\) as follows. For \(v\in V_{i+1}\), enumerate all edges \(e\in E_{i+1}\) with \(\mathsf{r}(e)=v\) in the \(\preceq\)-increasing order as \(e_{1},\dots,e_{k}\), and define \[\eta_{i+1}(v)=\mathsf{s}(e_{1})\cdots\mathsf{s}(e_{k}).\] We also define \(\eta_{1}:V_{1}\to E_{1}^{*}\), where \(E_{1}^{*}\) is the set of all words on the alphabet \(E_{1}\). For \(v\in V_{1}\), enumerate all \(e\in E_{1}\) with \(\mathsf{r}(e)=v\) in the \(\preceq\)-increasing order as \(e_{1},\dots,e_{k}\), and define \[\eta_{1}(v)=e_{1}\cdots e_{k}.\]

**Theorem 6.9**.: _Every minimal Cantor system \((X,T)\) of finite topological rank is an odometer or is conjugate to a minimal subshift \(X_{V}\) of finite symbolic rank. Moreover, if \((X,T)\) is not an odometer, then \(\operatorname{rank}_{\operatorname{symb}}(X_{V})\leq\operatorname{rank}_{\operatorname{top}}(X,T)\)._

Proof.: We just need to show that for every simple ordered Bratteli diagram \(B=(W,E,\preceq)\) where \(|W_{i}|\leq n\) for all \(i\geq 1\), if the Bratteli-Vershik system \((X_{B},\lambda_{B})\) generated by \(B\) is not an odometer, then it is conjugate to \(X_{V}\) for a word \(V\) which has a rank-\(n\) construction. By telescoping if necessary, we assume without loss of generality that the following properties hold for \(B\):

1. for each \(i\geq 0\), \(w\in W_{i}\) and \(w^{\prime}\in W_{i+1}\), there is an edge \(e\in E_{i+1}\) with \(\mathsf{s}(e)=w\) and \(\mathsf{r}(e)=w^{\prime}\);
2. for each \(i\geq 1\), \(|W_{i}|\geq 2\);
3. for each \(i\geq 1\), there are vertices \(w^{i}_{\min}\) and \(w^{i}_{\max}\) in \(W_{i}\) such that for every \(w\in W_{i+1}\), \(\eta_{i+1}(w)\) starts with \(w^{i}_{\min}\) and ends with \(w^{i}_{\max}\);
4. for each \(w\in W_{1}\), \(|\eta_{1}(w)|\gg n\);
5. for any \(x,y\in X_{B}\), if \(x\neq y\), then there exists \(w\in W_{1}\) such that \(\operatorname{Ret}_{A_{w}}(x)\neq\operatorname{Ret}_{A_{w}}(y)\), where \(A_{w}\) denotes the union of the Kakutani-Rohlin tower determined by \(w\).

For (3), we consider the unique \(x_{\min}\) and \(x_{\max}\). Fix an \(i\geq 1\). Let \(w^{i}_{\min}\in W_{i}\) be the vertex in \(W_{i}\) which \(x_{\min}\) passes through and \(w^{i}_{\max}\in W_{i}\) be the vertex in \(W_{i}\) which \(x_{\max}\) passes through. Then by the uniqueness of \(x_{\min}\), there is an \(i_{0}>i\) such that for all \(i^{\prime}\geq i_{0}\) and \(w\in W_{i^{\prime}}\) the minimal path between \(v_{0}\) and \(w\) passes through \(w^{i}_{\min}\). Similarly, there is an \(i_{1}\) such that for all \(i^{\prime}\geq i_{1}\) and \(w\in W_{i^{\prime}}\), the maximal path between \(v_{0}\) and \(w\) passes through \(w^{i}_{\max}\). Now we get (3) by telescoping.
For (5) we use the main theorem of [13], which guarantees that \((X,T)\) is a subshift on a finite alphabet. In particular there is a finite partition \(\mathcal{P}\) of \(X\) into clopen sets such that the smallest Boolean algebra containing elements of \(\mathcal{P}\) and closed under \(T\) and \(T^{-1}\) contains all clopen subsets of \(X\). Now we also have that \((X,T)\) is conjugate to \((X_{B},\lambda_{B})\). Thus there is also a finite partition \(\mathcal{Q}\) of \(X_{B}\) into clopen sets such that the smallest Boolean algebra containing elements of \(\mathcal{Q}\) and closed under \(\lambda_{B}\) and \(\lambda_{B}^{-1}\) contains all clopen subsets of \(X_{B}\). Hence there is \(i\geq 1\) such that every element of \(\mathcal{Q}\) is the union of the basic open sets given by the paths from \(v_{0}\) to some elements of \(W_{i}\). Let \(F\) be the set of all paths from \(v_{0}\) to an element of \(W_{i}\). For each \(p\in F\) let \(N_{p}\) denote the basic open set of \(X_{B}\) given by \(p\). Then for all \(x,y\in X_{B}\) with \(x\neq y\), there is \(p\in F\) such that \(\operatorname{Ret}_{N_{p}}(x)\neq\operatorname{Ret}_{N_{p}}(y)\). Now for any \(w\in W_{i}\), let \(A_{w}\) denote the union of the Kakutani-Rohlin tower determined by \(w\), then \(A_{w}\) is the clopen set given by all paths from \(v_{0}\) to \(w\). We claim that for all \(x,y\in X_{B}\) with \(x\neq y\), there is \(w\in W_{i}\) such that \(\operatorname{Ret}_{A_{w}}(x)\neq\operatorname{Ret}_{A_{w}}(y)\). For this, note that for any \(w\in W_{i}\), if we enumerate all paths from \(v_{0}\) to \(w\) in the \(\preceq^{\prime}\)-increasing order as \(p_{1},\ldots,p_{k}\), then for any \(x\in X_{B}\), \(1\leq j\leq k\) and \(m\in\mathbb{Z}\), \(m\in\operatorname{Ret}_{N_{p_{j}}}(x)\) iff \(m-ak-j+1,\ldots,m\in\operatorname{Ret}_{A_{w}}(x)\) and \(m-ak-j\notin\operatorname{Ret}_{A_{w}}(x)\) for a natural number \(a\). Thus if \(x\neq y\in X_{B}\) and \(p\) is a path from \(v_{0}\) to \(w\) such that \(\operatorname{Ret}_{N_{p}}(x)\neq\operatorname{Ret}_{N_{p}}(y)\), then \(\operatorname{Ret}_{A_{w}}(x)\neq\operatorname{Ret}_{A_{w}}(y)\). Now (5) follows by telescoping.

For each \(i\geq 1\), enumerate the elements of \(W_{i}\) as \(w_{i,1},w_{i,2},\cdots,w_{i,n_{i}}\), where \(2\leq n_{i}\leq n\), so that \(w_{i,1}=w_{\min}^{i}\). Define \[v_{1,j}=0(01)^{j}0^{\left\lvert\eta_{1}(w_{1,j})\right\rvert-2n-4j-2}(10)^{j+n}0\] for \(1\leq j\leq n_{1}\). For \(i\geq 2\), assume \(v_{i-1,j}\) have been defined for all \(1\leq j\leq n_{i-1}\). Then we define \[v_{i,j}=v_{i-1,j_{1}}v_{i-1,j_{2}}\cdots v_{i-1,j_{k}}\] if \[\eta_{i}(w_{i,j})=w_{i-1,j_{1}}w_{i-1,j_{2}}\cdots w_{i-1,j_{k}}.\] It is clear that this defines a rank-\(n\) construction. Let \(V=\lim_{i}v_{i,1}\). We note that \(V\) has a proper rank-\(m\) construction for some \(m\leq n\). In fact, let \(m\geq 2\) be the smallest such that \(n_{i}=m\) for infinitely many \(i\geq 2\). Let \(\{i_{k}\}_{k\geq 0}\) enumerate this infinite set of indices. Then by telescoping with respect to \(\{i_{k}\}_{k\geq 0}\), we obtain a proper rank-\(m\) construction for \(V\). We note in addition that the rank-\(m\) generating sequence associated to this construction is a subsequence of the rank-\(n\) generating sequence \(\{v_{i,j}\}_{i\geq 0,1\leq j\leq n_{i}}\), and therefore has bounded spacer parameter. Thus by Corollary 5.4, \(X_{V}\) is minimal.
We claim that for any \(1\leq j\leq n_{1}\), \(v_{1,j}\) is not of the form \[\alpha 1^{s_{1}}v_{1,j_{1}}1^{s_{2}}\cdots v_{1,j_{k-1}}1^{s_{k}}\beta\] where \(k>0\), \(\alpha\) is a nonempty suffix of some \(v_{1,j_{k}}\), and \(\beta\) is a nonempty prefix of some \(v_{1,j_{k+1}}\). This follows easily from the observation that \(v_{1,j}\) has a prefix of the form \(00(10)^{j}00\) and a suffix of the form \(00(01)^{j+n}00\), and for \(1\leq j^{\prime}\leq n_{1}\) where \(j^{\prime}\neq j\), \(v_{1,j^{\prime}}\) does not contain either of these words as a subword.

Let \(Y=(2^{W_{1}})^{\mathbb{Z}}\) and view it as a shift over the alphabet \(2^{W_{1}}\). Define \(\theta:X_{B}\to Y\) by \[\theta(x)(k)(w)=\operatorname{Ret}_{A_{w}}(x)(k).\] Then \(\theta\) is clearly continuous. It is easy to check that \(\theta\circ\lambda_{B}=\sigma\circ\theta\). By (5), \(\theta\) is injective. Thus \(\theta\) is a conjugacy map between \((X_{B},\lambda_{B})\) and \(\theta(X_{B})\), which is a subshift of \(Y\). Finally, we verify that \(X_{V}\) is conjugate to \(\theta(X_{B})\). For this we define \(\varphi:X_{V}\to(2^{W_{1}})^{\mathbb{Z}}\) by letting \(\varphi(z)(k)(w_{1,j})=1\) iff there is \(k^{\prime}\leq k\) with \(k^{\prime}+\left\lvert v_{1,j}\right\rvert-1\geq k\) such that \(v_{1,j}\) occurs in \(z\) starting at position \(k^{\prime}\). \(\varphi\) is well defined because of the above claim. It is clear that \(\varphi\) is continuous and injective, and \(\varphi\circ\sigma=\sigma\circ\varphi\). Thus \(\varphi\) is a conjugacy map between \(X_{V}\) and \(\varphi(X_{V})\). To complete our proof, it suffices to show \(\theta(X_{B})=\varphi(X_{V})\). Consider a \(y\in X_{V}\) such that \(y\!\upharpoonright\![0,\infty)=V\). Then by our definitions of \(\theta\) and \(\varphi\), and particularly because \(|v_{1,j}|=|\eta_{1}(w_{1,j})|\) for all \(1\leq j\leq n_{1}\), we have \(\theta(x_{\min})\!\upharpoonright\![0,\infty)=\varphi(y)\!\upharpoonright\![0,\infty)\). By the shift-invariance and the compactness of \(\theta(X_{B})\) and \(\varphi(X_{V})\), we get \(\theta(X_{B})\cap\varphi(X_{V})\neq\varnothing\). By the minimality of \(\theta(X_{B})\) and \(\varphi(X_{V})\), we have \(\theta(X_{B})=\varphi(X_{V})\) as required.

The consideration of the shift \(Y\) in the above proof is motivated by the work in [30] (the construction before Theorem 3.4 in [30]).

### Some examples

In this subsection we give some examples to demonstrate that the results in the preceding subsections are optimal. We first show that a non-minimal rank-\(1\) subshift need not have finite topological rank.

**Proposition 6.10**.: _There exists a rank-\(1\) word \(V\) such that \(X_{V}\) is not minimal and \(X_{V}\) is not of finite topological rank._

Proof.: For any \(n\geq 0\), let \(r_{n}\geq 2n+5\) and \(s_{n,1},\ldots,s_{n,r_{n}-1}\) be nonnegative integers satisfying the following:

1. \(s_{n,1}=3n+1\), and \(s_{n,r_{n}-1}=3n+2\);
2. for all \(1<i<r_{n}-1\), \(s_{n,i}=3m\) for some \(0\leq m\leq n+1\);
3. for any \(1\leq m\leq n+1\), there exists \(1<i<r_{n}-1\) such that \(s_{n,i}=s_{n,i+1}=3m\).

Then as usual, define \(v_{0}=0\) and \(v_{n+1}=v_{n}1^{s_{n,1}}v_{n}1^{s_{n,2}}\cdots 1^{s_{n,r_{n}-1}}v_{n}\) inductively, and let \(V=\lim_{n}v_{n}\). We note that for any \(n\geq 1\), \(0\leq m\leq n+1\) and \(u\) a nonempty prefix of \(v_{n}\), \(01^{3m}u\) is not a suffix of \(v_{n}\). Toward a contradiction, assume \(X_{V}\) has topological rank \(K\geq 1\). Fix a positive integer \(N\geq 1\).
Then by Theorem 3.3 there is a Kakutani-Rohlin partition \(\mathcal{P}\) of \(X_{V}\) with the following properties:

1. \(\mathcal{P}\) has \(K\) many towers, with bases \(B_{1},\ldots,B_{K}\);
2. \(1^{\mathbb{Z}}\in B(\mathcal{P})=\bigcup_{1\leq k\leq K}B_{k}\) and \(\operatorname{diam}(B(\mathcal{P}))<2^{-N-2}\);
3. \(\operatorname{diam}(A)<2^{-N-2}\) for all \(A\in\mathcal{P}\).

Since every \(A\in\mathcal{P}\) is clopen, there exists \(k_{0}>N+4\) such that for every \(A\in\mathcal{P}\), there exists \(U_{A}\subseteq\{0,1\}^{2k_{0}+1}\) with \[A=\{x\in X_{V}\,:\,x\!\upharpoonright\![-k_{0},k_{0}]\in U_{A}\}.\] Let \(n\gg k_{0}+3N\). Fix any \(1\leq m\leq n+1\). Let \(x\in X_{V}\) be such that \(v_{n}1^{3m}v_{n}1^{3m}v_{n}\) occurs in \(x\) at position \(-|v_{n}|\), and each of the three demonstrated occurrences of \(v_{n}\) is expected. Then from the definition of \(k_{0}\), and because \(|v_{n}|\geq 2^{n}\gg n\gg k_{0}\), we have that \(\sigma^{|v_{n}|+3m}(x)\) and \(x\) belong to the same set in the partition \(\mathcal{P}\). Thus there is \(0<j\leq|v_{n}|+3m\) such that \(\sigma^{j}(x)\in B(\mathcal{P})\). Let \(t=3m-j\). Then \(-|v_{n}|\leq t<3m\), \(y\triangleq\sigma^{3m-t}(x)\in B_{k}\) for some \(1\leq k\leq K\), \(v_{n}1^{3m}v_{n}1^{3m}v_{n}\) occurs in \(y\) at position \(t-|v_{n}|-3m\), and every \(z\in X_{V}\) with an occurrence of \(v_{n}1^{3m}v_{n}1^{3m}v_{n}\) at position \(t-|v_{n}|-3m\) is in \(B_{k}\). Let \(t_{m}\) be the least such \(t\) and \(k_{m}\) be the corresponding \(k\). We note the following two properties of the element \(y\). First, there is an occurrence of \(v_{n}1^{3m}v_{n}\) in \(y\) at position \(t_{m}\). Second, because of the minimality of \(t_{m}\), we have that for any \(0\leq j\leq t_{m}+|v_{n}|\), \(\sigma^{j}(y)\in\sigma^{j}(B_{k_{m}})\) and \(\sigma^{j}(B_{k_{m}})\cap B(\mathcal{P})=\varnothing\) when \(j\neq 0\), and so \(\sigma^{j}(B_{k_{m}})\) is an element of \(\mathcal{P}\) (it is one of the sets in the \(k_{m}\)-th tower of \(\mathcal{P}\)).

We claim that for any \(1\leq m_{1},m_{2}\leq\lfloor N/3\rfloor\) with \(m_{1}\neq m_{2}\), we must have \(k_{m_{1}}\neq k_{m_{2}}\). Toward a contradiction, assume \(k\triangleq k_{m_{1}}=k_{m_{2}}\). Without loss of generality assume \(t_{m_{1}}\leq t_{m_{2}}\). Consider first the case \(t_{m_{1}}<t_{m_{2}}\). Then we have a subclaim that \(t_{m_{2}}\geq t_{m_{1}}+3m_{1}\). To see this, let \(y_{1}\in B_{k}\) be an element with an occurrence of \(v_{n}1^{3m_{1}}v_{n}\) at position \(t_{m_{1}}\) as above, and similarly \(y_{2}\in B_{k}\) be an element with an occurrence of \(v_{n}1^{3m_{2}}v_{n}\) at \(t_{m_{2}}\). Since \(\operatorname{diam}(B_{k})<2^{-N-2}\), \(y_{1}\!\upharpoonright\![-N,N]=y_{2}\!\upharpoonright\![-N,N]\). Also, since for all \(0\leq j\leq t_{m_{2}}+|v_{n}|\), \(\sigma^{j}(B_{k})\) is an element of \(\mathcal{P}\), which has diameter \(<2^{-N-2}\), we have that \(y_{1}\!\upharpoonright\![-N+j,j+N]=y_{2}\!\upharpoonright\![-N+j,j+N]\) for all \(0\leq j\leq t_{m_{2}}+|v_{n}|\). In particular \(y_{2}(t_{m_{2}}+|v_{n}|-1)=0=y_{1}(t_{m_{2}}+|v_{n}|-1)\). Since \(t_{m_{2}}+|v_{n}|-1\geq t_{m_{1}}+|v_{n}|\) and \(y_{1}\) has an occurrence of \(1^{3m_{1}}\) at \(t_{m_{1}}+|v_{n}|\), we must have \(t_{m_{2}}+|v_{n}|-1\geq t_{m_{1}}+|v_{n}|+3m_{1}-1\), or \(t_{m_{2}}\geq t_{m_{1}}+3m_{1}\) as in the subclaim.
Note that our argument above gives that \[y_{1}\!\upharpoonright\![t_{m_{1}}+|v_{n}|-1,t_{m_{2}}+|v_{n}|-1]=y_{2}\!\upharpoonright\![t_{m_{1}}+|v_{n}|-1,t_{m_{2}}+|v_{n}|-1].\] Since \(t_{m_{2}}\geq t_{m_{1}}+3m_{1}\), the left-hand side is a word of the form \(01^{3m_{1}}u\) where \(u\) is a nonempty prefix of \(v_{n}\). But the right-hand side is a suffix of \(v_{n}\). This contradicts our construction of \(v_{n}\). Thus \(t_{m_{1}}=t_{m_{2}}\). Denote \(t\triangleq t_{m_{1}}=t_{m_{2}}\). Without loss of generality assume \(m_{1}<m_{2}\). By the above argument we again have \(y_{1}\!\upharpoonright\![-N+t+|v_{n}|,t+|v_{n}|+N]=y_{2}\!\upharpoonright\![-N+t+|v_{n}|,t+|v_{n}|+N]\). Since \(3m_{1}<3m_{2}\leq N\), we have in particular \(y_{1}\!\upharpoonright\![t+|v_{n}|,t+|v_{n}|+3m_{2}-1]=y_{2}\!\upharpoonright\![t+|v_{n}|,t+|v_{n}|+3m_{2}-1]\). But the left-hand side is of the form \(1^{3m_{1}}u\) where \(u\) is a nonempty prefix of \(v_{n}\), while the right-hand side is \(1^{3m_{2}}\), a contradiction. This finishes our proof of the claim that whenever \(1\leq m_{1}\neq m_{2}\leq\lfloor N/3\rfloor\), we have \(k_{m_{1}}\neq k_{m_{2}}\). It follows from the claim that \(K\geq\lfloor N/3\rfloor\). This contradicts the arbitrariness of \(N\).

The next examples show that the topological rank is not bounded by a function of the symbolic rank alone, thus the extra parameter as in Corollary 6.8 is necessary.

**Proposition 6.11**.: _For any \(N>1\), there is a minimal rank-\(1\) subshift whose topological rank is at least \(N\)._

Proof.: Fix \(p\geq 2N\) and \(q\gg N\). Define \(v_{0}=0\) and \[v_{n+1}=(v_{n}1)^{q}v_{n}1^{a_{n,1}}v_{n}1^{a_{n,2}}\cdots 1^{a_{n,p}}v_{n}(1^{N+2}v_{n})^{q},\] where \(a_{n,1},\ldots,a_{n,p}\) are nonnegative integers satisfying the following:

1. for any \(1\leq i\leq p\), \(2\leq a_{n,i}\leq N+1\);
2. for any \(2\leq m\leq N+1\), there is \(1\leq i\leq p\) such that \(a_{n,i}=a_{n,i+1}=m\).

Let \(V=\lim_{n}v_{n}\). By an easy induction we have that for all \(n\geq 1\) and \(1\leq m\leq N+1\), if \(u\) is a nonempty prefix of \(v_{n}\), then \(01^{m}u\) is not a suffix of \(v_{n}\). Consider a Kakutani-Rohlin partition \(\mathcal{P}\) of \(X_{V}\) such that

1. \(\mathcal{P}\) has \(K\) many towers, with bases \(B_{1},\cdots,B_{K}\);
2. \(\operatorname{diam}(B(\mathcal{P}))<2^{-N-4}\);
3. \(\operatorname{diam}(A)<2^{-N-4}\) for all \(A\in\mathcal{P}\).

Since every \(A\in\mathcal{P}\) is clopen, there exists \(k_{0}>N+6\) such that for every \(A\in\mathcal{P}\), there exists \(U_{A}\subseteq\{0,1\}^{2k_{0}+1}\) with \[A=\{x\in X_{V}\,:\,x\!\upharpoonright\![-k_{0},k_{0}]\in U_{A}\}.\] Let \(n\gg k_{0}+3N\). Similar to the proof of Proposition 6.10, we can define, for each \(2\leq m\leq N+1\), numbers \(t_{m}\) where \(-|v_{n}|\leq t_{m}\leq m\), \(k_{m}\) where \(1\leq k_{m}\leq K\), and an element \(y\in B_{k_{m}}\) such that \(v_{n}1^{m}v_{n}\) occurs in \(y\) at position \(t_{m}\) and for all \(0\leq j\leq t_{m}+|v_{n}|\), \(\sigma^{j}(B_{k_{m}})\) is an element of \(\mathcal{P}\). As in the proof of Proposition 6.10, we have that for all \(2\leq m_{1},m_{2}\leq N+1\), if \(m_{1}\neq m_{2}\), then \(k_{m_{1}}\neq k_{m_{2}}\). This implies that \(K\geq N\).

In [2] Corollary 4.9, the authors calculated the topological rank of an arbitrary minimal rank-\(1\) subshift (which are called Ferenczi subshifts there) using the \(\mathcal{S}\)-adic representations of all minimal rank-\(1\) subshifts. Our Proposition 6.11 above would be a consequence of this result.
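For concreteness, here is one admissible instance of the generating sequence from the proof of Proposition 6.11 (a sketch of ours; the particular values of \(a_{n,i}\), \(p\) and \(q\) below are one choice satisfying conditions (1) and (2) for \(N=2\), with \(q\) merely taken moderately large).

```python
# Sketch: one admissible instance of the generating sequence in
# Proposition 6.11 (our parameter choices; the proof only needs the
# stated conditions on a_{n,1},...,a_{n,p}).
def next_word(v, N, p, q):
    a = [m for m in range(2, N + 2) for _ in range(2)]  # each m as an adjacent pair
    a += [2] * (p - len(a))                             # pad up to length p
    w = (v + "1") * q + v
    for s in a:
        w += "1" * s + v
    return w + ("1" * (N + 2) + v) * q

N, p, q = 2, 6, 8   # p >= 2N, q taken large
v = "0"
for _ in range(3):
    v = next_word(v, N, p, q)
print(len(v), v[:32])
```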
### From finite alphabet rank to finite symbolic rank

In this subsection we explore some connections between subshifts of finite symbolic rank and \(\mathcal{S}\)-adic subshifts of finite alphabet rank considered by various authors, e.g. [6] and [12]. We first recall the basic definition of \(\mathcal{S}\)-adic subshifts and related notions following [12]. For a finite alphabet \(A\), let \(A^{*}\) be the set of all finite words on \(A\). If \(A,B\) are finite alphabets, a _morphism_ \(\tau:A^{*}\to B^{*}\) is a map satisfying that \(\tau(\varnothing)=\varnothing\) and for all \(u,v\in A^{*}\), \(\tau(uv)=\tau(u)\tau(v)\). A _directive sequence_ is a sequence of morphisms \(\boldsymbol{\tau}=(\tau_{n}\,:\,A^{*}_{n+1}\to A^{*}_{n})_{n\geq 0}\). For \(0\leq n<N\), denote by \(\tau_{[n,N)}\) the morphism \(\tau_{n}\circ\tau_{n+1}\circ\cdots\circ\tau_{N-1}\). For any \(n\geq 0\), define \[L^{(n)}(\boldsymbol{\tau})=\{w\in A^{*}_{n}\,:\,w\text{ occurs in }\tau_{[n,N)}(a)\text{ for some }a\in A_{N}\text{ and }N>n\}\] and \[X^{(n)}_{\boldsymbol{\tau}}=\{x\in A^{\mathbb{Z}}_{n}\,:\,\text{every finite subword of }x\text{ is a subword of some }w\in L^{(n)}(\boldsymbol{\tau})\}.\] \(X^{(n)}_{\boldsymbol{\tau}}\) is a subshift on the alphabet \(A_{n}\), and we denote the shift map by \(\sigma\). Now let \(X_{\boldsymbol{\tau}}=X^{(0)}_{\boldsymbol{\tau}}\). Then \((X_{\boldsymbol{\tau}},\sigma)\) is the \(\mathcal{S}\)_-adic subshift_ generated by the directive sequence \(\boldsymbol{\tau}\). The _alphabet rank_ of \(\boldsymbol{\tau}\) is defined as \[\operatorname{AR}(\boldsymbol{\tau})=\liminf_{n\to\infty}|A_{n}|\] and the _alphabet rank_ of a subshift \((X,\sigma)\) as \[\operatorname{AR}(X)=\inf\{\operatorname{AR}(\boldsymbol{\tau})\,:\,X_{\boldsymbol{\tau}}=X\}.\] As a convention, \(\inf\varnothing=+\infty\). There is a similar notion of _telescoping_ for a directive sequence \(\boldsymbol{\tau}\) which does not change the \(\mathcal{S}\)-adic subshift generated by \(\boldsymbol{\tau}\). An \(\mathcal{S}\)-adic subshift \(X_{\boldsymbol{\tau}}\) is _primitive_ if for any \(n\geq 0\) there exists \(N>n\) such that \(\tau_{[n,N)}(a)\) contains all letters in \(A_{n}\) for all \(a\in A_{N}\). If \(\tau:A^{*}\to B^{*}\) is a morphism, \(x\in B^{\mathbb{Z}}\), and \(Y\subseteq A^{\mathbb{Z}}\) is a subshift, then a _\(\tau\)-representation_ of \(x\) in \(Y\) is a pair \((k,y)\in\mathbb{Z}\times Y\) such that \(x=\sigma^{k}(\tau(y))\). Moreover, \((k,y)\) is a _centered_ \(\tau\)-representation if \(0\leq k<|\tau(y(0))|\) in addition. Now \(\tau\) is _recognizable in \(Y\)_ if each \(x\in B^{\mathbb{Z}}\) has at most one centered \(\tau\)-representation in \(Y\), and a directive sequence \(\boldsymbol{\tau}=(\tau_{n}:A_{n+1}^{*}\to A_{n}^{*})_{n\geq 0}\) is _recognizable_ if for each \(n\geq 0\), \(\tau_{n}\) is recognizable in \(X_{\boldsymbol{\tau}}^{(n+1)}\). An \(\mathcal{S}\)-adic subshift \(X_{\boldsymbol{\tau}}\) is _recognizable_ if \(\boldsymbol{\tau}\) is recognizable.

**Theorem 6.12**.: _Let \(X_{\boldsymbol{\tau}}\) be a primitive, recognizable \(\mathcal{S}\)-adic subshift of finite alphabet rank \(K\). Then \((X_{\boldsymbol{\tau}},\sigma)\) is conjugate to a subshift of finite symbolic rank \(\leq K\).
Moreover, there exists a proper rank-\(K\) construction for a uniquely readable rank-\(K\) generating sequence \(\{v_{i,j}\}_{i\geq 0,1\leq j\leq K}\) such that \((X_{\boldsymbol{\tau}},\sigma)\) is conjugate to \((X_{V},\sigma)\), where \(V=\lim_{i}v_{i,1}\)._

Proof.: This is similar to the proof of Theorem 6.9. By telescoping if necessary, we assume without loss of generality that the following properties hold for \(\boldsymbol{\tau}\):

1. for each \(i\geq 0\), \(a\in A_{i}\) and \(b\in A_{i+1}\), \(\tau_{i}(b)\) contains the letter \(a\);
2. for each \(i\geq 1\), \(|A_{i}|=K\);
3. for each \(a\in A_{1}\), \(|\tau_{0}(a)|\gg K\).

Since each \(A_{i}\) is finite, a finite splitting argument similar to the proof of Proposition 5.1 shows that we can enumerate each \(A_{i}\) as \(a_{i,1},\ldots,a_{i,n_{i}}\) such that \(n_{i}=K\) for all \(i\geq 1\) and for each \(i\geq 0\), \(\tau_{i}(a_{i+1,1})\) starts with \(a_{i,1}\). Now, as in the proof of Theorem 6.9, define \[v_{1,j}=0(01)^{j}0^{|\tau_{0}(a_{1,j})|-2K-4j-2}(10)^{j+K}0\] for \(1\leq j\leq K\). For \(i\geq 1\) and \(1\leq j\leq K\), if \[\tau_{i}(a_{i+1,j})=a_{i,j_{1}}a_{i,j_{2}}\cdots a_{i,j_{k}},\] then let \[v_{i+1,j}=v_{i,j_{1}}v_{i,j_{2}}\cdots v_{i,j_{k}}.\] This gives a proper rank-\(K\) construction for \(V=\lim_{i}v_{i,1}\). Clearly the recognizability of \(\boldsymbol{\tau}\), together with our definition of \(v_{1,j}\), imply the unique readability of \(\{v_{i,j}\}_{i\geq 0,1\leq j\leq K}\). Now \((X_{\boldsymbol{\tau}},\sigma)\) and \((X_{V},\sigma)\) are conjugate by the substitution \(\tau_{0}(a_{1,j})\mapsto v_{1,j}\).

With Theorem 6.12, our Theorem 6.9 becomes a consequence of the main theorem of [12] which states that every minimal Cantor system is either an odometer or conjugate to a primitive, recognizable \(\mathcal{S}\)-adic subshift of finite alphabet rank.

## 7. Density and genericity of subshifts of finite symbolic rank

It is known that the set of rank-\(1\) measure-preserving transformations is a dense \(G_{\delta}\) subset of the Polish space of all measure-preserving transformations (see [20]). Here in the topological setting, we show that the situation is different. In fact we consider various different spaces of Cantor systems and subshifts and show that the class of all rank-\(1\) subshifts is dense in all but one of them but generic in none. On the other hand, we note that subshifts of symbolic rank \(2\) are generic in the spaces for all transitive and totally transitive subshifts. We start with the coding space for all minimal Cantor systems.

**Proposition 7.1**.: _The set of all minimal Cantor systems conjugate to a rank-\(1\) subshift is dense but not generic in the space of all minimal Cantor systems._

Proof.: By Proposition 3.8 and Lemma 2.4, the set of all subshifts is meager, and not generic, in the space of all minimal Cantor systems. For the density, in view of Proposition 3.8, it suffices to show that nondegenerate minimal rank-\(1\) subshifts can approximate any infinite odometer. To be precise, we need to show that for all \(k\geq 2\) there is a nondegenerate rank-\(1\) subshift \(X_{V}\) and a clopen subset \(A\) of \(X_{V}\) such that \(\sigma^{k}(A)=A\) and \(\{A,\sigma(A),\ldots,\sigma^{k-1}(A)\}\) form a partition of \(X_{V}\). Fix \(k\geq 2\). We define the following _Chacon-like_ rank-\(1\) generating sequence: \[\begin{array}{rcl}v_{0}&=&0\\ v_{1}&=&0^{2k}1^{k}0^{k}\\ v_{n+1}&=&v_{n}v_{n}1^{k}v_{n}\text{ for }n\geq 1.\end{array}\] Let \(V=\lim_{n}v_{n}\). Then \(X_{V}\) is nondegenerate.
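Before specifying the clopen set \(A\), here is a quick sanity check of ours (not part of the proof): since \(|v_{1}|=4k\) and \(|v_{n+1}|=3|v_{n}|+k\), every \(|v_{n}|\) is divisible by \(k\), which is what makes the \(k\)-cyclic clopen partition below possible.

```python
# Sketch: |v_1| = 4k and |v_{n+1}| = 3|v_n| + k, so k divides every |v_n|.
def chacon_like(k, steps):
    v = "0" * (2 * k) + "1" * k + "0" * k          # v_1
    for _ in range(steps):
        v = v + v + "1" * k + v                    # v_{n+1} = v_n v_n 1^k v_n
    return v

k = 3
v = chacon_like(k, 4)
assert len(v) % k == 0
print(len(v))   # 1092
```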
Let \(A\) be the set of all \(x\in X_{V}\) such that \[x\!\upharpoonright\![0,3k-1]\in\{0^{3k},0^{2k}1^{k},0^{k}1^{k}0^{k},1^{k}0^{k}1^{k},1^{k}0^{2k}\}.\] Then \(A\) is a clopen subset of \(X_{V}\) with the required property.

To investigate the density and the genericity of subshifts of finite symbolic rank, we consider some spaces of subshifts as defined in [35]. First, let \(\mathcal{S}_{2}\) be the space of all \(\sigma\)-invariant closed subsets of \(2^{\mathbb{Z}}\). \(\mathcal{S}_{2}\) is a \(G_{\delta}\) subspace of \(K(2^{\mathbb{Z}})\), and hence is a Polish space. The Hausdorff metric on \(\mathcal{S}_{2}\) is equivalent to the following metric which is easier to work with in our setting. For \(X\in\mathcal{S}_{2}\) and integer \(n\geq 0\), let \(L_{n}(X)\) be the set of all finite words of length \(n\) which occur in some element of \(X\). Let \(L(X)=\bigcup_{n}L_{n}(X)\). For \(X,Y\in\mathcal{S}_{2}\), let \[d_{L}(X,Y)=2^{-\inf\{n\,:\,L_{n}(X)\neq L_{n}(Y)\}}.\] However, \(\mathcal{S}_{2}\) is not a perfect space; in particular, the finite (degenerate) subshifts are isolated points in this space. Thus, following [35] we consider the following perfect subspace, which in particular includes all nondegenerate rank-\(1\) subshifts. Let \(\mathcal{S}_{2}^{\prime}\) be the subspace of all elements of \(\mathcal{S}_{2}\) which are not isolated (the notation is inspired by the Cantor-Bendixson derivative; see [31]). \(\mathcal{S}_{2}^{\prime}\) is a perfect subspace of \(\mathcal{S}_{2}\), and hence a Polish space. Recall that a Cantor system \((X,T)\) is _(point) transitive_ if there exists \(x\in X\) such that the orbit of \(x\) is dense in \(X\); it is _totally (point) transitive_ if for every integer \(n\geq 1\) there exists \(x\in X\) such that \(\{T^{nk}x\,:\,k\in\mathbb{Z}\}\) is dense in \(X\). Let \(\mathcal{T}_{2}^{\prime}\) be the subspace of all transitive subshifts in \(\mathcal{S}_{2}^{\prime}\). Let \(\overline{\mathcal{T}_{2}^{\prime}}\) be the closure of \(\mathcal{T}_{2}^{\prime}\) in \(\mathcal{S}_{2}^{\prime}\). Moreover, let \(\mathcal{TT}_{2}^{\prime}\) be the subspace of all totally transitive subshifts in \(\mathcal{S}_{2}^{\prime}\). Let \(\overline{\mathcal{TT}_{2}^{\prime}}\) be the closure of \(\mathcal{TT}_{2}^{\prime}\) in \(\mathcal{S}_{2}^{\prime}\). Then \(\overline{\mathcal{TT}_{2}^{\prime}}\subseteq\overline{\mathcal{T}_{2}^{\prime}}\) are both closed subspaces of \(\mathcal{S}_{2}^{\prime}\), hence are Polish spaces, and the metric \(d_{L}\) remains a compatible metric on these subspaces. The following theorem shows that minimal rank-\(1\) subshifts can approximate minimal nondegenerate subshifts of topological rank \(2\) in the sense of \(d_{L}\).

**Theorem 7.2**.: _Let \(n\geq 1\) and let \((X,\sigma)\) be a minimal nondegenerate subshift of topological rank \(2\). Then there exists a minimal nondegenerate subshift \((Y,\sigma)\) such that \(L_{n}(X)=L_{n}(Y)\) and \((Y,\sigma)\) is conjugate to \((X_{V},\sigma)\) for some infinite rank-\(1\) word \(V\). Moreover, if \((X,\sigma)\) is totally transitive, then we can find \((Y,\sigma)\) which is also totally transitive._

Proof.: By the main theorem of [12], \((X,\sigma)\) is conjugate to a primitive, recognizable \(\mathcal{S}\)-adic subshift of alphabet rank \(2\). By the proof of Theorem 6.12, there exists a proper rank-\(2\) construction for an infinite word \(W\) with the following properties:
1. the associated rank-\(2\) generating sequence \(\{w_{i,j}\}_{i\geq 0,1\leq j\leq 2}\) has unique readability;
2. for all \(i\geq 1\) and \(j=1,2\), the spacer parameter in the building of \(w_{i+1,j}\) from \(\{w_{i,1},w_{i,2}\}\) is bounded by \(0\);
3. \((X_{W},\sigma)\) is conjugate to \((X,\sigma)\).

Let \(f\) be a conjugacy map from \((X_{W},\sigma)\) to \((X,\sigma)\). Fix \(n_{1}\geq 1\) such that for any \(x,y\in X_{W}\) and \(k\in\mathbb{Z}\), whenever \(x\upharpoonright[k-n_{1},k+n_{1}]=y\upharpoonright[k-n_{1},k+n_{1}]\), we have \(f(x)(k)=f(y)(k)\). For any \(v\in L(X_{W})\), if \(|v|>2n_{1}\) and for some \(x\in X_{W}\) and \(k\in\mathbb{Z}\), we have \(x\upharpoonright[k,k+|v|-1]=v\), then define \(\Phi(v)=f(x)\upharpoonright[k+n_{1},k+|v|-n_{1}-1]\). Clearly \(\Phi(v)\) is well defined and does not depend on the choice of \(x\). For any finite or infinite word \(u\) and \(m\leq|u|\), let \(L_{m}(u)\) denote the set of all subwords of \(u\) of length \(m\). Let \(i_{0}\geq 1\) be sufficiently large such that for \(j=1,2\), \(|w_{i_{0},j}|>2n+4n_{1}\) and \(L_{n+2n_{1}}(W)=L_{n+2n_{1}}(w_{i_{0},j})\). Since \(X\) is nondegenerate, \(W\) is aperiodic, and it follows that there is \(j_{0}\in\{1,2\}\) such that for any \(j=1,2\), both \(w_{i_{0},j_{0}}w_{i_{0},j}\in L(W)\) and \(w_{i_{0},j}w_{i_{0},j_{0}}\in L(W)\). For the same reason, there exists \(i_{1}>i_{0}\) such that \(\Phi(w_{i_{1},1})\) does not have a period \(t\) for any \(t\leq|w_{i_{0},1}|+|w_{i_{0},2}|\), i.e., there are \(a\geq 0\) and \(k\geq 1\) with \(a+kt<|\Phi(w_{i_{1},1})|\) such that \(\Phi(w_{i_{1},1})(a)\neq\Phi(w_{i_{1},1})(a+kt)\). Define a rank-\(1\) generating sequence by letting \[v_{1}=w_{i_{0},j_{0}}w_{i_{1},1}w_{i_{0},j_{0}}\] and for any \(i\geq 1\), \[v_{i+1}=v_{i}v_{i}1^{|w_{i_{0},j_{0}}|}v_{i}.\] As usual, let \(V=\lim_{i}v_{i}\). Then \(V\) is a minimal nondegenerate infinite rank-\(1\) word. Define a map \(g\) from \(X_{V}\) to \(2^{\mathbb{Z}}\) as follows. For \(x\in X_{V}\), if \(k\) is a part of an expected occurrence of \(v_{1}\) in \(x\), then set \(g(x)(k)=x(k)\); if not, let \(k^{\prime}\) be the starting position of the next expected occurrence of \(v_{1}\) in \(x\), and set \(g(x)(k)=w_{i_{0},j_{0}}(|w_{i_{0},j_{0}}|+k-k^{\prime})\). Let \(Z=g(X_{V})\). Then \((Z,\sigma)\) is a subshift and \(g\) is a factor map. By our definition, \(L_{n+2n_{1}}(Z)=L_{n+2n_{1}}(W)=L_{n+2n_{1}}(X_{W})\). For \(x\in Z\) and \(k\in\mathbb{Z}\), define \(h(x)(k)=\Phi(x\!\upharpoonright\![k-n_{1},k+n_{1}])\). Let \(Y=h(Z)\). Then \((Y,\sigma)\) is a subshift and \(h\) is a factor map. By our definition, \(L_{n}(Y)=L_{n}(X)\). It also follows that there exists \(y\in Y\) such that \(y\) does not have a period \(t\) for any \(t\leq|w_{i_{0},1}|+|w_{i_{0},2}|\). By Theorem 1.5 of [24], the maximal equicontinuous factor of \(X_{V}\) is a finite cycle of length \(p\), where \(p\) is the maximum such that for sufficiently large \(i\), \(p\) divides both \(|v_{i}|\) and \(|v_{i}|+|w_{i_{0},j_{0}}|\). It follows that \(p\) is a factor of \(|w_{i_{0},j_{0}}|\). However, since \(Y\), a factor of \(X_{V}\), contains an element which does not have a period \(t\) for any \(t\leq|w_{i_{0},j_{0}}|\), we conclude that \(Y\) is an infinite set. By the main theorem of [25], any nontrivial factor of \(X_{V}\) is conjugate to \(X_{V}\). Thus \(Y\) is conjugate to \(X_{V}\). This finishes the proof of the main conclusion of the theorem.

Suppose \((X,\sigma)\) is totally transitive.
We define \(W\), \(\{w_{i,j}\}_{i\geq 0,1\leq j\leq 2}\), \(n_{1}\), \(i_{0}\), and \(i_{1}\) as before. We claim that \(|w_{i_{0},1}|\) and \(|w_{i_{0},2}|\) are relatively prime. To see this, let \(a=\gcd(|w_{i_{0},1}|,|w_{i_{0},2}|)\) and assume \(a>1\). Then by property (2), the set of all \(x\in X_{W}\) such that there exists an expected occurrence of \(w_{i_{0},1}\) or \(w_{i_{0},2}\) starting at some multiple of \(a\) is a clopen, \(\sigma^{a}\)-invariant, proper subset of \(X\), contradicting the assumption that \((X,\sigma)\) is totally transitive.

Let \(p=|w_{i_{0},j_{0}}|\) and \(q=|w_{i_{0},3-j_{0}}|\). Since \(p,q\) are relatively prime, we can find a positive integer \(m\) such that \(|w_{i_{0},j_{0}}w_{i_{1},1}|+(m+1)p\) and \(p-q\) are relatively prime. We inductively define a rank-\(1\) generating sequence as follows. First let \[v_{1}=w_{i_{0},j_{0}}w_{i_{1},1}(w_{i_{0},j_{0}})^{m}.\] For \(i\geq 1\), if \(v_{i}\) has been defined such that \(|v_{i}|+p\) and \(|v_{i}|+q\) are relatively prime, then let \(v_{i+1}\) be defined to satisfy the following properties: (i) \(v_{i+1}\) is built from \(v_{i}\) and the spacer parameters are only selected from \(\{p,q\}\); (ii) for any \(0\leq j<i\), there exist \(k_{1}<k_{2}\) such that \(k_{2}-k_{1}-j\) is a multiple of \(i\), and \(k_{1},k_{2}\) are the starting positions of expected occurrences of \(v_{i}\) in \(v_{i+1}\); (iii) \(|v_{i+1}|+p\) and \(|v_{i+1}|+q\) are relatively prime.

Let \(V=\lim_{i}v_{i}\). Then \(V\) is a minimal nondegenerate infinite rank-\(1\) word. By (ii), \((X_{V},\sigma)\) is totally transitive. The rest of the argument is identical to the above proof.

**Corollary 7.3**.: _The set of all minimal subshifts conjugate to a rank-\(1\) subshift is dense in \(\overline{\mathcal{T}_{2}^{\prime}}\) and \(\overline{\mathcal{T}\mathcal{T}_{2}^{\prime}}\)._

Proof.: By Theorems 1.3 and 1.4 of [35], a generic subshift in \(\overline{\mathcal{T}_{2}^{\prime}}\) or \(\overline{\mathcal{T}\mathcal{T}_{2}^{\prime}}\) is minimal and has topological rank \(2\). Thus the conclusion follows from Theorem 7.2.

**Theorem 7.4**.: _The set of all minimal subshifts conjugate to a rank-\(1\) subshift is not generic in either \(\mathcal{S}_{2}^{\prime}\), \(\overline{\mathcal{T}_{2}^{\prime}}\) or \(\overline{\mathcal{T}\mathcal{T}_{2}^{\prime}}\). Moreover, it is not \(G_{\delta}\) in either \(\overline{\mathcal{T}_{2}^{\prime}}\) or \(\overline{\mathcal{T}\mathcal{T}_{2}^{\prime}}\)._

Proof.: By Corollary 4.9 of [35], the set of all minimal subshifts is nowhere dense in \(\mathcal{S}_{2}^{\prime}\). By Theorem 1.3 of [35], a generic subshift in \(\overline{\mathcal{T}_{2}^{\prime}}\) is a regular Toeplitz subshift which factors onto the universal odometer. In contrast, by Theorem 1.5 of [24], the maximal equicontinuous factor of a rank-\(1\) subshift is finite. Hence the set of all minimal subshifts conjugate to a rank-\(1\) subshift is not generic in \(\overline{\mathcal{T}_{2}^{\prime}}\). Since it is dense in \(\overline{\mathcal{T}_{2}^{\prime}}\) by Corollary 7.3, it is not a \(G_{\delta}\) in \(\overline{\mathcal{T}_{2}^{\prime}}\). By Theorem 1.4 of [35], a generic subshift in \(\overline{\mathcal{T}\mathcal{T}_{2}^{\prime}}\) is topologically mixing. In contrast, by Theorem 1.3 of [24], a minimal rank-\(1\) subshift is never topologically mixing. Hence the set of all minimal subshifts conjugate to a rank-\(1\) subshift is not generic in \(\overline{\mathcal{T}\mathcal{T}_{2}^{\prime}}\).
Since it is dense in \(\overline{\mathcal{T}\mathcal{T}_{2}^{\prime}}\) by Corollary 7.3, it is not a \(G_{\delta}\) in \(\overline{\mathcal{T}\mathcal{T}_{2}^{\prime}}\).

**Theorem 7.5**.: _The set of all minimal subshifts conjugate to a subshift of symbolic rank \(\leq 2\) is generic in \(\overline{\mathcal{T}_{2}^{\prime}}\) and \(\overline{\mathcal{T}\mathcal{T}_{2}^{\prime}}\)._

Proof.: By Theorems 1.3 and 1.4 of [35], a generic subshift in \(\overline{\mathcal{T}_{2}^{\prime}}\) or \(\overline{\mathcal{T}\mathcal{T}_{2}^{\prime}}\) is minimal and has topological rank \(2\). Thus the conclusion follows from Theorem 6.9.

## 8. Factors of subshifts of finite symbolic rank

By results of [28], [18] and our Corollary 6.8 and Theorem 6.9, a Cantor system that is a factor of a minimal subshift of finite symbolic rank is conjugate to a minimal subshift of finite symbolic rank. In this final section of the paper we prove some further results about factors of minimal subshifts of finite symbolic rank, and in particular about odometer factors and non-Cantor factors of minimal subshifts of finite symbolic rank.

In the following we first show that for any \(N\geq 1\), there exist minimal subshifts of finite symbolic rank which are not factors of minimal subshifts of symbolic rank \(\leq N\).

**Lemma 8.1**.: _For any \(N\geq 1\), there exist \(m>N\) and a good rank-\(m\) construction with associated rank-\(m\) generating sequence \(\{v_{i,j}\}_{i\geq 0,1\leq j\leq m}\) such that the following hold for all \(i\geq 1\):_

(i) _for any \(1\leq j_{1},j_{2}\leq m\) with \(j_{1}\neq j_{2}\), \(v_{i,j_{1}}\) is not a subword of \(v_{i,j_{2}}\);_

(ii) _for any \(1\leq j\leq m\), there is a unique building of \(v_{i+1,j}\) from \(\{v_{i,1},\ldots,v_{i,m}\}\) whose spacer parameter is bounded by 0;_

(iii) _there is a positive integer \(\ell\geq 1\) such that, given any two finite sequences \((j_{1},j_{2},\cdots,j_{\ell})\) and \((j_{1}^{\prime},j_{2}^{\prime},\cdots,j_{\ell}^{\prime})\) of elements of \(\{1,2,\cdots,m\}\), there is at most one element \(w\) of \[\{v_{i,j}v_{i,j^{\prime}}\,:\,1\leq j,j^{\prime}\leq m\}\cup\{v_{i,j}\,:\,1\leq j\leq m\}\] such that \[v_{i,j_{1}}v_{i,j_{2}}\cdots v_{i,j_{\ell}}wv_{i,j_{1}^{\prime}}v_{i,j_{2}^{\prime}}\cdots v_{i,j_{\ell}^{\prime}}\] is a subword of \(V\triangleq\lim_{n}v_{n,1}\);_

(iv) \(X_{V}\) _is minimal and \(\operatorname{rank}_{\operatorname{symb}}(X_{V})\geq N\)._

Proof.: Let \((X,T)\) be a minimal Cantor system whose topological rank is \(K<\infty\) where \(K\geq 8N^{2}\). By Theorem 6.9 there exist \(k\leq K\) and a proper rank-\(k\) construction of an infinite word \(W\) such that \((X_{W},\sigma)\) is conjugate to \((X,T)\). It also follows from the proof of Theorem 6.9 that the spacer parameter of \(W\) is bounded by 1. Let \(m=2k\). By the proof of Proposition 6.4, there exists an infinite word \(V\) with a good rank-\(m\) construction such that \(X_{W}\) is a factor of \(X_{V}\). Moreover, the spacer parameter of \(V\) is also bounded by 1, and so \(X_{V}\) is minimal. By analyzing the proof of Proposition 6.4, we can see that this construction satisfies (i), (ii) and (iii). In fact, (i) and (ii) are explicit from the proof. For (iii) we can take \(\ell\) to be larger than the lengths of all buildings of \(v_{i+1,j}\) from \(\{v_{i,1},\ldots,v_{i,m}\}\) for \(1\leq j\leq m\). Then (iii) follows from the argument for the goodness of the construction in the proof of Proposition 6.4.
It remains to verify that \(\operatorname{rank}_{\operatorname{symb}}(X_{V})\geq N\). Suppose \(\operatorname{rank}_{\operatorname{symb}}(X_{V})=n\). Then by Corollary 6.8 we have \(\operatorname{rank}_{\operatorname{top}}(X_{V},\sigma)\leq 8n^{2}\). By [18], \(K=\operatorname{rank}_{\operatorname{top}}(X,T)=\operatorname{rank}_{ \operatorname{top}}(X_{W},\sigma)\leq\operatorname{rank}_{\operatorname{top}} (X_{V},\sigma)\leq 8n^{2}\). Since \(K\geq 8N^{2}\), we have \(n\geq N\). **Proposition 8.2**.: _For any \(N\geq 1\), there exists a minimal subshift \(X_{V}\) which is not a factor of any minimal subshift of symbolic rank \(\leq N\). In particular, \(X_{V}\) is not conjugate to any minimal subshift of symbolic rank \(\leq N\)._ Proof.: By Lemma 8.1 there is \(m>4N^{2}+1\) and we have an infinite word \(V\) which has a good rank-\(m\) construction with associated rank-\(m\) generating sequence \(\{v_{i,j}\}_{i\geq 0,1\leq j\leq m}\) satisfying (i), (ii) and (iii) in Lemma 8.1, so that \(X_{V}\) is minimal and \(\operatorname{rank}_{\operatorname{symb}}(X_{V})\geq 4N^{2}+1\). Assume toward a contradiction that \(n\leq N\) and \(W^{\prime}\) has a proper rank-\(n\) construction with bounded spacer parameter such that \(X_{V}\) is a factor of \(X_{W^{\prime}}\). By Proposition 6.4, we have an infinite word \(W\) which has a good rank-\(2n\) construction with associated rank-\(2n\) generating sequence such that \(X_{W}\) is minimal and \(X_{W^{\prime}}\) is a factor of \(X_{W}\). Let \(f\) be a factor map from \((X_{W},\sigma)\) to \((X_{V},\sigma)\). Let \(k_{1}\) be a positive integer such that \(1^{k_{1}}\) is not a subword of \(W\). Let \(k_{2}\) be a positive integer such that for any \(x,y\in X_{W}\) and \(k\in\mathbb{Z}\), whenever \(x\,\lceil\,[k-k_{2},k+k_{2}]=y\,\lceil\,[k-k_{2},k+k_{2}]\), we have \(f(x)(k)=f(y)(k)\). Let \(r\geq 1\) so that \(\min_{1\leq j\leq m}|v_{r,j}|\gg k_{1}+2k_{2}\). Let \(\ell\geq 1\) be given by (iii) in Lemma 8.1, that is, for any two finite sequences \((j_{1},j_{2},\cdots,j_{\ell})\) and \((j_{1}^{\prime},j_{2}^{\prime},\cdots,j_{\ell}^{\prime})\) of elements of \(\{1,2,\cdots,m\}\), there is at most one element \(w\) of \[\{v_{r,j}v_{r,j^{\prime}}\,:\,1\leq j,j^{\prime}\leq m\}\cup\{v_{r,j}\,:\,1 \leq j\leq m\}\] such that \[v_{r,j_{1}}v_{r,j_{2}}\cdots v_{r,j_{\ell}}wv_{r,j_{1}^{\prime}}v_{r,j_{2}^{ \prime}}\cdots v_{r,j_{\ell}^{\prime}}\] is a subword of \(V\). We can also find \(s_{0}\geq 1\) so that \[\frac{\min_{1\leq q\leq 2n}|w_{s_{0},q}|-2k_{2}}{\max_{1\leq j\leq m}|v_{r,j}|} >\ell+2.\] We claim that for any \(s\geq s_{0}\), \(1\leq q,q^{\prime}\leq 2n\), \(a\geq 0\), and \(x\in X_{W}\), if \(w_{s,q}1^{a}w_{s,q^{\prime}}\) occurs in \(x\), where the demonstrated occurrences of \(w_{s,q}\) and \(w_{s,q^{\prime}}\) are expected, then \(a\) is determined by \(q\) and \(q^{\prime}\) only (and in particular \(a\) does not depend on \(x\)). To see this, let \(k\) be the starting position of the assumed occurrence of \(w_{s,q}1^{a}w_{s,q^{\prime}}\) in \(x\), and let \(k^{\prime}\) be the starting position of the demonstrated occurrence of \(w_{s,q^{\prime}}\). Then \(f(x)\,\rceil\,[k+k_{2},k+|w_{s,q}|-k_{2}-1]\) and \(f(x)\,\rceil\,[k^{\prime}+k_{2},k^{\prime}+|w_{s,q^{\prime}}|-k_{2}-1]\) are determined only by \(w_{s,q}\) and \(w_{s,q^{\prime}}\) by our assumption, and since \(s\geq s_{0}\), each of them contains a subword of the form \(v_{r,j_{1}}v_{r,j_{2}}\ldots v_{r,j_{\ell}}\). 
Since \(a<k_{1}\) and \(\min_{1\leq j\leq m}|v_{r,j}|\gg k_{1}+2k_{2}\), we get that \(f(x)\,\rceil\,[k+k_{2},k^{\prime}+|w_{s,q^{\prime}}|-k_{2}-1]\) contains a subword of the form \[v_{r,j_{1}}v_{r,j_{2}}\cdots v_{r,j_{\ell}}wv_{r,j_{1}^{\prime}}v_{r,j_{2}^{\prime}}\cdots v_{r,j_{\ell}^{\prime}}\] where \(f(x)\,\rceil\,[k+k_{2},k+|w_{s,q}|-k_{2}-1]\) contains the part \(v_{r,j_{1}}\cdots v_{r,j_{\ell}}\), \(f(x)\,\rceil\,[k^{\prime}+k_{2},k^{\prime}+|w_{s,q^{\prime}}|-k_{2}-1]\) contains the part \(v_{r,j_{1}^{\prime}}\cdots v_{r,j_{\ell}^{\prime}}\), and \(w\) is either of the form \(v_{r,j}\) for some \(1\leq j\leq m\) or of the form \(v_{r,j}v_{r,j^{\prime}}\) for \(1\leq j,j^{\prime}\leq m\). By our assumption, there is a unique such \(w\), which implies that there is a unique \(a\), by considering \(|w|\).

By telescoping we may assume that the claim holds for any \(s\geq 1\). We may also assume that \(|w_{1,q}|\gg 2k_{2}+k_{1}+k_{0}\) for \(1\leq q\leq 2n\), where \(k_{0}\) is such that \(1^{k_{0}}\) is not a subword of \(V\). For any finite word \(u\), let \(\tilde{u}\in\mathcal{F}\) be the unique subword of \(u\) such that \(u=1^{a}\tilde{u}1^{b}\) for some nonnegative integers \(a,b\). Now we define a set \(T_{s}\) of finite words in \(\mathcal{F}\) for all \(s\geq 0\) as follows. For any \(s\geq 1\) and \(1\leq q,q^{\prime}\leq 2n\), if there are \(x\in X_{W}\), \(k\in\mathbb{Z}\), and \(a\geq 0\) such that the word \(w_{s,q}1^{a}w_{s,q^{\prime}}\) occurs in \(x\), where the demonstrated occurrences of \(w_{s,q}\) and \(w_{s,q^{\prime}}\) are expected, then define a word \(u_{s,q,q^{\prime}}=\tilde{u}\) where \(u=f(x)\,\rceil\,[k+k_{2},k+|w_{s,q}|+a+k_{2}-1]\). Let \(T_{s}\) be the set of all \(u_{s,q,q^{\prime}}\) thus obtained for \(s\geq 1\) and \(1\leq q,q^{\prime}\leq 2n\). Let \(T_{0}=\{0\}\). Then the sequence \(\{T_{s}\}_{s\geq 0}\) satisfies the hypotheses of Proposition 5.1; in particular, every element of \(T_{s+1}\) is built from \(T_{s}\). Also, \(|T_{s}|\leq 4n^{2}\). By Proposition 5.1 we obtain a rank-\(4n^{2}\) construction of an infinite word \(V^{\prime}\). Since each \(u_{s,q,q^{\prime}}\) is a subword of \(V\), we have that \(X_{V^{\prime}}\subseteq X_{V}\). By the minimality of \(X_{V}\), we have \(X_{V^{\prime}}=X_{V}\), and thus \(X_{V}\) has symbolic rank \(\leq 4n^{2}\leq 4N^{2}\), contradicting \(\operatorname{rank}_{\operatorname{symb}}(X_{V})\geq 4N^{2}+1\).

Next we show that a nondegenerate subshift factor of a minimal subshift of finite symbolic rank is not just conjugate to a subshift of finite symbolic rank: it is itself a subshift of finite symbolic rank. This is a technical improvement of the result we mentioned at the beginning of this section. The proof of this result is similar to the one for the above proposition.

**Theorem 8.3**.: _Let \(X\) be a minimal subshift of finite symbolic rank and \(Y\) be a nondegenerate subshift that is a factor of \(X\). Then \(Y\) has finite symbolic rank, i.e., there is an infinite word \(V\) with a finite rank construction such that \(Y=X_{V}\)._

Proof.: By Proposition 6.4 we may assume that \(X=X_{W}\) where \(W\) has a good rank-\(n\) construction for some \(n\geq 2\), with associated rank-\(n\) generating sequence \(\{w_{p,q}\}_{p\geq 0,1\leq q\leq n}\). Let \(f\) be a factor map from \((X_{W},\sigma)\) to \((Y,\sigma)\). Let \(k_{1}\) be a positive integer such that \(1^{k_{1}}\) is not a subword of \(W\).
Let \(k_{2}\) be a positive integer such that for any \(x,y\in X_{W}\) and \(k\in\mathbb{Z}\), whenever \(x\mathbin{\upharpoonright}[k-k_{2},k+k_{2}]=y\mathbin{\upharpoonright}[k-k_{2},k+k_{2}]\), we have \(f(x)(k)=f(y)(k)\). Since \(Y\) is a nondegenerate minimal subshift, let \(k_{3}\) be a positive integer such that \(1^{k_{3}}\) is not a subword of \(x\) for any \(x\in Y\). Without loss of generality we may assume \(|w_{1,q}|\gg 2k_{2}+k_{1}+k_{3}\) for all \(1\leq q\leq n\). Similar to the above proof, for each \(p\geq 1\), if the word \(w_{p,q}1^{s}w_{p,q^{\prime}}\) occurs in some \(x\in X_{W}\) at position \(k\in\mathbb{Z}\), where the demonstrated occurrences of \(w_{p,q}\) and \(w_{p,q^{\prime}}\) are expected, we define a word \(u_{p,q,q^{\prime},s}=\tilde{u}\) where \[u=f(x)\mathbin{\upharpoonright}[k+k_{2},k+|w_{p,q}|+s+k_{2}-1].\] Then it is clear that every \(y\in Y\) is built from \[T_{p}=\{u_{p,q,q^{\prime},s}\,:\,1\leq q,q^{\prime}\leq n,0\leq s<k_{1}\}.\] By Proposition 5.1 we obtain a rank-\(n^{2}k_{1}\) construction of an infinite word \(V\) such that \(X_{V}\subseteq Y\). By the minimality of \(Y\) we must have \(X_{V}=Y\), and thus \(Y\) has finite symbolic rank.

A curious example is when \(V\) is an infinite rank-\(1\) word and \(\varphi:X_{V}\to Y\) is the conjugacy map defined by the substitution \(0\mapsto 1\) and \(1\mapsto 0\). \(Y\) is in general no longer a rank-\(1\) subshift, but it has finite symbolic rank.

The above theorem has the following immediate corollary.

**Corollary 8.4**.: _Let \(n\geq 2\) and let \(X\) be a minimal subshift of topological rank \(n\). Then \(X\) has finite symbolic rank._

Proof.: By Theorem 6.9, \(X\) is conjugate to a minimal subshift of finite symbolic rank. Thus \(X\) has finite symbolic rank by Theorem 8.3.

Next we show that any infinite odometer is the maximal equicontinuous factor of a minimal subshift of symbolic rank \(2\). This is in contrast with the result in [24] that any equicontinuous factor of a rank-\(1\) subshift is finite.

**Lemma 8.5**.: _Let \((X,T)\) and \((Y,S)\) be topological dynamical systems and let \(f\) be a factor map from \((X,T)\) to \((Y,S)\). Suppose \((Y,S)\) is equicontinuous and suppose for all \(x_{1},x_{2}\in X\), if \(f(x_{1})=f(x_{2})\) then \(x_{1},x_{2}\) are proximal. Then \((Y,S)\) is the maximal equicontinuous factor of \((X,T)\)._

Proof.: It suffices to show that, for any factor map \(g\) from \((X,T)\) to an equicontinuous system \((Z,R)\), and for any \(x_{1},x_{2}\in X\), if \(f(x_{1})=f(x_{2})\) then \(g(x_{1})=g(x_{2})\). Granting this, \(g\circ f^{-1}\) is a well-defined continuous map from \(Y\) to \(Z\) and \(g\circ f^{-1}\circ S=R\circ g\circ f^{-1}\). Hence \((Y,S)\) is the maximal equicontinuous factor of \((X,T)\). So it is enough to show that for any \(x_{1},x_{2}\in X\), if \(x_{1},x_{2}\) are proximal, then \(g(x_{1})=g(x_{2})\). Assume not. Fix a compatible metric \(\rho\) on \(X\) and a compatible metric \(d\) on \(Z\). Since \((Z,R)\) is equicontinuous and \(g(x_{1})\neq g(x_{2})\), there exists \(\epsilon>0\) such that \[d(g(T^{n}x_{1}),g(T^{n}x_{2}))=d(R^{n}g(x_{1}),R^{n}g(x_{2}))>\epsilon\] for any \(n\in\mathbb{Z}\). But \(x_{1},x_{2}\) are proximal, and thus for any \(\delta>0\), there is \(n\in\mathbb{Z}\) such that \(\rho(T^{n}x_{1},T^{n}x_{2})<\delta\). This contradicts the continuity of \(g\).
**Theorem 8.6**.: _For any infinite odometer \((Y,S)\), there exists a minimal subshift \(X_{V}\) of symbolic rank \(2\) such that \((Y,S)\) is the maximal equicontinuous factor of \((X_{V},\sigma)\)._ Proof.: By a double induction define two sequences \(\{p_{i},q_{i}\}_{i\geq 0}\) of positive integers as follows. Let \(p_{0}=q_{0}=1\). For \(i\geq 0\), let \(p_{i+1}=2p_{i}+2q_{i}\) and \(q_{i+1}=2p_{i}+q_{i}\). It is easy to see that for any \(i\geq 0\), \(q_{i}\) is odd and \(p_{i},q_{i}\) are relatively prime. Let \(B=(W,E,\preceq)\) be a simple Bratteli diagram associated to \((Y,S)\) such that \(|W_{i}|=1\) and \(a_{i+1}\triangleq|E_{i+1}|>1\) for all \(i\geq 0\). By telescoping, we may assume \(a_{i}\gg p_{i}+q_{i}\) for any \(i\geq 1\). Consider the following proper rank-\(2\) construction: \[v_{0,1} =v_{0,2}=0,\] \[v_{1,1} =0^{a_{1}}1^{2a_{1}}0^{a_{1}}, v_{1,2} =0^{a_{1}}1^{a_{1}}0^{a_{1}},\] \[v_{i+1,1} =v_{i,1}{}^{a_{i+1}}v_{i,2}{}^{2a_{i+1}}v_{i,1}{}^{a_{i+1}}, v_{i+1,2} =v_{i,1}{}^{a_{i+1}}v_{i,2}{}^{a_{i+1}}v_{i,1}{}^{a_{i+1}},\text{ for }i\geq 1.\] It is easy to see that the construction is good and that for any \(n\geq 1\), \(|v_{n,1}|=p_{n}\prod_{i=1}^{n}a_{i}\) and \(|v_{n,2}|=q_{n}\prod_{i=1}^{n}a_{i}\). For notational simplicity let \(A_{n}=\prod_{i=1}^{n}a_{i}\) for all \(n\geq 1\) and let \(A_{0}=1\). Let \(V=\lim_{n}v_{n,1}\). For each \(i\geq 1\), enumerate the elements of \(E_{i}\) in the \(\preceq\)-increasing order as \(e_{i,1},\ldots,e_{i,a_{i}}\). Define \(f:X_{V}\to X_{B}\) by letting \(f(x)(i)=e_{i+1,j}\) if there exists an expected occurrence of \(v_{i+1,1}\) in \(x\) starting at position \(k\in\mathbb{Z}\) such that for some \(\ell\in\mathbb{Z}\) we have that \(1\leq j\leq a_{i+1}\) satisfies \[(j-1)A_{i}\leq k+\ell A_{i+1}<jA_{i}.\] \(f\) is well-defined because \(|v_{i+1,1}|\) and \(|v_{i+1,2}|\) are both multiples of \(A_{i+1}\), and thus for any two expected occurrences of \(v_{i+1,1}\) in \(x\), their starting positions differ by a multiple of \(A_{i+1}\). It is clear that \(f\) is a factor map from \((X_{V},\sigma)\) to \((X_{B},\lambda_{B})\). By Lemma 8.5, in order to complete the proof, it suffices to show that for any \(x,y\in X_{V}\), if \(f(x)=f(y)\) then \(x,y\) are proximal. Toward a contradiction, assume \(x,y\) are not proximal but \(f(x)=f(y)\). Thus there exists \(n\geq 1\) such that no \(k\in\mathbb{Z}\) is the starting position of both an expected occurrence of \(v_{n,1}\) in \(x\) and one in \(y\). Let \(n_{0}\) be the least such \(n\). On the other hand, from the assumption \(f(x)=f(y)\), we can verify by induction that for all \(n\geq 0\), if \(k_{1}\) is the starting position of an expected occurrence of \(v_{n+1,1}\) or \(v_{n+1,2}\) in \(x\) and \(k_{2}\) is the starting position of an expected occurrence of \(v_{n+1,1}\) or \(v_{n+1,2}\) in \(y\), then \(k_{1}-k_{2}\) is a multiple of \(A_{n+1}\). We claim that there exist no \(k<h\) such that \(h-k=tA_{n_{0}+1}\) for some \(1\leq t<p_{n_{0}+1}\), \(h\) is the starting position of at least \(p_{n_{0}+1}\) many consecutive expected occurrences of \(v_{n_{0}+1,2}\) in \(x\) (or \(y\)), and \(k\) is the starting position of at least \(q_{n_{0}+1}\) many consecutive expected occurrences of \(v_{n_{0}+1,1}\) in \(y\) (or \(x\), respectively). If not, then from the property that \(p_{n_{0}+1}\) and \(q_{n_{0}+1}\) are relatively prime, we can get positive integers \(a<q_{n_{0}+1}\) and \(b<p_{n_{0}+1}\) such that \(t=ap_{n_{0}+1}-bq_{n_{0}+1}\). 
Then \(k+a|v_{n_{0}+1,1}|=h+b|v_{n_{0}+1,2}|\). This is the starting position of an expected occurrence of \(v_{n_{0}+1,1}\) in \(y\) (or \(x\)), while at the same time it is also the starting position of an expected occurrence of \(v_{n_{0}+1,2}\) in \(x\) (or \(y\), respectively). Thus it is the starting position of an expected occurrence of \(v_{n_{0},1}\) in both \(x\) and \(y\), contradicting our definition of \(n_{0}\).

Now let \(P\) be the \((n_{0}+2)\)-th layer of the reading of \(x\), that is, \((k,j)\in P\) iff there is an expected occurrence of \(v_{n_{0}+2,j}\) in \(x\) starting at position \(k\); let \(Q\) be the \((n_{0}+2)\)-th layer of the reading of \(y\). Suppose \((k,j)\in P\) where \(j=1\) or \(2\). Consider the positions from \(k+a_{n_{0}+2}|v_{n_{0}+1,1}|\) to \(k+a_{n_{0}+2}|v_{n_{0}+1,1}|+(3-j)a_{n_{0}+2}|v_{n_{0}+1,2}|\). If one of these positions is the starting position of an expected occurrence of \(v_{n_{0}+2,1}\) or \(v_{n_{0}+2,2}\) in \(y\), then from \(a_{n_{0}+2}\gg p_{n_{0}+2}+q_{n_{0}+2}\), we get a contradiction to the above claim. So these positions must be contained in the same expected occurrence of \(v_{n_{0}+2,1}\) or \(v_{n_{0}+2,2}\) in \(y\), which gives us a unique \((k^{\prime},j^{\prime})\in Q\). It follows from the above claim and the assumption \(a_{n_{0}+2}\gg p_{n_{0}+2}+q_{n_{0}+2}\) that \(j^{\prime}=j\) and \(|k-k^{\prime}|<\frac{1}{4}|v_{n_{0}+2,2}|\). Let \(m=k-k^{\prime}\). Applying this to all \((k,j)\in P\), we obtain corresponding \((k^{\prime},j)\in Q\) and \(m=k-k^{\prime}\). Clearly \(m\) is constant, which implies that \(y=\sigma^{m}(x)\) and that \(f(x)=f(y)\) is periodic, a contradiction.

In the last theorem of this paper we analyze non-Cantor factors of subshifts of finite symbolic rank. We show that any irrational rotation is the maximal equicontinuous factor of a minimal subshift of symbolic rank \(2\). The symbolic rank-\(2\) subshifts will be generated by the well-known Sturmian sequences. In [26] it was shown that all Sturmian sequences have a proper rank-\(2\) construction. In the following discussion we follow [3] for the basic definitions and properties of Sturmian sequences.

An infinite word \(V\) is a _Sturmian sequence_ if for any \(n\geq 1\), the number of subwords of \(V\) of length \(n\) is \(n+1\). An infinite word \(R\) is a _rotation sequence_ if there is an irrational number \(\alpha\) and a real number \(\beta\) such that either for all \(n\geq 0\), \[R(n)=\lfloor(n+1)\alpha+\beta\rfloor-\lfloor n\alpha+\beta\rfloor\] or for all \(n\geq 0\), \[R(n)=\lceil(n+1)\alpha+\beta\rceil-\lceil n\alpha+\beta\rceil.\] \(\alpha\) is called the _angle_ of \(R\) and \(\beta\) is called the _initial value_ of \(R\). It is known that every rotation sequence is a Sturmian sequence (Proposition 6.1.17 of [3]) and the converse is true as well (Theorem 6.4.22 of [3]).

Let \(\mathbb{T}\) denote the circle, identified with \(\mathbb{R}/\mathbb{Z}\), or with the interval \([0,1)\) via the function \(x\mapsto e^{2\pi ix}\). This identification associates the counterclockwise direction with the positive direction for \(\mathbb{T}\). For any \(a,b\in\mathbb{T}\), let \([a,b]\) denote the closed interval in \(\mathbb{T}\) which starts from \(a\) and ends in \(b\) in the counterclockwise direction. Similarly define the intervals \([a,b)\), \((a,b]\) and \((a,b)\). The construction given in the proof of the following theorem is folklore. However, we were not able to find an explicit reference. Therefore we include a detailed proof for the sake of completeness.
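As a concrete illustration of the definition above, the following self-contained Rust sketch generates an initial segment of a rotation sequence from its angle and initial value. The particular values of \(\alpha\) and \(\beta\) are chosen arbitrarily here, and the floating-point arithmetic is only indicative; it plays no role in the constructions of this paper.

```rust
// Sketch: the first letters of the rotation sequence
// R(n) = floor((n+1)*alpha + beta) - floor(n*alpha + beta).
fn rotation_sequence(alpha: f64, beta: f64, len: usize) -> Vec<u8> {
    (0..len)
        .map(|n| {
            let hi = ((n as f64 + 1.0) * alpha + beta).floor();
            let lo = (n as f64 * alpha + beta).floor();
            (hi - lo) as u8 // each letter is 0 or 1, since 0 < alpha < 1
        })
        .collect()
}

fn main() {
    // alpha = sqrt(2) - 1 is irrational; beta = 0.3 is an arbitrary initial value.
    let word = rotation_sequence(2f64.sqrt() - 1.0, 0.3, 30);
    let s: String = word.iter().map(|b| char::from(b'0' + b)).collect();
    println!("{s}");
}
```

On such segments one can check by hand that a window of length \(n\) exhibits at most \(n+1\) distinct subwords, in line with the Sturmian property.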
**Theorem 8.7**.: _For any irrational \(\alpha\), there exists a Sturmian sequence \(V\) such that \((\mathbb{T},+\alpha)\) is the maximal equicontinuous factor of \((X_{V},\sigma)\). Conversely, for any Sturmian sequence \(V\), the maximal equicontinuous factor of \((X_{V},\sigma)\) is an irrational rotation._

Proof.: For any irrational \(\alpha\in[0,1)\), let \[\begin{array}{rcl}Y_{0}&=&\{n\alpha\in\mathbb{T}\,:\,n\in\mathbb{Z}\}\subseteq\mathbb{T},\\ Y_{1}&=&\mathbb{T}-Y_{0}\subseteq\mathbb{T},\\ X_{0}&=&Y_{0}\times\{0\}\subseteq\mathbb{T}\times\{0,1\},\\ X_{1}&=&Y_{0}\times\{1\}\subseteq\mathbb{T}\times\{0,1\},\text{ and }\\ X&=&Y_{1}\sqcup X_{0}\sqcup X_{1}.\end{array}\]

We define a topology on \(X\) to make it a Cantor space. For \(m,n\in\mathbb{N}\) where \(m\neq n\), define \[N_{m,n}=\left[(m\alpha,n\alpha)\cap Y_{1}\right]\cup\left[([m\alpha,n\alpha)\cap Y_{0})\times\{0\}\right]\cup\left[((m\alpha,n\alpha]\cap Y_{0})\times\{1\}\right].\] Then it is easy to verify that \(\{N_{m,n}\}_{m\neq n\in\mathbb{N}}\) is a topological base for a second countable, Hausdorff, zero-dimensional topology on \(X\). Thus \(X\) is a separable metrizable space. We claim that it is compact, and thus a Cantor space. It suffices to verify that it is sequentially compact. In fact, consider the map \(f\,:\,X\to\mathbb{T}\) defined as \(f(x)=x\) for \(x\in Y_{1}\) and \(f(x,0)=f(x,1)=x\) for \(x\in Y_{0}\). Since \(Y_{0}\) is dense in \(\mathbb{T}\), we have that the sets of the form \((a,b)\), where \(a,b\in Y_{0}\), form a base for \(\mathbb{T}\). Now for any \(a,b\in Y_{0}\), if we choose \(a_{k}\to a\) from the clockwise direction with \(a_{k}\) sufficiently close but not equal to \(a\), and \(b_{k}\to b\) from the counterclockwise direction with \(b_{k}\) sufficiently close but not equal to \(b\), we have that \(f^{-1}((a,b))=\bigcup_{k}N_{m_{k},n_{k}}\) where \(a_{k}=m_{k}\alpha\) and \(b_{k}=n_{k}\alpha\). This shows that \(f\) is continuous. Now to see that \(X\) is sequentially compact, we consider a sequence \((x_{k})_{k\in\mathbb{N}}\) in \(X\). Since \(\mathbb{T}\) is compact, there is a subsequence \((x_{k_{j}})_{j\in\mathbb{N}}\) such that \(f(x_{k_{j}})\) converges in \(\mathbb{T}\). Without loss of generality, we assume that \(f(x_{k})\) itself converges to \(t\in\mathbb{T}\). Moreover, we assume that \(f(x_{k})\) converges to \(t\) from the counterclockwise direction when \(k\) is sufficiently large. Now if \(t\in Y_{1}\) then \(t\in X\) is a limit of the sequence \(x_{k}\). If \(t\in Y_{0}\), then \((t,1)\) is a limit of the sequence \(x_{k}\). This shows that \(x_{k}\) converges in \(X\), and therefore \(X\) is sequentially compact.

We also define \(S:X\to X\) by letting \(S(x)=x+\alpha\) for \(x\in Y_{1}\) and \(S(x,0)=(x+\alpha,0)\), \(S(x,1)=(x+\alpha,1)\) for \(x\in Y_{0}\). It is clear that \(S\) is a homeomorphism of \(X\), and that every orbit of \(S\) is dense. Thus \((X,S)\) is a minimal Cantor system. Now consider \(\varphi:X\to\{0,1\}^{\mathbb{Z}}\) defined as \(\varphi(x)=\operatorname{Ret}_{N_{0,1}}(x)\), and let \(Z=\varphi(X)\). Then \(\varphi\) is a continuous injection and \(\varphi(x+\alpha)(m)=\varphi(x)(m+1)=\sigma(\varphi(x))(m)\) for any \(x\in X\) and \(m\in\mathbb{Z}\). Hence \(\varphi\) is a conjugacy map from \((X,S)\) to \((Z,\sigma)\). Let \(\beta\in[0,1)\) be an irrational such that \(1\), \(\alpha\), \(\beta\) are linearly independent over \(\mathbb{Q}\). Let \(V\) be the rotation sequence with angle \(\alpha\) and initial value \(\beta\).
By the results of Section 6.1.2 of [3], we have \(V=\varphi(\beta)\,\upharpoonright\mathbb{N}\). It follows that \(X_{V}\subseteq Z\). By the minimality of \((Z,\sigma)\), we conclude that \(X_{V}=Z\). It is clear that the map \(f\) defined above is a factor map from \((X,S)\) to \((\mathbb{T},+\alpha)\). Note that for all but countably many \(x\in\mathbb{T}\), \(f^{-1}(x)\) is a singleton. By a well-known criterion (e.g. Proposition 1.1 of [36]), \((\mathbb{T},+\alpha)\) is the maximal equicontinuous factor of \((X,S)\), and hence also of \((X_{V},\sigma)\). Conversely, for any rotation sequence \(V\) with angle \(\alpha\) and initial value \(\beta\), consider the above construction \((X,S)\), which is conjugate to \((X_{V},\sigma)\). We have that \((\mathbb{T},+\alpha)\) is the maximal equicontinuous factor of \((X_{V},\sigma)\).
2310.13140
Blind Evaluation Framework for Fully Homomorphic Encryption and Privacy-Preserving Machine Learning
In the domain of Privacy-Preserving Machine Learning (PPML), Fully Homomorphic Encryption (FHE) is often used for encrypted computation to allow secure and privacy-preserving outsourcing of machine learning modeling. While FHE enables encrypted arithmetic operations, execution of programmatic logic such as control structures or conditional programming has remained a challenge. As a result, progress in encrypted training of PPML with FHE has been relatively stagnant compared to encrypted inference, owing to the considerably higher logical complexity required in training. In addition, prior works that have demonstrated encrypted training use Interactive Rounds of Decryption and Evaluation (IRDE), where certain operations are decrypted and evaluated in plaintext using interactive rounds between the untrusted computing party (server) and the trusted private-key owner (client). In decision tree training, for example, the current state-of-the-art requires d rounds of IRDE for a tree depth of d. To address this issue in PPML and FHE, we introduce the Blind Evaluation Framework (BEF), a cryptographically secure programming framework that enables blind, but correct, execution of programming logic without IRDE. This is achieved by deconstructing programming logic into binary circuits and binary arithmetic to find alternative representations of logical statements, and adapting them to FHE for secure logical programming. To the best of our knowledge, this is the first framework to enable both training and inference of PPML models with FHE without decryption rounds. By eliminating IRDE entirely, BEF advances the state of the art in IRDE efficiency and enables adoption of FHE in use cases where large amounts of computing services are available without the ability to have trusted clients available to perform decryption rounds.
Hunjae "Timothy" Lee, Corey Clark
2023-10-19T20:33:02Z
http://arxiv.org/abs/2310.13140v4
# Privacy Preserving Decision Tree Training and Prediction via Fully Homomorphic Encryption with No Decryption

###### Abstract

With data-outsourcing becoming commonplace, there grows a need for secure outsourcing of data and machine learning models. Namely, data and model owners (the client) often need their information to remain private and secure against the potentially untrusted computing resource (the server) to whom they want to outsource said data and models. Various approaches to privacy-preserving machine learning (PPML) have been devised, with different techniques and solutions introduced in the past. These solutions often involved one of two compromises: (1) client-server interactions to allow intermediary rounds of decryption and re-encryption of data, or (2) complex architectures for multi-party computation. This paper devises a paradigm using Fully Homomorphic Encryption (FHE) that minimizes architectural complexity and removes client-side involvement during the training and prediction lifecycle of machine learning models. In addition, the paradigm proposed in this work achieves both model security and data security. To remove client-side involvement, the devised paradigm proposes a _no decryption_ approach that allows the server to handle PPML in its entirety without rounds of decryption and re-encryption. To the best of our knowledge, this paradigm is the first to achieve privacy-preserving decision tree training with _no decryption_ while maintaining a simple client-server architecture.

## 1 Introduction

Using decision trees as the model of choice, this work introduces a paradigm for performing privacy-preserving machine learning (PPML) training and inference using Fully Homomorphic Encryption (FHE). FHE allows for computation on encrypted data without the need to decrypt it first [10]. These computations include arithmetic operations such as addition and multiplication and can extend to polynomial functions. However, FHE is not without its limitations. Namely, there is a significant asymmetry between the advancement of machine learning in general and what machine learning with FHE is capable of in its current state. Often, compromises have to be made to allow FHE to be compatible with a given machine learning architecture. These compromises include polynomial approximation of non-polynomial functions [19], the use of lookup tables [4][16], and interaction protocols between the client (data owner) and server (the untrusted entity where all computation is done with FHE) that allow some computation to be done on the client side with unencrypted, plaintext data [1][12].

Decision trees are used as the machine learning model of choice in this work because they can be utilized in a diverse array of tasks. Furthermore, decision trees offer a great starting point for further machine learning research with FHE, such as random forests and even neural networks, since neural networks can be represented as decision trees [3]. The paradigm proposed in this work minimizes architectural complexity and removes client-side involvement. This privacy-preserving paradigm is devised such that the client can be offline as soon as the encrypted data are sent to the server. The server then trains the decision tree model with both data and model privacy, without interaction with the client or any other service. The fully trained privacy-preserving decision tree can then perform inference tasks on the server as is, or be sent back to the client.
## 2 Related Work

There has been much progress in the field of PPML and FHE in the last twenty years. Prior works [13, 14] constructed privacy-preserving logistic regression models using FHE with polynomial approximation of activation functions. Cryptonets [11] opened the door for neural network evaluation (inference/prediction) by successfully adopting FHE into the inference process of an already trained neural network. Later, Hesamifard et al. introduced CryptoDL [12], a framework that allows training and evaluation of deep neural networks with client-server interaction for noise reduction. Using TFHE (or FHE over the torus) [6], results indicating the feasibility of deep neural network inference with FHE were shown in [7]. The work from [7] uses _programmable bootstrapping_ as part of its FHE scheme, as opposed to the _leveled mode_ used by the others mentioned above. Using FHE in _leveled mode_ allows for faster computation compared to _bootstrapped mode_ but can only compute a predetermined number of products. This poses a problem with respect to the scalability of models. On the other hand, FHE in _bootstrapped mode_ enables noise reduction whenever the noise reaches a certain threshold, allowing for the evaluation of complex circuits and functions. It is this _programmable bootstrapping_ technique that is used in this paper.

Concerning decision trees, the first attempt at privacy-preserving decision tree training involved secure multi-party computation [17]. The subsequent works [8], [9] also considered multi-party computation techniques for decision tree learning. A method for training and inferring decision trees with a twin-cloud architecture using additive secret sharing was also explored in [18]. These approaches generally involve two or more parties adhering to strict security and privacy standards and can be made vulnerable if one or more participating parties collude with each other or break their security parameters. A simple client-server model where the bulk of privacy-preserving computations with FHE are conducted on the server was demonstrated in [1]. This approach greatly minimizes the architectural complexity of the model when compared to prior works that employed multi-party computation. [1] also uses a client-server architecture similar to the one employed in this work. However, the training protocol from [1] requires \(d\) rounds of communication between client and server for a tree depth of \(d\), where each round of communication with the client entails decryption and re-encryption of data to allow some computation to be handled in plaintext on the client side.

## 3 Preliminaries

This section goes over the terminology and concepts used in the process of actualizing the _no decryption_ paradigm proposed in this work. Many of these concepts take advantage of bitwise properties of Boolean FHE and are thus not applicable in other FHE schemas such as the CKKS scheme, which uses floating-point arithmetic [5].

### Fully Homomorphic Encryption

Mathematical proofs and definitions for FHE and the relevant cryptographic functions from [10] are shown below. A homomorphic encryption scheme \(\mathcal{E}\) is equipped with algorithms \(\mathit{KeyGen}_{\mathcal{E}}\), \(\mathit{Encrypt}_{\mathcal{E}}\), \(\mathit{Decrypt}_{\mathcal{E}}\), and \(\mathit{Evaluate}_{\mathcal{E}}\). In addition, the computational complexity of all of these algorithms must be polynomial in the security parameter \(\lambda\).
The inputs to this scheme are: a public key \(\mathit{pk}\), a circuit \(C\) from a permitted set \(C_{\mathcal{E}}\) of circuits, and a tuple of ciphertexts \(\Psi=<\psi_{1},...,\psi_{t}>\). \(\mathcal{E}\) is correct for circuits in \(C_{\mathcal{E}}\) if, for any key-pair (\(\mathit{sk}\), \(\mathit{pk}\)) output by \(\mathit{KeyGen}_{\mathcal{E}}(\lambda)\), any circuit \(C\in C_{\mathcal{E}}\), any plaintexts \(\pi_{1},...,\pi_{t}\), and any ciphertexts \(\Psi=<\psi_{1},...,\psi_{t}>\) with \(\psi_{i}\leftarrow\mathit{Encrypt}_{\mathcal{E}}(\mathit{pk},\pi_{i})\), it is the case that:

\(\psi\leftarrow\mathit{Evaluate}_{\mathcal{E}}(\mathit{pk},C,\Psi)\quad\Rightarrow\quad C(\pi_{1},...,\pi_{t})=\mathit{Decrypt}_{\mathcal{E}}(\mathit{sk},\psi)\)

The formal definition of homomorphic encryption is as follows: \(\mathcal{E}\) _is homomorphic for circuits in \(C_{\mathcal{E}}\) if \(\mathcal{E}\) is correct for \(C_{\mathcal{E}}\) and \(\mathit{Decrypt}_{\mathcal{E}}\) can be expressed as a circuit \(D_{\mathcal{E}}\) of size \(\mathit{poly}(\lambda)\)_. Furthermore, \(\mathcal{E}\) _is fully homomorphic if it is homomorphic for all circuits_.

### FHE over the Boolean

In this work, we use a Boolean FHE scheme called TFHE, or FHE over the Torus [6]. Boolean FHE is a form of FHE that supports bitwise logical operations such as bitwise union and intersection. In addition, Boolean FHE offers a great degree of flexibility, allowing arithmetic as well as complicated logical operations to be constructed with Boolean circuits, which is incompatible with integer or floating-point FHE schemes. These properties are necessary to establish a paradigm where the entirety of decision tree training and inference is conducted without any round of decryption. This does not mean that integer and floating-point data cannot be used with Boolean FHE, however: integer or floating-point datatypes can be converted to binary representations to be used in this paradigm.

### Boolean Circuits for logical and arithmetic operations

Boolean FHE supports bitwise AND (intersection), OR (union), and NOT (negation). Arithmetic operations like addition, multiplication, and even division can be represented with Boolean circuits built from combinations of these three gates. Likewise, logical operations such as comparisons can also be represented with Boolean circuits. Prior works [2] and [20] outline ways of constructing two-bit and four-bit comparison circuits in digital logic, respectively. In other words, for given inputs \(A\) and \(B\), where both \(A\) and \(B\) are \(n\) bits long, a comparison circuit produces an output \(\gamma\), where \(\gamma\) is a one-bit Boolean indicator denoted \(1\) if \(A\) is greater than \(B\) and \(0\) if \(B\) is greater than \(A\). Figure 1 shows the full digital logic of a four-bit comparison circuit that also accounts for input equality. With Boolean FHE, these Boolean circuits can be constructed and used with encrypted data, returning encrypted results.

In many prior works, comparisons necessary for decision tree training, such as those for feature selection and threshold calculation, were first sent back to the client to be decrypted and computed in plaintext [1]. The results of those comparisons would then be re-encrypted and sent back to the server for further training. However, with a comparison circuit constructed with Boolean FHE, any comparison operation can be done in FHE space without decryption and re-encryption. The results of those comparisons are encrypted and thus do not violate any cryptographic principles, nor do they leak information about the inputs to the circuit. Because outputs of any circuit with Boolean FHE are encrypted, they cannot be evaluated directly and thus require blind evaluations such as in Equation 3.
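To make the gate-level picture concrete, here is a small plaintext Rust sketch of a bit-sliced comparator in the spirit of the circuits from [2] and [20]. This is our own illustrative code, not taken from those works or from any FHE library: plain `bool` values stand in for ciphertext bits, and every operation used is one of AND, OR, NOT, exactly the gates a Boolean FHE scheme evaluates homomorphically.

```rust
// Sketch of a gate-level comparator over n-bit inputs (MSB first).
// In the encrypted setting, the initial constants would be trivial
// encryptions of 0 and 1, and the returned triple would stay encrypted.
fn compare(a: &[bool], b: &[bool]) -> (bool, bool, bool) {
    assert_eq!(a.len(), b.len());
    let (mut gt, mut lt, mut eq) = (false, false, true);
    for i in 0..a.len() {
        let xnor = (a[i] & b[i]) | (!a[i] & !b[i]); // a_i == b_i
        gt = gt | (eq & a[i] & !b[i]); // first differing bit: a_i=1, b_i=0
        lt = lt | (eq & !a[i] & b[i]); // first differing bit: a_i=0, b_i=1
        eq = eq & xnor;                // prefixes still equal so far
    }
    (gt, lt, eq)
}

fn main() {
    // 4-bit example: a = 1010 (10), b = 1001 (9), MSB first.
    let a = [true, false, true, false];
    let b = [true, false, false, true];
    let (gt, lt, eq) = compare(&a, &b);
    println!("gt={gt} lt={lt} eq={eq}"); // gt=true lt=false eq=false
}
```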
### Terminology

1. Client: Refers to the entity that possesses the private key of the encryption scheme and has the authority to encrypt/decrypt data. The client is typically the data owner.
2. Server: Refers to the unsecured and potentially malicious computing entity to whom encrypted data is outsourced. The server can perform arithmetic and logical operations on encrypted data with FHE, such as decision tree training and inference. The server does not have access to the private key and therefore cannot decrypt or gain information from the data outsourced by the client. All server-side computations and operations are done without compromising the security and privacy of both the data and the model.
3. FHE accessible: A particular variable/vector/matrix that is encrypted with FHE so that it can be used in encrypted bitwise operations with other encrypted data. Results of FHE evaluations are FHE accessible. Note that many Boolean FHE schemas only support bitwise operations between encrypted data, and therefore it may not be possible to perform bitwise operations between ciphertext and plaintext.

### Algorithmic Concepts

1. Blind Comparison: Refers to performing a comparison operation wherein the inputs, results, and any intermediary data are unknown to the computing entity (i.e., the server) due to FHE. This is only made possible with Boolean FHE schemas, as comparison circuits can be assembled using only primitive bitwise logical operators, as shown below. \[A>B\leftarrow A\wedge\neg B\] \[A<B\leftarrow\neg A\wedge B\] \[A=B\leftarrow(\neg A\wedge\neg B)\vee(A\wedge B)\tag{1}\] Blind comparisons preserve the cryptographic integrity of FHE since the results of comparisons remain encrypted, with no way to evaluate any information from said results. Because the outputs of blind comparisons are encrypted, they are FHE accessible. Programmatic paradigms can be established to make use of the outputs of blind comparisons without directly evaluating them.
2. Collapsing: Because results from blind comparisons are encrypted and their values cannot be inferred, conditional branching cannot be performed in the traditional sense. However, conditional branching can be simulated by executing all branches of a given condition and nullifying the results of all but the one branch that is correct. This can be achieved by chaining together the results of blind comparisons with the desired branch operations using bitwise AND, resulting in the wrong branches collapsing to 0 while the correct branch retains its values.

Figure 1: Digital Logic of 4-bit Comparator [21]

Figure 2: client-server architecture in FHE
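The collapsing idea just described amounts to a branchless select. The plaintext sketch below (our own stand-in code; the helper names are ours) shows the pattern that Equations 2 and 3 instantiate for arithmetic on encrypted values: both branches are computed unconditionally, and a comparison bit masks out the wrong one.

```rust
// Sketch of collapsing: simulate `if cond { x } else { y }` without branching.
// `cond` would be the (encrypted) output of a blind comparison; the server
// never learns which branch survived.
fn select_bit(cond: bool, x: bool, y: bool) -> bool {
    (cond & x) | (!cond & y)
}

// Bitwise version for multi-bit values: each bit of the result is selected
// independently with the same mask.
fn select_word(cond: bool, x: &[bool], y: &[bool]) -> Vec<bool> {
    x.iter().zip(y).map(|(&xi, &yi)| select_bit(cond, xi, yi)).collect()
}

fn main() {
    let cond = true; // stand-in for an encrypted comparison result
    let x = [true, false, true];
    let y = [false, true, true];
    println!("{:?}", select_word(cond, &x, &y)); // [true, false, true]
}
```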
A polynomial equation representation of Algorithm 2 is shown below in Equation 2. Equation 3 is a modification of Equation 2 that uses FHE-encrypted data via Algorithm 1, returning an encrypted result.

\[X\gets X+(1\times(A>B))+(-1\times(B>A)) \tag{2}\]

\[\mathcal{E}(X)\leftarrow\mathcal{E}(X)+(1\wedge\textsc{blindcomparison}(\mathcal{E}(A),\mathcal{E}(B)))-(1\wedge\textsc{blindcomparison}(\mathcal{E}(B),\mathcal{E}(A))) \tag{3}\]

Because the evaluation of Equation 3 happens in FHE mode, there is no way to evaluate or even know which branches collapsed and which ones did not until the client uses the private key to decrypt the results.

3. Blind Swapping: Using blind comparison and bit manipulation techniques, one can swap two elements of a list into ascending or descending order. Note that this is done purely with bit manipulation using the outputs of blind comparisons and thus does not give away any information about the data. Equation 4 demonstrates a blind swap of two one-bit values \(\alpha\) and \(\beta\) in a list. Both \(\alpha\) and \(\beta\) are modified in place during the swap, meaning the resulting \(\alpha\) may contain the value of \(\beta\) and vice versa. It is also possible that their values remain unchanged after the swap, depending on the comparison result. Equation 5 is a modification of Equation 4 that allows swapping of multi-bit values. Blind swapping can also be used in iteration over a list, allowing for min/max calculations. \[\mathcal{E}(gt),\mathcal{E}(lt)=\textsc{bitwiseBlindComparison}(\mathcal{E}(\alpha),\mathcal{E}(\beta))\] \[\mathcal{E}(\alpha)=(\mathcal{E}(lt)\wedge\mathcal{E}(\beta))\vee(\mathcal{E}(gt)\wedge\mathcal{E}(\alpha))\] \[\mathcal{E}(\beta)=(\mathcal{E}(lt)\wedge\mathcal{E}(\alpha))\vee(\mathcal{E}(gt)\wedge\mathcal{E}(\beta))\tag{4}\] \[\mathcal{E}(gt),\mathcal{E}(lt)=\textsc{blindcomparison}(\mathcal{E}(\mathcal{A}),\mathcal{E}(\mathcal{B}))\] \[\mathcal{E}(\mathcal{A})=(\mathcal{E}(lt)\wedge\mathcal{E}(\mathcal{B}_{1..n}))\vee(\mathcal{E}(gt)\wedge\mathcal{E}(\mathcal{A}_{1..n}))\] \[\mathcal{E}(\mathcal{B})=(\mathcal{E}(lt)\wedge\mathcal{E}(\mathcal{A}_{1..n}))\vee(\mathcal{E}(gt)\wedge\mathcal{E}(\mathcal{B}_{1..n}))\tag{5}\]

4. Blind Sorting: Using blind comparison and blind swapping techniques, a modified version of the insertion sort algorithm can be assembled to sort an entire list of FHE data. Because conditional branching cannot be used on FHE values, all loops must be completely unrolled. Therefore, blind sorting will always perform the insertion sort equivalent of its worst-case time complexity. A simple version of the blind sorting implementation is shown in Algorithm 4. Because there is no way to know whether array[j-1] is greater than or equal to array[j] in line 5, blind swapping has to be performed on every element pair at each iteration.

```
Require: output of blind comparison: E(gt), E(lt)
Input:   array [E(A), E(B)], where both A and B are 4-bit binary numbers
Output:  array [E(?), E(?)]

procedure BlindSwapping(array)
  E(gt), E(lt) <- BlindComparison(E(A), E(B))
  for i <- 0 to n do
    array[0][i] <- (E(lt) AND array[1][i]) OR (E(gt) AND array[0][i])
    array[1][i] <- (E(lt) AND array[0][i]) OR (E(gt) AND array[1][i])
  end for
  return array
end procedure
```
**Algorithm 3** blind swapping
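Putting the comparator and the masked select together yields the blind swap of Equations 4 and 5 and the fully unrolled sort of Algorithms 3 and 4. In the plaintext sketch below (our own code, with `bool` bits standing in for ciphertexts), the original operands are saved before the masked writes, which keeps the in-place swap of Equation 4 unambiguous.

```rust
// Sketch of blind swapping (Eqs. 4-5) and blind insertion sort (Algs. 3-4).
// Plain `bool` bits stand in for ciphertext bits; helper names are ours.

// gt = 1 iff a > b, comparing MSB-first, built only from AND/OR/NOT gates.
fn gt_bit(a: &[bool], b: &[bool]) -> bool {
    let (mut gt, mut eq) = (false, true);
    for i in 0..a.len() {
        gt = gt | (eq & a[i] & !b[i]);
        eq = eq & ((a[i] & b[i]) | (!a[i] & !b[i]));
    }
    gt
}

// Masked select over words: (cond AND x) OR (NOT cond AND y), bit by bit.
fn select(cond: bool, x: &[bool], y: &[bool]) -> Vec<bool> {
    x.iter().zip(y).map(|(&xi, &yi)| (cond & xi) | (!cond & yi)).collect()
}

// Put min into slot i and max into slot j without revealing whether a swap
// happened. The originals are cloned first so both writes use old values.
fn blind_swap(v: &mut Vec<Vec<bool>>, i: usize, j: usize) {
    let (a, b) = (v[i].clone(), v[j].clone());
    let gt = gt_bit(&a, &b);   // encrypted in the real system
    v[i] = select(gt, &b, &a); // min of the pair
    v[j] = select(gt, &a, &b); // max of the pair
}

// Fully unrolled insertion sort: every pair on every pass, worst case always,
// because (encrypted) comparison results can never steer the control flow.
fn blind_sort(v: &mut Vec<Vec<bool>>) {
    for i in 1..v.len() {
        for j in (1..=i).rev() {
            blind_swap(v, j - 1, j);
        }
    }
}

fn main() {
    // Three 3-bit values: 5, 2, 7 (MSB first).
    let mut v = vec![
        vec![true, false, true],
        vec![false, true, false],
        vec![true, true, true],
    ];
    blind_sort(&mut v);
    println!("{:?}", v); // sorted ascending: 2, 5, 7
}
```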
### Decision Tree Training and Prediction

A decision tree is a tree that recursively partitions a given dataset based on input features and their corresponding labels in order to maximize classification or regression performance. Nodes of a decision tree and their corresponding partitions of data are learned to fit the given dataset. As a result, decision trees are prone to overfitting. A simple outline for a decision tree training procedure is shown in Algorithm 5. Evaluation, or prediction, in a trained decision tree given an input \(x\) is performed by traversing the tree starting from the root, using the partitioning rule at each node to select the correct path at each level until a leaf node is reached. The label stored at that leaf node is then returned as the result. A simple outline for a decision tree evaluation protocol is shown in Algorithm 6.

```
1:  X: feature data, where X^i refers to the column data of attribute i
2:  y: class labels
3:  procedure TrainDecisionTree(X, y)
4:    if all examples belong to the same class then
5:      return a leaf node with that class label
6:    end if
7:    if no attributes remain then
8:      return a leaf node with the majority class label among remaining examples
9:    end if
10:   choose the attribute i whose column X^i best splits the examples
11:   create a new decision tree node N with attribute i
12:   for each possible value V of attribute i do
13:     X_v <- subset of examples where attribute i has value V
14:     if X_v is empty then
15:       attach a leaf node with the majority class label among remaining examples to node N
16:     else
17:       attach the subtree TrainDecisionTree(X_v, remaining attributes) to node N
18:     end if
19:   end for
20:   return the root node N of the decision tree
21: end procedure
```
**Algorithm 5** Decision Tree Training

## 4 Privacy-Preserving Decision Tree Training

In order to implement the privacy-preserving paradigm that achieves _no decryption_ and zero communication with the client during the training procedure, it is helpful to divide the training process into high-level steps. These steps can then be tackled with the techniques mentioned above in Algorithms 1 - 3. A flowchart outlining these steps is shown in Figure 3. The first step is the feature selection phase, where the feature to be used as the root node is selected from the original dataset. Second, with the ideal feature selected as the root node, the data splitting phase is executed in order to proceed to the left and right child nodes of the root node. Next, the tree growing phase is initiated, where the decision tree continues to recursively build and train using aspects of the feature selection phase and data splitting phase. Lastly, the training procedure is stopped and results are returned to the client during the termination phase, when a chosen termination criterion is met.

Figure 3: Training Flowchart

For simplicity, this paper assumes the dataset consists of binary categorical data and labels. However, this paradigm is applicable to any other data type (floating point numbers, integers, etc.) so long as it can be represented in binary format. The _no decryption_ machine learning training protocol with FHE is a novel attempt aimed at demonstrating that this approach is a valid and implementable method of performing decision tree training and inference with privacy preservation.
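Before stepping through the phases, the following sketch fixes a concrete picture of what the server ultimately holds; the layout and names are entirely hypothetical and ours. It includes a plaintext mirror of the evaluation outline of Algorithm 6, whose explicit `if` is precisely the data-dependent branch that the blind protocols of this section and Section 5 must avoid.

```rust
// Illustrative (hypothetical) layout of a trained tree. In the real system,
// every field would be an FHE ciphertext (e.g., the feature ID would be a
// one-hot encrypted vector); plain types are used here so the sketch runs.
enum Node {
    Leaf { label: bool },
    Inner { feature: usize, left: Box<Node>, right: Box<Node> },
}

// Plaintext mirror of Algorithm 6: walk from the root, branch on the
// selected binary feature, and return the leaf label.
fn evaluate(node: &Node, x: &[bool]) -> bool {
    match node {
        Node::Leaf { label } => *label,
        Node::Inner { feature, left, right } => {
            // This data-dependent branch is exactly what FHE forbids.
            if x[*feature] { evaluate(right, x) } else { evaluate(left, x) }
        }
    }
}

fn main() {
    let tree = Node::Inner {
        feature: 0,
        left: Box::new(Node::Leaf { label: false }),
        right: Box::new(Node::Leaf { label: true }),
    };
    println!("{}", evaluate(&tree, &[true, false])); // true
}
```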
### Feature Selection Phase

On a high level, the feature selection process works much the same as that of a regular decision tree training algorithm; the end goal is to select a feature that minimizes expected classification errors. There are many algorithms that can help with feature selection. The Gini index, for one, calculates the expected frequency of incorrect classifications for each feature (called the Gini impurity) in order to select the feature with the lowest impurity. Information gain, on the other hand, calculates the level of usefulness of each feature and selects the feature that offers the highest information gain. The paradigm proposed in this work is agnostic to the feature selection algorithm.

To proceed with the feature selection phase, the dataset and supporting variables must be initialized properly in order to allow the entire process to be handled without decryption and intermediary communication with the client. First, an FHE-encrypted unique binary identification number (in the form of One-Hot-Encoding, or OHE) is assigned to each feature in the dataset, as shown in Figure 4 (here, a feature can be thought of as column data represented by a column name/index). For instance, the example demonstrated in the results section uses the Phishing Website Detector dataset, where each row represents one website entry to be determined as a phishing website or not, and each column is a feature that represents a particular property of that website entry, such as the existence of a valid IP address or DNS record. By using feature IDs to represent feature names in FHE form, the server is able to identify features for the feature selection phase. While column indices can be used to access features and their elements, they are not FHE accessible and cannot interact with other FHE data, because indices of vectors and matrices are in plaintext.

Next, using the label/target vector in conjunction with the feature IDs and their corresponding column vectors, an encrypted Gini index algorithm (or any other feature selection algorithm) is performed to compute the encrypted Gini impurity value for each feature. Then, for each feature, its ID, column vector, and Gini impurity score are stored in a tuple-like structure. These tuples are then stored in a list. Iterative use of blind comparison from Algorithm 1 on the encrypted Gini impurity values, in conjunction with tuple-wise blind swapping from Algorithm 3 across the list of tuples, correctly identifies the tuple with the lowest Gini impurity. This is why FHE representations of feature IDs are necessary: without the feature IDs being swapped alongside their Gini impurity values during blind swapping, there would be no way to keep track of which Gini impurity value corresponded to which feature once blind min/maxing was done. Finally, the selected feature is used as the root node. Algorithm 7 gives a simple outline for this phase.

```
1: procedure FeatureSelection(dataset)
2:   for each feature_i in dataset do
3:     gini_i <- giniImpurity(feature_i)
4:     append (feature_i.ohe, feature_i.column, gini_i) to tupleList
5:   end for
6:   blindSorting(tupleList)
7:   remove the feature corresponding to tupleList[0] from the dataset
8:   return tupleList[0]
9: end procedure
```
**Algorithm 7** Feature Selection Phase
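The tuple-wise selection just described can be pictured as a blind argmin pass. In this plaintext sketch (names ours; plain `u32` scores stand in for encrypted Gini impurities), the comparison and swap would, in the real system, be the blind comparison and blind swap of Section 3, so the server never learns the ordering; the point illustrated is that the one-hot ID always travels with its score.

```rust
// Sketch of the feature-selection step: blind min over (feature ID, impurity)
// tuples. In FHE, the `<` below is a blind comparison and the swap is a
// masked blind swap; here an explicit `if` is used only so the sketch runs.
#[derive(Clone, Debug)]
struct Candidate {
    id: Vec<u8>,   // one-hot feature ID (kept FHE accessible in the paper)
    impurity: u32, // stand-in for an encrypted Gini impurity
}

fn blind_min_to_front(cands: &mut Vec<Candidate>) {
    // One compare-and-swap per adjacent pair, always executed, so the access
    // pattern is data independent and the minimum bubbles to index 0.
    for i in (1..cands.len()).rev() {
        let swap = cands[i].impurity < cands[i - 1].impurity; // blind in FHE
        if swap {
            cands.swap(i - 1, i); // masked swap of BOTH id and impurity
        }
    }
}

fn main() {
    let mut cands = vec![
        Candidate { id: vec![1, 0, 0], impurity: 40 },
        Candidate { id: vec![0, 1, 0], impurity: 15 },
        Candidate { id: vec![0, 0, 1], impurity: 27 },
    ];
    blind_min_to_front(&mut cands);
    println!("selected: {:?}", cands[0]); // the impurity-15 feature
}
```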
### Data Partitioning Phase

With a feature selected for the root node, the next step is to split the rest of the dataset into left and right subsets for the left and right child nodes of the root, respectively. The split is based on the root node's selected feature or the threshold assigned to the root node. However, because comparative evaluations in FHE give encrypted (blind) results, a hard partition of the dataset cannot be performed. Therefore, the training paradigm of this paper uses a soft-partition technique to simulate the splitting of data into left and right branches. Namely, aspects of the feature data from the root node calculation are collected into a vector denoted FLAG, which is passed down to the left and right branches along with the entire dataset. A similar approach to the soft-partition technique used in this work is also used in [1].

For each row of the FLAG vector, if it is encrypted false, then the corresponding row of the original dataset belongs to the left branch of the tree; if it is encrypted true, then that row becomes part of the right branch. If the dataset had continuous data instead, the split would happen based on the column data's position with respect to a chosen threshold for that node. In other words, to split the dataset into left and right subsets, a comparison operation needs to be performed on each row to determine which subset a given row belongs to. The solution we present, shown in Algorithm 8, is to simulate the splitting of the dataset by passing the original dataset to both left and right subsets identically, along with the FLAG vector. Because the FLAG vector can be used to determine which branch a given row belongs to, the collapsing technique from Equation 3 can be used to correctly evaluate feature selection calculations for both the left and right branches of the root node. With this approach, all nodes on the same level of the tree will contain identical data encrypted with FHE but still compute their feature selection processes correctly using the FLAG vector. Not only does this approach provide a way to perform data splitting operations on the server without decryption, but it also ensures that the nodes give away absolutely no information about the size and imbalances of subsets, or about the structure of the tree, since an empty node will still contain the same data as other nodes on the same level.

Figure 4: feature IDs and column vectors

```
procedure DataPartitioning(dataset, node)
  selectedFeature <- FeatureSelection(dataset)
  node.threshold <- selected threshold for this feature
  flag <- selectedFeature.column
  node.left <- (dataset, flag)
  node.right <- (dataset, flag)
end procedure
```
**Algorithm 8** Data Partitioning Phase
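To make the soft partition concrete, here is a plaintext sketch (names ours) of how a child node can compute class counts over "its" rows without ever materializing a subset: every row of the full dataset is touched, and the FLAG bit collapses the contribution of rows that belong to the sibling branch.

```rust
// Sketch of FLAG-masked statistics for the soft partition. Every row is
// processed; the flag decides, blindly in the real system, whether the row
// contributes to this branch's counts. `bool` stands in for a ciphertext bit.
fn masked_counts(labels: &[bool], flags: &[bool], right_branch: bool) -> (u32, u32) {
    let (mut pos, mut neg) = (0u32, 0u32);
    for (&y, &f) in labels.iter().zip(flags) {
        // A row belongs here iff flag == right_branch; in FHE this is an
        // XNOR followed by masked (collapsed) additions.
        let mine = !(f ^ right_branch);
        pos += (mine & y) as u32;
        neg += (mine & !y) as u32;
    }
    (pos, neg)
}

fn main() {
    let labels = [true, false, true, true];
    let flags = [true, true, false, true]; // true = row went right at the parent
    println!("right: {:?}", masked_counts(&labels, &flags, true));  // (2, 1)
    println!("left:  {:?}", masked_counts(&labels, &flags, false)); // (1, 0)
}
```

Counts obtained this way feed directly into the masked Gini computations used by the tree growing phase below.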
```
procedure TreeGrowing(dataset, flag, node)
  if termination criterion is met then
    return root
  end if
  perform feature selection and data partitioning phases
  TreeGrowing(dataset, flag, node.left)
  TreeGrowing(dataset, flag, node.right)
end procedure
```
**Algorithm 9** Tree Growing Phase

### Termination Phase

Decision trees are prone to overfitting and can overfit perfectly to training data unless some stopping criterion is employed to halt the training process. In the case of the privacy-preserving decision tree paradigm proposed in this work, throughout the recursive tree growing phases, the set of valid data entries (as determined by the FLAG vector) for each node shrinks at successive levels until every node in a given level no longer has any valid data left. Because of the nature of this paradigm, the training process will be blind to the fact that there is no valid data remaining to be trained on and that the tree expansion should be stopped. Therefore, a stopping mechanism in the proposed FHE decision tree is necessary not only to minimize overfitting, but also to stop the tree from training indefinitely. There are many stopping criteria that can be employed to stop the training of decision trees. One of the most effective and straightforward methods is to simply set a maximum depth for the tree, so that it stops training at a given depth even if further training could be done on the remaining data. This method minimizes overfitting and can be implemented in a straightforward manner in the proposed FHE decision tree. An integer parameter denoting the maximum depth of the tree can be set by the client and passed to the server in plaintext to control the number of tree growing iterations performed during training. While providing this information in plaintext may appear to leak information about the tree, this information would already be known to the server if it were to perform inference with the decision tree; therefore, the solution proposed in this paper does not leak any information beyond what would already be known by the server.

### Integrating the Phases

Having defined the different phases of the privacy-preserving decision tree training paradigm, they can be integrated into one unifying algorithm to demonstrate the proposed paradigm in full, as shown in Algorithm 10.

```
root: starting node, entry point for decision tree
procedure TrainingAlgorithm(dataset)
  for feature_i in dataset do
    gini_i <- giniImpurity(feature_i)
    tupleList <- (feature_i.ohe, feature_i.column, gini_i)
  end for
  blindSorting(tupleList)
  selectedFeature <- tupleList[0]
  remove the feature corresponding to tupleList[0] from the dataset
  root.threshold <- selected threshold for this feature
  flag <- selectedFeature.column
  if termination criterion is met then
    return root
  end if
  TreeGrowing(dataset, flag, root.left)
  TreeGrowing(dataset, flag, root.right)
end procedure
```
**Algorithm 10** Privacy-Preserving Paradigm for Decision Tree Training

## 5 FHE Inference on Trained Tree

With the trained tree on the server, inference can be performed to evaluate new, incoming data. Although inference is not the focal point of this paper, and indeed many past papers have demonstrated decision tree inference with FHE, discussing inference here is necessary to showcase how a decision tree trained using the proposed methodology can be seamlessly transitioned to prediction.
The prediction protocol takes an approach similar to that of [1] and converts the trained tree into a polynomial equation. For prediction over binary, categorical data, the threshold is also binary, as in Algorithm 11. In other words, for a given \(X^{i}\in X\), taking the right branch is correct if \(X^{i}\) is \(\mathcal{E}(True)\) and taking the left branch is correct if \(X^{i}\) is \(\mathcal{E}(False)\). Therefore, line 5 in Algorithm 11 ensures that the correct path of the tree continues with the prediction while the wrong path is collapsed, in a manner similar to our collapsing algorithm shown in Algorithm 3.

```
1: predict(node, X): node refers to an FHE decision tree node containing
   feature information; X denotes the incoming validation data
2: if node contains leaf value then
3:   return node_leaf_value
4: else
5:   return X^node * predict(node.right, X) +
6:          (1 - X^node) * predict(node.left, X)
7: end if
```
**Algorithm 11** Prediction Protocol: Binary

For prediction in cases where it is necessary to have continuous or non-binary threshold values, one would need to employ blind comparison from Algorithm 1 to compare the appropriate feature value from the input data with the threshold value at any given node of the tree. This process is shown in Algorithm 12, where the results of the blind comparisons are used for collapsing the prediction of wrong paths in the tree. This methodology of converting the trained decision tree into a polynomial equation representation and using the collapsing technique from Algorithm 3 ensures that the correct result is obtained in prediction while maintaining the privacy of the data, and even ensures that the correct path is hidden from the server during runtime. This prediction protocol is independent of the training protocol and can be used for any decision tree, plaintext or encrypted.

## 6 Results

Preliminary results were collected using the Phishing Website Detector dataset to showcase the validity of the training and prediction paradigm introduced in this paper. The Rust library for TFHE from [22] is used for the code implementation of the training and prediction paradigm presented in this paper [15]. As of March 2023, [22] began supporting encrypted comparison and min/max operations using underlying techniques similar to those of Algorithm 1 and Algorithm 3, and these functions from [22] are used in the implementation of this work. In addition, a completely plaintext version of the proposed paradigm was implemented in Python to benchmark the performance of the paradigm before introducing FHE. In order to alleviate computational complexity and costs, a primitive feature selection algorithm that measures class loss was implemented in place of Gini impurity, at the cost of performance. Future work in this direction will focus on optimizing the performance metrics of this privacy-preserving paradigm, as well as on efficiency in server-side computing, as this paradigm is highly parallelizable. The preliminary results are shown in Table 1. Validation accuracies for the plaintext and encrypted models were averaged after performing prediction on 30 batches of data, as shown in Figure 5 and Figure 6, respectively. In addition, a t-statistic of \(2.97\times 10^{-15}\) indicates that the plaintext and encrypted models are statistically consistent. This shows that this paradigm is a valid and feasible approach to privacy-preserving decision trees, as well as to privacy-preserving machine learning as a whole.
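Before closing, a plaintext sketch of the Algorithm 11 polynomial may clarify the collapsing mechanics: the complement term `(1 - x)` is what suppresses the left subtree when `x` is encrypted true. The node layout and names are illustrative only; under FHE, `x` and the leaf values would be ciphertexts.

```python
# Plaintext sketch of the binary prediction polynomial from Algorithm 11.
# Both subtrees are always evaluated; the wrong one is collapsed to zero.

class N:
    def __init__(self, feature=None, left=None, right=None, leaf=None):
        self.feature, self.left, self.right, self.leaf = feature, left, right, leaf

def predict(node, X):
    if node.leaf is not None:
        return node.leaf
    x = X[node.feature]                  # an encrypted bit in the real system
    return x * predict(node.right, X) + (1 - x) * predict(node.left, X)

# depth-1 toy tree: split on feature 0, leaves 0 (left) and 1 (right)
tree = N(feature=0, left=N(leaf=0), right=N(leaf=1))
print(predict(tree, X=[1]))   # -> 1 (right branch kept, left collapsed)
```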
The aim for future work is two-pronged. First, this research effort aims to address the computational complexity and cost of this paradigm with parallelization techniques and more powerful computing units, to make the paradigm trainable at a larger scale. The second objective is to introduce optimization efforts for performance metrics, such as replacing the primitive feature selection technique with state-of-the-art algorithms and introducing cross-validation techniques into the paradigm.

## Acknowledgment

This work was funded by BALANCED Media|Technology (BMT), a company that may potentially benefit from the research results. Corey Clark has an equity interest in BMT and also serves as the company's chief technology officer. The terms of this arrangement have been reviewed and approved by Southern Methodist University in accordance with its conflict of interest policies.
2301.05063
Finite-size excess-entropy scaling for simple liquids
We introduce and validate a finite-size two-body excess entropy integral equation. By using analytical arguments and computer simulations of prototypical simple liquids, we show that the excess entropy $s_2$ exhibits a finite-size scaling with the inverse of the linear size of the simulation box. Since the self-diffusivity coefficient $D^*$ displays a similar finite-size effect, we show that the scaling entropy relation $D^*=A\exp(\alpha s_2)$ also depends on the simulation box size. By extrapolating to the thermodynamic limit, we report values for the coefficients $A$ and $\alpha$ that agree well with values available in the literature. Finally, we find a power law relation between the scaling coefficients for $D^*$ and $s_2$, suggesting a constant viscosity to entropy ratio.
Mauricio Sevilla, Atreyee Banerjee, Robinson Cortes-Huerto
2023-01-12T14:59:54Z
http://arxiv.org/abs/2301.05063v1
# Finite-size excess-entropy scaling for simple liquids ###### Abstract We introduce and validate a finite-size two-body excess entropy integral equation. By using analytical arguments and computer simulations of prototypical simple liquids, we show that the excess entropy \(s_{2}\) exhibits a finite-size scaling with the inverse of the linear size of the simulation box. Since the self-diffusivity coefficient \(D^{*}\) displays a similar finite-size effect, we show that the scaling entropy relation \(D^{*}=A\exp(\alpha s_{2})\) also depends on the simulation box size. By extrapolating to the thermodynamic limit, we report values for the coefficients \(A\) and \(\alpha\) that agree well with values available in the literature. Finally, we find a power law relation between the scaling coefficients for \(D^{*}\) and \(s_{2}\), suggesting a constant viscosity to entropy ratio.

## I Introduction

Excess entropy (\(s_{\rm exc}\)), the difference between the entropy of a system and its ideal gas counterpart at the same temperature and density, is connected to the dynamical properties of simple liquids (see Ref. [1] for a recent review). This observation was first reported by Rosenfeld [2], who showed that, for simple model liquids, reduced transport properties such as diffusivity, viscosity and thermal conductivity scale with the excess entropy as \[X^{*}=A\exp\left(\alpha s_{\rm exc}\right)\,, \tag{1}\] with \(X^{*}\) a dimensionless transport property and \(A\) and \(\alpha\) parameters, independent of the thermodynamic state, determined by the interparticle potential. Following similar physical arguments and assuming that the major contribution to \(s_{\rm exc}\) comes from two-body terms, Dzugutov proposed a similar scaling relation between self-diffusivity and a two-body approximation to the excess entropy \(s_{2}\), namely [3] \[D^{*}=A\exp(\alpha s_{2})\,, \tag{2}\] with \(D^{*}=\frac{D}{\Gamma\sigma_{r}^{2}}\), where \(D\) is the self-diffusion coefficient, \(\sigma_{r}\) measures the linear size of the particles, and \(\Gamma=4\sigma_{r}^{2}g(\sigma_{r})\rho\sqrt{\frac{\pi k_{\rm B}T}{m}}\) is the collision frequency given by the Enskog theory [4], with \(g(\sigma_{r})\) the value of the radial distribution function at a distance \(\sigma_{r}\). In this case, a large variety of simple liquids satisfy Eq. (2) with the _universal_ choice of parameters \(A=0.049\) and \(\alpha=1\) [3]. This excess entropy scaling has been widely validated for a large variety of simple [5; 6; 7; 8; 9; 10; 11] and molecular liquids [12; 13; 14; 15], including especially water [16; 17; 18; 19]. We also highlight that experimental studies have tested entropy scaling in somewhat challenging scenarios [20; 21], and the fact that the Rosenfeld and Dzugutov relations are empirical but have been justified on theoretical grounds [22; 23]. Furthermore, the structure-dynamics connection in Eq. (2) has been proposed as a tool to investigate the relation between dynamical properties of computational models at different resolutions [24], which is now routinely considered in the context of coarse-grained models [25; 26]. Transport properties exhibit implicit size effects due to the finite size of the simulation box and the use of periodic boundary conditions (PBC) [27; 28; 29].
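As an aside on notation, the reduction entering Eq. (2) can be written out in a few lines; the inputs below, in particular the contact value \(g(\sigma_{r})\), are illustrative numbers rather than results of this work, and all quantities are assumed to be in LJ units:

```python
import numpy as np

# Sketch of the Enskog reduction used in Eq. (2):
#   D* = D / (Gamma * sigma_r^2),
#   Gamma = 4 sigma_r^2 g(sigma_r) rho sqrt(pi kB T / m).

def reduced_diffusivity(D, rho, T, g_sigma, sigma_r=1.0, m=1.0, kB=1.0):
    gamma = 4.0 * sigma_r**2 * g_sigma * rho * np.sqrt(np.pi * kB * T / m)
    return D / (gamma * sigma_r**2)

# illustrative state point: rho, T from the paper's range, g(sigma_r) assumed
print(reduced_diffusivity(D=0.03, rho=0.864, T=2.0, g_sigma=2.5))
```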
In the particular case of the reduced self-diffusion coefficient \(D^{*}\), given a cubic simulation box of linear size \(L\), \(D^{*}\equiv D^{*}(L)\) takes the form [27; 30; 31; 32; 33] (see Figure 1) \[D^{*}(L)={D^{*}}^{\infty}-\frac{\delta}{L}\,, \tag{3}\] with \(\delta=\frac{k_{\rm B}T\zeta}{6\pi\eta\Gamma\sigma_{r}^{2}}\), where \(\zeta\approx 2.837297\) and \(\eta\) is the system's viscosity. In the thermodynamic limit (TL), namely, in the limit \(L\to\infty\), the self-diffusion coefficient takes the value \(D^{*\infty}\). Given the finite-size scaling of \(D^{*}\), we expect that Eq. (2) also depends on the size of the simulation box. Recent computational studies investigating entropy scaling for liquid water using ab initio molecular dynamics simulations [34] emphasise the relevance of this remark. In this case, the systems under consideration are rather small, and finite-size effects become increasingly important.

Figure 1: Reduced self-diffusion coefficient \(D^{*}\) as a function of the inverse of the box linear size \(1/L\) for a Lennard-Jones liquid with density \(\rho\sigma_{\rm LJ}^{3}=0.864\) in the range of temperatures \(k_{\rm B}T=[0.7\epsilon,7\epsilon]\).

In this paper, we investigate the finite-size scaling of Eq. (2) by focusing on implicit and explicit finite-size effects present in the two-body excess entropy \(s_{2}\). We find that \(s_{2}\) obeys a finite-size scaling relation similar to \(D^{*}\), which implies that the _universal_ parameters \(A\) and \(\alpha\) in Eq. (2) also depend on the size of the simulation box. Finally, and perhaps more interestingly, our results indicate that a power law relates the finite-size scaling coefficients of \(D^{*}\) and \(s_{2}\), suggesting a constant viscosity/entropy ratio [35; 36; 37; 38]. The paper is organised as follows: In Section II we present the model and computational details. We show that \(s_{2}\) is ensemble invariant and that the only relevant finite-size effect comes from using finite integration domains in Section III. In Section IV, we introduce and validate a finite-size version of \(s_{2}\). We then present the finite-size scaling of the Dzugutov relation (Eq. (2)) in Section V. Finally, we conclude and provide our outlook in Section VI.

## II Computational details

We investigate the excess entropy scaling for liquids whose potential energy is described by a 12-6 Lennard-Jones potential, truncated with cutoff radius \(r_{c}/\sigma_{\rm LJ}=2.5\) and shifted. The parameters \(\epsilon\), \(\sigma_{\rm LJ}\) and \(m\) define the energy, length and mass units, respectively. All the results are expressed in LJ units, with time \(\sigma_{\rm LJ}(m/\epsilon)^{1/2}\), temperature \(\epsilon/k_{\rm B}\) and pressure \(\epsilon/\sigma_{\rm LJ}^{3}\). In the following, we identify \(\sigma_{r}\) of Eq. (3) with \(\sigma_{\rm LJ}\). We consider cubic simulation boxes with linear sizes in the interval \(L/\sigma_{\rm LJ}=[5,50]\), with fixed density \(\rho\sigma_{\rm LJ}^{3}=0.864\). The systems are equilibrated at temperatures in the interval \(k_{\rm B}T=[0.7\epsilon,7.0\epsilon]\), enforced with a Langevin thermostat with damping coefficient \(\gamma=1.0\) in units of \((\sigma_{\rm LJ}(m/\epsilon)^{1/2})^{-1}\). We equilibrate the samples for \(10\times 10^{6}\) molecular dynamics (MD) steps using a time step of \(\delta t/(\sigma_{\rm LJ}(m/\epsilon)^{1/2})=10^{-3}\), followed by an additional \(10\times 10^{6}\) MD steps in the NVE ensemble to verify that the temperature does not deviate substantially from the target value.
Production runs span \(10\times 10^{6}\) MD steps. All the simulations have been performed with the LAMMPS simulation package [39].

## III Implicit and explicit finite-size effects

In this section, we identify which finite-size effects are expected to affect the calculation of the excess entropy. We start with the definition of excess entropy for an \(N\)-particle system with respect to the ideal gas: \[s_{\rm exc}=\frac{S-S_{\rm IG}}{Nk_{\rm B}}=\frac{S_{2}+S_{3}+\cdots}{Nk_{\rm B}}\,, \tag{4}\] with \(k_{\rm B}\) the Boltzmann constant. In the following, we focus on two-body contributions, which mostly amount to 80-90% of the overall value of the excess entropy for simple liquids [40; 41]. In particular, we have [42; 43] \[s_{2}=-\frac{\rho}{2V}\int_{V}\int_{V}d{\bf r}_{1}\,d{\bf r}_{2}\,\left[g({\bf r})\ln g({\bf r})-(g({\bf r})-1)\right]\,, \tag{5}\] with \(s_{2}=\frac{S_{2}}{Nk_{\rm B}}\) the two-body excess entropy per particle. By taking the thermodynamic limit and assuming that the liquid is homogeneous and isotropic, we obtain the familiar expression \[s_{2}^{\infty}=-2\pi\rho\int_{0}^{\infty}dr\,r^{2}\left[g(r)\ln g(r)-(g(r)-1)\right]\,. \tag{6}\] When performing molecular dynamics simulations, we usually consider systems with a finite number of particles, typically not large enough to reach the thermodynamic limit. Furthermore, when evaluating the double integral in Eq. (5), we need to consider that the volume \(V\) is finite. For such a reason, and following the strategy used to compute the compressibility equation [44; 45] and the Kirkwood-Buff integrals [46; 47; 48] in computer simulations, we define a finite-size two-body excess entropy evaluated in a subvolume \(V\) of a system with a total number of particles \(N_{0}\) and volume \(V_{0}\): \[\begin{split} s_{2}(V;N_{0})=-\frac{\rho}{2V}\int_{V}\int_{V}d{\bf r}_{1}\,d{\bf r}_{2}\,\left[g({\bf r};N_{0})\ln g({\bf r};N_{0})\right.\\ \left.-(g({\bf r};N_{0})-1)\right]\,,\end{split} \tag{7}\] with \(g({\bf r};N_{0})\) the finite-size RDF. The asymptotic correction to the finite-size RDF, given by the difference in the thermodynamic ensemble, gives [49; 50; 51; 52; 53; 54] \[g({\bf r};N_{0})=g({\bf r})-\frac{\chi_{T}^{\infty}}{N_{0}} \tag{8}\] with \(\chi_{T}^{\infty}=\rho k_{\rm B}T\kappa_{T}\), and \(\kappa_{T}\) being the isothermal compressibility in the thermodynamic limit. We write the integrand in Eq. (7) as \[\begin{split}g({\bf r};N_{0})\ln g({\bf r};N_{0})\approx g({\bf r})\ln g({\bf r})\\ -\frac{\chi_{T}^{\infty}}{N_{0}}(1+\ln g({\bf r}))\\ g({\bf r};N_{0})-1=g({\bf r})-1-\frac{\chi_{T}^{\infty}}{N_{0}}\,,\end{split} \tag{9}\] where in the first line of the previous expression we have neglected terms of order \(O\left(\frac{1}{N_{0}^{2}}\right)\). The two contributions \(\frac{\chi_{T}^{\infty}}{N_{0}}\) cancel out exactly. The contribution \(\frac{\chi_{T}^{\infty}}{N_{0}}\ln g({\bf r})\) can be neglected by assuming a large number of particles (there is no \(V/V_{0}\) contribution, only \(1/V_{0}\); hence, we can neglect it). This indicates that the two-body excess entropy is ensemble invariant, consistent with the result reported in Refs. [55; 56]. We thus rewrite Eq. (7) as \[\begin{split}s_{2}(V)=-\frac{\rho}{2V}\int_{V}\int_{V}d{\bf r}_{1}\,d{\bf r}_{2}\,\left[g({\bf r})\ln g({\bf r})\right.\\ \left.-(g({\bf r})-1)\right]\,.\end{split} \tag{10}\] The volume \(V\) is finite and embedded into the volume \(V_{0}\).
The integration domains can be rearranged as \(\int_{V}\int_{V}(\cdots)=\int_{V}\int_{V_{0}}(\cdots)-\int_{V}\int_{V_{0}-V}(\cdots)\). Using an argument similar to the one used to calculate the finite-size compressibility [57] and Kirkwood-Buff integrals [46], the term \(\int_{V}\int_{V_{0}}(\cdots)\) gives \(s_{2}^{\infty}\) and the term \(\int_{V}\int_{V_{0}-V}(\cdots)\) scales as \(1/L\), with \(L=V^{1/3}\) the linear size of the cubic simulation box. Thus, \[s_{2}(L)=s_{2}^{\infty}+\frac{\sigma}{L}\,, \tag{11}\] with \(\sigma\) a constant that depends on intensive thermodynamic quantities only. In the following section, we introduce a method to compute \(s_{2}(L)\) and verify its scaling behaviour with the linear size of the simulation box. To finish this section, we compare our results with the usual truncation of Eq. (6) up to a cutoff radius \(R\), namely \[\begin{split}s_{2}^{R}=-2\pi\rho\int_{0}^{R}dr\,r^{2}\left[g(r;N_{0})\ln g(r;N_{0})\right.\\ \left.-(g(r;N_{0})-1)\right]\,.\end{split} \tag{12}\] We use this truncated integral to verify numerically that ensemble finite-size contributions cancel out almost exactly [11]. For a system of size \(L/\sigma_{\rm LJ}=35\) at \(k_{\rm B}T=2.0\epsilon\), we separate the \(g(r;N_{0})-1\), _Kirkwood-Buff_, and the \(g(r;N_{0})\ln g(r;N_{0})\), _Information_, contributions and plot them as a function of the truncation radius \(R\) (see Figure 2). Both integrals diverge for large values of \(R\), Kirkwood-Buff to infinity and Information to minus infinity, which signals a clear ensemble finite-size effect. However, these two finite-size contributions balance each other, and the sum of the two integrals converges to \(s_{2}^{\infty}\) for \(R\gg 1\). Due to this error cancellation, the truncation Eq. (12) gives \(s_{2}^{\infty}\) even for relatively small simulation boxes, and its finite-size dependence has been commonly overlooked in the literature.

## IV Finite-volume excess entropy

Based on previous work on finite-size isothermal compressibility [58] and Kirkwood-Buff integrals [59], we define a finite-volume two-body excess entropy as follows: \[s_{2}(V)=-\frac{\rho}{2V}\int\int d\mathbf{r}_{1}\,d\mathbf{r}_{2}\,R(\mathbf{r}_{1})\,R(\mathbf{r}_{2})\,h(\mathbf{r})\,, \tag{13}\] with \(R(\mathbf{r})\) a step function that defines the finite integration subdomain, being equal to one inside and to zero outside the volume \(V\) [58]. The function \(h(\mathbf{r})\) is defined as \[h(\mathbf{r})=g(\mathbf{r})\ln g(\mathbf{r})-(g(\mathbf{r})-1)\,. \tag{14}\] We write the double integral of \(s_{2}(V)\) in Fourier space and include the periodicity of the simulation box in \(h(\mathbf{r})\) explicitly. Thus \[s_{2}(V)=-\frac{\rho}{2(2\pi)^{3}V}\int d\mathbf{k}\,\tilde{R}(\mathbf{k})\,\tilde{R}(-\mathbf{k})\,\tilde{h}^{\rm PBC}(\mathbf{k})\,, \tag{15}\] where [58] \[\tilde{h}^{\rm PBC}(\mathbf{k})=\sum_{n_{x},n_{y},n_{z}}e^{-i\mathbf{k}\cdot\mathbf{s}_{n_{x},n_{y},n_{z}}}\tilde{h}(\mathbf{k})\,, \tag{16}\] with \(\tilde{h}(\mathbf{k})\) the Fourier transform of \(h(\mathbf{r})\) and \(\mathbf{s}_{n_{x},n_{y},n_{z}}=(n_{x}\,L_{x},n_{y}\,L_{y},n_{z}\,L_{z})\) a vector specifying the system's periodic images, such that \(n_{x,y,z}\) takes integer values. In the following, we consider a cubic simulation box with \(L_{x}=L_{y}=L_{z}=L\). As before [59], we choose \(|n_{x}|\leq 1\), \(|n_{y}|\leq 1\) and \(|n_{z}|\leq 1\) to compute Eq. (16).
Finally, we assume a homogeneous and isotropic fluid such that \(\tilde{h}(\mathbf{k})=\tilde{h}(k)\) with \(k=\sqrt{\mathbf{k}\cdot\mathbf{k}}\). To validate our approach, we verify that Eqs. (15) and (12) converge to the same value in the thermodynamic limit. To this aim, we consider a system with linear size \(L/\sigma_{\rm LJ}=50\) at \(k_{\rm B}T=2.0\epsilon\), compute the RDF and evaluate the truncated integral Eq. (12). According to Eq. (11), implicit finite-size effects are the most relevant in this case. Hence, by considering a sufficiently large simulation box, the large \(R\) limit of Eq. (12) converges to the TL value. We present this result in Fig. 3 (black solid curve). To evaluate Eq. (15), we take the RDF from the simulation box with linear size \(L/\sigma_{\rm LJ}=20\) and perform the Fourier transform procedure described above to obtain \(\tilde{h}(\mathbf{k})\). It is apparent, as expected, that with explicit PBC, the finite-size \(s_{2}\) gives the TL value (red dashed curve). Instead, by removing PBC, there is a significant deviation from the TL value that we attribute to the \(1/L\) dependence in Eq. (11) (blue solid curve). We verify this \(1/L\) dependence in the finite-size \(s_{2}\). In Figure 4, we plot the result of \(s_{2}^{R}\), Eq. (12), as a function of \(1/L\) (black inverted triangles). There, it is apparent that the integral converges when the linear size of the system is \(L/\sigma_{\rm LJ}>10\). The result of using \(s_{2}(V)\), Eq. (15), with explicit PBC, always converges to the TL value (red triangles), regardless of the linear size of the system. More interestingly, by removing PBC from Eq. (15), we observe a clear linear dependence with \(1/L\) (blue squares). Furthermore, by extrapolating this behaviour (blue dashed line) to the axis \(1/L=0\), we obtain a linear extrapolation to \(s_{2}^{\infty}\). This result completes the validation of both Eqs. (11) and (15).

Figure 2: Plot of the two contributions, _Kirkwood-Buff_ (\(g(r;N_{0})-1\)) and _Information_ (\(g(r;N_{0})\ln g(r;N_{0})\)), to the truncated integral \(s_{2}^{R}\) for a system of linear size \(L/\sigma_{\rm LJ}=35\) at \(k_{\rm B}T=2.0\epsilon\). It is apparent that the two terms oscillate out-of-phase for small values of \(R\), and their sum converges to \(s_{2}^{\infty}\) when \(R\to\infty\).

## V Finite-size excess-entropy scaling

In this section, we investigate the finite-size effects of the self-diffusivity entropy scaling, Eq. (2). To this aim, we verify that the scaling of \(s_{2}\) with \(1/L\) is valid in a wide temperature range. We present these results for a LJ system with density \(\rho\sigma_{\rm LJ}^{3}=0.864\) in the range of temperatures \(k_{\rm B}T=[0.7\epsilon,7\epsilon]\). The results in Figure 5 indicate that the \(1/L\) scaling is apparent for all temperatures considered here. We now collect all our data to investigate the scaling of Eq. (2) with the simulation box size. The result is presented in Figure 6, where the diffusion constant \(D^{*}\) is plotted against \(-s_{2}\). A clear trend with system size emerges, indicating that Eq. (2) remains valid even for the smallest simulation boxes considered and showing that the parameters \(A\) and \(\alpha\) are also size dependent. By extrapolating \(D^{*}\) and \(-s_{2}\) to the limit \(1/L\to 0\), we obtain the TL values given by the black empty triangles, which agree well with the reference scaling provided by Eq. (2) (black dashed line).
Indeed, we report \(A^{\infty}=0.048\pm 0.001\) and \(\alpha^{\infty}=1.000\pm 0.013\) in the TL, in good agreement with the value originally estimated in Ref. [3]. Finally, we investigate the relation between the coefficients \(\delta\) and \(\sigma\) of the finite-size scaling of \(D^{*}\) and \(s_{2}\), respectively. In Figure 7, we plot \(\sigma\) as a function of \(\delta\) and observe a power law relation of the form \(\sigma=a\delta^{b}\) with \(a=1.256\pm 0.118\) and \(b=-0.513\pm 0.020\).

## VI Summary and outlook

We define a finite-size two-body excess entropy \(s_{2}(L)\) integral equation, with \(L\) the linear size of the simulation box. Using analytical arguments and simulations of a prototypical Lennard-Jones liquid at different densities and temperatures, we show that \(s_{2}(L)=s_{2}^{\infty}+\sigma/L\), with \(\sigma\) a constant that depends on intensive thermodynamic quantities. Given the well-known finite-size scaling of the self-diffusivity, \(D^{*}(L)=D^{*\infty}-\delta/L\), we show that the universal scaling relation between entropy and diffusion \(D^{*}=A\exp{(\alpha s_{2})}\) also exhibits a finite-size dependence and, by extrapolating to the TL, report \(A=0.048\pm 0.001\) and \(\alpha=1.000\pm 0.013\), in good agreement with values reported in the literature. Finally, and perhaps more interestingly, we show that the scaling coefficients \(\sigma\) and \(\delta\) of \(s_{2}\) and \(D^{*}\), respectively, are related by a somewhat simple power law \(\sigma=a\delta^{b}\) with \(a=1.256\pm 0.118\) and \(b=-0.513\pm 0.020\).

Figure 4: \(s_{2}\) as a function of the inverse of the simulation box size \(L\) for systems at \(k_{\rm B}T=2.0\epsilon\). The black triangles are calculated with the truncated integral (Eq. (12)); the red triangles and blue squares were calculated with the double integral (Eq. (15)) including and excluding PBC, respectively.

Figure 5: \(-s_{2}\) as a function of \(1/L\) for a LJ system at \(\rho\sigma_{\rm LJ}^{3}=0.864\) and different temperatures. All data points were obtained with the RDF for the system of linear size \(L/\sigma_{\rm LJ}=20\), using Eq. (15) without PBC.

Figure 3: Running \(s_{2}\) as a function of the ratio \(R/L\) for the case \(L/\sigma_{\rm LJ}=5\) at \(k_{\rm B}T=2.0\epsilon\). The black line corresponds to the truncation Eq. (12), and the red and blue curves are the result of Eq. (15) including (\(|n_{x}|\leq 1\), \(|n_{y}|\leq 1\) and \(|n_{z}|\leq 1\)) and not including (\(|n_{x}|=|n_{y}|=|n_{z}|=0\)) PBC, respectively. By including PBC, the integral Eq. (15) converges to the thermodynamic limit.

The finite-size scaling of \(s_{2}\) can be rationalised in terms of the thermodynamics of small systems [60; 61]. In particular, the statistical mechanics of a few model small systems in confinement has been derived recently [62]. The authors have shown that, given the high surface area-to-volume ratio of small systems, thermodynamic properties include surface contributions. In the case of entropy, these contributions include \(1/L\) terms, with \(L\) the linear size of the system. In this context, we feel that the finite-size entropy scaling investigated here might play a role in understanding the non-equilibrium thermodynamics of confined, small systems [63]. The power law relation between the scaling coefficients of self-diffusion and two-body excess entropy is somewhat intriguing. On the one hand, the size scaling in the self-diffusion appears as a consequence of the conservation of linear momentum [30].
On the other hand, the finite-size scaling in the two-body entropy results from a surface contribution due to the confinement of the system [62]. Admittedly, we do not have a satisfactory explanation for this connection. Nevertheless, we point out that the ratio \(\delta^{b}/\sigma=1/a\) might be related to a constant viscosity/entropy ratio. Indeed, \(\delta\) is inversely proportional to the system's viscosity, and a simple dimensional analysis tells us that \(\sigma\) has units of entropy times length. Interestingly, string theory methods have been used to conjecture that, for fluids in equilibrium, the viscosity to entropy density ratio has a lower bound at \(\hbar/4\pi k_{\rm B}\) [35], with \(\hbar\) the reduced Planck constant. This relation, tested for various fluid systems [36; 37; 38], was originally derived by considering that the entropy density of a black hole is proportional to the surface-to-volume ratio of its event horizon, i.e., a \(1/L\) contribution. We find this connection fascinating, and, in our opinion, it deserves further investigation. ###### Acknowledgements. We are grateful to Kurt Kremer for his insightful discussions. We also thank Denis Andrienko for his critical reading of the manuscript. R.C.-H. gratefully acknowledges funding from SFB-TRR146 of the German Research Foundation (DFG). Simulations have been performed on the THINC cluster at the Max Planck Institute for Polymer Research and the COBRA cluster at the Max Planck Computing and Data Facility.
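To summarize the numerical recipe implied by Eqs. (2), (3) and (11), a short sketch of the extrapolation and fitting steps follows. All arrays below are placeholder numbers, not data from this work, and the Dzugutov fit in part (ii) is meant across thermodynamic state points in the TL, as in the paper:

```python
import numpy as np

# (i) per-box-size values at one state point: extrapolate to 1/L -> 0
inv_L = np.array([1/10, 1/20, 1/35, 1/50])
Dstar = np.array([0.030, 0.033, 0.0345, 0.035])     # D*(L), placeholders
s2    = np.array([-3.30, -3.36, -3.39, -3.40])      # s2(L), placeholders

slope_D, D_inf = np.polyfit(inv_L, Dstar, 1)
delta = -slope_D                             # Eq. (3): D*(L) = D*_inf - delta/L
sigma, s2_inf = np.polyfit(inv_L, s2, 1)     # Eq. (11): s2(L) = s2_inf + sigma/L
print(f"D*_inf = {D_inf:.4f}  delta = {delta:.4f}")
print(f"s2_inf = {s2_inf:.3f}  sigma = {sigma:.3f}")

# (ii) TL values across state points: fit ln D* = ln A + alpha * s2
s2_tl = np.array([-4.1, -3.4, -2.8, -2.3])
D_tl  = 0.049 * np.exp(1.0 * s2_tl)          # synthetic data obeying Eq. (2)
alpha, lnA = np.polyfit(s2_tl, np.log(D_tl), 1)
print(f"A = {np.exp(lnA):.3f}  alpha = {alpha:.3f}")   # recovers 0.049 and 1
```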
2306.09826
Pose Graph Optimization for a MAV Indoor Localization Fusing 5GNR TOA with an IMU
This paper explores the potential of 5G new radio (NR) Time-of-Arrival (TOA) data for indoor drone localization under different scenarios and conditions when fused with inertial measurement unit (IMU) data. Our approach involves performing graph-based optimization to estimate the drone's position and orientation from the multiple sensor measurements. Due to the lack of real-world data, we use Matlab 5G toolbox and QuaDRiGa (quasi-deterministic radio channel generator) channel simulator to generate TOA measurements for the EuRoC MAV indoor dataset that provides IMU readings and ground truths 6DoF poses of a flying drone. Hence, we create twelve sequences combining three predefined indoor scenarios setups of QuaDRiGa with 2 to 5 base station antennas. Therefore, experimental results demonstrate that, for a sufficient number of base stations and a high bandwidth 5G configuration, the pose graph optimization approach achieves accurate drone localization, with an average error of less than 15 cm on the overall trajectory. Furthermore, the adopted graph-based optimization algorithm is fast and can be easily implemented for onboard real-time pose tracking on a micro aerial vehicle (MAV).
Meisam Kabiri, Claudio Cimarelli, Hriday Bavle, Jose Luis Sanchez-Lopez, Holger Voos
2023-06-16T13:10:23Z
http://arxiv.org/abs/2306.09826v1
# Pose Graph Optimization for a MAV Indoor Localization Fusing 5GNR TOA with an IMU ###### Abstract This paper explores the potential of 5G new radio (NR) Time-of-Arrival (TOA) data for indoor drone localization under different scenarios and conditions when fused with inertial measurement unit (IMU) data. Our approach involves performing graph-based optimization to estimate the drone's position and orientation from the multiple sensor measurements. Due to the lack of real-world data, we use the Matlab 5G toolbox and the QuaDRiGa (quasi-deterministic radio channel generator) channel simulator to generate TOA measurements for the EuRoC MAV indoor dataset, which provides IMU readings and ground truth 6DoF poses of a flying drone. Hence, we create twelve sequences combining three predefined indoor scenario setups of QuaDRiGa with 2 to 5 base station antennas. Experimental results demonstrate that, for a sufficient number of base stations and a high bandwidth 5G configuration, the pose graph optimization approach achieves accurate drone localization, with an average error of less than 15 cm on the overall trajectory. Furthermore, the adopted graph-based optimization algorithm is fast and can be easily implemented for onboard real-time pose tracking on a micro aerial vehicle (MAV). 5G TOA, IMU, QuaDRiGa, Indoor Localization, Pose Graph Optimization, Sensor Fusion, Micro Aerial Vehicles.

## I Introduction

Drones, such as micro aerial vehicles (MAVs), have become increasingly prevalent in indoor environments due to their potential in various applications, from surveillance to delivery services. For many of these applications, accurate drone positioning and orientation are essential. Global Navigation Satellite Systems (GNSS), the most widely used positioning technology, encounter challenges in penetrating indoor environments due to signal attenuation and multipath effects. Inertial navigation systems (INSs) are another widely used method for indoor localization, but they accumulate noise over time, resulting in significant position errors if left uncorrected. Alternative indoor localization techniques, including Wi-Fi, Bluetooth, and Ultra-Wideband (UWB), exhibit limitations in accuracy, scalability, energy efficiency, and cost [1, 2]. For instance, Wi-Fi is highly susceptible to noise, Bluetooth has restricted range and accuracy, and UWB faces slow progress in standard development. Moreover, Zigbee's emphasis on low-power communication and its limited range further constrain its localization capabilities. Consequently, a need arises for alternative indoor positioning technologies that can offer high accuracy and reliability without relying on GNSS. Recent advances in wireless communication technologies have paved the way for developing location-based services and applications that rely on accurate localization. In particular, deploying fifth-generation (5G) cellular networks has opened up new possibilities for indoor localization due to their high bandwidth, low latency, and improved coverage [3], with small cell technologies, like femtocells and picocells, facilitating indoor coverage. For downlink positioning, 5G utilizes a dedicated pilot signal called the Positioning Reference Signal (PRS), from which the signal delay is measured by correlating the received PRS with a locally generated PRS. The delay, also called Time-of-Arrival (TOA), is then calculated by identifying the peak correlation value between the two signals.
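A toy sketch of this correlation step may help fix ideas: a plain cross-correlation over random samples stands in for the PRS processing on the 5G resource grid, and the sampling rate below is only an assumed figure, not a value from the paper.

```python
import numpy as np

# Minimal TOA-from-correlation sketch: correlate the received samples with
# the local reference sequence and convert the peak lag into a delay.

def toa_from_correlation(rx, prs, sample_rate):
    corr = np.abs(np.correlate(rx, prs, mode="full"))
    lag = int(np.argmax(corr)) - (len(prs) - 1)   # peak position -> delay in samples
    return max(lag, 0) / sample_rate

rng = np.random.default_rng(0)
prs = rng.standard_normal(128)                    # stand-in for a PRS sequence
true_delay = 17                                   # samples
rx = np.concatenate([np.zeros(true_delay), prs])
rx = rx + 0.1 * rng.standard_normal(rx.size)      # add measurement noise
print(toa_from_correlation(rx, prs, sample_rate=61.44e6))  # ~ 17 / 61.44e6 s
```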
Accurate and reliable indoor localization remains challenging due to the complex and dynamic nature of indoor environments. In this context, our work proposes a novel approach for indoor localization using 5G TOA measurements. However, 5G TOA alone may not provide sufficient information for reliable indoor localization. Thus, our work aims to fuse 5G TOA with an inertial measurement unit (IMU) to improve the real-time pose estimation of a flying MAV. IMUs can measure angular velocities and linear accelerations, providing complementary information to TOA measurements. To accomplish this, we extract distances from the PRS correlation profiles and use them to formulate a range error function tightly integrated with the IMU measurements in a graph-based optimization technique. Due to the lack of real 5G data recorded from MAVs, we utilize the QuaDRiGa simulator [4] to create accurate 5G TOA measurements based on ground truth 6 Degrees of Freedom (DoF) pose data, to ensure precise channel modeling. Hence, our experimental simulations include three indoor configurations with LOS and varying numbers of base stations (BSs) virtually placed inside the EuRoC MAV dataset environment [5]. To summarize, the contributions provided by this paper are the following:

* Formulation of a factor graph model that tightly optimizes 5G TOA ranges with IMU measurements.
* Evaluation on a state-of-the-art dataset showing accuracy close to centimeter precision while running an efficient real-time algorithm.
* Simulation of 5G TOA comparing multiple antenna and communication settings in an indoor scenario to find the best configuration for precise localization.

## II Related Works

The literature on localization using 5G is relatively limited, especially considering the mechanical aspects and the sensor fusion framework. The existing literature often relies on simplistic methodologies and scenarios. Ferre et al. [6] compared localization accuracy for different combinations of the 5G network configurations (center frequency, sub-carrier spacing, and PRS comb size) in terms of the Root Mean Square Error (RMSE). Also, their study considered a fixed target and employed multilateration based on PRS-derived TOA from multiple BSs. A study by del Peral-Rosado et al. [7] explored the impact on positioning performance of placing BSs linearly along a straight roadside 5G network. They utilized Gauss-Newton optimization and simulated a vehicle traveling at 100 km/h on a highway. The study revealed an accuracy of less than 20-25 cm for a communication bandwidth of 50-100 MHz. Additionally, the researchers calculated the TOA by determining the first correlation peak between the PRS and the received signal. Saleh et al. [8] proposed time-based positioning by combining vehicle velocity information and 5G measurements. They evaluated their approach in a simulated urban canyon using Siradel's S_5GChannel simulator and employed an EKF with a constant velocity model for sensor fusion. The study also analyzed the impact of the 5G geometrical setup on the EKF positioning estimation. Another EKF-based positioning framework was proposed by Menta et al. [9]. The authors leveraged the 5G Angle of Arrival (AOA) extracted from the communication signal of BSs equipped with multi-array antennas. By utilizing this information, they achieved sub-meter localization accuracy. Sun et al. [10] studied localization by combining AOA estimates from 5G BSs with TOA measurements from GNSS satellites. The authors utilized a Taylor series to linearize the mathematical model.
As post-processing, they applied a moving average to the raw position estimates to minimize errors. Finally, [11] explored the fusion of beamformed RSS information with GNSS data using Neural Networks (NNs), achieving meter-level accuracy. Unlike previous approaches, we address indoor localization using a factor graph to model the relations among non-homogeneous sensor measurements. Furthermore, we leverage the advanced IMU pre-integration factor to propagate the MAV's 6DoF pose between two lower-frequency TOA measurements, obtaining 6DoF pose estimates at a higher frequency.

## III Methodology

This section describes the proposed drone localization approach in 5G networks. This involves the fusion of range measurements, which we obtain from the TOA of the signal from multiple BSs, with IMU data, _i.e.,_ angular velocity and linear acceleration measurements.

### _5G ToA Estimation_

To estimate the distance to the 5G BSs, we must first obtain the TOA values along the robot's trajectory. Since we do not possess actual data on 5G communication, this step first requires generating the 5G signal transmitted by each base station, including the PRS symbols. Then, a channel simulator creates an impulse response that emulates the wireless channel characteristics based on specific network configurations and given receiver and transmitter positions. Convolving the transmitted signal with the impulse response replicates the effects of the transmission environment, generating the received signal. At the receiver, the signal is correlated with the corresponding PRS of each base station, resulting in a PRS correlation profile. By analyzing it, we identify the TOA as one of the peaks (local maxima). Notably, the real TOA is a value close to the best peak but often not matching it. We apply a heuristic selection of the first peak surpassing a global threshold, which we find experimentally on the data.

### _Pose Estimation_

To accurately estimate the drone's 6DoF pose, we use a graph-based optimization technique that models the relationships between the pose variables using sensor measurements and optimizes the estimation using non-linear least squares. This approach involves creating a factor graph model [12] (see the one abstracting our problem formulation in Figure 1), where the nodes represent the state variables to be estimated and the edges, called factors, represent residual error functions that compare predicted states and observed measurements. Therefore, the factor graph models the posterior probability density \(p(\mathcal{X}|\mathcal{Z})\) of the state variables \(\mathcal{X}\) given a set of measurements \(\mathcal{Z}\), and, assuming independent measurements, it can be factorized into likelihoods \(p(\mathcal{Z}_{t}^{f}|\mathcal{X}_{t})\) and a prior \(p(\mathcal{X}_{0})\): \[p(\mathcal{X}|\mathcal{Z})\propto p(\mathcal{X}_{0})p(\mathcal{Z}|\mathcal{X})=p(\mathcal{X}_{0})\prod_{\forall t\in\mathcal{T},\forall f\in\mathcal{F}}p(\mathcal{Z}_{t}^{f}|\mathcal{X}_{t})\,, \tag{1}\] where \(\mathcal{F}\) defines the set of factor types that can replace the likelihoods. We denote the set of measurements the factor \(f\) uses for computing the residual at time \(t\) with \(\mathcal{Z}_{t}^{f}\), where \(\mathcal{T}\) is the set of tracked time frames.

#### III-B1 State Variables

We aim to determine the 3D location and orientation of the MAV's body center, which we align with the IMU frame.
As we track the full history of 6DoF poses with a full-smoothing approach, the set of state variables \(\mathcal{X}\) contains all the poses from the start of the trajectory \(\mathtt{T}_{1}\) to the end \(\mathtt{T}_{\mathrm{N}}\), where \(\mathrm{N}\) is the total number of pose nodes added to the graph. Specifically, each \(\mathtt{T}_{t}\doteq(\mathtt{R}_{t},\mathtt{p}_{t})\in\mathrm{SE}(3),\forall t\in\mathcal{T}=\{1,\ldots,\mathrm{N}\}\), is composed of a rotation \(\mathtt{R}_{t}\in\mathrm{SO}(3)\) and a translation \(\mathtt{p}_{t}\in\mathbb{R}^{3}\) that transform the body frame \(\mathtt{B}\) into the world frame \(\mathtt{W}\), where the BSs are placed, at time \(t\). In addition to the 6DoF transformations, \(\mathcal{X}\) comprises the MAV's linear velocity \(\mathbf{v}_{t}\in\mathbb{R}^{3}\). We also need to estimate the time-variant biases of the IMU's gyroscope \(\mathbf{b}_{t}^{g}\in\mathbb{R}^{3}\) and accelerometer \(\mathbf{b}_{t}^{a}\in\mathbb{R}^{3}\) to keep track of the IMU noise drift. Lastly, we include the 3D positions of the 5G antennas \(\mathbf{L}_{k}\in\mathbb{R}^{3},\forall k\in\{1,\ldots,\mathrm{K}\}\), where \(\mathrm{K}\) is the total number of BSs.

#### III-B2 IMU Factor

Our approach involves a 6-axis IMU that measures the linear acceleration \({}_{\mathtt{B}}\tilde{\mathbf{a}}_{t}\) and angular velocity \({}_{\mathtt{B}}\tilde{\mathbf{\omega}}_{t}\) expressed in the body frame \(\mathtt{B}\). The IMU's real motion state \(\{{}_{\mathtt{B}}\mathbf{a}_{t},{}_{\mathtt{B}}\mathbf{\omega}_{t}\}\) is altered by additive Gaussian white noise \(\{\mathbf{\eta}_{t}^{a},\mathbf{\eta}_{t}^{g}\}\) and slowly time-varying biases \(\{\mathbf{b}_{t}^{a},\mathbf{b}_{t}^{g}\}\) affecting the accelerometer and gyroscope, respectively, as defined by the following IMU model: \[{}_{\mathtt{B}}\tilde{\mathbf{\omega}}_{t}={}_{\mathtt{B}}\mathbf{\omega}_{t}+\mathbf{b}_{t}^{g}+\mathbf{\eta}_{t}^{g} \tag{2}\] \[{}_{\mathtt{B}}\tilde{\mathbf{a}}_{t}={}_{\mathtt{B}}\mathbf{a}_{t}-\mathbf{R}_{t}^{\mathsf{T}}\mathbf{g}+\mathbf{b}_{t}^{a}+\mathbf{\eta}_{t}^{a}\,, \tag{3}\] where \(\mathbf{g}\) is the Earth's gravity vector in the world frame \(\mathtt{W}\). Due to the IMU's higher sampling frequency compared with the other sensors, it typically captures multiple measurements between two TOA instances. The IMU factor is constructed utilizing a _preintegrated measurement_ [13] constraining the relative motion increments. Specifically, we obtain the condensed measurements \(\Delta\tilde{\mathbf{R}}_{ij}\) of rotation, \(\Delta\tilde{\mathbf{p}}_{ij}\) of position, and \(\Delta\tilde{\mathbf{v}}_{ij}\) of velocity by integrating multiple IMU readings \(\{{}_{\mathtt{B}}\tilde{\mathbf{a}}_{t},{}_{\mathtt{B}}\tilde{\mathbf{\omega}}_{t}:\forall t\in[i,j]\}\). So, we can define the residual terms \(\mathbf{r}\) for the rotation, position, and velocity increments: \[\mathbf{r}_{ij}^{\mathtt{R}}\doteq\mathrm{Log}\left(\Delta\tilde{\mathbf{R}}_{ij}^{\mathsf{T}}\,\mathbf{R}_{i}^{\mathsf{T}}\mathbf{R}_{j}\right)\,, \tag{4}\] \[\mathbf{r}_{ij}^{\mathbf{p}}\doteq\mathbf{R}_{i}^{\mathsf{T}}\big(\mathbf{p}_{j}-\mathbf{p}_{i}-\mathbf{v}_{i}\Delta t_{ij}-\tfrac{1}{2}\mathbf{g}\Delta t_{ij}^{2}\big)-\Delta\tilde{\mathbf{p}}_{ij}\,, \tag{5}\] \[\mathbf{r}_{ij}^{\mathbf{v}}\doteq\mathbf{R}_{i}^{\mathsf{T}}\left(\mathbf{v}_{j}-\mathbf{v}_{i}-\mathbf{g}\Delta t_{ij}\right)-\Delta\tilde{\mathbf{v}}_{ij}\,, \tag{6}\] where \(\Delta t_{ij}\) is the total time interval. Also, \(\mathrm{Log}:\mathrm{SO}(3)\rightarrow\mathbb{R}^{3}\) defines the logarithm map that associates elements of the rotation manifold \(\mathrm{SO}(3)\) with vectors in the Euclidean tangent space \(\mathbb{R}^{3}\) representing rotation increments.
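For concreteness, a small numpy sketch of the residuals in Eqs. (4)-(6) follows. It assumes the preintegrated deltas are given, and it is not the paper's implementation, which relies on GTSAM; the SO(3) logarithm is taken from scipy's rotation-vector representation.

```python
import numpy as np
from scipy.spatial.transform import Rotation

g = np.array([0.0, 0.0, -9.81])   # gravity in the world frame W (assumed value)

def imu_residuals(Ri, pi, vi, Rj, pj, vj, dR, dp, dv, dt):
    """Preintegration residuals of Eqs. (4)-(6), given states at times i, j
    and the preintegrated deltas (dR, dp, dv) over the interval dt."""
    r_R = Rotation.from_matrix(dR.T @ Ri.T @ Rj).as_rotvec()         # Eq. (4)
    r_p = Ri.T @ (pj - pi - vi * dt - 0.5 * g * dt**2) - dp           # Eq. (5)
    r_v = Ri.T @ (vj - vi - g * dt) - dv                              # Eq. (6)
    return r_R, r_p, r_v

# trivial check: perfectly consistent states and deltas give zero residuals
I, z = np.eye(3), np.zeros(3)
print(imu_residuals(I, z, z, I, z, z, I, z, z, dt=0.0))
```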
Regarding the biases, the total residual \(\mathbf{r}_{ij}^{\mathrm{b}}\) between times \(t=i\) and \(t=j\), with \(i<j\), is set as follows: \[\mathbf{r}_{ij}^{\mathrm{b}}\doteq\mathbf{b}_{j}^{g}-\mathbf{b}_{i}^{g}+\mathbf{b}_{j}^{a}-\mathbf{b}_{i}^{a}\,. \tag{7}\]

#### III-B3 TOA Range Factor

By multiplying the estimated TOA values \(\delta_{sk}\) by the speed of light \(c\), _i.e.,_ \(d_{sk}=\delta_{sk}\cdot c\), we obtain \(\mathrm{K}\) metric distance measurements \(d_{sk}\in\mathbb{R},\forall s\in\mathcal{S}\subseteq\mathcal{T}\), of the drone to the \(k\)-th landmark \(\mathbf{L}_{k}\) at time \(s\). Notably, we explicitly express the possibility of having fewer TOA measurements than tracked poses. The residual \(\mathbf{r}_{ik}^{\delta}\) of the TOA factor at time \(s=i\) with the BS \(\mathbf{L}_{k}\) is defined as: \[\mathbf{r}_{ik}^{\delta}\doteq d_{ik}-\|\mathbf{p}_{i}-\mathbf{L}_{k}\|_{2}\,. \tag{8}\]

#### III-B4 Optimization

The optimization problem is formulated as a Maximum a Posteriori (MAP) estimation that involves finding the state \(\mathcal{X}^{*}\) that maximizes the posterior: \[\mathcal{X}^{*}=\operatorname*{arg\,max}_{\mathcal{X}}p(\mathcal{X}|\mathcal{Z}). \tag{9}\] Considering the proportional relationship in Equation 1, Equation 9 is equivalent to the maximization of the problem factorized through likelihood functions: \[\mathcal{X}^{*}=\operatorname*{arg\,max}_{\mathcal{X}}p(\mathcal{X}_{0})\prod_{\forall t\in\mathcal{T},\forall f\in\mathcal{F}}p(\mathcal{Z}_{t}^{f}|\mathcal{X}_{t})\,. \tag{10}\] Notably, with factor graphs, likelihoods can be expressed by the more general factors, which we have defined with \(f\in\mathcal{F}=\{\mathbf{R},\mathbf{p},\mathbf{v},\mathbf{b},\delta\}\) referring to the related residual functions. By assuming that the measurement errors are zero-mean Gaussian distributed, Equation 10 is analogous to the minimization of the negative log-likelihood: \[\mathcal{X}^{*}=\ \operatorname*{arg\,min}_{\mathcal{X}}\|\mathbf{r}_{0}\|_{\mathbf{\Sigma}_{0}}^{2}+\sum_{t=1}^{\mathrm{N}}\sum_{\forall f\in\mathcal{F}}\left\|\mathbf{r}_{t}^{f}\right\|_{\mathbf{\Sigma}_{t}^{f}}^{2}\,, \tag{11}\] where \(\left\|\mathbf{r}\right\|_{\mathbf{\Sigma}}^{2}=\mathbf{r}^{\mathsf{T}}\mathbf{\Sigma}^{-1}\mathbf{r}\) is the squared Mahalanobis norm, and \(\mathbf{r}_{t}^{f}\) are the residual functions of the aforementioned factors \(f\) computed at time \(t\) with covariance matrix \(\mathbf{\Sigma}_{t}^{f}\). We denote with \(\mathbf{r}_{0}\) the residual derived from the prior on the initial pose, with \(\mathbf{\Sigma}_{0}\) being its covariance matrix. To efficiently solve the MAP optimization problem, we utilize the iSAM2 iterative optimization algorithm [14] implemented in GTSAM [15]. This algorithm can automatically identify the variables that require relinearization at each step, and it enables us to keep the graph solution updated while adding new nodes without experiencing memory overload.

## IV Experiments

### _Simulation of the 5G Communication_

This study uses the 5G specifications for indoor base stations to generate the PRS signals for TOA estimation.
Our simulation relies on the MATLAB 5G Toolbox to generate the resource grids for 5G NR signals, including the PRS and Physical Downlink Shared Channel (PDSCH) resources. Aiming at a realistic simulation of the 5G NR wireless communication, we employ the QuaDRiGa (quasi-deterministic radio channel generator) channel simulator [4]. For precise channel modeling, it is essential to have information regarding the drone's position, orientation, and velocity. This information is required to accurately account for factors such as the Doppler shift effect. We also assume that both the receiver and all transmitters use omnidirectional antennas. Three 5G simulation scenarios were considered, with different frequencies: QuaDRiGa-Industrial-LOS for 5 GHz, 3GPP-38.901-Indoor-LOS for 28 GHz, and mmMAGIC-Indoor-LOS for 78 GHz.

* **QuaDRiGa_Industrial_LOS [16]**: This scenario is designed to replicate a LOS environment for industrial applications.
* **3GPP_38.901_Indoor_LOS [17]**: This scenario simulates indoor environments, _e.g.,_ office buildings and shopping centers, with 0.5-100 GHz LOS frequency.
* **mmMAGIC_Indoor_LOS [18]**: This is designed specifically for frequencies ranging from 6-100 GHz and indoor environments, _e.g.,_ offices, with LOS.

The configurations for each 5G simulation scenario are provided in Table I.

Fig. 1: The figure visualizes the structure of the factor graph used to optimize the variables, represented by circles, by relating them through factors, represented by squares. The nodes \(\mathtt{T}_{t}\) contain the 6DoF pose variables connected by IMU pre-integration factors (the bias and velocity nodes are not visualized). TOA measurements create range factors between robot pose nodes and BS position nodes, \(\mathbf{L}_{1}\) and \(\mathbf{L}_{2}\). A prior factor is connected to the first node \(\mathtt{T}_{1}\) to constrain it with the initial trajectory pose.

### _Experimental Environment_

As the 5G channel simulation requires knowing the state of the receiver, _i.e.,_ its pose and velocity, we require a flying drone dataset to evaluate our method. The EuRoC MAV dataset [5] is a widely used benchmark dataset for visual-inertial odometry and SLAM. It was collected by an indoor drone equipped with a stereo-camera module providing images at 20 Hz and a calibrated IMU at 200 Hz. The EuRoC MAV dataset contains the drone's position and orientation data obtained through the Vicon motion capture system, which can record the full 6DoF pose at about 100 Hz. The full set of calibrated rigid transformations between the sensors and the Vicon is also given. EuRoC MAV consists of several sequences. For this study, we consider the Vicon Room 1, sequence 01. We employ QuaDRiGa to model the wireless communication channel based on the available 6DoF ground truth poses of the drone provided by the dataset, from which we compute the required velocity considering the translation vectors between two time-consecutive poses. To this aim, we virtually place two to five fictitious BSs in the room where the trajectory is recorded. The positions of the BSs in the EuRoC MAV Vicon system's coordinate frame are \(\mathrm{BS}_{1}=(-10,-7,2)\), \(\mathrm{BS}_{2}=(7,13,3)\), \(\mathrm{BS}_{3}=(25,-35,4)\), \(\mathrm{BS}_{4}=(-6,9,5)\), \(\mathrm{BS}_{5}=(-4,-14,6)\). We use these values to initialize the corresponding state variables of the optimization problem with a small covariance.
After generating the resource grids and simulating the channel model, we generated the received signal at the receiver every 0.2 seconds, enabling the calculation of the TOA at a frequency of 5 Hz. To extract the TOA, the received signal is correlated with the transmitter's PRS pattern, and the delay is calculated by analyzing the correlation profile. Typically, the initial or highest peak is considered the response. This approach can be compromised by noise. The LOS coefficient may be weaker than the multipath coefficients due to attenuation from non-line-of-sight (NLOS) objects or constructive interference. To address this, a threshold was set to eliminate values below it, and the first peak above the threshold was chosen as the response. It is worth noting that the threshold value was determined through experimentation. An example of a correlation profile from the simulation is shown in Figure 2, where neither the first nor the maximum peak was the response; still, a suitable threshold allowed the selection of the first peak as the response.

Fig. 2: PRS Correlation Profile

Table II gives the statistics of the error in the resulting estimated distance to each BS.

### _Evaluation Metrics_

For the evaluation of our approach, we utilize the two most popular metrics in SLAM: the Absolute Trajectory Error (ATE) and the Relative Pose Error (\(\mathrm{RPE}\)) of the rotation \(\mathrm{RPE}_{\mathbf{R}}\) and translation \(\mathrm{RPE}_{\mathbf{p}}\) [19]. In addition to these metrics, we calculate the \(\mathrm{RMSE}\) error \(\mathrm{E}_{a},\forall a\in[x,y,z]\) for each trajectory coordinate axis. By calculating the error for each coordinate axis separately, we aim to gain insights into possible differences in accuracy that depend on the spatial direction.

### _Results_

The drone's position and orientation results are obtained from the factor graph based on the final MAP estimate for each node. Nodes are generated consistently at 10 Hz, twice the TOA's frequency. To evaluate the performance of the localization algorithm, error metrics are computed by comparing the ground truth Vicon pose with the temporally closest estimated pose.
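The translation metrics can be reproduced in a few lines; this sketch assumes the pose pairs have already been associated to the temporally closest samples as described above:

```python
import numpy as np

# ATE as the RMSE of 3D position differences, plus per-axis errors E_x, E_y,
# E_z, matching the evaluation described in the text.

def ate_and_axis_rmse(gt_xyz, est_xyz):
    diff = est_xyz - gt_xyz                        # shape (N, 3)
    ate = np.sqrt(np.mean(np.sum(diff**2, axis=1)))
    e_axis = np.sqrt(np.mean(diff**2, axis=0))     # [E_x, E_y, E_z]
    return ate, e_axis

gt  = np.zeros((100, 3))
est = 0.1 * np.ones((100, 3))                      # constant 10 cm offset
print(ate_and_axis_rmse(gt, est))                  # ATE ~ 0.173 m, axes 0.1 m
```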
In Figure 4, we represent the most accurate results obtained in the 5G simulation scenario mmMAGIC-Indoor-LOS with five BSs. The plot displays the estimated 3D positions of the MAV, with arrows indicating the estimated attitude (excluding yaw). The position and orientation errors are color-coded to show in red the few spots in which the poses do not match well the ground-truth and in green where the error is low, down to a few centimeters. To evaluate the efficiency of the proposed method and its potential for real-time application, we recorded and reported the sum, average, and median optimization times. The sum optimization time was calculated to be 5.203 seconds for the 144-second trajectory of the EuRoC Mav dataset, indicating the total time required to optimize the drone's position and orientation using our graph-based framework. On average, the optimization process took only 0.0036 seconds, demonstrating the method's speed and potential for efficient implementation. Furthermore, the median optimization time was 0.0029 seconds, indicating that most optimization processes were even faster than the average, highlighting the consistency of the algorithm's performance. All the experiments were performed on a Ubuntu 20.04 laptop with an Intel(R) Core(TM) i9-10885H CPU @ 2.40GHz with 16 cores and 32 Gb of RAM. As the code was partially implemented in Python, we expect further improvement by a complete conversion in C++. ### _Limitations_ The approach used in the study has several limitations and potential for future work. The 3D position is not fully constrained with only two antennas, making convergence difficult without other measurements. Nevertheless, the UAV's rotation errors primarily result from IMU noise, as the radio frequency signal only provides distances to the antennas. The yaw estimation has drift issues because it lacks global measurement to correct it. Integrating other sensors can improve the localization accuracy by observing the rotation around \(z\), _e.g.,_ employing a magnetometer. Notably, a camera can be incorporated to add other constraints on the 6DoF relative motion based on visual features and loop closures. Furthermore, the error in the \(z\) axis is larger than along \(x\) and \(y\) axes because of limited offset or variation in the positions of the base stations in the height direction. We foresee the possibility of fusing the barometer's absolute height measurements to relieve such issues. Additionally, the localization accuracy depends heavily on the quality of the TOA measurements, which can be negatively affected by NLOS conditions. In such cases, correctly setting the measurement uncertainty for each TOA range factor, using Mahalanobis Fig. 4: Visualization of the 3D trajectory estimated using five BSs in mmMAGIC-Indoor-LOS 5G simulation scenario. Best viewed online and in color. Fig. 3: Box plot of the translation error in meters for each 5G simulation scenario and number of BSs. distance to discard outliers, or applying a robust kernel to the cost function, _e.g.,_ Huber, may be beneficial to alleviate the problem. Finally, the proposed method assumes that the positions of the base stations are known with high confidence and fixed in the exact location, which may not be the case in real-world scenarios where the stations may be moving or their positions may be completely unknown. Moreover, the study assumes that the odometry frame can be initially aligned with the world frame inside which the antennas are placed. 
## V Conclusion

In conclusion, we have demonstrated the potential of using 5G TOA-based range measurements together with data from inertial sensors to locate a MAV indoors in various scenarios and network setups. Our graph-based optimization strategy enables us to accurately determine the drone's position and orientation, with an average testing error of less than 15 cm. This technique has many practical applications, such as drone-powered monitoring and communication systems. In the future, we plan to improve localization accuracy and reliability by integrating visual data from cameras, experimenting with real data, and investigating advanced techniques for precise TOA estimation.
2303.12366
Chern classes in equivariant bordism
We introduce Chern classes in $U(m)$-equivariant homotopical bordism that refine the Conner-Floyd-Chern classes in the $MU$-cohomology of $B U(m)$. For products of unitary groups, our Chern classes form regular sequences that generate the augmentation ideal of the equivariant bordism rings. Consequently, the Greenlees-May local homology spectral sequence collapses for products of unitary groups. We use the Chern classes to reprove the $MU$-completion theorem of Greenlees-May and La Vecchia.
Stefan Schwede
2023-03-22T08:06:24Z
http://arxiv.org/abs/2303.12366v2
# Chern classes in equivariant bordism

###### Abstract.

We introduce Chern classes in \(U(m)\)-equivariant homotopical bordism that refine the Conner-Floyd-Chern classes in the \(\mathbf{MU}\)-cohomology of \(BU(m)\). For products of unitary groups, our Chern classes form regular sequences that generate the augmentation ideal of the equivariant bordism rings. Consequently, the Greenlees-May local homology spectral sequence collapses for products of unitary groups. We use the Chern classes to reprove the \(\mathbf{MU}\)-completion theorem of Greenlees-May and La Vecchia.

## Introduction

Complex cobordism \(\mathbf{MU}\) is arguably the most important cohomology theory in algebraic topology. It represents the bordism theory of stably almost complex manifolds, and it is the universal complex oriented cohomology theory; via Quillen's celebrated theorem [13], \(\mathbf{MU}\) is the entry gate for the theory of formal group laws into stable homotopy theory, and thus the cornerstone of chromatic stable homotopy theory. Tom Dieck's homotopical equivariant bordism \(\mathbf{MU}_{G}\) [17], defined with the help of equivariant Thom spaces, strives to be the legitimate equivariant refinement of complex cobordism, for compact Lie groups \(G\). The theory \(\mathbf{MU}_{G}\) is the universal equivariantly complex oriented theory; and for abelian compact Lie groups, the coefficient ring \(\mathbf{MU}_{G}^{*}\) carries the universal \(G\)-equivariant formal group law [7]. Homotopical equivariant bordism receives a homomorphism from the geometrically defined equivariant bordism theory; due to the lack of equivariant transversality, this homomorphism is _not_ an isomorphism for non-trivial groups. In general, the equivariant bordism ring \(\mathbf{MU}_{G}^{*}\) is still largely mysterious; the purpose of this paper is to elucidate its structure for unitary groups, and for products of unitary groups.

Chern classes are important characteristic classes for complex vector bundles that were originally introduced in singular cohomology. Conner and Floyd [4, Corollary 8.3] constructed Chern classes for complex vector bundles in complex cobordism; in the universal cases, these yield classes \(c_{k}\in\mathbf{MU}^{2k}(BU(m))\) that are nowadays referred to as Conner-Floyd-Chern classes. Conner and Floyd's construction works in much the same way for any complex oriented cohomology theory, see [1, Part II, Lemma 4.3]; in singular cohomology, it reduces to the classical Chern classes. The purpose of this note is to define and study Chern classes in \(U(m)\)-equivariant homotopical bordism \(\mathbf{MU}_{U(m)}^{*}\) that map to the Conner-Floyd-Chern classes under tom Dieck's bundling homomorphism [17, Proposition 1.2]. Our classes satisfy formal properties analogous to those of their classical counterparts, including the equivariant refinement of the Whitney sum formula, see Theorem 1.4. Despite the many formal similarities, there are crucial qualitative differences compared to Chern classes in complex oriented cohomology theories: our Chern classes are _not_ characterized by their restriction to the maximal torus, and some of our Chern classes are zero-divisors, see Remark 1.2. We will use our Chern classes and the splitting of [15] to prove new structure results about the equivariant bordism rings \(\mathbf{MU}_{U(m)}^{*}\) for unitary groups, or more generally for products of unitary groups.
To put this into context, we recall that in the special case when \(G\) is an _abelian_ compact Lie group, the graded ring \(\mathbf{MU}_{G}^{*}\) is concentrated in even degrees and free as a module over the non-equivariant cobordism ring \(\mathbf{MU}^{*}\)[3, Theorem 5.3], [10], and the bundling homomorphism \(\mathbf{MU}_{G}^{*}\longrightarrow\mathbf{MU}^{*}(BG)\) is completion at the augmentation ideal of \(\mathbf{MU}_{G}^{*}\)[2, Theorem 1.1], [11]. For non-abelian compact Lie groups \(G\), however, the equivariant bordism rings \(\mathbf{MU}_{G}^{*}\) are still largely mysterious. The main result of this note is the following: **Theorem.**_Let \(m\geq 1\) be a natural number._ 1. _The sequence of Chern classes_ \(c_{m}^{(m)},c_{m-1}^{(m)},\dots,c_{1}^{(m)}\) _is a regular sequence that generates the augmentation ideal of the graded-commutative ring_ \(\mathbf{MU}_{U(m)}^{*}\)_._ 2. _The completion of_ \(\mathbf{MU}_{U(m)}^{*}\) _at the augmentation ideal is a graded_ \(\mathbf{MU}^{*}\)_-power series algebra in the above Chern classes._ 3. _The bundling homomorphism_ \(\mathbf{MU}_{U(m)}^{*}\longrightarrow\mathbf{MU}^{*}(BU(m))\) _extends to an isomorphism_ \[(\mathbf{MU}_{U(m)}^{*})_{I}^{\wedge}\ \longrightarrow\ \mathbf{MU}^{*}(BU(m))\] _from the completion at the augmentation ideal._ We prove this result as a special case of Theorem 2.2 below; the more general version applies to products of unitary groups. As we explain in Remark 2.4, the regularity of the Chern classes also implies that the Greenlees-May local homology spectral sequence converging to \(\mathbf{MU}^{*}(BU(m))\) degenerates because the relevant local homology groups vanish in positive degrees. As another application we use the Chern classes in equivariant bordism to give a reformulation and self-contained proof of work of Greenlees-May [6] and La Vecchia [8] on the completion theorem for \(\mathbf{MU}_{G}\), see Theorem 3.5. ## 1. Equivariant \(\mathbf{MU}\)-Chern classes In this section we introduce the Chern classes in \(U(m)\)-equivariant homotopical bordism, see Definition 1.1. We establish their basic properties in Theorem 1.4, including a Whitney sum formula and the fact that the bundling homomorphism takes our Chern classes to the Conner-Floyd-Chern classes in \(\mathbf{MU}\)-cohomology. We begin by fixing our notation. For a compact Lie group \(G\), we write \(\mathbf{MU}_{G}\) for the \(G\)-equivariant homotopical bordism spectrum introduced by tom Dieck [17]. For our purposes, it is highly relevant that the theories \(\mathbf{MU}_{G}\) for varying compact Lie groups \(G\) assemble into a global stable homotopy type, see [14, Example 6.1.53]. For an integer \(n\), we write \(\mathbf{MU}_{G}^{n}=\pi_{-n}^{G}(\mathbf{MU})\) for the \(G\)-equivariant coefficient group in cohomological degree \(n\). Since \(\mathbf{MU}\) comes with the structure of a global ring spectrum, it supports graded-commutative multiplications on \(\mathbf{MU}_{G}^{*}\), as well as external multiplication pairings \[\times\ :\ \mathbf{MU}_{G}^{k}\times\mathbf{MU}_{K}^{l}\ \longrightarrow\ \mathbf{MU}_{G\times K}^{k+l}\] for all pairs of compact Lie groups \(G\) and \(K\). We write \(\nu_{k}\) for the tautological representation of the unitary group \(U(k)\) on \(\mathbb{C}^{k}\); we denote its Euler class by \[e_{k}\ =\ e(\nu_{k})\ \in\ \mathbf{MU}_{U(k)}^{2k}\,\] compare [17, page 347]. 
We write \(U(k,m-k)\) for the block subgroup of \(U(m)\) consisting of matrices of the form \(\left(\begin{smallmatrix}A&0\\ 0&B\end{smallmatrix}\right)\) for \((A,B)\in U(k)\times U(m-k)\). We write \(\mathrm{tr}_{U(k,m-k)}^{U(m)}:\mathbf{MU}_{U(k,m-k)}^{*}\longrightarrow \mathbf{MU}_{U(m)}^{*}\) for the transfer associated to the inclusion \(U(k,m-k)\longrightarrow U(m)\), see for example [14, Construction 3.2.11]. **Definition 1.1**.: For \(0\leq k\leq m\), the \(k\)_-th Chern class_ in equivariant complex bordism is the class \[c_{k}^{(m)}\ =\ \mathrm{tr}_{U(k,m-k)}^{U(m)}(e_{k}\times 1_{m-k})\ \in\ \mathbf{MU}_{U(m)}^{2k}\,\] where \(1_{m-k}\in\mathbf{MU}_{U(m-k)}^{0}\) is the multiplicative unit. We also set \(c_{k}^{(m)}=0\) for \(k>m\). In the extreme cases \(k=0\) and \(k=m\), we recover familiar classes: since \(e_{0}\) is the multiplicative unit in the non-equivariant cobordism ring \(\mathbf{MU}^{*}\), the class \(c_{0}^{(m)}=1_{m}\) is the multiplicative unit in \(\mathbf{MU}_{U(m)}^{0}\). In the other extreme, \(c_{m}^{(m)}=e_{m}=e(\nu_{m})\) is the Euler class of the tautological \(U(m)\)-representation. As we will show in Theorem 1.4 (ii), the classes \(c_{k}^{(m)}\) are compatible in \(m\) under restriction to smaller unitary groups. **Remark 1.2**.: We alert the reader that the restriction homomorphism \(\mathrm{res}_{T^{m}}^{U(m)}:\mathbf{MU}_{U(m)}^{*}\longrightarrow\mathbf{MU}_ {T^{m}}^{*}\) is not injective for \(m\geq 2\), where \(T^{m}\) denotes a maximal torus in \(U(m)\). So the Chern classes in \(\mathbf{MU}_{U(m)}^{*}\) are not characterized by their restrictions to the maximal torus - in contrast to the non-equivariant situation for complex oriented cohomology theories. To show this we let \(N\) denote the maximal torus normalizer inside \(U(m)\). The class \[1-\operatorname{tr}_{N}^{U(m)}(1)\ \in\ \mathbf{MU}_{U(m)}^{0}\] has infinite order because the \(U(m)\)-geometric fixed point map takes it to the multiplicative unit; in particular, this class is nonzero. The double coset formula [9, IV Corollary 6.7 (i)] \[\operatorname{res}_{T^{m}}^{U(m)}(\operatorname{tr}_{N}^{U(m)}(1))\ =\ \operatorname{res}_{T^{m}}^{N}(1)\ =\ 1\] implies that the class \(1-\operatorname{tr}_{N}^{U(m)}(1)\) lies in the kernel of the restriction homomorphism \(\operatorname{res}_{T^{m}}^{U(m)}:\mathbf{MU}_{U(m)}^{0}\longrightarrow\mathbf{ MU}_{T^{m}}^{0}\). Moreover, the Chern class \(c_{1}^{(2)}\) is a zero-divisor in the ring \(\mathbf{MU}_{U(2)}^{*}\), also in stark contrast to Chern classes in complex oriented cohomology theories. Indeed, reciprocity for restriction and transfers [14, Corollary 3.5.17 (v)] yields the relation \[c_{1}^{(2)}\cdot(1-\operatorname{tr}_{N}^{U(2)}(1)) =\ \operatorname{tr}_{U(1,1)}^{U(2)}(e_{1}\times 1)\cdot(1- \operatorname{tr}_{N}^{U(2)}(1))\] \[=\ \operatorname{tr}_{U(1,1)}^{U(2)}((e_{1}\times 1)\cdot \operatorname{res}_{U(1,1)}^{U(2)}(1-\operatorname{tr}_{N}^{U(2)}(1)))\ =\ 0\.\] One can also show that the class \(1-\operatorname{tr}_{N}^{U(2)}(1)\) is infinitely divisible by the Euler class \(e_{2}=c_{2}^{(2)}\); so it is also in the kernel of the completion map at the ideal \((e_{2})\). The Chern class \(c_{k}^{(m)}\) is defined as a transfer; so identifying its restriction to a subgroup of \(U(m)\) involves a double coset formula. The following double coset formula will take care of all cases we need in this paper; it ought to be well-known to experts, but I do not know a reference. 
The case \(l=1\) is established in [16, Lemma 4.2], see also [14, Example 3.4.13]. The double coset space \(U(i,j)\backslash U(m)/U(k,l)\) is discussed at various places in the literature, for example [12, Example 3], but I have not seen the resulting double coset formula spelled out.

**Proposition 1.3** (Double coset formula).: _Let \(i,j,k,l\) be positive natural numbers such that \(i+j=k+l\). Then_

\[\operatorname{res}_{U(i,j)}^{U(i+j)}\circ\operatorname{tr}_{U(k,l)}^{U(k+l)}\ =\ \sum_{\max(0,k-j)\leq d\leq\min(i,k)}\operatorname{tr}_{U(d,i-d,k-d,j-k+d)}^{U(i,j)}\circ\gamma_{d}^{*}\circ\operatorname{res}_{U(d,k-d,i-d,l-i+d)}^{U(k,l)}\,\]

_where \(\gamma_{d}\in U(i+j)\) is the permutation matrix of the shuffle permutation \(\chi_{d}\in\Sigma_{i+j}\) given by_

\[\chi_{d}(a)\ =\ \begin{cases}a&\text{ for $1\leq a\leq d$,}\\ a-d+i&\text{ for $d+1\leq a\leq k$,}\\ a+d-k&\text{ for $k+1\leq a\leq k+i-d$, and}\\ a&\text{ for $a>k+i-d$.}\end{cases}\]

Proof.: We refer to [9, IV 6] or [14, Theorem 3.4.9] for the general double coset formula for \(\operatorname{res}_{K}^{G}\circ\operatorname{tr}_{H}^{G}\) for two closed subgroups \(H\) and \(K\) of a compact Lie group \(G\); we need to specialize it to the situation at hand. We first consider a matrix \(A\in U(m)\) such that the center \(Z\) of \(U(i,j)\) is _not_ contained in the \(U(i,j)\)-stabilizer

\[S_{A}\ =\ U(i,j)\cap{}^{A}U(k,l)\]

of the coset \(A\cdot U(k,l)\). Then \(S_{A}\cap Z\) is a proper subgroup of the center \(Z\) of \(U(i,j)\), which is isomorphic to \(U(1)\times U(1)\). So \(S_{A}\cap Z\) has strictly smaller dimension than \(Z\). Since the center of \(U(i,j)\) is contained in the normalizer of \(S_{A}\), we conclude that the group \(S_{A}\) has an infinite Weyl group inside \(U(i,j)\). All summands in the double coset formula indexed by such points then involve transfers with infinite Weyl groups, and hence they vanish.

So all non-trivial contributions to the double coset formula stem from double cosets \(U(i,j)\cdot A\cdot U(k,l)\) such that \(S_{A}\) contains the center of \(U(i,j)\). In particular the matrix \(\left(\begin{smallmatrix}-E_{i}&0\\ 0&E_{j}\end{smallmatrix}\right)\) then belongs to \(S_{A}\). We write \(L=A\cdot(\mathbb{C}^{k}\oplus 0^{l})\), a complex \(k\)-plane in \(\mathbb{C}^{k+l}\); we consider \(x\in\mathbb{C}^{i}\) and \(y\in\mathbb{C}^{j}\) such that \((x,y)\in L\). Because \(\left(\begin{smallmatrix}-E_{i}&0\\ 0&E_{j}\end{smallmatrix}\right)\cdot L=L\), we deduce that \((-x,y)\in L\). Since \((x,y)\) and \((-x,y)\) belong to \(L\), so do the vectors \((x,0)\) and \((0,y)\). We have thus shown that the \(k\)-plane \(L=A\cdot(\mathbb{C}^{k}\oplus 0^{l})\) is spanned by the intersections

\[L\cap(\mathbb{C}^{i}\oplus 0^{j})\qquad\text{and}\qquad L\cap(0^{i}\oplus\mathbb{C}^{j})\.\]

We organize the cosets with this property by the dimension of the first intersection: we define \(M_{d}\) as the closed subspace of \(U(m)/U(k,l)\) consisting of those cosets \(A\cdot U(k,l)\) such that

\[\dim_{\mathbb{C}}(L\cap(\mathbb{C}^{i}\oplus 0^{j}))\ =\ d\qquad\text{and}\qquad\dim_{\mathbb{C}}(L\cap(0^{i}\oplus\mathbb{C}^{j}))\ =\ k-d\.\]

If \(M_{d}\) is non-empty, we must have \(\max(0,k-j)\leq d\leq\min(i,k)\).
The group \(U(i,j)\) acts transitively on \(M_{d}\), and the coset \(\gamma_{d}\cdot U(k,l)\) belongs to \(M_{d}\); so \(M_{d}\) is the \(U(i,j)\)-orbit type manifold of \(U(m)/U(k,l)\) for the conjugacy class of \[S_{\gamma_{d}}\ =\ U(i,j)\cap{}^{\gamma_{d}}U(k,l)\ =\ U(d,i-d,k-d,j-k+d)\.\] The corresponding orbit space \(U(i,j)\backslash M_{d}=U(i,j)\cdot\gamma_{d}\cdot U(k,l)\) is a single point inside the double coset space, so its internal Euler characteristic is \(1\). This orbit type thus contributes the summand \[\operatorname{tr}_{U(d,i-d,k-d,j-k+d)}^{U(i,j)}\circ\gamma_{d}^{*}\circ \operatorname{res}_{U(d,k-d,i-d,l-i+d)}^{U(k,l)}\] to the double coset formula. In [4, Corollary 8.3], Conner and Floyd define Chern classes for complex vector bundles in the non-equivariant \(\mathbf{MU}\)-cohomology rings. In the universal cases, these yield classes \(c_{k}\in\mathbf{MU}^{2k}(BU(m))\) that are nowadays referred to as Conner-Floyd-Chern classes. The next theorem spells out the key properties of our Chern classes \(c_{k}^{(m)}\); parts (i), (ii) and (iii) roughly say that all the familiar structural properties of the Conner-Floyd-Chern classes in \(\mathbf{MU}^{*}(BU(m))\) already hold for our Chern classes in \(U(m)\)-equivariant \(\mathbf{MU}\)-theory. Part (iv) of the theorem refers to the bundling maps \(\mathbf{MU}_{G}^{*}\longrightarrow\mathbf{MU}^{*}(BG)\) defined by tom Dieck in [17, Proposition 1.2]. **Theorem 1.4**.: _The Chern classes in homotopical equivariant bordism enjoy the following properties._ 1. _For all_ \(0\leq k\leq m=i+j\)_, the relation_ \[\operatorname{res}_{U(i,j)}^{U(m)}(c_{k}^{(m)})\ =\ \sum_{d=0,\ldots,k}c_{d}^{(i)}\times c_{k-d}^{(j)}\] _holds in the group_ \(\mathbf{MU}_{U(i,j)}^{2k}\)_._ 2. _The relation_ \[\operatorname{res}_{U(m-1)}^{U(m)}(c_{k}^{(m)})\ =\ \begin{cases}c_{k}^{(m-1)}&\text{ for $0\leq k \leq m-1$, and}\\ 0&\text{ for $k=m$}\end{cases}\] _holds in the group_ \(\mathbf{MU}_{U(m-1)}^{2k}\)_._ 3. _Let_ \(T^{m}\) _denote the diagonal maximal torus of_ \(U(m)\)_. Then the restriction homomorphism_ \[\operatorname{res}_{T^{m}}^{U(m)}\ :\ \mathbf{MU}_{U(m)}^{2k}\ \longrightarrow\ \mathbf{MU}_{T^{m}}^{2k}\] _takes the class_ \(c_{k}^{(m)}\) _to the_ \(k\)_-th elementary symmetric polynomial in the classes_ \(p_{1}^{*}(e_{1}),\ldots,p_{m}^{*}(e_{1})\)_, where_ \(p_{i}:T^{m}\longrightarrow T=U(1)\) _is the projection to the_ \(i\)_-th factor._ 4. 
_The bundling map_

\[\mathbf{MU}_{U(m)}^{*}\ \longrightarrow\ \mathbf{MU}^{*}(BU(m))\]

_takes the class_ \(c_{k}^{(m)}\) _to the_ \(k\)_-th Conner-Floyd-Chern class._

Proof.: (i) This property exploits the double coset formula for \(\operatorname{res}_{U(i,j)}^{U(m)}\circ\operatorname{tr}_{U(k,m-k)}^{U(m)}\) recorded in Proposition 1.3, which is the second equation in the following list:

\[\operatorname{res}_{U(i,j)}^{U(m)}(c_{k}^{(m)}) = \operatorname{res}_{U(i,j)}^{U(m)}(\operatorname{tr}_{U(k,m-k)}^{U(m)}(e_{k}\times 1_{m-k}))\]
\[= \sum_{d=0,\ldots,k}\operatorname{tr}_{U(d,i-d,k-d,j-k+d)}^{U(i,j)}(\gamma_{d}^{*}(\operatorname{res}_{U(d,k-d,i-d,j-k+d)}^{U(k,m-k)}(e_{k}\times 1_{m-k})))\]
\[= \sum_{d=0,\ldots,k}\operatorname{tr}_{U(d,i-d,k-d,j-k+d)}^{U(i,j)}(\gamma_{d}^{*}(e_{d}\times e_{k-d}\times 1_{i-d}\times 1_{j-k+d}))\]
\[= \sum_{d=0,\ldots,k}\operatorname{tr}_{U(d,i-d,k-d,j-k+d)}^{U(i,j)}(e_{d}\times 1_{i-d}\times e_{k-d}\times 1_{j-k+d})\]
\[= \sum_{d=0,\ldots,k}\operatorname{tr}_{U(d,i-d)}^{U(i)}(e_{d}\times 1_{i-d})\times\operatorname{tr}_{U(k-d,j-k+d)}^{U(j)}(e_{k-d}\times 1_{j-k+d})\]
\[= \sum_{d=0,\ldots,k}c_{d}^{(i)}\times c_{k-d}^{(j)}\]

Part (ii) for \(k<m\) follows from part (i) by restriction from \(U(m-1,1)\) to \(U(m-1)\):

\[\operatorname{res}_{U(m-1)}^{U(m)}(c_{k}^{(m)}) = \operatorname{res}_{U(m-1)}^{U(m-1,1)}(\operatorname{res}_{U(m-1,1)}^{U(m)}(c_{k}^{(m)}))\]
\[= \operatorname{res}_{U(m-1)}^{U(m-1,1)}(c_{k-1}^{(m-1)}\times c_{1}^{(1)}\ +\ c_{k}^{(m-1)}\times c_{0}^{(1)})\]
\[= c_{k-1}^{(m-1)}\times\operatorname{res}_{1}^{U(1)}(c_{1}^{(1)})\ +\ c_{k}^{(m-1)}\times\operatorname{res}_{1}^{U(1)}(c_{0}^{(1)})\ =\ c_{k}^{(m-1)}\.\]

We have used that the class \(c_{1}^{(1)}=e_{1}\) is in the kernel of the augmentation \(\operatorname{res}_{1}^{U(1)}:\mathbf{MU}_{U(1)}^{*}\longrightarrow\mathbf{MU}^{*}\). The Euler class \(c_{m}^{(m)}=e(\nu_{m})\) restricts to \(0\) in \(\mathbf{MU}_{U(m-1)}^{*}\) because the restriction of the tautological \(U(m)\)-representation to \(U(m-1)\) splits off a trivial \(1\)-dimensional summand.

(iii) An inductive argument based on property (i) shows the desired relation:

\[\operatorname{res}_{T^{m}}^{U(m)}(c_{k}^{(m)}) = \operatorname{res}_{U(1,\ldots,1)}^{U(m)}(c_{k}^{(m)})\]
\[= \sum_{A\subset\{1,\ldots,m\},|A|=k}\ \prod_{a\in A}p_{a}^{*}(c_{1}^{(1)})\cdot\prod_{b\not\in A}p_{b}^{*}(c_{0}^{(1)})\]
\[= \sum_{A\subset\{1,\ldots,m\},|A|=k}\ \prod_{a\in A}p_{a}^{*}(e_{1})\.\]

(iv) As before we let \(T^{m}\) denote the diagonal maximal torus in \(U(m)\). The splitting principle holds for non-equivariant complex oriented cohomology theories, see for example [5, Proposition 8.10]. In other words, in the commutative square of graded rings

\[\begin{array}{ccc}\mathbf{MU}_{U(m)}^{*}&\longrightarrow&\mathbf{MU}^{*}(BU(m))\\ \downarrow&&\downarrow\\ \mathbf{MU}_{T^{m}}^{*}&\longrightarrow&\mathbf{MU}^{*}(BT^{m})\end{array}\]

whose horizontal maps are the bundling maps and whose vertical maps are induced by the inclusion of the maximal torus, the right vertical map is injective. The \(k\)-th Conner-Floyd-Chern class is characterized as the unique element of \(\mathbf{MU}^{2k}(BU(m))\) that maps to the \(k\)-th elementary symmetric polynomial in the classes \(p_{1}^{*}(e_{1}),\ldots,p_{m}^{*}(e_{1})\). Together with part (iii), this proves the claim.

## 2. Regularity results

In this section we use the Chern classes to formulate new structural properties of the equivariant bordism ring \(\mathbf{MU}_{U(m)}^{*}\). In particular, we can say what \(\mathbf{MU}_{U(m)}^{*}\) looks like after dividing out some of the Chern classes, and after completing at the Chern classes.
The following theorem states these facts more generally for \(U(m)\times G\) instead of \(U(m)\); by induction on the number of factors, we can then deduce corresponding results for products of unitary groups, see Theorem 2.2. The results in this section make crucial use of the splitting theorem for global functors established in [15].

**Theorem 2.1**.: _For every compact Lie group \(G\) and all \(0\leq k\leq m\), the sequence of Chern classes_

\[(c_{m}^{(m)}\times 1_{G},\ c_{m-1}^{(m)}\times 1_{G},\ldots,\ c_{k+1}^{(m)}\times 1_{G})\]

_is a regular sequence in the graded-commutative ring \({\bf MU}^{*}_{U(m)\times G}\) that generates the kernel of the surjective restriction homomorphism_

\[{\rm res}^{U(m)\times G}_{U(k)\times G}\ :\ {\bf MU}^{*}_{U(m)\times G}\ \longrightarrow{\bf MU}^{*}_{U(k)\times G}\.\]

_In particular, the sequence of Chern classes \((c_{m}^{(m)},c_{m-1}^{(m)},\ldots,c_{1}^{(m)})\) is a regular sequence that generates the augmentation ideal of the graded-commutative ring \({\bf MU}^{*}_{U(m)}\)._

Proof.: We argue by downward induction on \(k\). The induction starts with \(k=m\), where there is nothing to show. Now we assume the claim for some \(k\leq m\), and we deduce it for \(k-1\). The inductive hypothesis shows that \(c_{m}^{(m)}\times 1_{G},\ldots,c_{k+1}^{(m)}\times 1_{G}\) is a regular sequence in the graded-commutative ring \({\bf MU}^{*}_{U(m)\times G}\), and that the restriction homomorphism \({\rm res}^{U(m)\times G}_{U(k)\times G}\) factors through an isomorphism

\[{\bf MU}^{*}_{U(m)\times G}/(c_{m}^{(m)}\times 1_{G},\ldots,c_{k+1}^{(m)}\times 1_{G})\ \cong\ {\bf MU}^{*}_{U(k)\times G}\.\]

We exploit that the various equivariant bordism spectra \({\bf MU}_{G}\) underlie a global spectrum, see [14, Example 6.1.53]; thus the restriction homomorphism \({\rm res}^{U(k)\times G}_{U(k-1)\times G}\) is surjective by Theorem 1.4 and Proposition 2.2 of [15]. Hence the standard long exact sequence splits into a short exact sequence of graded \({\bf MU}^{*}\)-modules:

\[0\ \longrightarrow\ {\bf MU}^{*-2k}_{U(k)\times G}\ \xrightarrow{(e_{k}\times 1_{G})\cdot-}\ {\bf MU}^{*}_{U(k)\times G}\ \xrightarrow{{\rm res}^{U(k)\times G}_{U(k-1)\times G}}\ {\bf MU}^{*}_{U(k-1)\times G}\ \longrightarrow\ 0\]

Because

\[{\rm res}^{U(m)\times G}_{U(k)\times G}(c_{k}^{(m)}\times 1_{G})\ =\ c_{k}^{(k)}\times 1_{G}\ =\ e_{k}\times 1_{G}\,\]

we conclude that \(c_{k}^{(m)}\times 1_{G}\) is a non zero-divisor in \({\bf MU}^{*}_{U(m)\times G}/(c_{m}^{(m)}\times 1_{G},c_{m-1}^{(m)}\times 1_{G},\ldots,c_{k+1}^{(m)}\times 1_{G})\), and that additionally dividing out \(c_{k}^{(m)}\times 1_{G}\) yields \({\bf MU}^{*}_{U(k-1)\times G}\). This completes the inductive step.

We can now identify the completion of \({\bf MU}^{*}_{U(m)}\) at the augmentation ideal as an \({\bf MU}^{*}\)-power series algebra on the Chern classes. We state this somewhat more generally for products of unitary groups, which we write as

\[U(m_{1},\ldots,m_{l})\ =\ U(m_{1})\times\cdots\times U(m_{l})\,\]

for natural numbers \(m_{1},\ldots,m_{l}\geq 1\). For \(1\leq i\leq l\), we write \(p_{i}:U(m_{1},\ldots,m_{l})\longrightarrow U(m_{i})\) for the projection to the \(i\)-th factor, and we set

\[c_{k}^{[i]}\ =\ p_{i}^{*}(c_{k}^{(m_{i})})\ =\ 1_{U(m_{1},\ldots,m_{i-1})}\times c_{k}^{(m_{i})}\times 1_{U(m_{i+1},\ldots,m_{l})}\ \in\ {\bf MU}^{2k}_{U(m_{1},\ldots,m_{l})}\.\]

The following theorem was previously known for tori, i.e., for \(m_{1}=\cdots=m_{l}=1\).
**Theorem 2.2**.: _Let \(m_{1},\ldots,m_{l}\geq 1\) be positive integers._

1. _The sequence of Chern classes_ (2.3) \[c_{m_{1}}^{[1]},\ldots,c_{1}^{[1]},c_{m_{2}}^{[2]},\ldots,c_{1}^{[2]},\ldots,c_{m_{l}}^{[l]},\ldots,c_{1}^{[l]}\] _is a regular sequence that generates the augmentation ideal of the graded-commutative ring_ \({\bf MU}^{*}_{U(m_{1},\ldots,m_{l})}\)_._
2. _The completion of_ \({\bf MU}^{*}_{U(m_{1},\ldots,m_{l})}\) _at the augmentation ideal is a graded_ \({\bf MU}^{*}\)_-power series algebra in the Chern classes (_2.3_)._
3. _The bundling map_ \({\bf MU}^{*}_{U(m_{1},\ldots,m_{l})}\longrightarrow{\bf MU}^{*}(BU(m_{1},\ldots,m_{l}))\) _extends to an isomorphism_ \[({\bf MU}^{*}_{U(m_{1},\ldots,m_{l})})^{\wedge}_{I}\ \longrightarrow\ {\bf MU}^{*}(BU(m_{1},\ldots,m_{l}))\] _from the completion at the augmentation ideal._

Proof.: Part (i) follows from Theorem 2.1 by induction on the number \(l\) of factors. We prove parts (ii) and (iii) together. We must show that for every \(n\geq 1\), \({\bf MU}^{*}_{U(m_{1},\ldots,m_{l})}/I^{n}\) is free as an \({\bf MU}^{*}\)-module on the monomials of degree less than \(n\) in the Chern classes (2.3). There is nothing to show for \(n=1\). The short exact sequence

\[0\ \longrightarrow\ I^{n}/I^{n+1}\ \longrightarrow\ {\bf MU}^{*}_{U(m_{1},\ldots,m_{l})}/I^{n+1}\ \longrightarrow\ {\bf MU}^{*}_{U(m_{1},\ldots,m_{l})}/I^{n}\ \longrightarrow\ 0\]

and the inductive hypothesis reduce the claim to showing that \(I^{n}/I^{n+1}\) is free as an \(\mathbf{MU}^{*}\)-module on the monomials of degree exactly \(n\) in the Chern classes (2.3). Since the augmentation ideal \(I\) is generated by these Chern classes, the \(n\)-th power \(I^{n}\) is generated, as a module over \(\mathbf{MU}^{*}_{U(m_{1},\ldots,m_{l})}\), by the monomials of degree \(n\). So \(I^{n}/I^{n+1}\) is generated by these monomials as a module over \(\mathbf{MU}^{*}\). The bundling map \(\mathbf{MU}^{*}_{U(m_{1},\ldots,m_{l})}\longrightarrow\mathbf{MU}^{*}(BU(m_{1},\ldots,m_{l}))\) is a homomorphism of augmented \(\mathbf{MU}^{*}\)-algebras, and it takes the Chern class \(c_{k}^{[i]}\) to the inflation of the \(k\)-th Conner-Floyd-Chern class along the projection to the \(i\)-th factor. By the theory of complex orientations, these Conner-Floyd-Chern classes are \(\mathbf{MU}^{*}\)-power series generators of \(\mathbf{MU}^{*}(BU(m_{1},\ldots,m_{l}))\); in particular, the images of the Chern class monomials are \(\mathbf{MU}^{*}\)-linearly independent in \(\mathbf{MU}^{*}(BU(m_{1},\ldots,m_{l}))\). Hence the Chern class monomials are themselves linearly independent in \(I^{n}/I^{n+1}\).

**Remark 2.4**.: Greenlees and May [6, Corollary 1.6] construct a local homology spectral sequence

\[E_{2}^{p,q}\ =\ H^{I}_{-p,-q}(\mathbf{MU}^{*}_{G})\ \Longrightarrow\ \mathbf{MU}^{p+q}(BG)\.\]

The regularity results about Chern classes from Theorem 2.2 imply that whenever \(G=U(m_{1},\ldots,m_{l})\) is a product of unitary groups, the \(E_{2}^{p,q}\)-term vanishes for all \(p\neq 0\), and the spectral sequence degenerates into the isomorphism

\[E_{2}^{0,*}\ \cong\ (\mathbf{MU}^{*}_{U(m_{1},\ldots,m_{l})})^{\wedge}_{I}\ \cong\ \mathbf{MU}^{*}(BU(m_{1},\ldots,m_{l}))\]

of Theorem 2.2 (iii).
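As a concrete sanity check of Theorem 2.2 (an illustration added here, not spelled out in the original text), consider the smallest case \(l=1\) and \(m_{1}=1\): the augmentation ideal of \(\mathbf{MU}^{*}_{U(1)}\) is then generated by the single regular element \(c_{1}^{(1)}=e_{1}\), and the bundling map extends to an isomorphism

\[(\mathbf{MU}^{*}_{U(1)})^{\wedge}_{(e_{1})}\ \cong\ \mathbf{MU}^{*}[[e_{1}]]\ \longrightarrow\ \mathbf{MU}^{*}(BU(1))\]

sending \(e_{1}\) to the first Conner-Floyd-Chern class. This recovers, for the circle group, the completion statement for abelian compact Lie groups recalled in the introduction [2, 11].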
**Remark 2.5**.: The previous regularity theorems are special cases of the following more general results that hold for every global \(\mathbf{MU}\)-module \(E\):

* For every compact Lie group \(G\), the sequence of Chern classes \(c_{m}^{(m)}\times 1_{G},\ldots,c_{1}^{(m)}\times 1_{G}\) acts regularly on the graded \(\mathbf{MU}^{*}_{U(m)\times G}\)-module \(E^{*}_{U(m)\times G}\).
* The restriction homomorphism \[\operatorname{res}_{G}^{U(m)\times G}\ :\ E_{U(m)\times G}^{*}\ \longrightarrow\ E_{G}^{*}\] factors through an isomorphism \[E_{U(m)\times G}^{*}/(c_{m}^{(m)}\times 1_{G},\ldots,c_{1}^{(m)}\times 1_{G})\ \cong\ E_{G}^{*}\.\]
* For all \(m_{1},\ldots,m_{l}\geq 1\), the sequence of Chern classes (2.3) acts regularly on the graded \(\mathbf{MU}^{*}_{U(m_{1},\ldots,m_{l})}\)-module \(E^{*}_{U(m_{1},\ldots,m_{l})}\).

As in Remark 2.4, the regularity properties also imply the degeneracy of the Greenlees-May local homology spectral sequence converging to \(E^{*}(BU(m_{1},\ldots,m_{l}))\).

## 3. The \(\mathbf{MU}\)-completion theorem via Chern classes

In this section we use the Chern classes to reformulate the \(\mathbf{MU}_{G}\)-completion theorem of Greenlees-May [6] and La Vecchia [8], for any compact Lie group \(G\), and we give a short and self-contained proof. We emphasize that the essential arguments of this section are all contained in [6] and [8]; the Chern classes let us arrange them in a more conceptual and concise way. The references [6, 8] ask for a finitely generated ideal of \(\mathbf{MU}^{*}_{G}\) that is 'sufficiently large' in the sense of [6, Definition 2.4]; while we have no need to explicitly mention sufficiently large ideals, the new insight is that the ideal generated by the Chern classes of any faithful \(G\)-representation is 'sufficiently large'.

**Construction 3.1** (Chern classes of representations).: We let \(V\) be a complex representation of a compact Lie group \(G\). We let \(\rho:G\longrightarrow U(m)\) be a continuous homomorphism that classifies \(V\), i.e., such that \(\rho^{*}(\nu_{m})\) is isomorphic to \(V\); here \(m=\dim_{\mathbb{C}}(V)\). The \(k\)_-th Chern class_ of \(V\) is

\[c_{k}(V)\ =\ \rho^{*}(c_{k}^{(m)})\ \in\ \mathbf{MU}^{2k}_{G}\.\]

In particular, \(c_{0}(V)=1\), \(c_{m}(V)=e(V)\) is the Euler class, and \(c_{k}(V)=0\) for \(k>m\).

**Example 3.2**.: As an example, we consider the tautological representation \(\nu_{2}\) of \(SU(2)\) on \(\mathbb{C}^{2}\). By the general properties of Chern classes we have \(c_{0}(\nu_{2})=1\), \(c_{2}(\nu_{2})=e(\nu_{2})\) is the Euler class, and \(c_{k}(\nu_{2})=0\) for \(k\geq 3\). The first Chern class of \(\nu_{2}\) can be rewritten by using a double coset formula as follows:

\[c_{1}(\nu_{2}) =\ \mathrm{res}_{SU(2)}^{U(2)}(c_{1}^{(2)})\ =\ \mathrm{res}_{SU(2)}^{U(2)}(\mathrm{tr}_{U(1,1)}^{U(2)}(e_{1}\times 1))\]
\[=\ \mathrm{tr}_{T}^{SU(2)}(\mathrm{res}_{T}^{U(1,1)}(e_{1}\times 1))\ =\ \mathrm{tr}_{T}^{SU(2)}(e(\chi))\.\]

Here \(T=\{(\begin{smallmatrix}\lambda&0\\ 0&\lambda^{-1}\end{smallmatrix})\ :\ \lambda\in U(1)\}\) is the diagonal maximal torus of \(SU(2)\), \(\chi:T\cong U(1)\) is the character that projects onto the upper left diagonal entry, and \(e(\chi)\in\mathbf{MU}_{T}^{2}\) is its Euler class.

**Construction 3.3**.: We construct a \(G\)-equivariant \(\mathbf{MU}_{G}\)-module \(K(G,V)\) associated to a complex representation \(V\) of a compact Lie group \(G\).
The construction is a special case of one used by Greenlees and May [6, Section 1], based on the sequence of Chern classes \(c_{1}(V),\ldots,c_{m}(V)\), where \(m=\dim_{\mathbb{C}}(V)\). For any equivariant homotopy class \(x\in\mathbf{MU}_{G}^{l}\), we write \(\mathbf{MU}_{G}[1/x]\) for the \(\mathbf{MU}_{G}\)-module localization of \(\mathbf{MU}_{G}\) with \(x\) inverted; in other words, \(\mathbf{MU}_{G}[1/x]\) is a homotopy colimit (mapping telescope) in the triangulated category of the sequence \[\mathbf{MU}_{G}\ \xrightarrow{\ -x\ }\ \Sigma^{l}\mathbf{MU}_{G}\ \xrightarrow{\ -x\ }\Sigma^{2l}\mathbf{MU}_{G}\ \xrightarrow{\ -x\ }\ \Sigma^{3l}\mathbf{MU}_{G}\ \xrightarrow{\ -x\ }\ \ldots\ \.\] We write \(K(x)\) for the fiber of the morphism \(\mathbf{MU}_{G}\longrightarrow\mathbf{MU}_{G}[1/x]\). Then we define \[K(G,V)\ =\ K(c_{1}(V))\wedge_{\mathbf{MU}_{G}}\ldots\wedge_{\mathbf{MU}_{G}}K(c_{ m}(V))\.\] The smash product of the morphisms \(K(c_{i}(V))\longrightarrow\mathbf{MU}_{G}\) provides a morphism of \(G\)-equivariant \(\mathbf{MU}_{G}\)-module spectra \[\epsilon_{V}\ :\ K(G,V)\ \longrightarrow\ \mathbf{MU}_{G}.\] By general principles, the module \(K(G,V)\) only depends on the radical of the ideal generated by the classes \(c_{1}(V),\ldots,c_{m}(V)\). But more is true: as a consequence of Theorem 3.5 below, \(K(G,V)\) is entirely independent, as a \(G\)-equivariant \(\mathbf{MU}_{G}\)-module, of the faithful representation \(V\). **Proposition 3.4**.: _Let \(V\) be a faithful complex representation of a compact Lie group \(G\)._ 1. _The morphism_ \(\epsilon_{V}:K(G,V)\longrightarrow\mathbf{MU}_{G}\) _is an equivalence of underlying non-equivariant spectra._ 2. _For every non-trivial closed subgroup_ \(H\) _of_ \(G\)_, the_ \(H\)_-geometric fixed point spectrum_ \(\Phi^{H}(K(G,V))\) _is trivial._ Proof.: (i) We set \(m=\dim_{\mathbb{C}}(V)\). The Chern classes \(c_{1}(V),\ldots,c_{m}(V)\) belong to the augmentation ideal of \(\mathbf{MU}_{G}^{*}\), so they restrict to \(0\) in \(\mathbf{MU}_{\{1\}}^{*}\), and hence the underlying non-equivariant spectrum of \(\mathbf{MU}_{G}[1/c_{i}(V)]\) is trivial for each \(i=1,\ldots,m\). Hence the morphisms \(K(c_{i}(V))\longrightarrow\mathbf{MU}_{G}\) are underlying non-equivariant equivalences for \(i=1,\ldots,m\). So also the morphism \(\epsilon_{V}\) is an underlying non-equivariant equivalence. (ii) We let \(H\) be a non-trivial closed subgroup of \(G\). We set \(W=V-V^{H}\), the orthogonal complement of the \(H\)-fixed points. This is a complex \(H\)-representation with \(W^{H}=0\); moreover, \(W\) is nonzero because \(H\) acts faithfully on \(V\) and \(H\neq\{1\}\). For \(k=\dim_{\mathbb{C}}(W)\) we then have \[e(W)\ =\ c_{k}(W)\ =\ c_{k}(W\oplus V^{H})\ =\ c_{k}(\mathrm{res}_{H}^{G}(V))\ =\ \mathrm{res}_{H}^{G}(c_{k}(V))\ ;\] the second equation uses the fact that adding a trivial representation leaves Chern classes unchanged, by part (ii) of Theorem 1.4. Since \(W^{H}=0\), the geometric fixed point homomorphism \(\Phi^{H}:\mathbf{MU}_{H}^{*}\longrightarrow\Phi_{H}^{*}(\mathbf{MU})\) sends the Euler class \(e(W)=\mathrm{res}_{H}^{G}(c_{k}(V))\) to an invertible element. The functor \(\Phi^{H}\circ\mathrm{res}_{H}^{G}\) commutes with inverting elements. Since the class \(\Phi^{H}(\mathrm{res}_{H}^{G}(c_{k}(V)))\) is already invertible, the localization morphism \(\mathbf{MU}_{G}\longrightarrow\mathbf{MU}_{G}[1/c_{k}(V)]\) induces an equivalence on \(H\)-geometric fixed points. 
Since the functor \(\Phi^{H}\circ\mathrm{res}_{H}^{G}\) is exact, it annihilates the fiber \(K(c_{k}(V))\) of the localization \(\mathbf{MU}_{G}\longrightarrow\mathbf{MU}_{G}[1/c_{k}(V)]\). The functor \(\Phi^{H}\circ\mathrm{res}_{H}^{G}\) is also strong monoidal, in the sense of a natural equivalence of non-equivariant spectra

\[\Phi^{H}(X\wedge_{\mathbf{MU}_{G}}Y)\ \simeq\ \Phi^{H}(X)\wedge_{\Phi^{H}(\mathbf{MU}_{G})}\Phi^{H}(Y)\,\]

for all \(G\)-equivariant \(\mathbf{MU}_{G}\)-modules \(X\) and \(Y\). Since \(K(G,V)\) contains \(K(c_{k}(V))\) as a factor (with respect to \(\wedge_{\mathbf{MU}_{G}}\)), we conclude that the spectrum \(\Phi^{H}(K(G,V))\) is trivial.

The following 'completion theorem' is a reformulation of the combined work of Greenlees-May [6, Theorem 1.3] and La Vecchia [8]. It is somewhat more precise in that an unspecified 'sufficiently large' finitely generated ideal of \(\mathbf{MU}_{G}^{*}\) is replaced by the ideal generated by the Chern classes of a faithful \(G\)-representation. The proof is immediate from the properties of \(K(G,V)\) listed in Proposition 3.4. We emphasize, however, that our proof is just a different way of arranging some arguments from [6] and [8] while taking advantage of the Chern class formalism. Since the morphism \(\epsilon_{V}:K(G,V)\longrightarrow\mathbf{MU}_{G}\) is an equivalence of underlying non-equivariant spectra, the morphism \(EG_{+}\wedge\mathbf{MU}_{G}\longrightarrow\mathbf{MU}_{G}\) that collapses the universal space \(EG\) to a point admits a unique lift to a morphism of \(G\)-equivariant \(\mathbf{MU}_{G}\)-modules \(\psi:EG_{+}\wedge\mathbf{MU}_{G}\longrightarrow K(G,V)\) across \(\epsilon_{V}\).

**Theorem 3.5**.: _Let \(V\) be a faithful complex representation of a compact Lie group \(G\). Then the morphism_

\[\psi\ :\ EG_{+}\wedge\mathbf{MU}_{G}\ \longrightarrow\ K(G,V)\]

_is an equivalence of \(G\)-equivariant \(\mathbf{MU}_{G}\)-module spectra._

Proof.: Because the underlying space of \(EG\) is contractible, the composite

\[EG_{+}\wedge\mathbf{MU}_{G}\ \xrightarrow{\psi}\ K(G,V)\ \xrightarrow{\epsilon_{V}}\ \mathbf{MU}_{G}\]

is an equivalence of underlying non-equivariant spectra. Since \(\epsilon_{V}\) is an equivalence of underlying non-equivariant spectra by Proposition 3.4, so is \(\psi\). For all non-trivial closed subgroups \(H\) of \(G\), source and target of \(\psi\) have trivial \(H\)-geometric fixed point spectra, again by Proposition 3.4. So the morphism \(\psi\) induces an equivalence on geometric fixed point spectra for all closed subgroups of \(G\), and it is thus an equivariant equivalence.
2308.10603
A step towards understanding why classification helps regression
A number of computer vision deep regression approaches report improved results when adding a classification loss to the regression loss. Here, we explore why this is useful in practice and when it is beneficial. To do so, we start from precisely controlled dataset variations and data samplings and find that the effect of adding a classification loss is the most pronounced for regression with imbalanced data. We explain these empirical findings by formalizing the relation between the balanced and imbalanced regression losses. Finally, we show that our findings hold on two real imbalanced image datasets for depth estimation (NYUD2-DIR), and age estimation (IMDB-WIKI-DIR), and on the problem of imbalanced video progress prediction (Breakfast). Our main takeaway is: for a regression task, if the data sampling is imbalanced, then add a classification loss.
Silvia L. Pintea, Yancong Lin, Jouke Dijkstra, Jan C. van Gemert
2023-08-21T10:00:46Z
http://arxiv.org/abs/2308.10603v1
# A step towards understanding why classification helps regression

###### Abstract

A number of computer vision deep regression approaches report improved results when adding a classification loss to the regression loss. Here, we explore why this is useful in practice and when it is beneficial. To do so, we start from precisely controlled dataset variations and data samplings and find that the effect of adding a classification loss is the most pronounced for regression with imbalanced data. We explain these empirical findings by formalizing the relation between the balanced and imbalanced regression losses. Finally, we show that our findings hold on two real imbalanced image datasets for depth estimation (NYUD2-DIR), and age estimation (IMDB-WIKI-DIR), and on the problem of imbalanced video progress prediction (Breakfast). Our main takeaway is: for a regression task, if the data sampling is imbalanced, then add a classification loss.

## 1 Introduction

Regression models predict continuous outputs. In contrast, classification models make discrete, binned, predictions. For a continuous task, regression targets are a superset of the classification labels: they are more precise, taking values in-between the discrete classification bins. For regression, the error is only bounded by the precision of the measurements; for classification, it also depends on the bin sizes: e.g. an age estimation classifier that can predict only young/old classes cannot discriminate between middle-aged people. Additionally, when training a regression model, losses are proportional to the error magnitude, while for classification all errors receive an equal penalty: predicting bin 10 instead of 20 is just as incorrect as predicting bin 10 instead of bin 100. So classification cannot add anything new to regression; or can it?

Surprisingly, adding a classification loss to the regression loss [30, 44, 46, 51], or even replacing the regression loss with classification [10, 11, 38], is extensively used in practice when training deep models for predicting continuous outputs. The classification is typically defined by binning the regression targets into a fixed number of classes. This is shown to be beneficial for tasks such as: depth estimation [11], horizon line detection [44], object orientation estimation [30, 51], age estimation [34]. The reported motivations for discretizing the regression loss are that it improves performance [44, 46], that it helps in dealing with noisy data [44], that it helps overcome the overly-smooth regression predictions [30, 40], or that it helps better regularize the model [22, 44]. However, none of these assumptions has been thoroughly investigated. In this work, we aim to explore in the context of deep learning: _Why does classification help regression?_

Intuitively, the regression targets contain more information than the classification labels, and adding a classification loss does not contribute any novel information. What is it really that a classification loss can add to a standard MSE (mean squared error) regression loss? And why does it seem beneficial in practice? To take a step towards understanding why classification helps regression, we start the analysis in a fully controlled setup, using a set of 1\(D\) synthetic functions.
We consider several prior hypotheses of when classification can help regression: noisy data, out-of-distribution data, and the normal clean data case, as in Fig. 1. Additionally, we vary the sampling of the data from uniform to highly imbalanced as in Fig. 2. We empirically find out in which of these cases adding a classification loss improves the regression quality on the test set. Moreover, we explain these empirical observations by formulating them into a probabilistic analysis. We urge the reader to note that our goal is not proposing a novel regression loss, nor do we aim to improve results with "superior performance" over state-of-the-art regression models. Rather, we aim to investigate a common computer vision practice, _i.e_. adding a classification loss to the regression loss, and we analyze for what kind of dataset properties and dataset samplings this practice is useful. Finally, we show experimentally that our findings hold in practice on two imbalanced real-world computer vision datasets: NYUD2-DIR (depth estimation) and IMDB-WIKI-DIR (age estimation) [47], and on the Breakfast dataset [23] when predicting video progression.

Figure 1: To probe "_Why does classification help regression?_" we design a fully controlled dataset including the following scenarios: Clean data – 1\(D\) non-linear functions defined by the sum of two sine waves with different frequencies and amplitudes; Noisy data – uniform noise added to the outputs; Out of distribution – sampling different regions of the input space during training and during testing. (We show a single function here. The gray shading groups the function targets into 4 classes, as an example.)

## 2 Why does classification help regression?

For an input dataset, \(\mathcal{D}\), containing \(N\) samples of the form \((\mathbf{x},y)\in\mathcal{D}\), we analyze what happens when we train a deep network with parameters \(\boldsymbol{\omega}\) to predict a target \(y^{*}=f(\mathbf{x},\boldsymbol{\omega})\) for a sample \(\mathbf{x}\), by minimizing the NLL (negative log-likelihood) or, equivalently, the MSE (mean squared error) regression loss:

\[\boldsymbol{\omega}^{*}=\operatorname*{arg\,min}_{\boldsymbol{\omega}}\sum_{(\mathbf{x},y)\in\mathcal{D}}L(y,\mathbf{x},\boldsymbol{\omega}) \tag{1}\]
\[=\operatorname*{arg\,min}_{\boldsymbol{\omega}}\left(\sum_{(\mathbf{x},y)\in\mathcal{D}}-\log p(y|\mathbf{x},\boldsymbol{\omega})\right) \tag{2}\]
\[\equiv\operatorname*{arg\,min}_{\boldsymbol{\omega}}\lambda\frac{1}{N}\sum_{(\mathbf{x},y)\in\mathcal{D}}\left(y-y^{*}\right)^{2} \tag{3}\]

where \(\boldsymbol{\omega}^{*}\) are the optimal parameters, and we take the likelihood to be a Gaussian distribution centered at the prediction with \(\sigma\) noise: \(p(y|\mathbf{x},\boldsymbol{\omega})=\mathcal{N}(y;y^{*},\sigma^{2}I)\), in which case minimizing the NLL is equivalent to minimizing the MSE loss [3], and \(\lambda\) is a function of the noise \(\sigma\). We contrast Eq. (3) to the case when we discretize the targets \(y\) into a set of \(C\) classes and use a classification loss next to the regression loss:

\[L(y,\mathbf{x},\boldsymbol{\omega})=\lambda\left(y-y^{*}\right)^{2}-\log p(y_{c}^{*}|\mathbf{x},\boldsymbol{\omega}), \tag{4}\]

where \(y_{k}^{*}\), \(k\in\{1,..,C\}\), denotes the model predictions binned into classes, and specifically \(y_{c}^{*}\) is the prediction at the true class indexed by \(c\), for the sample \(\mathbf{x}\).
For the classification term, we make the standard softmax distribution assumption.

### Controlled 1D analysis

To probe the question "_Why does classification help regression?_" we first want to know in which cases classification helps regression. We measure test-time MSE scores for each case in Fig. 1, and compare training with a regression loss as in Eq. (3) with training using an extra classification loss as in Eq. (4). We randomly sample 10 functions of the form: \(f(x)=a\sin(cx)+b\sin(dx)\), where \(f(x)\in[-1.5,1.5]\) and \(x\in[-1,1]\). For every function we vary the dataset scenario as in Fig. 1, and we also vary the sampling of the data from uniform to severely imbalanced, as in Fig. 2. Each dataset sampling is repeated with 5 different random seeds, where we always sample \(\approx 30,000\) samples in total, and then randomly pick 1/3 for training, for validation, and for testing, respectively. In the out-of-distribution case, the training set misses certain function regions that are present in the validation/test set, and vice-versa; there is an overlap of 1/4 between the function regions in the training set and the regions in the validation/test set. For the imbalanced sampling, we aim to sample a range of the targets \(y\) more frequently than other ranges. For this, we randomly select a location along the y-axis in each repetition: this defines the center of the peak in Fig. 2, second row. Depending on the sampling scenario, we use a fixed variance ratio around the peak (0.3, 0.1 and 0.03 for mild, moderate and severe sampling) to define the region from which we draw the frequent samples. We sample 75% of the samples from the peak region, and the rest uniformly from the other function areas. We train a simple MLP (multi-layer perceptron) with 3 linear layers (\([1\times 6]\), \([6\times 16]\), \([16\times 1]\)) and ReLU non-linearities. For setting the hyperparameter \(\lambda\), we perform a hyperparameter search on the validation set. We find the best \(\lambda\) to be \(1e{+}2\), \(1e{+}3\) and \(1e{+}4\) for the clean, noisy and out-of-distribution scenarios, respectively. We train for 80 epochs using an Adam optimizer with a learning rate of \(1e{-}3\), \(1e{-}2\) and \(1e{-}4\) for clean, noisy and out-of-distribution, respectively. We use a weight decay of \(1e{-}3\). For classification we add, at training time, a linear layer (\([16\times C]\)) predicting \(C\) classes. More details are in the supplementary material.

Figure 2: Data sampling. On the columns we increase the data imbalance from uniform (balanced) to severely imbalanced. On the first row we show the function \(f(\mathbf{x})\), where darker datapoint colors visualize higher density. The gray shading on the first row groups the targets into 4 classes. On the second row we show the log-counts per function value, sampled for the training data. We sample the test data uniformly.

In Fig. 3 we show the MSE across all dataset and sampling variations, for \(\{2^{2},2^{4},2^{6},2^{8},2^{10}\}\) classes. We define the class ranges uniformly. The test sets are uniformly sampled and we perform 5 repetitions. We plot the means and standard deviations. We print on every plot the gap between reg and reg+cls, measured as the average absolute difference of MSE scores. From this \(1D\) analysis, we observe that the effect of the classification loss is visible when the training data is imbalanced, for clean and noisy data.
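For concreteness, the following is a minimal sketch of the reg+cls setup described above. This is our own illustration: the layer sizes follow the text, while the remaining names and details are assumptions rather than the released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegCls(nn.Module):
    """3-layer MLP regressor with a training-only classification head."""
    def __init__(self, num_classes):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(1, 6), nn.ReLU(),
            nn.Linear(6, 16), nn.ReLU())
        self.reg_head = nn.Linear(16, 1)             # continuous target
        self.cls_head = nn.Linear(16, num_classes)   # binned target

    def forward(self, x):
        h = self.backbone(x)
        return self.reg_head(h), self.cls_head(h)

def reg_cls_loss(y_pred, logits, y, y_bin, lam):
    # Eq. (4): lambda * MSE plus cross-entropy on the binned targets.
    mse = F.mse_loss(y_pred.squeeze(-1), y)
    ce = F.cross_entropy(logits, y_bin)
    return lam * mse + ce
```

At test time only `reg_head` is used, matching the description above.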
### Anchoring 1\(D\) experimental observations

In Section 2.1 we observe that classification has a more pronounced effect when the sampling of the data is imbalanced. Therefore, from here on we focus the analysis on imbalanced data sampling. We start from the derivations of Ren _et al_. [31] who define the relation between the NLL (negative log-likelihood) of imbalanced samples \(-\log\widetilde{p}(y|\mathbf{x},\boldsymbol{\omega})\) and the NLL of the balanced samples \(-\log p(y|\mathbf{x},\boldsymbol{\omega})\):

\[-\log\widetilde{p}(y|\mathbf{x},\boldsymbol{\omega})=-\log\frac{p(y|\mathbf{x},\boldsymbol{\omega})\widetilde{p}(y)}{\int_{y^{\prime}}p(y^{\prime}|\mathbf{x},\boldsymbol{\omega})\widetilde{p}(y^{\prime})dy^{\prime}} \tag{5}\]

where \(\widetilde{p}(y)\) denotes the prior over imbalanced targets. Eq. (5) holds under the assumption that the data function remains unchanged, \(\widetilde{p}(\mathbf{x}|y)=p(\mathbf{x}|y)\), which is the case for the clean and noisy data scenarios above. We decompose the log and rewrite the relation between the NLL of the balanced and the NLL of the imbalanced data:

\[-\log\widetilde{p}(y|\mathbf{x},\boldsymbol{\omega})+L_{\text{extra}}(y,\mathbf{x},\boldsymbol{\omega})=-\log p(y|\mathbf{x},\boldsymbol{\omega}), \tag{6}\]

\(L_{\text{extra}}\) contains all the information about the imbalanced regression targets:

\[L_{\text{extra}}=\log\widetilde{p}(y)-\log\int_{y^{\prime}}p(y^{\prime}|\mathbf{x},\boldsymbol{\omega})\widetilde{p}(y^{\prime})dy^{\prime}, \tag{7}\]
\[=\log\widetilde{p}(y)-\log\int_{y^{\prime}}\mathcal{N}(y^{\prime};y^{*},\sigma^{2}I)\widetilde{p}(y^{\prime})dy^{\prime}, \tag{8}\]

where again \(y^{*}\) are the predicted targets. To derive the link between optimizing a model on imbalanced data and using both a regression MSE loss and a classification loss, we assume the imbalanced regression targets \(y\) can be discretized into a set of classes, \(k\in\{1,..,C\}\) such that \(\sum_{k=1}^{C}p(y_{k})=1\). By going from continuous regression targets to discrete classes, we change the form of the log-likelihood from Gaussian to softmax:

\[L_{\text{extra}}\approx\log\widetilde{p}(y_{c})-\log\sum_{k=1}^{C}p(y_{k}^{*}|\mathbf{x},\boldsymbol{\omega})\widetilde{p}(y_{k}), \tag{9}\]

where we denote the true class label by \(y_{c}\). Note that the regression targets \(y\) are imbalanced, but the classes \(y_{k}\) do not necessarily need to be imbalanced. We analyze in the experimental section the effect of defining balanced classes. We make the observation that if we could optimize the class assignment, the \(L_{\text{extra}}\) term would disappear. If the classes are optimized, then the class likelihoods are close to 0 for all classes except the true class: \(p(y_{k}^{*}|\mathbf{x},\boldsymbol{\omega})\approx 0\), \(\forall k\neq c\), where \(c\) indexes the true class. Using this in the expression of \(L_{\text{extra}}\), we obtain:

\[L_{\text{extra}}\approx\log\widetilde{p}(y_{c})-\log p(y_{c}^{*}|\mathbf{x},\boldsymbol{\omega})\widetilde{p}(y_{c}), \tag{10}\]
\[\approx-\log p(y_{c}^{*}|\mathbf{x},\boldsymbol{\omega}),\]
where the \(\widetilde{p}(y_{c})\) terms cancel out when decomposing the second \(\log\). Therefore, we can see that optimizing the class cross-entropy loss reduces the gap between the NLL of imbalanced data and the NLL of balanced data. (Note: we observe in practice that if the classifier fails to converge, adding a classification loss is detrimental to regression.)

Figure 3: MSE per class, where we vary the number of classes from 4 to 1024. We evaluate on uniformly sampled test sets, across 5 repetitions. We plot means and standard deviations for each dataset (rows) and sampling variation (columns), where the shading represents the standard deviation. The red line, reg, should be constant across classes but it varies due to the sampling/training randomness. We also print the gap between reg and reg+cls, measured as absolute difference of MSE scores. The effect of the classification loss is present when the sampling of the data is imbalanced, for the clean and noisy data.

### Defining balanced classes in practice

Existing works show that optimizing imbalanced classes is problematic [26, 49]. In practice, researchers opt for using balanced classes in combination with regression [30, 44, 46, 51]. Here, the data is imbalanced; however, we are free to define the class ranges such that we obtain balanced classes over the imbalanced data sampling. To empirically test the added value of using balanced classes, we need a way to define balanced classes over an imbalanced data sampling. Given an imbalanced data sampling, we bin samples into classes, using uniform class ranges. This generates imbalanced classes, which we then re-balance by redefining the class ranges such that the class histogram is approximately uniform. To this end, we apply histogram equalization over the original classes:

\[q(k)=\left\lfloor\frac{C}{N}\sum_{j=1}^{k}\mathcal{H}_{C}(j)\right\rfloor, \tag{12}\]

where \(\lfloor x\rfloor\) rounds \(x\) down to the nearest integer, \(\mathcal{H}_{C}(\cdot)\) computes the histogram of the samples per class, and \(q(\cdot)\) is a mapping function that maps the old classes indexed by \(k\in\{1,..,C\}\) to a new set of classes \(\{1,..,\overline{C}\}\). Eq. (12) merges class ranges such that their counts are as close as possible. Thus, the number of equalized classes is lower than or equal to the original number of classes, \(\overline{C}\leq C\). After class equalization, the new classes are not perfectly uniform. We further define a class-keeping probability \(\rho(k)\), as the ratio between the minimum class count and the current equalized class count \(\mathcal{H}_{\overline{C}}(k)\):

\[\rho(k)=\frac{\min_{j=1}^{\overline{C}}\mathcal{H}_{\overline{C}}(j)}{\mathcal{H}_{\overline{C}}(k)}, \tag{13}\]

where \(\mathcal{H}_{\overline{C}}(\cdot)\) computes the histogram of equalized classes. Selecting training samples using only Eq. (13), without first equalizing the classes, will lead to never seeing samples from the most frequent classes. During training, for the regression loss we use all samples, while for the classification loss we pick samples \((\mathbf{x},y_{k})\) with a probability defined by \(\rho(k)\); a short sketch of this re-balancing is given below. More details are in the supplementary material.

## 3 Empirical analysis

### Hypothesis analysis on 1D data

We use the 10 randomly sampled 1\(D\) functions to further analyze the regression loss -- reg from Eq. (3), and regression with classification -- reg+cls from Eq. (4). We start with the 1\(D\) data because it is easily interpretable and it offers a controlled environment to test the hypothesis that classification helps regression and to analyze the properties of the classes.\({}^{1}\)

Footnote 1: Our source code will be made available online at: [https://github.com/SilviaLauraPintea/reg-cls](https://github.com/SilviaLauraPintea/reg-cls)
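The class re-balancing of Eq. (12) and Eq. (13) referenced above can be sketched as follows; this is our own illustration (0-indexed classes), not the released code.

```python
import numpy as np

def equalize_classes(y_cls, num_classes):
    """Histogram-equalize class indices as in Eq. (12).

    y_cls: integer class index per sample, in {0, .., num_classes-1}.
    Returns the remapped class index per sample.
    """
    counts = np.bincount(y_cls, minlength=num_classes)
    n = len(y_cls)
    # q(k) = floor(C/N * cumulative count): merges neighboring ranges.
    q = np.floor(num_classes / n * np.cumsum(counts)).astype(int)
    new_cls = q[y_cls]
    # Relabel to consecutive indices 0..C_bar-1.
    _, new_cls = np.unique(new_cls, return_inverse=True)
    return new_cls

def keep_probability(new_cls):
    """Class-keeping probability rho(k) from Eq. (13): the minimum
    equalized class count over each class's own count."""
    counts = np.bincount(new_cls)
    return counts.min() / counts  # one probability per equalized class
```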
**Effect of the \(\lambda\) hyperparameter.** We perform a hyperparameter search on the validation set for setting the \(\lambda\) in Eq. (4). We vary \(\lambda\in\{1e{-}3,1e{-}2,1e{-}1,1,1e{+}1,1e{+}2,1e{+}3,1e{+}4\}\). Fig. 4 shows the MSE across 3 repetitions, when considering different numbers of classes, for the clean and noisy data scenarios, across sampling variations. Higher values of \(\lambda\) typically perform better in this case. When the sampling of the data is imbalanced, there exists a value of \(\lambda\) such that the reg baseline is outperformed by reg+cls. We use the best \(\lambda\) values found on the validation set when evaluating on the test set.

Figure 4: **Effect of the \(\lambda\) hyperparameter on the validation:** We evaluate a range of values \(\lambda\in\{1e{-}3,1e{-}2,1e{-}1,1,1e{+}1,1e{+}2,1e{+}3,1e{+}4\}\) on the validation set. The shading represents the standard deviation of the MSE error across 3 repetitions. Setting \(\lambda\) correctly is essential when using a classification loss next to the regression loss.

**Effect of balancing the classes on imbalanced data.** We numerically evaluate in Tab. 1 if using balanced classes (as defined in Eq. (12)-Eq. (13)) is less sensitive to the choice of \(\lambda\). We perform 3 repetitions over all dataset scenarios and sampling cases, and vary \(\lambda\in\{1e{-}3,1e{-}2,1e{-}1,1,1e{+}1,1e{+}2,1e{+}3,1e{+}4\}\). For this we consider the percentage of runs (across different numbers of classes, random seeds, and values of \(\lambda\)) where classification helps regression -- where the reg+cls MSE is lower than the reg MSE. Ideally this number should be close to 100%. Balancing the classes is more robust to the choice of \(\lambda\), as on average there are more runs where classification helps regression.

\begin{table} \begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{Imbalanced classes (\(\uparrow\))} & \multicolumn{2}{c}{Balanced classes (\(\uparrow\))} \\ \cline{2-5} Dataset case & Clean & Noisy data & Clean & Noisy data \\ \cline{2-5} Uniform & 55.42\% & 59.83\% & 59.00\% & 65.00\% \\ Mild & 54.17\% & 55.75\% & 57.67\% & 62.42\% \\ Moderate & 47.50\% & 56.58\% & 47.50\% & 56.00\% \\ Severe & 36.83\% & 49.08\% & 34.25\% & 46.25\% \\ \hline Avg & 48.48\% & 55.31\% & 49.60\% & 57.42\% \\ \hline \hline \end{tabular} \end{table}

Table 1: **Effect of balancing the classes:** On the validation sets we test how sensitive reg+cls is to the choice of \(\lambda\) when balancing classes versus when using imbalanced classes. For this we vary \(\lambda\in\{1e{-}3,1e{-}2,1e{-}1,1,1e{+}1,1e{+}2,1e{+}3,1e{+}4\}\), and we measure the percentage of runs where adding a classification loss improves the regression predictions. Using balanced classes is less sensitive to the choice of \(\lambda\).
[47] and we report RMSE (root mean squared error) on NYUD2-DIR and MAE (mean absolute error)/MSE (mean squared error) for IMDB-WIKI-DIR as also done in [31, 47]. When re-running the baseline reg results we observed a large variability across runs when varying the random seed, especially on the NYUD2-DIR dataset; therefore we report results averaged over 3 random seeds. We use the architecture of Yang _et al_. [47]: ResNet-50 [17] for IMDB-WIKI-DIR, and the ResNet-50-based model from [19] for NYUD2-DIR. Footnote 1: Using more epochs on NYUD2-DIR seems to lead to overfitting. For adding the classification loss on IMDB-WIKI-DIR we append, only during training, a linear layer of size \([F,C]\) followed by a softmax activation and a cross-entropy loss, where \(F\) is the number of channels in the one-to-last layer. For the NYUD2-DIR the predictions are per-pixel, thus we use the segmentation head from Mask R-CNN [16] composed of a transposed convolution of size \(2{\times}2\), ReLU, and a \(1{\times}1\) convolution predicting the number of classes, \(C\). At test time the classification branch is not used (a schematic sketch of this training-time setup is given below). We estimate the \(\lambda\) hyperparameter using the validation set provided in [47] on IMDB-WIKI-DIR. For NYUD2-DIR we define a validation set by randomly selecting 1/5 of the training directories with a seed of 0, and we use the same training/validation/test split for all our results. For NYUD2-DIR we use \(\lambda{=}1.0\) for 100 classes and \(\lambda{=}0.1\) for 10 and 2 classes. For IMDB-WIKI-DIR we set \(\lambda{=}0.1\) for 100 classes and \(\lambda{=}1.0\) for 10 and 2 classes. We compare the reg results with reg\(+\)cls adding the classification loss, and reg\(+\)cls bal. with balanced classes, where we consider 2, 10 and 100 classes. Tab. 2 shows the RMSE results on NYUD2-DIR when training with the standard MSE loss compared to adding a classification loss. We report RMSE on the test set, where the best model is selected on the validation set across epochs -- RMSE-val. To compare with previous work, which selects the best model on the test set, we also report this as RMSE-test \begin{table} \begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{Imbalanced classes (\(\uparrow\))} & \multicolumn{2}{c}{Balanced classes (\(\uparrow\))} \\ \cline{2-5} Dataset case & Clean & Noisy data & Clean & Noisy data \\ \cline{2-5} Uniform & 55.42\% & 59.83\% & 59.00\% & 65.00\% \\ Mild & 54.17\% & 55.75\% & 57.67\% & 62.42\% \\ Moderate & 47.50\% & 56.58\% & 47.50\% & 56.00\% \\ Severe & 36.83\% & 49.08\% & 34.25\% & 46.25\% \\ \hline Avg & 48.48\% & 55.31\% & 49.60\% & 57.42\% \\ \hline \hline \end{tabular} \end{table} Table 1: **Effect of balancing the classes:** On the validation sets we test how sensitive reg+cls is to the choice of \(\lambda\) when balancing classes versus when using imbalanced classes. For this we vary \(\lambda{\in}\)\(\{1e{-3},1e{-2},1e{-1},1,1e{+1},1e{+2},1e{+3},1e{+4}\}\). We then measure the percentage of runs where adding a classification loss improves the regression predictions. Using balanced classes is less sensitive to the choice of \(\lambda\). Figure 5: **Effect of noisy targets:** We vary the amount of noise in the targets, along the y-axis. We plot both using imbalanced classes (green), and using balanced classes (lime) compared to the reg baseline (red). We report MSE on the test sets across 5 repetitions. Despite the noise level increasing drastically, the benefits of adding a classification loss to the regression loss remain visible.
(despite this being a bad practice). Additionally, note that our training set is slightly smaller because of using a validation set, so the reg results are worse than in [47] (_i.e_. \(1.477\) RMSE-test). We observe that there is an inconsistency between the training and test set, as the best model on the test set does not correspond to the best model on the validation set (which is a subset of the training). Despite all this, adding a classification loss still improves across all class-options, when selecting the best model on the validation set, and for 100 classes and 10 balanced classes, when selecting the best model on the test set. This may be due to the classifier overfitting on the training data for fewer classes. Tab. 3 gives the MAE and MSE results on IMDB-WIKI-DIR when using the standard MSE loss during training compared to when adding the classification at training time. The best results are obtained using 100 balanced classes. Here, adding a classification loss not only improves over the regression baseline, but it is also on-par with state-of-the-art methods specifically designed for imbalanced regression, such as [32, 47]. Adding a classification loss is similar to [32, 47], who define smooth classes over the data. **Balanced realistic image datasets.** On the same two datasets: NYUD2-DIR and IMDB-WIKI, we test the effect of adding a classification loss when we re-balance the data by binning the targets into 100 bins and selecting samples per batch during training as defined in Eq. (12)-Eq. (13) for both reg and reg+cls. Because we mask samples per batch to balance the training data, for IMDB-WIKI here we use a batch size of 128 and learning rate \(1e{-4}\) for 90 epochs, while for NYUD2-DIR we use a learning rate of \(1e{-4}\) and batch size of 8, accumulated over 4 batches (to mimic a batch of 32 on 1 GPU), for 5 epochs. Tab. 4 shows \begin{table} \begin{tabular}{c c c} \hline \hline \multicolumn{3}{c}{Balanced NYUD2-DIR} \\ \hline & All RMSE-val (\(\downarrow\)) & All RMSE-test (\(\downarrow\)) \\ \hline reg (MSE) & 1.442 (\(\pm\)0.077) & 1.492 (\(\pm\)0.042) \\ reg+cls & 1.456 (\(\pm\)0.033) & 1.593 (\(\pm\)0.025) \\ \hline \multicolumn{3}{c}{Balanced IMDB-WIKI-DIR} \\ \hline & All MAE (\(\downarrow\)) & All MSE (\(\downarrow\)) \\ \hline reg (MSE) & 7.74 (\(\pm\)0.04) & 131.03 (\(\pm\)1.44) \\ reg+cls & 7.71 (\(\pm\)0.06) & 131.27 (\(\pm\)1.00) \\ \hline \hline \end{tabular} \end{table} Table 4: **Balanced realistic image data: NYUD2-DIR depth estimation and IMDB-WIKI-DIR age estimation.** We compare the reg and reg+cls (using 100 classes) when the data is balanced. We report RMSE and MAE/MSE, respectively, across 3 repetitions. There are no clear improvements when adding a classification loss to the regression on balanced data. \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline \multicolumn{1}{c}{} & \multicolumn{4}{c}{NYUD2-DIR RMSE-val (\(\downarrow\))} & \multicolumn{4}{c}{NYUD2-DIR RMSE-test (\(\downarrow\))} \\ \cline{2-9} Samples & All & Many & Med. & Few & All & Many & Med.
& Few \\ \hline Kernel [47] & — & — & — & — & 1.338 & 0.670 & 0.851 & 1.880 \\ Balanced [31] & — & — & — & 1.251 & 0.692 & 0.959 & 1.703 \\ reg (MSE) & 1.614 (\(\pm\)0.051) & 0.554 (\(\pm\)0.002) & 0.934 (\(\pm\)0.042) & 2.360 (\(\pm\)0.081) & 1.499 (\(\pm\)0.083) & 0.578 (\(\pm\)0.010) & 0.896 (\(\pm\)0.043) & 2.171 (\(\pm\)0.136) \\ \hline reg+cls (2 cls) & 1.587 (\(\pm\)0.026) & 0.618 (\(\pm\)0.003) & 1.062 (\(\pm\)0.025) & 2.278 (\(\pm\)0.041) & 1.532 (\(\pm\)0.082) & 0.624 (\(\pm\)0.036) & 0.946 (\(\pm\)0.032) & 2.204 (\(\pm\)0.141) \\ reg+cls (10 cls) & 1.576 (\(\pm\)0.063) & 0.585 (\(\pm\)0.008) & 0.982 (\(\pm\)0.058) & 2.282 (\(\pm\)0.095) & 1.509 (\(\pm\)0.022) & 0.582 (\(\pm\)0.013) & 0.947 (\(\pm\)0.046) & 2.178 (\(\pm\)0.047) \\ reg+cls (100 cls) & 1.536 (\(\pm\)0.090) & 0.569 (\(\pm\)0.018) & 0.966 (\(\pm\)0.041) & 2.222 (\(\pm\)0.146) & 1.488 (\(\pm\)0.028) & 0.578 (\(\pm\)0.015) & 0.971 (\(\pm\)0.049) & 2.141 (\(\pm\)0.045) \\ reg+cls bal. (2 cls) & 1.599 (\(\pm\)0.020) & 0.616 (\(\pm\)0.026) & 1.033 (\(\pm\)0.058) & 2.304 (\(\pm\)0.044) & 1.522 (\(\pm\)0.060) & 0.665 (\(\pm\)0.033) & 1.003 (\(\pm\)0.059) & 2.166 (\(\pm\)0.118) \\ reg+cls bal. (10 cls) & 1.454 (\(\pm\)0.044) & 0.607 (\(\pm\)0.041) & 0.965 (\(\pm\)0.023) & 2.077 (\(\pm\)0.087) & 1.454 (\(\pm\)0.044) & 0.607 (\(\pm\)0.041) & 0.965 (\(\pm\)0.023) & 2.077 (\(\pm\)0.087) \\ reg+cls bal. (100 cls) & 1.553 (\(\pm\)0.117) & 0.563 (\(\pm\)0.033) & 0.897 (\(\pm\)0.043) & 2.263 (\(\pm\)0.193) & 1.487 (\(\pm\)0.051) & 0.574 (\(\pm\)0.014) & 0.869 (\(\pm\)0.019) & 2.156 (\(\pm\)0.084) \\ \hline \hline \end{tabular} \end{table} Table 2: **Imbalanced realistic image data: NYUD2-DIR depth estimation.** We evaluate the baseline reg trained with MSE, and the reg+cls variants. We report RMSE when the best model is selected on the validation (RMSE-val), or as in [47] on the test set (RMSE-test). We average over 3 different random seeds. Adding a classification loss helps regression, which validates our hypothesis. \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \multicolumn{1}{c}{} & \multicolumn{4}{c}{IMDB-WIKI-DIR MAE(\(\downarrow\))} & \multicolumn{4}{c}{IMDB-WIKI-DIR MSE(\(\downarrow\))} \\ \cline{2-9} Samples & All & Many & Med. & Few & All & Many & Med. & Few \\ \hline Focal [27] & 7.97 & 7.12 & 15.14 & 26.96 & 136.98 & 106.87 & 368.60 & 1002.90 \\ Kernel [47] & 7.78 & 7.20 & 12.61 & 22.19 & 129.35 & 106.52 & 311.49 & 811.82 \\ Balanced [31] & 8.12 & 7.58 & 12.27 & 23.05 & — & — & — \\ reg (MAE) & 8.09 (\(\pm\)0.01) & 7.23 (\(\pm\)0.02) & 15.48 (\(\pm\)0.15) & 26.81 (\(\pm\)0.48) & 1385.53 (\(\pm\)1.17) & 107.82 (\(\pm\)0.96) & 375.27 (\(\pm\)6.97) & 1017.59 (\(\pm\)27.51) \\ \hline reg+cls (2 cls) & 7.95 (\(\pm\)0.05) & 7.11 (\(\pm\)0.03) & 15.08 (\(\pm\)0.27) & 26.15 (\(\pm\)0.06) & 135.15 (\(\pm\)0.40) & 105.86 (\(\pm\)0.27) & 361.02 (\(\pm\)7.00) & 973.53 (\(\pm\)11.52) \\ reg+cls (10 cls) & 7.93 (\(\pm\)0.06) & 7.12 (\(\pm\)0.06) & 14.93 (\(\pm\)0.17) & 25.91 (\(\pm\)0.27) & 135.69 (\(\pm\)1.65) & 106.58 (\(\pm\)1.42) & 359.27 (\(\pm\)5.70) & 975.37 (\(\pm\)34.09) \\ reg+cls (100 cls) & 7.61 (\(\pm\)0.02) & 6.90 (\(\pm\)0.03 the results across 3 repetitions. When the data is already balanced, adding a classification loss has limited effect. ### Imbalanced video progress prediction As an additional investigation on imbalanced data, we explore videos which are naturally imbalanced in the number of frames. 
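Before detailing the video setup, we sketch the reg\(+\)cls training objective shared by these experiments. This is a minimal PyTorch sketch under assumptions of ours: a toy MLP backbone stands in for the ResNet-based models, and all names are illustrative rather than taken from the released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegCls(nn.Module):
    """Regressor with an auxiliary, training-only classification head.

    A toy MLP backbone stands in for the ResNet-based models; `cls_head`
    mirrors the linear layer of size [F, C] described above and is simply
    not called at test time.
    """
    def __init__(self, in_dim=1, feat_dim=64, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
        )
        self.reg_head = nn.Linear(feat_dim, 1)            # continuous target
        self.cls_head = nn.Linear(feat_dim, num_classes)  # training only

    def forward(self, x):
        feats = self.backbone(x)
        return self.reg_head(feats).squeeze(-1), self.cls_head(feats)

def reg_cls_loss(y_pred, logits, y, y_cls, lam=0.1, cls_mask=None):
    """MSE plus lam times cross-entropy over binned targets (reg+cls).

    `cls_mask` optionally keeps only the samples drawn with probability
    rho(k) from Eq. (13); the regression term always uses all samples.
    """
    loss = F.mse_loss(y_pred, y)
    ce = F.cross_entropy(logits, y_cls, reduction="none")
    if cls_mask is not None:
        ce = ce[cls_mask]
    if ce.numel() > 0:
        loss = loss + lam * ce.mean()
    return loss

# toy usage: bin continuous targets into 10 uniform classes
x = torch.rand(32, 1)
y = torch.sin(6.0 * x).squeeze(-1)
edges = torch.linspace(y.min().item(), y.max().item(), 11)[1:-1]
y_cls = torch.bucketize(y, edges)          # class ids in {0, .., 9}
model = RegCls(num_classes=10)
y_pred, logits = model(x)
reg_cls_loss(y_pred, logits, y, y_cls, lam=0.1).backward()
```

Since the classification head is dropped at test time, the reg and reg\(+\)cls models are identical at inference, which keeps the comparison between them architecture-neutral.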
We perform video progress prediction on the Breakfast video dataset [23] containing 52 participants performing 10 cooking activities. We follow the standard dataset split (S1, S2, S3, S4) [23] and use the corresponding train/test splits. We adopt the method in [24] and train a simple MLP on top of IDT (improved dense trajectory) features [41] of dimension 64 over trajectories of 15 frames. We evaluate RMSE when predicting either video progress percentages or absolute frame numbers. Kukleva [24] use an MLP with 3 linear layers and sigmoid activations. We change the sigmoid activations into ReLU activations for the reg and reg\(+\)cls models, since it works better when predicting absolute frame numbers. For all methods we keep the training hyperparameters from [24]: learning rate \(1e{-3}\), Adam optimizer, and training for 40 epochs with the learning rate decreased by 0.1 at 30 epochs. We also report the random baseline results when using the untrained model. The data is imbalanced, in the sense that video lengths vary widely (see supplementary material for data distribution). We test again if adding a classification loss can benefit the regression predictions when using 100 classes in the reg\(+\)cls. We search for \(\lambda\) on a validation set created by randomly selecting 1/3 of the training videos with a seed of 0. We use \(\lambda{=}100\) when predicting percentages, and \(\lambda{=}10\) when predicting frame numbers. At training time, we add the classification loss via a linear layer on top of the one-to-last layer, followed by softmax. We train one model per task and report mean and standard deviations over all 10 tasks. Tab. 5 depicts the results across all 4 splits. We report RMSE scores when predicting video progress in terms of percentages in Tab. 5(a), and when predicting video progress in terms of frames in Tab. 5(b). Even if we predict video progress percentages between [0,100]% in Tab. 5(a), because the video length varies widely, for some videos we will have a lot more frames than for others, causing the data sampling to still be imbalanced. In both cases there is a gain from adding a classification loss to the regression loss. ## 4 Discussion and limitations **Relation to nonlinear ICA.** Hyvarinen [21] show that there is a relation between learning to bin a continuous function and performing nonlinear ICA. Hyvarinen [21] start from a continuous signal \(\mathbf{x}\) generated by a non-linear combination of source-signals: \(\mathbf{x}{=}f(\mathbf{s})\), where \(f(\cdot)\) is a non-linear function and \(\mathbf{s}\) are the independent and non-stationary sources. They split the signal into \(C\) temporal segments \(\mathbf{x}{=}\{\mathbf{x}_{1},\mathbf{x}_{2},..,\mathbf{x}_{C}\}\) and train an MLP to predict for every sample \(\mathbf{x}_{k_{i}}\) the segment it belongs to: \(g(\mathbf{x}_{k_{i}},\theta){=}k\), with \(k{\in}\{1,..,C\}\) and \(\mathbf{x}_{k_{i}}{\subset}\mathbf{x}_{k}\), where \(\theta\) are the MLP parameters. Hyvarinen [21] prove that the last hidden layer of the MLP \(h_{g}(\cdot,\theta)\) recovers the original sources \(\mathbf{s}\) within a linear transformation: \(h_{g}(\mathbf{x}_{k_{i}},\theta)=\mathbf{W}\mathbf{s}_{k_{i}}+\mathbf{b}\), where \(\mathbf{W},\mathbf{b}\) define a linear transformation. Intuitively, Hyvarinen [21] discretize the signal into segments and classify the segments, thus performing classification on a continuous function. However, they do not focus on combining the binning with optimizing a regression problem.
Similar to Hyvarinen [21], predicting discrete signal bins has been successfully used for unsupervised video representation learning by discriminating between video segments [9] or classifying video speed [42]. To the extent that the underlying continuous regression function (speed, time, age, depth) can be assumed to be generated by a non-linear combination of non-stationary sources (whose statistics change with the function), adding a classification loss decorrelates the independent sources. Additionally, we hypothesize that there may be a relation between nonlinear ICA and imbalanced sampling: the independent sources \(\mathbf{s}\) are also continuous and shared across samples. And having the hidden representation of the MLP constrained to be independent across dimensions may lead to a better use of sparse samples in certain areas of the target-space. However, we leave this for future research. \begin{table} \begin{tabular}{c c c c c c c c c} \multicolumn{8}{c}{Breakfast RMSE \% (\(\downarrow\))} & \multicolumn{3}{c}{Breakfast RMSE frames (\(\downarrow\)) - unnormalized} \\ \multicolumn{1}{c}{Dataset split} & S1 & S2 & S3 & S4 & S1 & S2 & S3 & S4 \\ \hline Random baseline & 58.50 (\(\pm\)0.29) & 58.37 (\(\pm\)0.07) & 58.39 (\(\pm\)0.14) & 58.38 (\(\pm\)0.12) & 1394.48 (\(\pm\)992.50) & 1511.014 (\(\pm\)1132.60) & 1450.26 (\(\pm\)1058.56) & 1420.21 (\(\pm\)1063.16) \\ [24] & 31.24 (\(\pm\)0.80) & 31.49 (\(\pm\)0.98) & 30.58 (\(\pm\)0.66) & 30.78 (\(\pm\)0.53) & 1079.38 (\(\pm\)775.17) & 1235.02 (\(\pm\)4932.89) & 1172.67 (\(\pm\)880.69) & 1170.07 (\(\pm\)886.92) \\ reg (RMSE) & 32.57 (\(\pm\)1.45) & 33.30 (\(\pm\)1.69) & 32.52 (\(\pm\)1.17) & 33.08 (\(\pm\)1.49) & 860.17 (\(\pm\)583.44) & 891.39 (\(\pm\)641.58) & 862.65 (\(\pm\)609.01) & 845.48 (\(\pm\)605.46) \\ \hline reg\(+\)cls (100 cls) & 28.71 (\(\pm\)0.38) & 28.84 (\(\pm\)0.55) & 28.46 (\(\pm\)0.48) & 28.44 (\(\pm\)0.50) & 837.59 (\(\pm\)573.15) & 870.55 (\(\pm\)630.89) & 845.65 (\(\pm\)601.73) & 809.76 (\(\pm\)595.37) \\ reg\(+\)cls bal. (100 cls) & — & — & — & — & 837.61 (\(\pm\)573.17) & 870.54 (\(\pm\)630.88) & 845.65 (\(\pm\)601.72) & 809.76 (\(\pm\)595.37) \\ \end{tabular} \end{table} Table 5: **Imbalanced video progress prediction on Breakfast.** We report mean RMSE and standard deviations averaged over all 10 cooking tasks of the Breakfast dataset. (a) Progress prediction in video percentages. (b) Progress prediction in frame numbers. The overall progress prediction results leave room for improvement for all methods, because of the dataset challenges. However, also for this regression problem adding a classification loss has benefits. **Limitations of analysis.** The analysis performed here is still elementary and only aims to scratch the surface on the usefulness of adding a classification loss when performing regression. A number of things have been disregarded here, such as the effect of the model depth while keeping the model size fixed. Additionally, the choice of the optimizer and the loss function during training may also play an important role. Finally, delving more into the relation between nonlinear ICA and adding a classification loss to the regression may be an interesting future research avenue. ## 5 Related work ### Improved deep regression A thorough analysis of the effect of deep architecture choices on regression scores is performed in [25]. Rather than considering architecture choices, other works focus on regression robustness - defined as less influence from outliers [15, 20].
The benefits of having a weighted regression loss have been extensively analyzed in classic work such as [6, 7]. With a similar goal, Barron [1] proposes a general regression loss that can be adapted to well-known functions. Minimizing Tukey's biweight function also offers robustness to outliers [2]. Similarly, a smooth \(L_{1}\) loss is used in [12] for improved object bounding-box regression. Mapping regression targets to the hyper-sphere can also aid regression [29]. Instead of focusing on the loss function, a smooth adaptive activation function is used in [18]. From a different direction, ensemble networks have been successfully used for improved regression estimates [8, 14, 39]. Deep negative correlation learning is proposed in [48] to learn diversified base networks for deep ensemble regression. Dissimilar to these works, our goal is not proposing a new network architecture or a new regression loss function, but rather analyzing why a combination of a classification and a regression loss can lead to improvements. ### Discretized regression targets Instead of optimizing continuous object pose angles, [38, 36] discretize them into classes and perform classification. Continuous prediction can be obtained back from discretized targets by using soft-min for disparity learning [22]. Similarly, ages are discretized into classes and the final prediction is computed as an expectation over class probabilities in [34]. Rather than using predefined classes, clusters can be defined and, again, the final prediction is a weighted sum over clusters [40, 43]. In a similar manner, a weighted combination of clustered regression targets is used for finding surface normals in [43]. Popular object detectors [28, 32, 33, 45] also rely on a set of predefined box locations that can be seen as bin centers, and regress the final box locations with respect to these centers. An explicit combination of regression and classification losses has been shown to improve results for object orientation regression [30, 50, 51] by first splitting orientations into bins and then regressing the precise value in each bin. Similarly, a joint classification loss over discretized targets and a regression loss is effective for horizon line detection [44]. Here, we want to analyze why these prior works opt for discretizing regression targets. More specifically, we analyze why and when a combination of a regression loss and a classification loss over discretized continuous targets improves results. ### Regression on imbalanced data Prior work has shown that imbalanced sampling can negatively affect the regression scores, and proposed ways to mitigate this by designing better data sampling techniques [4, 37]. How rare a data point is in the training set can be modeled with kernel density estimation [35]. Not only the data sampling but also the target distribution can be fixed by smoothing the distribution of both labels and features using nearby targets [47]. Similarly, the learning model can be regularized such that samples that are close in label space are also close in the feature space [13]. While focusing on the learning model, ensemble methods are a viable solution when working with imbalanced regression problems [5]. Instead of focusing on the data sampling or the model, imbalanced regression estimates can be improved by adapting the MSE (mean squared error) loss [31]. Most similar to us are [32, 47] whose methods can be seen as defining smooth classes over the data.
However, they aim to improve regression on imbalanced data, while we set off to analyze in which cases adding a classification loss can help regression, and what is the motivation behind this. ## 6 Conclusion Here, we present a preliminary analysis on the effect of adding a classification loss to the regression loss. We make the observation that adding a classification loss to the regression loss has been used in computer vision for deep regression [30, 44, 46]. And we empirically test, across data variations and data samplings on a set of 1D functions, the effect of adding a classification loss to a regression loss. We find that for imbalanced regression, adding a classification loss helps the most. Furthermore, we present an attempt at formalizing this observation starting from the derivations of Ren _et al_. [31] for imbalanced regression. Additionally, we validate that adding a classification loss to the regression loss is beneficial on imbalanced real data, where we evaluate on imbalanced image data on the NYUD2-DIR and IMDB-WIKI-DIR datasets [47], and imbalanced video progress prediction on the Breakfast dataset [23]. **Acknowledgements.** This work was done with the support of the Eureka cluster Program, IWISH project, grant number AI2021-066. Jan van Gemert is financed by the Dutch Research Council (NWO) (project VI.Vidi.192.100).
2309.00739
Concerning the Verity of the MMRD Relation for Novae
It has long been claimed that novae reaching the highest luminosity at the peak of their eruptions appear to fade the fastest from maximum light. The relationship between peak brightness and fade rate is known as the Maximum-Magnitude, Rate-of-Decline (MMRD) relation. Lightcurve parameters for the most recent sample of M31 recurrent novae are presented and used to buttress the case that the observed MMRD relation can be explained as a consequence of observational selection effects coupled with expectations from standard nova models.
Allen W. Shafter, J. Grace Clark, Kamil Hornoch
2023-09-01T21:29:23Z
http://arxiv.org/abs/2309.00739v1
# Concerning the Verity of the MMRD Relation for Novae ###### Abstract It has long been claimed that novae reaching the highest luminosity at the peak of their eruptions appear to fade the fastest from maximum light. The relationship between peak brightness and fade rate is known as the Maximum-Magnitude, Rate-of-Decline (MMRD) relation. Lightcurve parameters for the most recent sample of M31 recurrent novae are presented and used to buttress the case that the observed MMRD relation can be explained as a consequence of observational selection effects coupled with expectations from standard nova models. Cataclysmic Variable Stars (203) - Novae (1127) - Recurrent Novae (1366) ## 1 Introduction McLaughlin (1945) was the first to argue that the peak luminosity of a classical nova was correlated with its rate of decline from maximum light. Over the years, the correlation has come to be known as the Maximum-Magnitude versus Rate-of-Decline (MMRD) relation, and has been calibrated many times, both in the Galaxy and in M31 (e.g., Cohen, 1985; Capaccioli et al., 1989; Downes & Duerbeck, 2000; Shafter et al., 2011; Ozdonmez et al., 2018). Despite its long history, the verity of the MMRD has been called into question in recent years. In a sample of M31 novae, Kasliwal et al. (2011) found several systems that were fainter for a given decline rate than predicted by the MMRD. They considered the possibility that these objects might be recurrent novae (RNe), but noted that most had spectroscopic types (Fe II) that were inconsistent with that interpretation. Subsequent to the Kasliwal et al. (2011) study, Ozdonmez et al. (2018) compiled an extensive database for Galactic novae showing that the MMRD relation was generally followed and remained "a useful tool for statistical analyses". Shortly thereafter, Schaefer (2018) recalibrated the Galactic MMRD based on _Gaia_ distances, concluding that the MMRD relation was plagued by considerable scatter, with a fit too poor to be usable for distance determinations. In this note, we argue that the MMRD relation can be best understood as a combination of observational selection effects coupled with standard predictions from nova theory. ## 2 The Nature of the MMRD To understand the placement of novae in the \(M\)(max) - \(\log\,t_{2}\) plane (hereafter the MMRD plane) we first consider how the ignition mass - the accreted mass required to trigger a thermonuclear runaway (TNR) on the white dwarf - depends on properties of the progenitor binary. To first order, \(M_{\rm ign}\) depends only on the pressure at the base of the accreted layer, which is a function of the WD mass. However, the temperature of the accreted layer is also important, and it is strongly affected by the rate of accretion onto the white dwarf. Thus, as first considered by Nomoto (1982) and explored by many groups since (e.g., Townsley & Bildsten, 2005; Wolf et al., 2013; Kato et al., 2014), the mass necessary to trigger a TNR depends on both the WD mass and the rate of accretion onto its surface. For illustrative purposes, we consider in Figure 1 the MMRD relation for the sample of Galactic novae studied by Downes & Duerbeck (2000). In addition, we show separately the known Galactic RNe from Schaefer (2010), along with the most recent M31 RN sample presented here together for the first time.
Despite some scatter, the Downes & Duerbeck nova sample follows the best-fitting MMRD relation (dashed line) quite well. However, the RNe fall consistently below the MMRD relation. To explore the observed properties of the MMRD in detail, it is useful to divide the MMRD plane into four quadrants: (Q1) an upper left quadrant consisting of relatively fast and bright novae, (Q2) a lower left quadrant that includes fast and faint novae, (Q3) an upper right quadrant where slowly evolving luminous novae should lie, and finally (Q4) a lower right quadrant where slowly evolving and faint novae are found. Given a population of nova progenitors with a range of WD masses and accretion rates, one can imagine systems occupying all quadrants of the MMRD plane. However, we argue below that the observed MMRD relation arises because two of the quadrants, the second and especially the third, are selected against. In the case of the second quadrant, the low luminosity and fast evolution (short \(t_{2}\)) suggest a weak TNR and a relatively small ejected (and ignition) mass. Models show that novae with these properties arise from systems with high mass white dwarfs accreting at high rates. The small ignition masses and high accretion rates produce novae with the shortest recurrence times (e.g., see Kato et al. 2014, their figure 6). Thus, it is not surprising that the known RNe are found in the lower left quadrant of the MMRD plane. Considering their short recurrence times, it is reasonable to wonder why this quadrant of the MMRD plane is not more heavily populated. The answer lies in the fact that faint and fast novae are strongly selected against in typical nova surveys, most of which have relatively bright limiting magnitudes and coarse temporal coverage. In the third quadrant of the MMRD plane we expect to find novae that are luminous and slowly evolving. The slow evolution suggests massive ejecta (and a large ignition mass), while the high luminosity implies a strong TNR. Models suggest that the progenitors of such novae contain relatively low mass white dwarfs accreting at low rates. The slow accumulation of matter on the white dwarf and a high ignition mass results in both a strong TNR and a very long recurrence time. Such systems are the polar opposite of the RNe. Although they should have bright eruptions, the recurrence times are expected to be exceedingly long. Thus, systems in the upper right quadrant of the MMRD are expected to erupt extremely rarely, in agreement with observation. Figure 1: The MMRD plane divided into four quadrants: Q1 – Q4. The filled black circles show data for Galactic novae from Downes & Duerbeck (2000). The dashed line is the best linear fit to these data. The blue squares show Galactic RNe from Schaefer (2010), while the red diamonds show our updated M31 RN sample. Most classical novae discovered in routine nova patrols fall either in Q1 or Q4. Known RNe on the other hand fall almost exclusively in Q2, while Q3 is almost devoid of novae. Consistent with the nature of the MMRD relation, most novae fall into either the first or the fourth quadrants. Novae in the first quadrant presumably arise from systems with high mass white dwarfs accreting at relatively low rates, while novae in the fourth quadrant likely have progenitors that contain relatively low mass white dwarfs accreting at relatively high rates. In the first case, the high mass WD and the slow accretion will result in an accreted layer that is highly degenerate at the time when the TNR ensues.
Such novae should appear relatively bright and evolve quickly. Conversely, the rapidly accreting low mass WD systems will be characterized by less degenerate accreted layers and a weaker TNR, resulting in less luminous novae with a generally slower evolution. In summary, the MMRD emerges as a result of observational selection against faint and fast (recurrent) novae, coupled with a dearth of eruptions from systems with extremely long recurrence times.
2310.15531
Small Systole Sets and Coxeter Groups
The systoles of a hyperbolic surface {\Sigma} are the shortest closed geodesics. We say that the systoles fill the surface if the set Syst({\Sigma}) of all systoles cuts {\Sigma} into polygons. We refine an idea of Schmutz [15] to construct closed hyperbolic surfaces {\Sigma} of arbitrarily large genus with a small set Syst({\Sigma}) that fills. In fact, for the surfaces {\Sigma} considered, the cardinality of Syst({\Sigma}) is in o(g/\sqrt{ln g}), where g is the genus of {\Sigma}. The proof is based on the theory of Coxeter groups, combined with some elementary number theory.
Ingrid Irmer, Olivier Mathieu
2023-10-24T05:31:55Z
http://arxiv.org/abs/2310.15531v1
# Small systole sets and Coxeter groups ###### Abstract. The systoles of a hyperbolic surface \(\Sigma\) are the shortest closed geodesics. We say that the systoles _fill_ the surface if the set \(\operatorname{Syst}(\Sigma)\) of all systoles cuts \(\Sigma\) into polygons. We refine an idea of Schmutz [15] to construct closed hyperbolic surfaces \(\Sigma\) of arbitrarily large genus with a small set \(\operatorname{Syst}(\Sigma)\) that fills. In fact, for the surfaces \(\Sigma\) considered, the cardinality of \(\operatorname{Syst}(\Sigma)\) is in \(o(g/\sqrt{\ln\,g})\), where \(g\) is the genus of \(\Sigma\). The proof is based on the theory of Coxeter groups, combined with some elementary number theory. ###### Contents * 1 Introduction * 2 The 2-dimensional Cayley Complex \(\operatorname{Cay}^{+}W\) * 3 Uniformization of \(2k\)-Regular Tesselations * 4 The subgroup \(H(k)\) of \(W(k)\) * 5 Bounds on \(\operatorname{Fill}(g)\) **Keywords** Coxeter groups, Tits representation, hyperbolic surfaces, systoles, Thurston spine. ## 1. Introduction ### General introduction In this paper, surfaces are assumed to be oriented. A _systole_ of a hyperbolic surface \(\Sigma\) is an essential closed geodesic of minimal length. Let \(\operatorname{Syst}(\Sigma)\) be the set of all systoles of \(\Sigma\). We say that \(\operatorname{Syst}(\Sigma)\) _fills_ the surface if it cuts \(\Sigma\) into polygons. For \(g\geq 2\), the Teichmuller space \(\mathcal{T}_{g}\) is a manifold used for parameterising closed hyperbolic surfaces of genus \(g\). In [16], Thurston defined a remarkable subspace \(\mathcal{P}_{g}\subset\mathcal{T}_{g}\), called the _Thurston spine_. It consists of all surfaces \(\Sigma\in\mathcal{T}_{g}\) for which \(\operatorname{Syst}(\Sigma)\) fills. Since \(\mathcal{P}_{g}\) is nonempty by [16], one can meaningfully define the integer \(\operatorname{Fill}(g)\) as the smallest cardinality of \(\operatorname{Syst}(\Sigma)\) when \(\Sigma\) varies over \(\mathcal{P}_{g}\). Trying to understand the dimension of \(\mathcal{P}_{g}\) leads to the question of finding an upper bound for \(\operatorname{Fill}(g)\). In this direction, we prove **Theorem 25**.: _There exists an infinite set \(A\) of integers \(g\geq 2\) such that_ \[\operatorname{Fill}(g)\leq\frac{57}{\sqrt{\ln\ln\ln g}}\ \frac{g}{\sqrt{\ln g}}\] _for any \(g\in A\)._ In fact, Theorem 25 proved in the main body is slightly stronger than the previous statement. Since \(57/\sqrt{\ln\ln\ln g}\) belongs to \(o(1)\), it implies the result stated in the abstract. From our main result, we will deduce, in a forthcoming preprint [12], a related bound for the codimension of the Thurston spine \(\mathcal{P}_{g}\). ### Previous works for \(\operatorname{Fill}(g)\) The idea of studying examples for which the systoles cut the surface into regular right-angled polygons goes back to [15], and this idea has been used extensively in the subsequent works [1][14][2]. Schmutz's paper [15] seems to have been motivated by the observation that many critical points of mapping class group-equivariant Morse functions are of this type. Classical examples are the Bolza surface and the Klein quartic. This investigation also led to the study of upper bounds of \(\operatorname{Fill}(g)\). Hyperbolic surfaces with \(2g\) systoles have been found in [15], [1] and [14]. The recent result of [2] can be reformulated as follows.
**Theorem**.: _[_2_]_ _There is an infinite set \(B\) of integers \(g\geq 2\) and an increasing function \(\psi\) (discussed below) such that_ \[\operatorname{Fill}(g)\leq g/\psi(g)\text{ for any }g\in B\text{.}\] The function \(\psi\) is only implicitly defined in _loc. cit._, but a rough estimate is given in a footnote. It is clear that \(\psi(g)\) is in \(o(\lg^{*}g)\), where \(\lg^{*}\) is, essentially, the inverse of the Ackermann function \(n\mapsto F(4,n)\) (see [4], Section 3.2, pp. 58-59). In particular \(\lg^{*}\) is smaller than the \(m\)th-iterate \(\lg^{(m)}=\lg\circ\lg\circ\cdots\circ\lg\) of the base-2 logarithmic function \(\lg\), for any \(m\geq 1\). Therefore \(\psi(g)\) is much smaller than the factor \(\sqrt{\ln g}\) in our denominator. Conversely, a rough lower bound \(\sim\pi g/\ln g\) has been found in [1]. According to _loc.cit._, this lower bound seems difficult to obtain. Intuitively, the difficulty comes from the fact that a small number \(N\) of filling systoles implies a relatively large systole length, at least \(O(g/N)\). However, as a loose general rule, the number of systoles increases with the systole length. ### Main ideas of the proof and organisation of the paper The surfaces studied in this paper were motivated by the examples in Theorem 36 of [15]. In the examples of _loc. cit._ the systoles cut the surface into regular right-angled polygons. However, here we will use the refined notion of _standard_ tesselations. In order to explain this, we first need to give some definitions. For simplicity, _we will only consider hexagonal tesselations_. A _decoration_ of the regular right-angled hexagon \(P\) is a cyclic indexing of the edges by \(\mathbb{Z}/6\mathbb{Z}\). Since a cyclic indexing of the edges defines an orientation of the hexagon, \(P\) admits exactly two decorations, up to orientation preserving isometry. By definition, a _standard_ tesselation of an oriented hyperbolic closed surface \(\Sigma\) is a tesselation by _decorated_ regular right-angled hexagons. By definition, the _curves_ of a standard tesselation \(\tau\) are the maximal geodesic components of the 1-skeleton \(\tau_{1}\) of \(\tau\). The standard tesselation \(\tau\) is called _\(2k\)-regular_ if all its curves consist of exactly \(2k\) edges. Since each side of a regular right-angled hexagon has length \(\operatorname{arcosh}2\), all curves of a \(2k\)-regular standard tesselation are closed geodesics of length \(2k\operatorname{arcosh}2\). We also define the Coxeter group \(W(k)\) by the presentation \[\langle(s_{i})_{i\in\mathbb{Z}/6\mathbb{Z}}\mid s_{i}^{2}=1,\,(s_{i}s_{i+1})^{2}=1,\,\text{and}\,(s_{i}s_{i+2})^{k}=1,\,\forall i\in\mathbb{Z}/6\mathbb{Z}\rangle\] Let \(\epsilon:W(k)\to\mathbb{Z}/2\mathbb{Z}\) be the sign homomorphism, defined by \(\epsilon(s_{i})=-1\), for any \(i\in\mathbb{Z}/6\mathbb{Z}\), and set \(W(k)^{+}=\operatorname{Ker}\epsilon\). Before going to the core of the proof, we will explain the connection between the Coxeter group \(W(k)\) and the use of decorated tiles. The tesselations considered here are, somehow, "doubly regular": the tiles are regular hexagons, and all curves have the same length. These two properties appear in [15] and the subsequent papers [14][2]. The advantage of adding a decoration to the tiles is explained by the following observation, proved in Section 3. **Theorem 12**.: _There is a one-to-one correspondence between_ 1. _the_ \(2k\)_-regular standard tesselations_ \(\tau\) _of closed oriented surfaces, and_ 2.
_the finite index subgroups_ \(H\subset W(k)^{+}\) _satisfying (_11.1_) and (_11.2_)._ The assertions (11.1) and (11.2) are described in Lemma 11 of Section 3. These conditions, which are usually satisfied, are easy to check. We can now describe the main ideas of the proof. Using the previous statement, obtaining a bound on \(\operatorname{Fill}(g)\) is reduced to the theory of Coxeter groups. The theory of right-angled Coxeter groups is a classical tool to investigate the tesselations of the Poincare half-plane \(\mathbb{H}\)[5]. Our construction is similar, but the Coxeter groups \(W(k)\) are not right-angled. The proof contains three parts. In the first part, namely in Sections 2 and 3, we look at the delicate question - are the set of curves of a \(2k\)-regular standard tesselation \(\tau\) and the set of systoles of the corresponding surface identical? A partial answer is provided at the end of Section 3. **Criterion 18**.: _Assume that \(k\geq 4\) is even. Let \(H\) be a subgroup of \(W(k)^{+}\), any conjugate of which intersects \(B_{4k}\) trivially, where \(B_{4k}\) is the ball of radius \(4k\) in \(W(k)\). Then the set of curves of the corresponding tesselation \(\tau\) is the set of systoles._ The proof of the criterion is quite long, and it uses a new result on the combinatorics of Coxeter groups \(W\), namely Theorem 9, proved in Section 2. We define the _Cayley complex_ \(\operatorname{Cay}^{+}W\) of \(W\) by attaching some 2-cells to its Cayley graph \(\operatorname{Cay}W\). Theorem 9 involves Coxeter groups endowed with a right-angled partition. It shows that some "relatively short" loops of \(\operatorname{Cay}W\) are null-homotopic in \(\operatorname{Cay}^{+}W\). In the second part of the proof, i.e. Section 4, we find an upper bound for the index of a subgroup \(H\) satisfying Criterion 18: **Proposition 22**.: _For any \(k\geq 3\), there exists a normal subgroup \(H(k)\) of \(W(k)\) satisfying Criterion 18 with_ \[[W(k):H(k)]\leq 3^{72k\phi(2k)}\text{,}\] _where \(\phi\) is Euler's totient function._ Its proof uses the Tits representation \(\rho:W(k)\to\operatorname{GL}_{6}(K)\)[17], where \(K\) is the number field \(\mathbb{Q}(\cos\pi/k)\). We have \(H(k)=\{w\in W(k)\mid\rho(w)\in\Gamma\}\), where \(\Gamma\) is a suitable congruence subgroup of \(\operatorname{GL}_{6}(K)\). The last part of the proof, in Section 5, explains the factor \(\frac{57}{\sqrt{\ln\ln\ln g}}\). It is based on Landau's Theorem [10][9] about the asymptotics of \(\phi(k)/k\), which is a corollary of the classical prime number theorem [6][8]. ## 2. The 2-dimensional Cayley Complex \(\operatorname{Cay}^{+}W\) Given a Coxeter system \((W,S)\), the Tits combinatorics [18] describes the loops in its Cayley graph \(\operatorname{Cay}W\). In this section, we define a Cayley complex \(\operatorname{Cay}^{+}W\) obtained by attaching a collection of 2-cells to \(\operatorname{Cay}W\), for each commutative rank two parabolic subgroup of \(W\). This square complex \(\operatorname{Cay}^{+}W\) is unrelated to the well-known simplicial complexes of [5], like the Coxeter complex and the Davis complex. We also define the notion of right-angled partition of a Coxeter group \(W\). Theorem 9, proved in this section, states that for a Coxeter group \(W\) endowed with a right-angled partition, the "relatively short" loops of \(\operatorname{Cay}W\) are null-homotopic in \(\operatorname{Cay}^{+}W\). This result will be the main ingredient of the proof of Criterion 18 in Section 3.
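Although the verification belongs to Section 3, it may help to record already, as an illustration of our own (in the notation introduced below), how a right-angled partition arises for the groups \(W(k)\). Splitting the generators into odd- and even-indexed ones, \[R=\{s_{1},s_{3},s_{5}\},\qquad B=\{s_{2},s_{4},s_{6}\},\] the presentation of \(W(k)\) gives \(m_{s,t}=2\) when the indices of \(s\in R\) and \(t\in B\) differ by \(\pm 1\) modulo \(6\), and \(m_{s,t}=\infty\) when they differ by \(3\); hence the partition is right-angled. Within \(R\), and likewise within \(B\), all pairs of indices differ by \(2\) modulo \(6\), so \(W_{R}\) and \(W_{B}\) are both the rank three Coxeter group with all off-diagonal entries equal to \(k\), whose girth is \(2k\). Theorem 9 then applies to any cyclic loop \(w\) with fewer than \(2k\) odd-indexed letters and fewer than \(2k\) even-indexed letters, which is presumably the form in which it enters the proof of Criterion 18.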
### Coxeter Groups Let \(S\) be a set. A square matrix \(M=(m_{s,t})_{s,t\in S}\) is a _Coxeter matrix_ if it satisfies the following conditions: 1. \(m_{s,s}=1\), 2. for \(s\neq t\), \(m_{s,t}\) belongs to \(\mathbb{Z}_{\geq 2}\cup\{\infty\}\), 3. \(m_{s,t}=m_{t,s}\). The group \(W\) defined by the presentation \[\langle s\in S\mid(st)^{m_{s,t}}=1,\forall s,t\in S\operatorname{with}m_{s,t}\neq\infty\rangle\] is called the _Coxeter group_ associated with the Coxeter matrix \(M\). Unless stated otherwise, it will be assumed that the set \(S\) is finite. Its cardinality is called the _rank_ of \(W\). The pair \((W,S)\) is called a _Coxeter system_. Let \(\epsilon:W\to\{\pm 1\}\) be the group homomorphism uniquely defined by the property that \(\epsilon(s)=-1\) for any \(s\in S\). This is called the _signature homomorphism_. Denote by \(\mathcal{W}_{S}\) the free monoid generated by \(S\). An element \(w\) of \(\mathcal{W}_{S}\) is a word \(w=s_{1}\ldots s_{n}\) where \(s_{1},\ldots,s_{n}\) belong to the alphabet \(S\). By definition \(l(w):=n\) is the length of \(w\). There is a natural monoid homomorphism \(\mathcal{W}_{S}\to W,w\mapsto\overline{w}\) whose restriction to \(S\) is the identity. The element \(w\) is called a _word representative_ of \(\overline{w}\). The _Bruhat length_ of an element \(u\in W\), denoted by \(l(u)\), is the minimal length of a word representative of \(u\). A word \(w\in\mathcal{W}_{S}\) is called _reduced_ if \(l(w)=l(\overline{w})\). ### The Tits word combinatorics For any distinct \(s,t\in S\) with \(m_{s,t}<\infty\), let \(w(s,t)\) be the word of length \(m_{s,t}\) starting with \(s\) and alternating the letters \(s\) and \(t\). The relation \((st)^{m_{s,t}}=1\) is in fact equivalent to \[\overline{w(s,t)}=\overline{w(t,s)}.\] The _subwords_ of a word \(w=s_{1}s_{2}\ldots s_{n}\in\mathcal{W}_{S}\) are the words \(s_{i_{1}}s_{i_{2}}\ldots s_{i_{k}}\), where \(1\leq i_{1}<i_{2}\cdots<i_{k}\leq n\) and the _substrings_ of \(w\) are the subwords \(s_{p}s_{p+1}\ldots s_{q}\) where \(1\leq p\leq q\leq n\). An _elementary reduction_[3] is a pair of words \((w,w^{\prime})\) such that \(w^{\prime}\) can be obtained from \(w\) by one of the following reductions: * _Reduction of first type_: deleting two consecutive identical letters in \(w\), or * _Reduction of second type_: replacing in \(w\) a substring \(w(s,t)\) by \(w(t,s)\). Consequently, we have \(l(w^{\prime})=l(w)-2\) for a reduction of the first type, and \(l(w^{\prime})=l(w)\) otherwise. **Theorem 1**.: _(Tits) Let \(w,w^{\prime}\) be two words with \(\overline{w}=\overline{w}^{\prime}\). If \(w^{\prime}\) is reduced, one can transform \(w\) into \(w^{\prime}\) by a sequence of elementary reductions._ Besides the original reference in French [18], the reader can consult Davis's book [5], section 3.4. (The elementary reductions are called elementary \(M\)-operations in _loc.cit._.) Our presentation of the Tits Theorem is close to Casselman's webpage [3]. _2.3 Girth of \(W\)_ Set \(\gamma(W)=2\operatorname{Min}_{s\neq t}m_{s,t}\). The integer \(\gamma(W)\), possibly infinite, is the girth of the Cayley graph of \(W\), see [11] Lemma 2.1. It will also be called the _girth_ of \(W\). For any distinct \(s,t\in S\), we have \(l\big{(}w(s,t)\big{)}\geq\gamma(W)/2\). Therefore any word \(w\) with \(l(w)<\gamma(W)/2\) can be reduced only by reductions of the first type. A consequence of Theorem 1 is the following **Corollary 2**.: _Let \(w,w^{\prime}\) be two words with \(\overline{w}=\overline{w}^{\prime}\).
Assume that \(l(w)<\gamma(W)/2\)._ 1. _if_ \(w\) _and_ \(w^{\prime}\) _are reduced, we have_ \(w=w^{\prime}\)_,_ 2. _if_ \(w\) _is not reduced, it contains a substring_ \(ss\) _for some_ \(s\in S\)_._ _2.4 Loops_ By definition, a _cyclic word_ on the alphabet \(S\) of length \(n\) is a word \(w=s_{1}\ldots s_{n}\) in \(\mathcal{W}_{S}\) modulo a cyclic permutation. For example the cyclic words \(s_{1}s_{2}s_{3}\) and \(s_{3}s_{1}s_{2}\) are equal. For a cyclic word \(w=s_{1}\ldots s_{n}\), it will be convenient to assume that the indices belong to \(\mathbb{Z}/n\mathbb{Z}\). The _substrings_ of length \(l\leq n\) of the cyclic word \(w=s_{1}s_{2}\ldots s_{n}\) are the words \(u=s_{i}s_{i+1}\ldots s_{i+l-1}\), for some \(i\in\mathbb{Z}/n\mathbb{Z}\). For example \(s_{4}s_{1}\) is a substring of the cyclic word \(s_{1}s_{2}s_{3}s_{4}\). A word \(w=s_{1}\ldots s_{n}\) in \(\mathcal{W}_{S}\) of length \(n>0\) is called a _loop_ if \(\overline{w}=1\). For a cyclic word \(w=s_{1}\ldots s_{n}\), the condition \(\overline{w}=1\) is independent of its representatives in \(\mathcal{W}_{S}\). It follows that \(w\) is called a _cyclic loop_ if any of its representatives in \(\mathcal{W}_{S}\) is a loop. Since \(\epsilon(s)=-1\) for any \(s\in S\), the length of any loop or cyclic loop is even. **Corollary 3**.: _Let \(w=s_{1}\ldots s_{2n}\) be a cyclic loop, with \(2n<\gamma(W)\). Then there are two distinct indices \(i,j\in\mathbb{Z}/2n\mathbb{Z}\) such that \(s_{i}=s_{i+1}\) and \(s_{j}=s_{j+1}\)._ Proof.: Set \(u=s_{1}s_{2}\ldots s_{n}\) and \(v=s_{2n}s_{2n-1}\ldots s_{n+1}\). We have \(\overline{u}=\overline{v}\) and \(l(u)=l(v)=n<\gamma(W)/2\). It follows that \(u\) is reduced iff \(v\) is reduced. If both \(u\) and \(v\) are reduced, it follows from the first assertion of Corollary 2 that \(u=v\), hence we have \(s_{n}=s_{n+1}\) and \(s_{2n}=s_{2n+1}\). Otherwise, we can transform \(u\) and \(v\) into reduced words by a sequence of elementary reductions of the first type. By the second assertion of Corollary 2, there exist \(i,j\) with \(1\leq i<n\) and \(n\leq j<2n\) such that \(s_{i}=s_{i+1}\) and \(s_{j}=s_{j+1}\). In both cases, the corollary is proved. _2.5 Parabolic subgroups_ For a subset \(I\) of \(S\), let \(W_{I}\subset W\) be the subgroup generated by \(I\), let \(\mathcal{W}_{I}\) be the set of words on the alphabet \(I\). The subgroups \(W_{I}\) are called the _parabolic subgroups_ of \(W\). It is well-known that \((W_{I},I)\) is a Coxeter system, with Coxeter matrix \((m_{s,t})_{s,t\in I}\), see e.g. [5] Section 4.1. It is clear from Theorem 1 that any reduced expression of an element \(w\in W_{I}\) is in \(\mathcal{W}_{I}\). It follows that \(W_{I}\cap W_{J}=W_{I\cap J}\) for any two subsets \(I,J\) of \(S\). Given \(w\in W\), the smallest subset \(I\) such that \(w\in W_{I}\) is called the _support_ of \(w\). For a subset \(I\) in \(S\) and \(t\in S\), set \[\begin{array}{c}I(t):=\{s\in I\mid m_{s,t}=2\}=\{s\in I\mid s\neq t\,\text{and}\,st=ts\},\,\text{and}\\ t^{W_{I}}=\{t^{w}\mid w\in W_{I}\},\end{array}\] where, as usual, \(t^{w}:=wtw^{-1}\). **Lemma 4**.: _Let \(I\) be a subset of \(S\)._ 1. _For any_ \(t\in S\smallsetminus I\)_, the centraliser of_ \(t\) _in_ \(W_{I}\) _is_ \(W_{I(t)}\)_. In particular, we have_ \(t^{W_{I}}\simeq W_{I}/W_{I(t)}\)_._ 2. _For elements_ \(t\neq t^{\prime}\) _of_ \(S\smallsetminus I\)_, we have_ \(t^{W_{I}}\cap t^{\prime W_{I}}=\emptyset\)_._ This lemma appears to be well-known. Since we did not find an exact reference, we provide a quick proof.
Proof of Assertion (1).: Set \[W_{I}^{I(t)}=\{w\in W_{I}\mid l(ws)>l(w)\,\forall s\in I(t)\}.\] Let \(w\in W_{I}^{I(t)}\) with \(w\neq 1\) and let \(w=s_{1}\dots s_{n}\) be any reduced expression for \(w\). Since \(t\) is not in the support of \(w\), the word \(s_{1}\dots s_{n}t\) is reduced. Since \(s_{n}\notin I(t)\), no reduction of second type involves \(t\). Therefore any reduced expression of \(wt\) ends with the letter \(t\), therefore \[wt\neq tw.\] By Section 4.5 of [5], any \(w\in W_{I}\) can be uniquely written as \(w=uv\), where \(u\in W_{I}^{I(t)}\) and \(v\in W_{I(t)}\). Thus the previous statement is equivalent to Assertion (1). Proof of Assertion (2).: Let \(t\neq t^{\prime}\) be elements in \(S\smallsetminus I\). The support of any element \(w\in t^{W_{I}}\) (resp. \(w\in t^{\prime W_{I}}\)) contains \(t\) but not \(t^{\prime}\) (resp. contains \(t^{\prime}\) but not \(t\)). Therefore \(t^{W_{I}}\) and \(t^{\prime W_{I}}\) are disjoint. _2.6 The Cayley complex \(\operatorname{Cay}^{+}W\)_ By definition, the _Cayley graph_ of \(W\), denoted \(\operatorname{Cay}W\), is the graph whose vertices are the elements \(v\in W\) and whose edges are the pairs \((v,vs)\), for \(v\in W\) and \(s\in S\). We will now define the _Cayley complex_\(\operatorname{Cay}^{+}W\) by attaching some 2-cells to \(\operatorname{Cay}W\). Let \(\mathfrak{P}\) be the set of pairs \(I=\{s,t\}\) of commuting elements of \(S\). For \(I\in\mathfrak{P}\), any \(W_{I}\)-coset \(vW_{I}\) is a subgraph of \(\operatorname{Cay}W\) consisting of a 4-cycle, which can be seen as the boundary of a plain square \(\mathbf{c}(v,I)\). An example is the square \(\mathbf{c}_{1}\) shown in Figure 1. Therefore we can attach the 2-cell \(\mathbf{c}(v,I)\) along \(\partial\mathbf{c}(v,I)\) to the Cayley graph \(\operatorname{Cay}W\). By definition, the _Cayley complex_\(\operatorname{Cay}^{+}W\) is the 2-dimensional complex obtained by attaching the 2-cells \(\mathbf{c}(v,I)\), where \(I\) varies over \(\mathfrak{P}\) and \(v\) varies over a set of representatives of \(W/W_{I}\). It remains to add one remark to complete the definition of \(\operatorname{Cay}^{+}W\). The group \(W_{I}\) acts (by the right action) on the coset \(vW_{I}\), and this action can be extended to the cell \(\mathbf{c}(v,I)\). The two generators \(s\) and \(t\) of \(W_{I}\) are the median reflections of the square \(\mathbf{c}(v,I)\). We require that the \(2\)-cells \(\mathbf{c}(v,I)\) are glued compatibly with the \(W_{I}\)-action. It follows that the \(W\)-action on \(\operatorname{Cay}W\) extends naturally to \(\operatorname{Cay}^{+}W\). Note that any word \(w=s_{1}s_{2}\ldots s_{n}\) in \(\mathcal{W}_{S}\) defines a path, denoted by \(|w|\), in the Cayley graph \(\operatorname{Cay}W\). The \(n\) successive edges of \(|w|\) are \[(1,s_{1}),(s_{1},s_{1}s_{2})\ldots(s_{1}s_{2}\ldots s_{n-1},s_{1}s_{2}\ldots s_{n}).\] Let \(v\in W\). By definition the path \(|w|\) is based at \(1\) and the path \(v.|w|\) is based at \(v\). By \(W\)-equivariance, it is clear that * for any \(w_{1},w_{2}\in\mathcal{W}_{S}\), the paths \(|w_{1}|\) and \(|w_{2}|\) are homotopic in \(\operatorname{Cay}^{+}W\) iff \(v.|w_{1}|\) and \(v.|w_{2}|\) are homotopic, and * for any loop \(w\in\mathcal{W}_{S}\), \(|w|\) is null-homotopic in \(\operatorname{Cay}^{+}W\) iff \(v.|w|\) is. Therefore the next statement and its proof only involve loops based at \(1\).
For \(s\in S\) and \(I\subset S\), set \[I(s)=\{t\in I\mid m_{s,t}=2\}.\] **Lemma 5**.: _Let \(u\in\mathcal{W}_{I(s)}\) be a reduced word. Then the paths \(|sus|\) and \(|u|\) are homotopic in \(\operatorname{Cay}^{+}W\)._ Proof.: By definition, we have \(u=t_{1}\ldots t_{n}\) where all \(t_{i}\) commute with \(s\) and \(n=l(u)\). For each integer \(i\) with \(1\leq i\leq n\), let \(I_{i}=\{s,t_{i}\}\) and \(\mathbf{c_{i}}=\mathbf{c}(t_{1}t_{2}\ldots t_{i-1},I_{i})\). Then \(\mathbf{c_{1}}\cup\mathbf{c_{2}}\cdots\cup\mathbf{c_{n}}\) is a rectangle of \(\operatorname{Cay}^{+}W\). As shown in Figure 1, the lower side of the rectangle is the path \(|t_{1}t_{2}\ldots t_{n}|\) and the three other sides represent the path \(|st_{1}t_{2}\ldots t_{n}s|\). It follows that inside \(\operatorname{Cay}^{+}W\), the path \(|sus|\) and \(|u|\) are homotopic. ### Gal's Theorem Following [7], a partition \(S=R\sqcup B\) of \(S\) is called a _Gal's partition_ if \(m_{s,t}\) is an even integer or is infinite for any \(s\in R\) and \(t\in B\). For a Gal's partition \(S=R\sqcup B\), there are group homomorphisms \(\mu_{R}:W\to W_{R}\) and \(\mu_{B}:W\to W_{B}\) uniquely defined by \[\begin{array}{c}\mu_{R}(s)=s\text{ if }s\in R\text{ and }\mu_{R}(s)=1\text{ if }s\not\in R,\\ \mu_{B}(s)=s\text{ if }s\in B\text{ and }\mu_{B}(s)=1\text{ if }s\not\in B.\end{array}\] Set \[\operatorname{Ker}\mu_{R}:=\overline{W}_{B}.\] The notation \(\overline{W}_{B}\) is intended to emphasise that \(\operatorname{Ker}\mu_{R}\) is the normal closure of \(W_{B}\)[7]. Set \(\overline{B}=\cup_{t\in B}t^{W_{R}}\). **Theorem 6**.: _(Gal) Let \(S=R\sqcup B\) be a Gal's partition._ _Then the pair \((\overline{W}_{B},\overline{B})\) is a Coxeter system, possibly of infinite rank. Moreover, its Coxeter matrix \((m_{\sigma,\tau})_{\sigma,\tau\in\overline{B}}\) is defined by_ 1. _If_ \(\sigma=s^{w}\) _and_ \(\tau=t^{w}\) _for some_ \(s,t\in B\) _and_ \(w\in W_{R}\)_, then_ \(m_{\sigma,\tau}=m_{s,t}\)_,_ 2. _If_ \(\sigma=t^{w}\) _and_ \(\tau=t^{ws}\) _for some_ \(t\in B\)_,_ \(s\in R\) _and_ \(w\in W_{R}\)_, then_ \(m_{\sigma,\tau}=m_{s,t}/2\)_, and_ 3. _otherwise, we have_ \(m_{\sigma,\tau}=\infty\)_._ For the proof, see Proposition 2.1 and Corollary 3.1 of [7]. _2.8 Right-angled partitions_ In order to use Gal's Theorem, we will restrict ourselves to a certain type of Gal's partitions. Recall that a Coxeter group is called _right-angled_ if the non-diagonal entries of its Coxeter matrix are \(2\) or \(\infty\). By analogy, a partition \(S=R\sqcup B\) of \(S\) will be called a _right-angled_ partition if \(m_{s,t}=2\) or \(\infty\), for any \(s\in R\) and \(t\in B\). **Lemma 7**.: _Let \(S=R\sqcup B\) be a right-angled partition. Then we have_ \[\gamma(\overline{W}_{B})=\gamma(W_{B}).\] Proof.: Let \(\sigma,\tau\in\overline{B}\). If \(\sigma=t^{w}\), \(\tau=t^{ws}\) for some \(t\in B\), \(s\in R\) and \(w\in W_{R}\) such that \(m_{s,t}\) is finite, we have \(m_{s,t}=2\) and \(\sigma=\tau\). Hence \(m_{\sigma,\tau}\) is a diagonal entry of the Coxeter matrix of \(\overline{W_{B}}\). Otherwise, by Theorem 6, we have \(m_{\sigma,\tau}=m_{s,t}\) for some \(s,t\in B\) or \(m_{\sigma,\tau}=\infty\). It follows that \(\gamma(\overline{W}_{B})=\gamma(W_{B})\). _2.9 Loops for Coxeter groups with a right-angled partition_ Let \(I\) be a subset of \(S\). For any word or cyclic word \(w=s_{1}s_{2}\ldots s_{n}\in\mathcal{W}_{S}\), let \(l_{I}(w)\) be the number of its letters in \(I\).
Therefore for any partition \(S=R\sqcup B\) of \(S\), we have \[l(w)=l_{R}(w)+l_{B}(w).\] **Lemma 8**.: _Let \(S=R\sqcup B\) be a right-angled partition. Let \(w\) be a cyclic loop on the alphabet \(S\) such that \(l_{R}(w)<\gamma(W_{R})\) and \(l_{B}(w)<\gamma(W_{B})\). Then one of the following assertions holds_ 1. \(w\) _contains a substring sus, where_ \(s\in B\) _and_ \(u\) _is a reduced word on the alphabet_ \(R(s)\)_, or_ 2. \(w\) _contains a substring_ \(tt\)_, where_ \(t\in R\)_._ Proof.: If \(l_{B}(w)=0\), then Assertion (2) holds by Corollary 3. From now on, let us assume that \(l_{B}(w)>0\). As \(w\) is a cyclic word, we can write it as \[w=u_{1}s_{1}u_{2}s_{2}\ldots u_{k}s_{k},\] where \(k=l_{B}(w)\), \(s_{1},s_{2}\ldots s_{k}\) are in \(B\) and \(u_{1},\ldots,u_{k}\) are in \(\mathcal{W}_{R}\). (It is not excluded that some words \(u_{i}\) have length zero.) As usual, the indices \(1,2\ldots k\) are viewed as elements of \(\mathbb{Z}/k\mathbb{Z}\). Set \(v_{1}:=\overline{u_{1}}\), \(v_{2}:=\overline{u_{1}u_{2}}\), \(v_{3}:=\overline{u_{1}u_{2}u_{3}}\dots\). For any index \(i\in\mathbb{Z}/k\mathbb{Z}\), set \(\sigma_{i}=s_{i}^{v_{i}}\). Since \(\mu_{R}(\overline{w})=1\), it follows that \(v_{k}:=\overline{u_{1}u_{2}\ldots u_{k}}=1\). Therefore the identity \(\overline{w}=1\) is equivalent to \[\sigma_{1}\sigma_{2}\ldots\sigma_{k}=1\] By definition, each \(\sigma_{i}\) belongs to \(\overline{B}\). Moreover, as a word on the alphabet \(\overline{B}\), the word \(\sigma_{1}\sigma_{2}\ldots\sigma_{k}\) is a loop. By Lemma 7 we have \[\gamma(\overline{W}_{B})=\gamma(W_{B}).\] Since \(k<\gamma(\overline{W}_{B})\), it follows from Corollary 3 that there are two indices \(i,j\in\mathbb{Z}/k\mathbb{Z}\) such that \(\sigma_{i}=\sigma_{i+1}\) and \(\sigma_{j}=\sigma_{j+1}\). We can choose \(i\) and \(j\) in such a way that \(l(u_{i+1})\leq l(u_{j+1})\). Since we have \(l(u_{i+1})+l(u_{j+1})\leq l_{R}(w)<\gamma(W_{R})\), we have \[l(u_{i+1})<\gamma(W_{R})/2.\] Set \(s=s_{i}\), \(s^{\prime}=s_{i+1}\) and \(u=u_{i+1}\). The equality \(\sigma_{i}=\sigma_{i+1}\) is equivalent to \[s=s^{\prime\overline{u}}.\] By Assertion (2) of Lemma 4, we have \(s=s^{\prime}\). Therefore \(w\) contains the substring \[sus,\] where \(l(u)<\gamma(W_{R})/2\). To finish the proof, let us consider two cases. _Case 1:_\(u\) is not reduced. By Corollary 2, the word \(u\) contains the substring \(tt\) for some \(t\in R\), therefore Assertion (2) holds. _Case 2:_\(u\) is reduced. Since \(s=s^{\overline{u}}\), it follows from Assertion (1) of Lemma 4 that \(\overline{u}\) belongs to \(W_{R(s)}\). Therefore \(u\) is a word on the alphabet \(R(s)\), and Assertion (1) holds. ### Homotopically trivial paths in \(\operatorname{Cay}^{+}W\) **Theorem 9**.: _Let \(S=R\sqcup B\) be a right-angled partition. Let \(w\) be a cyclic loop on the alphabet \(S\) such that \(l_{R}(w)<\gamma(W_{R})\) and \(l_{B}(w)<\gamma(W_{B})\)._ _Then \(|w|\) is null-homotopic in \(\operatorname{Cay}^{+}W\)._ Proof.: If \(w\) contains a substring \(tt\), \(|w|\) is homotopic in \(\operatorname{Cay}W\) to the loop \(w^{\prime}\) obtained by deleting this substring. Otherwise, by Lemma 8, \(w\) contains a substring \(sus\), where \(s\in B\) and \(u\) is a reduced word on the alphabet \(R(s)\). Hence by Lemma 5, the loop \(|w|\) is homotopic to the loop \(w^{\prime}\) obtained by replacing the substring \(sus\) by \(u\). In both cases, we have \(l(w^{\prime})=l(w)-2\), while \(l_{R}(w^{\prime})<\gamma(W_{R})\) and \(l_{B}(w^{\prime})<\gamma(W_{B})\).
Therefore by induction, \(|w|\) is null-homotopic in \(\operatorname{Cay}^{+}W\). ## 3. Uniformization of \(2k\)-Regular Tesselations Let \(k\) be an integer. For simplicity, it will be assume that \(k\geq 3\) in the whole section. For \(k=1\) or \(2\), the theory is not difficult, but some details are slighty different. We define the notion of \(2k\)-regular standard hexagonal tesselation of a hyperbolic surface \(\Sigma\). Except where stated otherwise, all tesselations considered in the paper have hexagonal tiles, so we will skip the term hexagonal in what follows. We will first show a formal uniformization theorem for these tesselations. There is a universal surface \(\mathcal{H}\), endowed with such a tesselation, on which a certain Coxeter group \(W(k)\) acts, such that any closed surface with a \(2k\)-regular standard tesselation is isometric to \(\mathcal{H}/H\) for some finite index subgroup of \(W(k)\). Conversely, a finite index subgroup \(H\subset W(k)\) for which the tesselation on \(\mathcal{H}/H\) is \(2k\)-regular can be readily characterised. By definition, the _curves_ of a tesselation are the maximal geodesics containing the sides of the tiles. All curves of a \(2k\)-regular tesselation have length \(2k\operatorname{arcosh}2\). This leads to the question - are the set of systoles and the set of curves of the tesselation of \(\mathcal{H}/H\) identical? This is partly answered by the main result of Section 3, namely the Criterion 18. The surface \(\mathcal{H}\) has two realizations. The first one, denoted by \(\mathcal{H}(k)\), is a quotient of the Poincare half-plane \(\mathbb{H}\). The second realization is the Coxeter complex \(\operatorname{Cay}^{+}W(k)\). The proof of Criterion 18 uses these two realizations of \(\mathcal{H}\), and it combines standard hyperbolic trigonometry and Theorem 9 of the previous section. _3.1 The decorated hexagons \(\mathcal{P}\) and \(\overline{\mathcal{P}}\)_ Let \(\mathbb{H}\) be the Poincare half-plane, endowed with its hyperbolic metric. Recall that a hexagon \(P\subset\mathbb{H}\) is _regular_ if its automorphism group is flag-transitive, i.e. if \(\operatorname{Aut}P\) acts transitively on the pairs \((\mathbf{e},v)\), where \(\mathbf{e}\) is a side and \(v\in\mathbf{e}\) is a vertex of \(P\). The following lemma follows readily from hyperbolic trigonometry, e.g. [13] pp.90-96. **Lemma 10**.: _Up to isometry, there exists a unique hyperbolic right-angled hexagon \(P\) whose side lengths are all equal. Moreover \(P\) is regular and the common length of its sides is \(L:=\operatorname{arcosh}2\)._ By definition, the _decorated hexagon_\(\mathcal{P}\) is the oriented hexagon \(P\), whose sides, \(S_{1},S_{2}\ldots S_{6}\) are indexed by \(\mathbb{Z}/6\mathbb{Z}\) in an anti-clockwise direction. (The orientation of \(P\) induces an orientation of its sides.) Let \(\overline{\mathcal{P}}\) be the same hexagon with opposite orientation. By definition the _red sides_ are \(S_{1},S_{3}\) and \(S_{5}\), and the other three are called the _blue sides_. _3.2 The \(2k\)-regular standard tesselations_ Let \(\Sigma\) be an oriented hyperbolic surface, finite or infinite, and let \(\tau\) be a tesselation of \(\Sigma\) whose tiles are the decorated hexagons \(\mathcal{P}\) or \(\overline{\mathcal{P}}\). The tesselation \(\tau\) is called a _standard tesselation_ if it satisfies the following axioms (AX1) The tiles are glued along sides of the same index, (AX2) Each vertex of the tesselation has valence four. 
In this definition, it should be understood that the tiles of a standard tesselation _are always the decorated right-angled regular hexagons_\(\mathcal{P}\) and \(\overline{\mathcal{P}}\). The second axiom implies that the sum of the four angles at each vertex is \(2\pi\), so it is equivalent to the fact that \(\Sigma\) has no boundary. Let \(\tau\) be a standard tesselation. By definition, a _curve of the tesselation \(\tau\)_ is a maximal geodesic in the \(1\)-skeleton of \(\tau\). Since the angle between two adjacent edges of same index is \(\pi\), each curve \(C\) is a maximal geodesic of \(\Sigma\) consisting of a union of adjacent edges of the same index \(i\). This common index \(i\) is called the _index_ of the curve \(C\). Given a positive integer \(k\), a standard tesselation \(\tau\) is called \(k\)-_regular_ if it satisfies the following additional requirement (AX3) Each curve \(C\) of the tesselation is a closed geodesic of length \(kL\). For a standard tesselation \(\tau\) of \(\Sigma\) satisfying (AX3), each curve \(C\) of index \(i\) alternately meets curves of index \(i+1\) and curves of index \(i-1\), so \(k\) is an even integer. From now on, we will only speak about \(2k\)-regular tesselations to emphasise the fact that \(2k\) is an even integer. Let \(\mathcal{T}ess(\Sigma,2k)\) be the set of all \(2k\)-regular standard tesselations of \(\Sigma\). Once again, it is tacitly assumed that the tiles are the hexagons \(\mathcal{P}\) and \(\overline{\mathcal{P}}\). ### The universal tesselated surface \(\mathcal{H}(k)\) Let \(\mathbb{H}\) be the Poincare half-plane. By Lemma 10, there is a unique right-angled regular hexagon. Hence, by the Poincare polygon Theorem, there exists a unique (up to isometry) standard tesselation \(\tau_{\infty}\) of \(\mathbb{H}\). Let us choose one tile \(\tilde{T}\) of the tesselation \(\tau_{\infty}\) and let \(\tilde{S}_{i}\) be its side of index \(i\). The tile \(\tilde{T}\) will be called _the distinguished tile_ of \(\tau_{\infty}\). For \(i\in\mathbb{Z}/6\mathbb{Z}\), let \(\Delta_{i}\) be the line containing the side \(\tilde{S}_{i}\) and let \(s_{i}\) be the reflection across the line \(\Delta_{i}\). The subgroup of \(\mathrm{PGL}_{2}(\mathbb{R})\) generated by the six reflections \((s_{i})_{i\in\mathbb{Z}/6\mathbb{Z}}\) is the right-angled Coxeter group \(W(\infty)\) with presentation \[\langle(s_{i})_{i\in\mathbb{Z}/6\mathbb{Z}}\mid s_{i}^{2}=1\,\text{and}\,(s_{i }s_{i+1})^{2}=1,\,\forall i\in\mathbb{Z}/6\mathbb{Z}\rangle.\] Let \(k\geq 2\) and let \(N(k)\) be the normal subgroup of \(W(\infty)\) generated by the elements \((s_{i}s_{i+2})^{k}\), for \(i\in\mathbb{Z}/6\mathbb{Z}\), and all their conjugates. The group \(W(k):=W(\infty)/N(k)\) is the Coxeter group with presentation \[\langle(s_{i})_{i\in\mathbb{Z}/6\mathbb{Z}}\mid s_{i}^{2}=1,\,(s_{i}s_{i+1})^{ 2}=1,\,\text{and}\,(s_{i}s_{i+2})^{k}=1,\,\forall i\in\mathbb{Z}/6\mathbb{Z}\rangle\] Set \(W(\infty)^{+}:=W(\infty)\cap PSL_{2}(\mathbb{R})=\{w\in W(\infty)\mid\epsilon( w)=1\}\). It follows from the Poincare Theorem that \(W(\infty)\) acts freely and transitively on the set of tiles of \(\tau_{\infty}\). So a nontrivial element \(w\in W(\infty)^{+}\) is elliptic iff it is conjugate to \(s_{i}s_{i+1}\) for some \(i\in\mathbb{Z}/6\mathbb{Z}\). Since \(k\geq 2\), the subgroup \(N(k)\) acts freely on \(\mathbb{H}\). Set \(\mathcal{H}(k)=\mathbb{H}/N(k)\) and let \(\tau_{k}\) be the standard tesselation of \(\mathcal{H}(k)\) induced by \(\tau_{\infty}\). 
Note that \(s_{i-1}s_{i+1}(\Delta_{i})=\Delta_{i}\) and its restriction to the line \(\Delta_{i}\) is a translation of length \(2L\). It follows that \(\tau_{k}\) is a \(2k\)-regular tesselation. The _distinguished tile_ of \(\mathcal{H}(k)\), denoted by \(T\), is the image of \(\tilde{T}\) in \(\mathcal{H}(k)\). Set \(W(k)^{+}=\{w\in W(k)\mid\epsilon(w)=1\}\) and set \(t_{i}=s_{i-1}s_{i+1}\) for any \(i\in\mathbb{Z}/6\mathbb{Z}\). For any element \(t\in W(k)\), let \(t^{W(k)}\) be its conjugacy class. When a subgroup \(H\subset W(k)^{+}\) acts freely on \(\mathcal{H}(k)\), let \(\tau_{H}\) be the tesselation induced by \(\tau_{k}\) on the surface \(\mathcal{H}(k)/H\). Note that the condition \(H\subset W(k)^{+}\) ensures that \(\mathcal{H}(k)/H\) is orientable. **Lemma 11**.: _A subgroup \(H\subset W(k)^{+}\) acts freely on \(\mathcal{H}(k)\) iff_ _(11.1) \(H\cap(s_{i}s_{i+1})^{W(k)}=\emptyset\) for all \(i\in\mathbb{Z}/6\mathbb{Z}\)._ _Moreover assume that \(H\) satisfies the condition (11.1). Then the tesselation \(\tau_{H}\) is \(2k\)-regular iff the following condition holds_ _(11.2) \(H\cap(t_{i}^{l})^{W(k)}=\emptyset\) for all \(i\in\mathbb{Z}/6\mathbb{Z}\) and \(1\leq l<k\)._ Proof.: Let \(w\in H\). Since \(W(k)\) is tile-transitive, \(w\) has a fixed point in \(\mathcal{H}(k)\) iff \(w^{v}\) has a fixed point in \(T\), for some \(v\in W(k)\). By hypothesis, \(w^{v}\) is not a reflection therefore \(w^{v}=s_{i}s_{i+1}\) for some \(i\) in \(\mathbb{Z}/6\mathbb{Z}\), which proves the first assertion. Assume that \(\tau_{H}\) is not \(2k\)-regular. By assumption, there is a curve \(C\) of the tesselation \(\tau_{k}\) whose image in \(\mathcal{H}(k)/H\) has length less than \(kL\). Since \(W(k)\) is tile-transitive, we can assume that \(C\) contains the side \(S_{i}\) of the distinguished tile \(T\), for some \(i\in\mathbb{Z}/6\mathbb{Z}\). We have \(h(C)=C\), for some nontrivial \(h\in H\). Since it has no fixed points, \(h|_{C}\) is a rotation. It follows that \(h=t_{i}^{l}\) for some \(\in\mathbb{Z}/6\mathbb{Z}\), which proves the second assertion. Set \[\mathcal{T}ess(*,2k)=\cup_{\Sigma}\,\mathcal{T}ess(\Sigma,2k),\] where \(\Sigma\) varies over all oriented closed hyperbolic surfaces of genus \(g\geq 2\). In what follows, we will only use the previous Lemma 11. However we would like to briefly explaine that \(\mathcal{H}(k)\) is the universal cover of all \(2k\)-regular standard tesselations, as will now be shown. **Theorem 12**.: _The map \(H\mapsto\tau_{H}\) is a one-to-one correspondence between_ 1. _the finite index subgroups_ \(H\subset W(k)^{+}\) _satisfying (_11.1_) and (_11.2_), and_ 2. _the_ \(2k\)_-regular standard tesselations_ \(\tau\) _of closed oriented surfaces._ In the previous statement, it should be understood that the word "subgroups" refers to conjugacy classes of subgroups and "tesselations" refers to isometry classes of tesselations. Proof.: The Lemma 11 shows that this map is well-defined. We will now define the inverse map. Let \(\tau\in\mathcal{T}ess(*,2k)\). By definition, \(\tau\) is a tesselation of some oriented closed surface \(\Sigma\). The tesselation \(\tau\) induces a standard tesselation of its universal cover \(\mathbb{H}\). Since the induced tesselation is isometric to \(\tau_{\infty}\), there is an embedding \(\pi_{1}(\Sigma,p)\subset W(\infty)\), where \(p\in\Sigma\) is a base point. Since \(\tau\) is \(2k\)-regular, it follows that \(\pi_{1}(\Sigma,p)\supset N(k)\), and therefore \(\tau=\tau_{{}_{H}}\), where \(H=\pi_{1}(\Sigma,p)/N(k)\). 
it Remark. The universal tesselated surfaces \(\mathcal{H}(k)\) can be defined for \(k=1\) and \(2\). In fact \(\mathcal{H}(1)\) is the genus \(2\) surface of [15], which was the starting point of our paper. ### The homeomorphism \(Cay^{+}W(k)\simeq\mathcal{H}(k)\) Since \(W(k)\) acts freely and transitively on the set of tiles, the Cayley graph \(\operatorname{Cay}W(k)\) can be identified with the dual graph of the tesselation \(\tau_{k}\). Let us recall more precisely the definition of the _dual graph_\(\tau_{k}^{*}\) of \(\tau_{k}\). Let \(T\) be the distinguished tile of \(\mathcal{H}(k)\) and \(X\) be its center. For \(w\in W(k)\), set \(T(w)=w.T\), \(X(w)=w.X\) and, for any \(i\in\mathbb{Z}/6\mathbb{Z}\), let \(S_{i}(w)\) be the side of \(T(w)\) of index \(i\). By definition, \[V(\tau_{k}^{*}):=\{X(w)\mid w\in W(k)\}\] is the _set of vertices_ of \(\tau_{k}^{*}\). For \(w\in W(k)\) and \(i\in\mathbb{Z}/6\mathbb{Z}\), \[\mathbf{O}:=T(w)\cup T(ws_{i})\] is a right-angled ocogon with two sides of length \(2L\) and all others of length \(L\). Let \(\mathbf{e}(w,ws_{i})\) be the geodesic arc joining \(X(w)\) and \(X(ws_{i})\) in \(\mathbf{O}\). In this instance, the arcs are not oriented, so \(\mathbf{e}(w,ws_{i})=\mathbf{e}(ws_{i},w)\). By definition \[E(\tau_{k}^{*}):=\{\mathbf{e}(w,ws_{i})\mid w\in W(k)\,\text{and}\,i\in \mathbb{Z}/6\mathbb{Z}\}\] is the _set of edges_ of \(\tau_{k}^{*}\). The duality property means that each tile \(T(w)\) contains exactly one vertex of \(\tau_{k}^{*}\), namely \(X(w)\), and each of its sides \(S_{i}(w)\) meets exactly one edge of \(\tau_{k}^{*}\), namely \(\mathbf{e}(w,ws_{i})\). The natural homeomorphisms between an edge \(|(w,ws_{i})|\) of \(\operatorname{Cay}W(k)\) and an edge \(\mathbf{e}(w,ws_{i})\) of \(\tau_{k}^{*}\) provide a natural homeomorphism \[\operatorname{Cay}W(k)\simeq\tau_{k}^{*}.\] It follows that the topological graph \(\operatorname{Cay}W(k)\) is embedded in \(\operatorname{Cay}^{+}W(k)\) and in \(\mathcal{H}(k)\). In fact, the two embeddings of \(\operatorname{Cay}W(k)\) are the same, as it is shown in the next **Lemma 13**.: _The embedding \(\operatorname{Cay}W(k)\subset\mathcal{H}(k)\) extends to a homeomorphism \(\operatorname{Cay}^{+}W(k)\simeq\mathcal{H}(k)\)._ Proof.: By definition, \(\operatorname{Cay}^{+}W(k)\) is tesselated by quadrilaterals and \(\mathcal{H}(k)\) is tesselated by hexagons. Roughly speaking, it will be shown that these two tesselations are dual to each other, see Figure 2. Since \(k\geq 3\), the rank-two commutative parabolic subgroups of \(W(k)\) are the subgroups \(W_{I(i)}\), where \(I(i)=\{s_{i},s_{i+1}\}\) and \(i\in\mathbb{Z}/6\mathbb{Z}\). For \(v\in W(k)\), it is clear that \[\mathbf{H}=T(v)\cup T(vs_{i})\cup T(vs_{i+1})\cup T(vs_{i}s_{i+1})=W(k)_{I(i)}. T(v)\] is a right-angled 12-gon with four sides of length \(2L\) and all others of length \(L\). Let \(\mathbf{Q}(v,i)\) be the quadrilateral contained in \(\mathbf{H}\) whose set of vertices is \[W(k)_{I(i)}.X(v)=\{X(v),X(vs_{i}),X(vs_{i+1}),X(vs_{i}s_{i+1})\}\] as shown in Figure 2. Consequently, we have \(\mathbf{Q}(v,i)=\mathbf{Q}(v^{\prime},i)\) if \(v=v^{\prime}\operatorname{mod}W(k)_{I(i)}\). Set \(A_{i}(v)=S_{i}(v)\cap S_{i+1}(v)\) and let \(M_{i}(v)\) be the midpoint \(S_{i}(v)\) and \(\Omega_{i}(v)\subset T(v)\) be the convex quadrilateral with vertices given by \(M_{i}(v)\), \(A_{i}(v)\), \(M_{i+1}(v)\) and \(X(v)\). The tile \(T(v)\) is tesselated by the six quadrilaterals \(\Omega_{i}(v)\). 
Since \[\mathbf{Q}(v,i)\cap T(v)=\Omega_{i}(v)\] the set of quadrilaterals \[\{\mathbf{Q}(v,i)\mid i\in\mathbb{Z}/6\mathbb{Z}\,\text{and}\,v\in W(k)\}\] tesselates \(\mathcal{H}(k)\). Since \(\operatorname{Cay}W(k)\) has been identified with the graph \(\tau_{k}^{*}\subset\mathcal{H}(k)\), there is an equality \[\partial\mathbf{Q}(v,i)=\partial\mathbf{c}(v,I_{i}).\] which can be extended to a homeomorphism \[\mathbf{Q}(v,i)\simeq\mathbf{c}(v,I_{i}).\] Therefore \(\operatorname{Cay}^{+}W(k)\) is homeomorphic to \[\cup_{i,v}\mathbf{Q}(v,i)\] where \(i\) varies over \(\mathbb{Z}/6\mathbb{Z}\) and \(v\) over \(W(k)\). It follows that \(\operatorname{Cay}^{+}W(k)\) is homeomorphic to \(\mathcal{H}(k)\). Figure 2. The quadrilateral from the proof of Lemma 13. ### The combinatorial datum \(\omega(\gamma)\) associated to arcs in \(\mathcal{H}(k)\) We will now start to investigate the length of the closed geodesics of the surfaces \(\mathcal{H}(k)/H\). To do so, we will first look at the lengths of the arcs \(\gamma\) in \(\mathcal{H}(k)\). In this section, we will associate a word \(\omega(\gamma)\) over the letters \((s_{i})_{i\in\mathbb{Z}/6\mathbb{Z}}\). Its length will be called the combinatorial length of an arc. Then, we will find a lower bound for the combinatorial length of the closed geodesic. The relation between the hyperbolic lengths and the hyperbolic lengths will be examined in the next subsection. We will now provide the precise definition of the combinatorial datum \(\omega(\gamma)\) associated to some geodesic paths \(\gamma\). Let \(\operatorname{Arc}_{T}(\mathcal{H}(k))\) be the set of geodesic arcs \(\gamma:[0,l]\to\mathcal{H}(k)\) of length \(l\) such that 1. \(\gamma(t)\) lies in \(T^{0}\) for small \(t\neq 0\) 2. \(\gamma(0)\in C\) and \(\gamma(l)\in C^{\prime}\) for some curves \(C,C^{\prime}\) of \(\tau_{k}\), where \(T^{0}\) is the interior of the distinguished cell \(T\) of \(\tau_{k}\). Note that the first condition implies that \(\gamma\) is not an arc of a curve of \(\tau_{k}\). Of course, the previous conditions do not ensure that \(\gamma\) necessarily lifts a closed geodesic in some \(\mathcal{H}(k)/H\). It is the case only if the indices of \(C\) and \(C^{\prime}\) are equal together with some position and angle conditions for \(\gamma(0)\) and \(\gamma(l)\). In the notation \(\operatorname{Arc}_{T}(\mathcal{H}(k))\) the index \(T\) emphasizes that \(\gamma\) originates in \(T\). Now we are going to define a word \[\omega(\gamma)=s_{i_{1}}s_{i_{2}}\ldots s_{i_{N}},\] associated with the path \(\gamma\). First assume that \(\gamma\) does not meet any vertex of the tesselation. Then \(i_{1}\ldots,i_{N-1}\) are the indices of the curves successively crossed by \(\gamma\) and \(i_{N}\) is the index of the the curve \(C^{\prime}\) passing through \(\gamma(l)\). Note that \(\omega(\gamma)\) does not encode the index of the initial curve \(C\). When \(\gamma\) does meet some vertices \(v\), we need a convention to precise the order of the curves that \(\gamma\) meets. When \(\gamma\) crosses a vertex at the intersection of two curves of indices \(i\) and \(i+1\), we consider that \(\gamma\) first crosses the curve of index \(i\) and then the curve of index \(i+1\). Similarly, if \(\gamma(l)\) terminates at such point, we consider that \(\gamma\) first crosses the curve of index \(i\) and then terminates on a curve of index \(i+1\). As before, the datum \(\omega(\gamma)\) does not encode any information about \(\gamma(0)\). 
The integer \(N=l(\omega(\gamma))\) is called the _combinatorial length_ of \(\gamma\). **Lemma 14**.: _Let \(\gamma\in\operatorname{Arc}_{T}(\mathcal{H}(k))\) be a closed geodesic. Then \(\gamma\) is freely homotopic to the loop \(|\omega(\gamma)|\) in \(\operatorname{Cay}W(k)\)._ Proof.: Set \(\omega(\gamma)=s_{i_{1}}s_{i_{2}}\ldots s_{i_{N}}\), where \(N\) is the combinatorial length of \(\gamma\). Let \(0<t_{1}\leq\cdots\leq t_{n}\cdots\leq t_{N-1}\leq l\) be the successive time at which \(\gamma(t_{n})\) crosses a curve of index \(i_{n}\). Also set \(t_{0}=0\) and \(t_{N}=l\). For any \(n\) with \(1\leq n\leq N\), let \(\gamma_{n}\) be the restriction of \(\gamma\) to \([t_{n-1},t_{n}]\), so we have \[\gamma=\gamma_{1}*\gamma_{2}*\cdots*\gamma_{N},\] where the \(*\) denotes the concatenation of paths. Recall that, for \(v\in W(k)\), \(T(v)\) denotes the tile \(v.T\) and \(X(v)\) is its center. Let \(v_{0},v_{1}\ldots v_{N}\) be the elements of \(W(k)\) defined by \(v_{0}=1\) and \(v_{n}=\overline{s_{i_{1}}s_{i_{2}}\ldots s_{i_{n}}}\) for \(n\geq 1\). By definition, \(\gamma(t_{n})\) belongs to the side \(T(v_{n})\cap T(v_{n-1})\) for any \(0\leq n\leq N\). Let \(\delta_{n}\subset T(v_{n})\) be the oriented geodesic arc from \(\gamma(t_{n})\) to \(X(v_{n})\), and let \(\overline{\delta}_{n}\) be the same arc with the opposite orientation. Since \(\gamma\) is a loop, we have \(v_{N}=1\) and therefore \(\delta_{N}=\overline{\delta}_{0}\). It follows that \(\gamma\) is freely homotopic to \[\overline{\delta}_{0}*\gamma_{1}*\delta_{1}*\overline{\delta}_{1}*\gamma_{1}* \cdots*\gamma_{N}*\delta_{N}=\tilde{\gamma}_{1}*\tilde{\gamma}_{2}*\cdots* \tilde{\gamma}_{N},\] where \(\tilde{\gamma}_{n}=\overline{\delta}_{n-1}*\gamma_{n}*\delta_{n}\) for any \(n\geq 1\). This is illustrated in Figure 3. By definition \(\tilde{\gamma}_{n}\) is a path joining \(X(v_{n-1})\) to \(X(v_{n})\). Since \(\tilde{\gamma}_{n}\) lies in the convex octogon \(T(v_{n-1}\cup T(v_{n})\), it is homotopic to the geodesic arc \(\mathbf{e}(v_{n-1},v_{n})\) going from \(X(v_{n-1})\) to \(X(v_{n})\). Hence \(\gamma\) is freely homotopic to \[\mathbf{e}(v_{0},v_{1})*\mathbf{e}(v_{1},v_{2})\cdots*\mathbf{e}(v_{N-1},v_{N }),\] which is precisely the loop \(|\omega(\gamma)|\) in \(\operatorname{Cay}W(k)\). Set \(S=\{s_{i}\mid i\in\mathbb{Z}/6\mathbb{Z}\}\), \(R=\{s_{1},s_{3},s_{5}\}\) and \(B=\{s_{2},s_{4},s_{6}\}\). The elements of \(R\), respectively of \(B\), are called the _red letters_, respectively the blue letters. For any word \(w=s_{i_{1}}s_{i_{2}}\dots s_{i_{N}}\in\mathcal{W}_{S}\), let \(l_{R}(w)\) (respectively \(l_{R}(w)\)) be the number of red (respectively blue) letters in \(w\). **Corollary 15**.: _Assume that \(k\) is even. Let \(\gamma\in\operatorname{Arc}_{T}(\mathcal{H}(k))\) be a closed geodesic. Then we have_ \[l_{R}(\omega(\gamma))\geq 2k\text{ or }l_{B}(\omega(\gamma))\geq 2k.\] Proof.: Assume otherwise, i.e. \(l_{R}(\omega(\gamma))<2k\) and \(l_{B}(\omega(\gamma))<2k\). Since \(k\) is even, the partition \(S=R\cup B\) is right-angled. Therefore by Theorem 9, the path \(|\omega(\gamma)|\) is null-homotopic in \(\operatorname{Cay}^{+}W(k)\). By Lemma 13\(\operatorname{Cay}^{+}W(k)\) is homeomorphic to \(\mathcal{H}(k)\) and by Lemma 14\(\gamma\) is freely homotopic to \(|\omega(\gamma)|\). This contradicts the fact that no closed geodesic is null-homotopic on a hyperbolic surface. 
### Combinatorial length versus geometric length Let \(\gamma\in\operatorname{Arc}_{T}(\mathcal{H}(k))\) be a geodesic arc of \(\mathcal{H}(k)\). In this section, we will compare the metric length \(L(\gamma)\) of \(\gamma\) with its combinatorial length \(l(\omega(\gamma))\). **Lemma 16**.: _Let \(\gamma\in\operatorname{Arc}_{T}(\mathcal{H}(k))\) be a geodesics arc. We have \(L(\gamma)>L\) whenever one of the following conditions is satisfied_ 1. \(l(\omega(\gamma))=1\) _and_ \(\gamma\) _joins two non-consecutive sides of a tile, or_ Figure 3. The concatenation of arcs from the proof of Lemma 14. _._ 2. \(l(\omega(\gamma))=2\)_._ Proof.: The first statement is well known, see e.g. [15][2]. Let \(\gamma\in\operatorname{Arc}_{T}(\mathcal{H}(k))\) be an arc and let \(\gamma=\gamma_{1}*\gamma_{2}\) be its factorization into arcs of combinatorial length \(1\). If \(\gamma_{1}\) or \(\gamma_{2}\) join two non-consecutive sides of a tile, then it is already proved that \(L(\gamma)>L\). Otherwise, after applying some isometry, we can assume that \(\gamma(0)\) belongs to \(S_{1}(1)\), then \(\gamma\) crosses \(S_{2}(1)\) and ends on \(S_{3}(s_{2})\), where \(S_{i}(v)\) denotes the side of index \(i\) of the tile \(T(v)\). Since \(T(1)\cup T(s_{2})\) is a convex octogon, we can lift it to the Poincare half plane. Let \(\Delta_{1}\) and \(\Delta_{3}\) be the line containing the arcs \(S_{1}(1)\) and \(S_{3}(s_{2})\). The lift of \(\gamma\) joins \(\Delta_{1}\) and \(\Delta_{3}\). Since \(S_{2}(1)=S_{2}(s_{2})\) is the common perpendicular to \(\Delta_{1}\) and \(\Delta_{3}\), we have \(L(\gamma)>L(S_{2}(1))=L\). **Lemma 17**.: _Let \(\gamma\in\operatorname{Arc}_{T}(\mathcal{H}(k))\) be a closed geodesic. If \(l_{R}(\omega(\gamma))\geq 2k\) or \(l_{B}(\omega(\gamma))\geq 2k\), then \(\gamma\) has length \(>2kL\)._ Proof.: We can assume that \(l_{R}(\omega(\gamma))\geq 2k\), and write \(\gamma=\gamma_{1}*\gamma_{2}*\cdots*\gamma_{N}\), where \(N=l_{R}(\omega(\gamma))\) and each \(\gamma_{n}\) is an arc joining a red side to another red side. If \(l(\gamma_{n})=1\), then \(\gamma\) joins two red sides of the same tile. Since these sides are not consecutive, we have \(L(\gamma_{n})>L\) by the part (1) of Lemma 16. Otherwise, we have \(l(\gamma_{n})\geq 2\) and \(L(\gamma_{n})>L\) by the part (2) of Lemma 16. Therefore each arc \(\gamma_{n}\) has length \(>L\). It follows that \[L(\gamma)=\sum_{1\leq n\leq N}\,L(\gamma_{n})>NL\geq 2kL.\] ### Systoles of \(\mathcal{H}(k)/H\) Assume now that \(k\) is even. It follows from Corollary 15 and Lemma 17 that the systoles of \(\mathcal{H}(k)\) are the curves of the tesselation. Let \(H\subset W(k)^{+}\) be a subgroup satisfying the conditions (11.1) and (11.2). Then all curves of the tessalation in \(\mathcal{H}(k)/H\) have length \(2kL\). Determining when these curves of \(\tau_{H}\) are the systoles is a delicate question. The next criterion provides a partial answer. For \(w\in W(k)\), set \(H^{w}=wHw^{-1}\). Also set \[B_{4k}:=\{w\in W(k)\mid l(w)\leq 4k\}.\] **Criterion 18**.: _Let \(H\subset W(k)^{+}\) be a subgroup such that \(H^{w}\cap B_{4k}=\{1\}\) for all \(w\in W(k)\)._ _Then \(H\) acts freely on \(\mathcal{H}(k)\) and the tesselation \(\tau_{H}\) is \(2k\)-regular. Moreover, the set of systoles of \(\mathcal{H}(k)/H\) is exactly the set of curves of \(\tau_{H}\)._ Proof.: By Lemma 11, \(H\) acts freely on \(\mathcal{H}(k)\) and the tesselation \(\tau_{H}\) is \(2k\)-regular. The curves of the tesselation \(\tau_{H}\) have length \(2kL\). 
Hence, it remains to prove that any closed geodesic \(\overline{\gamma}\) of \(\mathcal{H}(k)/H\), which is not a curve, has length \(L(\overline{\gamma})>2kL\). The choice of a distinguished tile \(\overline{T}\) was arbitrary, so we can assume that \(\overline{\gamma}\) intersects the interior of \(\overline{T}\). Hence there is a geodesic arc \(\gamma\in\operatorname{Arc}_{T}(\mathcal{H}(k))\) which lifts \(\overline{\gamma}\). Since \(L(\overline{\gamma})=L(\gamma)\), it is enough to show that \(L(\gamma)>2kL\). If \(l(\omega(\gamma))>4k\), then by Lemma 16, we have \(L(\gamma)>2kL\). Assume now that \(l(\omega(\gamma))\leq 4k\). Since we have \[H^{w}\cap B_{4k}=\{1\}\text{ for all }w\in W(k),\] it follows that \(\gamma\) is a closed geodesic of \(\mathcal{H}(k)\). By Corollary 15, we have \[l_{R}(\omega(\gamma))\geq 2k\text{ or }l_{B}(\omega(\gamma))\geq 2k,\] hence, by Lemma 16, we have \(L(\gamma)>2kL\). ## 4. The subgroup \(H(k)\) of \(W(k)\) For simplicity, we will assume that the integer \(k\) is \(\geq 3\). Set \(K:=\mathbb{Q}(\cos\pi/k)\) and let \(\mathcal{O}\) be the ring of integers of the field \(K\). In this section, we use the Tits representation \(\rho:W(k)\to GL_{6}(K)\) to find a subgroup \(H(k)\) of \(W(k)\) satisfying the hypothesis of Criterion 18 with index \([W(k):H(k)]\leq 3^{72k\phi(2k)}\), see Proposition 22. ### The Tits representation In [17], Tits defined a faithful representation of any Coxeter group. References are [17] or [5] Appendix D. Here we will describe his result for the groups \(W(k)\). Let \((\alpha_{i})_{i\in\mathbb{Z}/6\mathbb{Z}}\) be a basis of the six-dimensional vector space \(K^{6}\). There is a symmetric bilinear form \(B\) on \(K^{6}\) given by 1. \(B(\alpha_{i},\alpha_{i})=2\) 2. \(B(\alpha_{i},\alpha_{j})=0\), if \(i-j=\pm 1\), 3. \(B(\alpha_{i},\alpha_{j})=-2\cos(\pi/k)\), \(i-j=\pm 2\), and 4. \(B(\alpha_{i},\alpha_{j})=-2\), if \(i-j=\pm 3\). For any \(\alpha\in K^{6}\) with \(B(\alpha,\alpha)=2\), let \(s_{\alpha}\) be the hyperplane reflection, defined by \[s_{\alpha}(\lambda)=\lambda-B(\alpha,\lambda)\alpha,\] for any \(\lambda\in K^{6}\). The _Tits representation_ of \(W(k)\) is the group homomorphism \[\rho:W(k)\to\operatorname{GL}_{6}(K)\] defined on the generators by \(\rho(s_{i})=s_{\alpha_{i}}\), for any \(i\in\mathbb{Z}/6\mathbb{Z}\). **Theorem 19**.: _(Tits [17][5]) The representation \(\rho\) is faithful._ ### The \(l_{\infty}\)-norms of \(K\) and \(\operatorname{End}(K^{6})\) Let \(\mathcal{F}\) be the set of field embeddings \(v:K\to\mathbb{R},x\mapsto x_{v}\). The field \(K\) is totally real, and its degree is \(\phi(2k)/2\), where \(\phi\) is the Euler totient function. It follows that \(\operatorname{Card}\mathcal{F}=\phi(2k)/2\). For \(x\in K\), let \(\|x\|\) be its \(l_{\infty}\)_-norm_ defined by \[\|x\|=\operatorname{Max}_{v\in\mathcal{F}}|x_{v}|.\] We have \(\|xy\|\leq\|x\|\|y\|\). This norm should not be confused with the usual norm \(N_{K/\mathbb{Q}}(x):=\prod_{v\in\mathcal{F}}x_{v}\), which is a determinant. Let \(x\in\mathcal{O}\smallsetminus\{0\}\). 
Since \(|N_{K/\mathbb{Q}}(x)|\) is a positive integer, we have \[\|x\|\geq 1\text{ if }x\in\mathcal{O}\smallsetminus\{0\}.\] For \(i\), \(j\in\mathbb{Z}/6\mathbb{Z}\), let \(E_{i,j}\in\operatorname{End}(K^{6})\) be the linear map \[E_{i,j}:v\in K^{6}\mapsto\left\langle\alpha_{j}\mid v\right\rangle\alpha_{i}.\] Since \(B(\alpha_{i},\alpha_{j})\) is a circulant matrix, an easy computation shows that \[\det\,B(\alpha_{i},\alpha_{j})=64(3\cos^{4}(\pi/k)-4\cos^{3}(\pi/k)\] where \(\mu_{6}\) is the group of \(6^{th}\) roots of unity. Since \(k\geq 3\), the bilinear form \(B\) is nondegenerate and the set \[\left\{E_{i,j}\mid i,j\in\mathbb{Z}/6\mathbb{Z}\right\}\] is a basis of \(\operatorname{End}(K^{6})\). Therefore any element \(A\in\operatorname{End}(K^{6})\) can be written as \(A=\sum_{i,j\in\mathbb{Z}/6\mathbb{Z}}\,a_{i,j}\,E_{i,j}\), where \(a_{i,j}\in K\). Its _\(l_{\infty}\)-norm_\(\|A\|\) is defined by \[\|A\|=\operatorname{Max}_{i,j\in\mathbb{Z}/6\mathbb{Z}}\|a_{i,j}\|.\] For each \(i\in\mathbb{Z}/6\mathbb{Z}\), set \(F_{i}=E_{i,i}\) and for any word \(w=i_{1}\ldots i_{n}\) on the alphabet \(\mathbb{Z}/6\mathbb{Z}\), set \[F_{w}=F_{i_{1}}\ldots F_{i_{n}}.\] The \(l_{\infty}\)-norms of \(\operatorname{End}(K^{6})\) is not multiplicative, i.e. \(\|AB\|\) is not necessarily \(\leq\|A\|\|B\|\). Nevertheless, \(\|F_{w}\|\) can still be estimated, as shown in the next lemma. **Lemma 20**.: _Let \(w\) be a word of length \(n\) over the alphabet \(\mathbb{Z}/6\mathbb{Z}\). We have_ \[\|F_{w}\|\leq 2^{n}.\] Proof.: The Galois conjugates of \(\cos(\pi/k)\) are the numbers \(\cos(m\pi/k)\), for \(m\) prime to \(k\), hence we have \(\|B(\alpha_{i},\alpha_{j})\|\leq 2\) for any \(i,j\in\mathbb{Z}/6\mathbb{Z}\). Since \(E_{i_{1},j_{1}}E_{i_{2},j_{2}}=B(\alpha_{j_{1}},\alpha_{i_{2}})E_{i_{1},j_{2}}\), for any \(i_{1},j_{1},i_{2},j_{2}\in\mathbb{Z}/6\mathbb{Z}\), it can be proven by induction over \(n\) that \(F_{w}=a_{w}E_{i_{1},i_{n}}\) for some \(a_{w}\in\mathcal{O}\) with \(\|a_{w}\|\leq 2^{n-1}<2^{n}\), from which the lemma follows. ### The \(l_{\infty}\)-norm of the Tits representation Recall that \(\rho:W(k)\to\operatorname{GL}_{6}(K)\) denotes the Tits representation. **Lemma 21**.: _For any \(w\in W(k)\), we have_ \[\|\rho(w)-1\|<\,3^{l(w)}.\] Proof.: Set \(n=l(w)\) and let \(w=s_{i_{1}}\ldots s_{i_{n}}\) be a reduced decomposition of \(w\). Let \(\mathcal{V}\) be the collection of all nonempty subwords of the word \(s_{i_{1}}\ldots s_{i_{n}}\). For \(l>0\), set \(\mathcal{V}_{l}=\{v\in\mathcal{V}\mid l(v)=l\}\). Since some subwords appear more than once, \(\mathcal{V}\) and \(\mathcal{V}_{l}\) are sets with multiplicity. For example, if \(w=s_{1}s_{2}s_{3}s_{2}\), the subword \(s_{1}s_{2}\) appears twice. We have \(\rho(s_{i})=1-F_{i}\), for any \(i\in\mathbb{Z}/6\mathbb{Z}\). Hence we obtain \[\begin{array}{c}\rho(w)-1=\sum_{v\in\mathcal{V}}{(-1)^{l(v)}F_{v}},\text{ and}\\ \|\rho(w)-1\|\leq\sum_{v\in\mathcal{V}}\|F_{v}\|.\end{array}\] By Lemma 20, we have \[\|F_{v}\|\leq 2^{l(v)}.\] Since \(\operatorname{Card}\mathcal{V}_{l}=\binom{n}{l}\), we obtain \[\|\rho(w)-1\|\leq\sum_{1\leq l\leq n}(\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! 
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! Proof.: The group \(\mathcal{R}(k):=\mathcal{R}^{*}\cap(1+3^{k}\mathcal{R})\) is a normal subgroup of \(\mathcal{R}^{*}\), hence \(H(k)\) is normal. Since we have \(\det\rho(h)\equiv 1\,\mathrm{mod}\,3\), for any \(h\in H(k)\), the group \(H(k)\) lies in \(W(k)^{+}\). It remains to prove Assertions (1) and (2). _1. Proof that \(B_{4k}\cap H(k)=\{1\}\)._ Let \(w\in B_{4k}\cap H(k)\). By definition, we have \(\rho(w)-1=3^{k}A\), where \(A\) belongs to \(\mathcal{R}\).. By Lemma 20, we have \[\|3^{k}A\|=\|\rho(w)-1\|<3^{4k}=N,\] therefore \(\|A\|<1\). Since for any \(a\in\mathcal{O}\), we have \(a=0\) or \(\|a\|\geq 1\) and since \(\|A\|=\mathrm{Max}\|a_{i,j}\|\), we have \(A=0\). By Theorem 19, it follows that \(w=1\). _2. Proof that \(\mathrm{Card}\,W(k)/H(k)\leq 3^{72k\phi(k)}\)._ By definition, there is an embedding \[W(k)/H(k)\subset\mathcal{R}/N\mathcal{R}.\] As an abelian group, \(\mathcal{O}\) is isomorphic to \(\mathbb{Z}^{\phi(k)/2}\), hence \(\mathcal{R}/N\mathcal{R}\simeq\mathbb{Z}/N\mathbb{Z}^{18\phi(k)}\). It follows that \[\mathrm{Card}\,W(k)/H(k)\leq N^{18\phi(k)}=3^{72k\phi(k)}.\] ## 5. Bounds on \(\mathrm{Fill}(g)\) The last step of the proof of the bound on \(\mathrm{Fill}(g)\) stated in the Introductin involves the factor \(57/\sqrt{\ln\ln\ln g}\). This will now be shown to be a consequence of a 1904 result of E. Landau. _5.1 The set \(B\) and Landau's Theorem_ Let \(p_{1}<p_{2}<\dots\) be the ordered list of all odd prime numbers. For any \(n\geq 1\), set \(q_{n}=2\prod_{k=1}^{n}p_{i}\), and set \(B:=\{q_{1},q_{2}\dots\}=\{6,30,210\dots\}\). The following classical theorem is an improvement of the prime number Theorem of de la Vallee Poussin [6] and Hadamard [8]. **Theorem 23**.: _(Landau [10]) While \(k\) varies over \(B\), we have_ \[\phi(k)\sim e^{-\gamma}\frac{k}{\ln\ln k},\] _where \(\gamma=0.577\dots\) is Euler's constant._ For the proof, see [9], Theorem 328 p. 352. _5.2 The constant \(\delta=12\sqrt{e^{-\gamma}\ln 3}=9.42\dots\)_ Let \(k\geq 3\). By Proposition 22, there is a subgroup \(H(k)\subset W(k)^{+}\) such that 1. \(B_{4k}\cap H(k)=\{1\}\), and 2. \(\mathrm{Card}\,W(k)/H(k)\leq 3^{72k\phi(2k)}\). By Lemma 18, \(H(k)\) acts freely on \(\mathcal{H}(k)\). Let \(g_{k}\) be the genus of the closed surface \(\Sigma(k):=\mathcal{H}(k)/H(k)\) and let \(\mathrm{Sys}\,(\Sigma(k))\) be the set of systoles of \(\Sigma(k)\). Set \[\delta=12\sqrt{e^{-\gamma}\ln 3}.\] **Lemma 24**.: 1. _We have_ \[\lim_{k\to\infty}g_{k}=\infty\] _._ 2. _Let_ \(\delta^{+}>\delta\)_. For almost all_ \(k\in B\)_, we have_ \[\operatorname{Card\,\,Sys}\left(\Sigma(k)\right)\leq\frac{6\delta^{+}}{\sqrt{\ln \ln\ln g_{k}}}\,\,\frac{g_{k}}{\sqrt{\ln g_{k}}}.\] Proof.: By Lemma 18, \(\operatorname{Sys}\left(\Sigma(k)\right)\) is exactly the set of curves of the tesselation \(\tau_{H(k)}\). Since the systole length of \(\Sigma(k)\) tends to \(\infty\) as \(k\) tends to \(\infty\), so is its genus \(g_{k}\), which proves the first assertion. In the proof of the second assertion, the integer \(k\) varies over \(B\). 
Set \[\delta(k)=3^{72k\phi(2k)}=3^{144k\phi(k)}.\] By Theorem 23, we have \(\ln\delta(k)\sim 144e^{-\gamma}\ln 3k^{2}\,/\ln\ln k\), hence \[(\ln\ln k)\,\ln\delta(k)\sim\delta^{2}k^{2}. \tag{24.1}\] It follows that \[k>\sqrt{\ln\delta(k)},\,\text{for}\,\,k>>0. \tag{24.2}\] When \(k\) tends to infinity, we have \(\ln\ln\ln\delta(k)\sim\ln\ln\ln\sqrt{\delta(k)}\). Hence Equation (24.2) implies that \[(\delta^{+})^{2}\ln\ln k>\delta^{2}\ln\ln\ln\delta(k)\,\,\text{for}\,\,k>>0. \tag{24.3}\] Combining Equations (24.1) and (24.3) we get that \[(\delta^{+}k)^{2}>\ln\delta(k)\,\ln\ln\ln\delta(k)\,\,\text{for}\,\,k>>0,\] thus we have \[\tfrac{1}{k}<\tfrac{\delta^{+}}{\sqrt{\ln\ln\ln\delta(k)}}\,\tfrac{1}{\sqrt{ \ln\delta(k)}},\,\text{for}\,\,k>>0. \tag{24.4}\] Let \(f_{0},f_{1}\) and \(f_{2}\) be the number of vertices, edges and tiles of the tesselation \(\tau_{H(k)}\). Since it is a hexagonal tesselation and each vertex has valence \(4\), we have \(f_{1}=3f_{2}\) and \(f_{0}=f_{1}/2\). Since \(2(g_{k}-1)=f_{1}-f_{2}-f_{0}\), we have \(2(g_{k}-1)=f_{2}/2=[W(k):H(k)]/2\), hence we have \(g_{k}\leq\delta(k)\). It follows that \[\tfrac{1}{k}<\tfrac{\delta^{+}}{\sqrt{\ln\ln\ln g_{k}}}\,\tfrac{1}{\sqrt{\ln g _{k}}},\,\text{for}\,\,k>>0. \tag{24.5}\] The number of systoles is \(f_{1}/2k=6(g_{k}-1)/k<6g_{k}/k\). It follows from equation (24.5) that \[\operatorname{Card\,\,Syst}(\Sigma(k))<\tfrac{6\delta^{+}}{\sqrt{\ln\ln\ln g _{k}}}\,\tfrac{g_{k}}{\sqrt{\ln g_{k}}},\,\text{for}\,\,k>>0.\qed \tag{24.6}\] ### The bound for \(\operatorname{Fill}(g)\) The following statement is a stronger form of the theorem stated in the introduction. **Theorem 25**.: _There exists an infinite set \(A\) of integers \(g\geq 2\) and, for any \(g\in A\), a closed oriented hyperbolic surface \(\Sigma_{g}\) of genus \(g\), endowed with a standard hexagonal tesselation \(\tau_{g}\), satisfying the following assertions_ 1. _the set of curves of_ \(\tau_{g}\) _is the set of systoles of_ \(\Sigma_{g}\)_, and_ 2. _we have_ \[\operatorname{Card\,\,Syst}(\Sigma_{g})\leq\tfrac{57}{\sqrt{\ln\ln\ln g}}\,\, \tfrac{g}{\sqrt{\ln g}}.\] Proof.: Let us use the notations of Subsection 5.2 and set \(\delta^{+}=9.5=57/6\). By the first assertion of Lemma 24, there is an infinite subset \(B^{\prime}\subset B\) such that the map \(k\in B^{\prime}\mapsto g_{k}\in\mathbb{Z}\) is injective, Set \(A:=\{g_{k}\mid k\in B^{\prime}\}\) and, for \(g\in A\) set \(\Sigma_{g}=\Sigma(k)\) and \(\tau_{g}=\tau_{H(k)}\) where \(k\in B^{\prime}\) is uniquely defined by \(g_{k}=g\). It follows from the second assertion of Lemma 24 that \[\text{Card }\text{Sys}\left(\Sigma_{g}\right)\leq\tfrac{57}{\sqrt{\ln\ln\ln g}}\ \tfrac{g}{\sqrt{\ln g}},\text{ for any }g\in A.\qed\] ### Final remark The constant \(57\) in the theorem can be replaced by any real number \(a>6\delta=56.547\dots\). This constant can be improved in two ways. First, one can use the fact that the Tits representations lies inside the orthogonal group \(O_{6}(K,q)\), where \(q\) is the quadratic form defined by \(B\). Second, the results concerning hexagon tesselations clearly extend to \(2p\)-gon tesselations, for any \(p\geq 3\). Using octogons instead of hexagons provides a marginally better bound. We have restricted ourselves to this version in order to keep the paper as elementary as possible. 
The paper [1] and our result suggests that \(\text{Fill}(g)\) should be of "order of magnitude" \(g/(\ln g)^{\alpha}\) for some \(\alpha\) with \(1/2\leq\alpha\leq 1\), but we cannot formulate a precise conjecture at this stage.
2306.16906
Numerical Data Imputation for Multimodal Data Sets: A Probabilistic Nearest-Neighbor Kernel Density Approach
Numerical data imputation algorithms replace missing values by estimates to leverage incomplete data sets. Current imputation methods seek to minimize the error between the unobserved ground truth and the imputed values. But this strategy can create artifacts leading to poor imputation in the presence of multimodal or complex distributions. To tackle this problem, we introduce the $k$NN$\times$KDE algorithm: a data imputation method combining nearest neighbor estimation ($k$NN) and density estimation with Gaussian kernels (KDE). We compare our method with previous data imputation methods using artificial and real-world data with different data missing scenarios and various data missing rates, and show that our method can cope with complex original data structure, yields lower data imputation errors, and provides probabilistic estimates with higher likelihood than current methods. We release the code in open-source for the community: https://github.com/DeltaFloflo/knnxkde
Florian Lalande, Kenji Doya
2023-06-29T12:55:58Z
http://arxiv.org/abs/2306.16906v2
# Numerical Data Imputation for Multimodal Data Sets: ###### Abstract Numerical data imputation algorithms replace missing values by estimates to leverage incomplete data sets. Current imputation methods seek to minimize the error between the unobserved ground truth and the imputed values. But this strategy can create artifacts leading to poor imputation in the presence of multimodal or complex distributions. To tackle this problem, we introduce the \(k\)NN\(\times\)KDE algorithm: a data imputation method combining nearest neighbor estimation (\(k\)NN) and density estimation with Gaussian kernels (KDE). We compare our method with previous data imputation methods using artificial and real-world data with different data missing scenarios and various data missing rates, and show that our method can cope with complex original data structure, yields lower data imputation errors, and provides probabilistic estimates with higher likelihood than current methods. We release the code in open-source for the community1. Footnote 1: [https://github.com/DeltaFolfo/kmnxkde](https://github.com/DeltaFolfo/kmnxkde) ## 1 Background and related work As sensors are now ubiquitous and the Internet of Things has become widespread and found numerous applications, Big Data is often referred to as the "Gold of the 21st Century". However, along with the proliferation of numerical databases, missing data has become a pervasive problem: they can introduce a bias, lead to wrong conclusions, or even prevent from using data analysis tools that require complete data sets. To mitigate this issue, data imputation algorithms have been developed. From the straightforward mean/mode imputation (Little and Rubin, 2014) to recent generative adversarial networks (GAN) models (Yoon et al., 2018), a wide range of tools are available to impute incomplete data sets. As the variety and specificity of available data imputation algorithms can be overwhelming for practitioners, flexible packages like DataWig allow optimal imputation results by sweeping through several methods and automatically perform hyper-parameter tuning (Biessmann et al., 2019). Data imputation most popular application consists of recovering missing parts of an image, also known as inpainting. Deep learning methods have shown promising results for image inpainting and are therefore the preferred solutions for image recovery (Xiang et al., 2023). However, typical image features differ from tabular data. This study focuses on tabular numerical data sets, that is numerical real-valued data arranged in rows and columns in a form of a matrix. For numerical data sets, recent benchmarks argue that deep-learning imputation methods do not perform better than simple traditional algorithms (Bertsimas et al., 2018; Poulos and Valle, 2018; Jadhav et al., 2019; Woznica and Biecek, 2020; Jager et al., 2021; Lalande and Doya, 2022; Grinsztajn et al., 2022). These studies show that the \(k\)NN-Imputer (Troyanskaya et al., 2001) and MissForest (Stekhoven and Buhlmann, 2012), in spite of being simple algorithms, generally perform better over a large range of data sets in various missing data scenarios. In the presence of linear dependencies, Multiple Imputation using Chained Equations (MICE) and its variants (van Buuren and Groothuis-Oudshoorn, 2011; Khan and Hoque, 2020) can show good imputation performances. We denote \(x\in\mathbb{R}^{D}\) the complete ground truth for an observation in dimension \(D\geq 2\), and \(m\in\{0,1\}^{D}\) the missing mask. 
The observed data is presented as \(\tilde{x}=x\odot m\), where \(\odot\) denotes the element wise product. Data may be missing because it was not recorded, the record has been lost, degraded, or data may alternatively be censored. The exercise now consists in retrieving \(x\) from \(\tilde{x}\), while allowing incomplete data for modeling, and not only complete data. The probability distribution of the missing mask, \(p(m)\) is referred to as the missing data mechanism (or missingness mechanism), and depends on missing data scenarios. Following the usual classification of Little and Rubin, missing data scenarios are split into three types (Little and Rubin, 2014): missing completely at random (MCAR), missing at random (MAR) and missing not at random (MNAR). In MCAR the missing data mechanism is assumed to be independent of the data set and we can write \(p(m|x)=p(m)\). In MAR, the missing data mechanism is assumed to be fully explained by the observed variables, such that \(p(m|x)=p(m|\tilde{x})\). The MNAR scenario includes every other possible scenarios, where the reason why data is missing may depend on the missing values themselves. Numerical data imputation methods are usually evaluated using the normalized RMSE (NRMSE) between the imputed value and the ground truth. The higher the average NRMSE, the poorer the imputation results. This approach is intuitive, but is too restrictive for multimodal data sets: it assumes that there exists a unique answer for a given set of observed variables, which is not true for multimodal distributions. For multimodal data sets, density estimation methods like the Kernel Density Estimation (KDE) (Rosenblatt, 1956; Parzen, 1962) appear of interest for data imputation. But despite some attempts (Titterington and Mill, 1983; Leibrandt and Gunnemann, 2018), density estimation methods with missing values remain computationally expensive and not suitable for practical imputation purposes, mostly because they do not generalize well to real-world data sets in spite of an interesting theoretical framework. Alternatively, other works have developed Gaussian mixture density estimates with Expectation-Maximization (EM) training (Delalleau et al., 2012; McCaw et al., 2020) as well as Gaussian processes for Kernel Principal Component Analysis (KPCA) (Sanguinetti and Lawrence, 2006), but these methods also do not generalize well do heterogeneous numerical data sets in practice. Also, if the mathematical framework of the Missingness Aware Gaussian Mixture Models (MGMM) of McCaw et al. (2020) is interesting, it requires to manually search for the optimal number of Gaussians in the mixture, and is primarily focused on classification tasks. More recently, variants of collaborative filtering algorithms for Matrix Completion problems have been developed (Lee et al., 2016; Li et al., 2020) and can be used for numerical data imputation as well. However, these methods do not seem to perform better than the traditional SoftImpute algorithm (Hastie et al., 2015) for Matrix Completion. This work focuses on concurrently learning from incomplete data to model and recover missing numerical values. We first look at three simple data sets to illustrate the shortcomings of current data imputation methods with multimodal distributions. We address these issues by introducing a local density estimator that is flexible to accommodate multimodal data structures. 
By leveraging the convenient properties of the \(k\)NN-Imputer and the KDE framework, we develop the \(k\)NN\(\times\)KDE: a simple yet efficient algorithm for density estimation and data imputation of missing values in numerical data sets. Using heterogeneous real-world and simulated data sets, we show that our method performs equally or better than state-of-the-art numerical imputation methods, while providing better density estimates for missing values. The code and data used in this work are provided in open-access for the community. Problems of current imputation methods with multimodal data sets In this section, we illustrate problems of current numerical data imputation methods with multimodal data sets. For this purpose, we generate three synthetic data sets in two-dimensional space and qualitatively discuss the imputation performances of four state-of-the-art numerical imputation algorithms with two benchmark methods (column mean and column median). ### Three synthetic data sets The first data set, called 2d_linear, is a noisy linear distribution. \(x_{1}\) is sampled from a mollified uniform distribution on \([0,1]\) with standard deviation \(\sigma=0.05\). Then \(x_{2}=x_{1}+\varepsilon\), where \(\varepsilon\sim\mathcal{N}(0,0.1)\). The second data set, 2d_sine, is a sine wave with noise. We sample \(x_{1}=4\pi u\), where \(u\) is drawn from a mollified distribution on \([0,1]\) with standard deviation \(\sigma=0.05\). Then \(x_{2}=\sin x_{1}+\varepsilon\), where \(\varepsilon\sim\mathcal{N}(0,0.2)\). The noisy surjection allows to show that most imputation algorithms perform well in the unambiguous case (when \(x_{2}\) is missing), but not with multimodal distributions (when \(x_{1}\) is missing). Finally, 2d_ring displays a ring with noise. It has been generated in polar coordinates where \(\theta\sim\mathcal{U}[0,2\pi]\) and \(r=1.0+\varepsilon\), with \(\varepsilon\sim\mathcal{N}(0,0.1)\). Euclidean coordinates are \(x_{1}=r\cos\theta\) and \(x_{2}=r\sin\theta\). These three simple data sets have \(N=500\) observations and are plotted in Figure 1. The code used for generation and the data sets themselves are available on the online repository. We have used a mollified uniform distribution for \(x_{1}\) in 2d_linear and 2d_sine to prevent from zero likelihood computation problems at the edges of the uniform distribution. ### Five state-of-the-art numerical data imputation methods Here, we present four data imputation methods used in this work: the \(k\)NN-Imputer, MissForest, MICE and GAIN. This choice is of course arbitrary, but illustrates well the current state of affairs regarding tabular data imputation (Bertsimas et al., 2018; Poulos and Valle, 2018; Yoon et al., 2018; Jadhav et al., 2019; Woznica and Biecek, 2020; Jager et al., 2021; Lalande and Doya, 2022; Grinsztajn et al., 2022) **The \(k\)NN-Imputer**(Troyanskaya et al., 2001) computes distances between pairs of observations using the NaN-Euclidean distance, which can handle missing values. It imputes missing cells one column at a time by averaging over the \(k\) nearest neighbors that have an observed value for the given feature. Therefore, different neighbors can be used to impute various missing entries for the same observation. The hyperparameter \(k\) for the number of neighbors is to be optimized. **MissForest**(Stekhoven and Buhlmann, 2012) is an iterative imputation algorithm. 
MissForest starts by filling all missing values with initial estimates (typically the column mean), and loops through all columns, Figure 1: **Three basic synthetic data sets with \(N=500\) observations.** 2d_linear is a bijection, 2d_sine is a surjection, and 2d_ring displays a ring and is therefore not a function in the euclidean space. one at a time, performing a regression of that specific column onto all other columns using Random Forests. It stops when the imputed data set is stable enough (following a user-defined threshold) or when a fixed number of iterations has been performed. The number of trees used in the Random Forest algorithm is the hyperparameter to be tuned. **MICE** stands for Multiple Imputation Chained Equations (van Buuren and Groothuis-Oudshoorn, 2011). Similar to MissForest, it is an iterative imputation algorithm. MICE strictly refers to the algorithmic method which consists of filling missing values using iterative series of regression models one variable at a time. In this work, we use the standard version of MICE that uses linear regressions as a regressor to predict each column successively. This algorithm has no hyperparameter to optimize. MICE has shown good imputation results and is appreciated for its simplicity and absence of hyperparameter tuning, but it fails at capturing non-linear dependencies. **SoftImpute** is a matrix completion algorithm (Hastie et al., 2015). It works by finding a low-rank approximation of the matrix with missing values while promoting sparsity through a regularization term with coefficient \(\lambda\). The algorithm uses an iterative procedure to minimize the objective function. In each iteration, the observed entries of the matrix are used to estimate the missing entries. The estimated entries are then used to update the low-rank approximation of the matrix. This process is repeated until convergence. Finally, **GAIN** is a GAN artificial neural network tailored for tabular numerical data imputation which claims state-of-the-art numerical data imputation results (Yoon et al., 2018). GAIN smartly revisits the GAN architecture by working with individual cells rather than entire observations. It has recently benefited from a lot of attention for numerical data imputation. However, recent benchmarks show that its performances are mediocre in practice (Jager et al., 2021; Lalande and Doya, 2022; Grinsztajn et al., 2022). GAIN has several hyperparameters to tune: batch size, hint rate (amount of correct labels provided to the discriminator), number of training iterations, and weight parameter \(\alpha\) used in the generator loss. ### Imputation results We introduce missing values in MCAR setting with 20% missing rate. If an observation has both features removed, we repeat the process until at least one feature is present. After missing values have been inserted, we normalize the data set in the range \([0,1]\) using min/max normalization. For each data imputation algorithm and for each data set represented as a matrix of size \((N,D)\), we perform a grid search of the hyperparameter than best minimizes the NRMSE: \[\text{NRMSE}=\sqrt{\frac{1}{N_{\text{miss}}}\sum_{i=1}^{N}\sum_{j=1}^{D}(x_{ ij}-\widehat{x}_{ij})^{2}\left(1-m_{ij}\right)} \tag{1}\] where \(m_{ij}=1\) if cell \((i,j)\) is observed (\(m_{ij}=0\) if missing) and \(N_{\text{miss}}=\sum_{i=1}^{N}\sum_{j=1}^{D}(1-m_{ij})\) is the total number of missing entries in the data set. Imputation results provided by the best hyperparameters are plotted in Figure 2. 
Figure 2 provides a concise insight into the current state of numerical data imputation. The scientific consensus is that the \(k\)NN-Imputer and MissForest overall provide the best numerical data imputation quality, which is somewhat recovered here. MICE uses linear regression between features and cannot capture non-linear dependencies. SoftImpute uses low-rank matrix completion, hence the straight lines as well. Despite its flexible architecture, GAIN performs poorly, even on 2d_linear. GAIN, like all generative adversarial networks, is difficult to optimize because of training instabilities, mode collapse problems, potential impossibility to converge, or not well defined loss function (Saxena and Cao, 2020). Both the \(k\)NN-Imputer and MissForest average over several predictions. This is why the imputation of \(x_{1}\) for the 2d_sine data set lies between the two sine waves, and imputed values for both \(x_{1}\) and \(x_{2}\) for the 2d_ring data set are inside the ring. While averaging over several predictions often leads to better estimates, this strategy deteriorates the imputation quality if the missing values distribution is not unimodal. MICE performs imputation by assuming linear relations between features of the data set. It is therefore no surprise that MICE can very well impute data set 2d_linear, but fails at imputing data sets 2d_sine and 2d_ring. Similarly, SoftImpute uses linear combinations of the observed values as a matrix completion algorithm. GAIN provides surprisingly disappointing imputation results. While deep-learning models are flexible methods, the generator and the discriminator of GAIN fail to capture the relationship between \(x_{1}\) and \(x_{2}\) in all data sets. Yet innovative, the complex architecture of GAIN (and GANs is general) is problematic to train. This leads to bad imputation results as well as large variability between runs. ## 3 The \(k\)NN\(\times\)KDE algorithm To address the above-mentioned issues related to multimodal distributions, we propose a local stochastic imputation algorithm inspired by the \(k\)NN-Imputer and kernel density estimation. We adapt the KDE algorithm to missing data settings such that the conditional density of missing features given observed features is estimated. We use a methodology analogous to the \(k\)NN-Imputer to look for neighbors, but we work with missing patterns instead of working column by column. The reason of this choice is that working with one column at a time may lead to imputation artifacts as the selected neighbors for various imputed features can be different. Therefore, imputed observations may be incompatible with the original data structure. On the contrary, we are guaranteed to preserve the original data structure if we impute all missing features of an observation at once. Figure 2: **Imputation results for the three synthetic data sets by the four selected imputation methods with optimized hyperparameters. Missing data have been injected in MCAR scenario with 20% missing rate. Blue dots correspond to complete observations; orange dots have observed \(x_{2}\) and imputed \(x_{1}\); red dots have observed \(x_{1}\) and imputed \(x_{2}\). The \(k\)NN-Imputer, MissForest and MICE perform well on 2d_linear. For 2d_sine, the \(k\)NN-Imputer and MissForest can impute \(x_{2}\), but fail at recovering \(x_{1}\). No method can properly impute 2d_ring.** For a data set with \(D\) columns, we have up to \(2^{D}-2\) possible missing patterns. 
Indeed, each cell may either be missing or not (hence \(2^{D}\) choices) but we do not account for complete cases (nothing to impute) and completely unobserved cases (without even an observed feature). We first normalize each column of the data set to fit within the range of \([0,1]\). We refer to this process as the min-max normalization. For imputation of the data in row \(i\), we compute the distance \(d_{ij}\) with all other rows \(j\), using the distance \[d_{ij}=\sqrt{\sum_{k\in\mathcal{D}_{\mathrm{obs}}}(x_{ik}-x_{jk})^{2}\ \ +\sum_{k\in\mathcal{D}_{\mathrm{mis}}}\sigma_{k}^{2}} \tag{2}\] where \(\mathcal{D}_{\mathrm{obs}}=\{k\in[\![1,D]\!]\ |\ m_{ik}=m_{jk}=1\}\) is the set of indices for commonly observed features in observations \(i\) and \(j\), \(\mathcal{D}_{\mathrm{miss}}=\{k\in[\![1,D]\!]\ |\ m_{ik}m_{jk}=0\}\) is the set of indices for features where at least one observation \(i\) or \(j\) is missing, and \(\sigma_{k}\) is the standard deviation of feature \(k\) computed over all observed cells. We call this new distance metric the NaN-std-Euclidean Distance, in contrast to the original NaN-Euclidean Distance used by the \(k\)NN-Imputer (Dixon, 1979). See Appendix D for a discussion on this metric properties. The pairwise distances are then passed to a softmax function to define probabilities: \[p_{ij}=\frac{e^{-d_{ij}/\tau}}{\sum_{j}e^{-d_{ij}/\tau}} \tag{3}\] We use the "soft" version of the \(k\)NN algorithm, and introduce the temperature hyperparameter \(\tau\) which can be interpreted as the effective neighborhood diameter. Instead of selecting a fixed number of neighbors per observation, we consider all observations but give nearest neighbors a stronger weight. In a similar fashion as Frosst et al. (2019), the notion of temperature controls the tightness of each observation's neighborhood. See Appendix A.1 for a discussion on the temperature hyperparameter. Given a missing pattern, we first select all rows to impute and all the rows corresponding to potential donors. The data to impute is the subset of data which has the current missing pattern, and potential donors are the subset of data where at least all columns in the current missing pattern are observed. For an incomplete observation \(i\) in the subset of data to impute, \(p_{ij}\) is the probability of choosing observation \(j\) from the subset of potential donors. We have \(\sum_{j}p_{ij}=1\). Algorithm 1 shows the pseudo-code of the \(k\)NN\(\times\)KDE. 
```
Hyper-parameters: Softmax temperature \(\tau\); Kernel bandwidth \(h\); Nb draws \(N_{\mathrm{draws}}\)
Data: Incomplete numerical data set \(X\)
min-max normalization in the interval \([0,1]\);
for each missing pattern do
    \(X_{\mathrm{imp}}\leftarrow\) data_to_impute\((X,\mathrm{missing\ pattern})\);
    \(X_{\mathrm{don}}\leftarrow\) potential_donors\((X,\mathrm{missing\ pattern})\);
    \(d_{ij}\leftarrow\) NaN_std_Euclidean_Distance\((X_{\mathrm{imp}},X_{\mathrm{don}})\);
    \(p_{ij}\leftarrow\) softmax\((-d_{ij}/\tau)\);
    for each row in \(X_{\mathrm{imp}}\) do
        \(r\leftarrow\) sample \(N_{\mathrm{draws}}\) rows from \(X_{\mathrm{don}}\) with probabilities \(p_{ij}\);
        \(e\leftarrow\) sample noise \(N_{\mathrm{draws}}\) times from \(e\sim\mathcal{N}(0,h)\) with dimension \(K\);
        imputation_samples \(\leftarrow X_{\mathrm{don}}[r]+e\);
    end for
end for
min-max denormalization;
Return: imputation_samples
```
**Algorithm 1** Pseudo-code for the \(k\)NN\(\times\)KDE

The \(k\)NN\(\times\)KDE has three hyperparameters: the temperature \(\tau\) for the softmax probabilities, the (shared) standard deviation \(h\) of the Gaussian kernels, and \(N_{\text{draws}}\), the number of imputed samples to draw for each missing cell. The effects of these three hyperparameters are discussed in Appendix A.

For observation \(i\) with a missing value in column \(k\), the probability distribution of the missing cell \(x_{ik}\) is given by

\[p(x_{ik})=\sum_{j=1}^{N}p_{ij}\,\mathcal{N}\left(x_{ik}|x_{jk};h\right) \tag{4}\]

where \(p_{ij}\) are the softmax probabilities defined in Equation 3, with \(p_{ij}=0\) if observation \(j\) is not in the subset of potential donors for observation \(i\), and \(\mathcal{N}\left(.|\mu;\sigma\right)\) denotes the density function of a univariate Gaussian with mean \(\mu\) and standard deviation \(\sigma\):

\[\mathcal{N}\left(x|\mu;\sigma\right)=\frac{1}{\sqrt{2\pi\sigma^{2}}}\,e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}} \tag{5}\]

If observation \(i\) has \(K\) missing values, in columns \(k_{1},k_{2},...,k_{K}\), then the subset of potential donors will likely be smaller, and the joint probability distribution for all missing values is given by

\[p(x_{ik_{1}},x_{ik_{2}},...,x_{ik_{K}})=\sum_{j=1}^{N}p_{ij}\,\prod_{\kappa=1}^{K}\mathcal{N}\left(x_{ik_{\kappa}}|x_{jk_{\kappa}};h\right) \tag{6}\]

where the index \(\kappa\) runs from \(1\) to \(K\) to denote the successive missing columns, and \(p_{ij}=0\) if observation \(j\) is not in the subset of potential donors for observation \(i\), as above. As can be seen from Equation 6, the weights \(p_{ij}\) are shared, such that imputed cells of the same observation have a joint probability that reflects the structure of the original data set.

Note that the pseudo-code of the \(k\)NN\(\times\)KDE presented in Algorithm 1 uses \(N_{\text{draws}}\) samples for each missing cell. We could instead use the softmax probabilities \(p_{ij}\) as weights for the mixture of Gaussians with all potential donors, which would directly yield the probability distributions. We have tried this approach but found that it requires a much larger computational cost, and is only tractable in practice with small data sets. We therefore continue to sample \(N_{\text{draws}}\) times to show the returned probability distributions of the \(k\)NN\(\times\)KDE.
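To complement Algorithm 1 and Equations 4–6, the sampling step and the joint mixture density can be sketched as follows (a minimal NumPy version for a single incomplete row; the function names are ours):

```python
import numpy as np

def knnxkde_sample(donors_mis, p, h, n_draws, rng=None):
    # Core of Algorithm 1 for one incomplete row: pick donors with
    # probabilities p (Eq. 3), then add Gaussian kernel noise of width h.
    # donors_mis: (N, K) donor values in the K missing columns.
    rng = np.random.default_rng() if rng is None else rng
    idx = rng.choice(len(donors_mis), size=n_draws, p=p)
    noise = rng.normal(0.0, h, size=(n_draws, donors_mis.shape[1]))
    return donors_mis[idx] + noise

def knnxkde_logpdf(x_mis, donors_mis, p, h):
    # Log of the joint mixture density of Eq. (6) at a candidate x_mis.
    # The product of K univariate kernels is an isotropic Gaussian in K dims.
    k_dim = donors_mis.shape[1]
    log_kernels = (-0.5 * np.sum(((x_mis - donors_mis) / h) ** 2, axis=1)
                   - k_dim * np.log(np.sqrt(2.0 * np.pi) * h))
    m = log_kernels.max()  # log-sum-exp trick for numerical stability
    return m + np.log(np.sum(p * np.exp(log_kernels - m)))
```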
## 4 Results on the synthetic toy data sets

We show that the proposed method provides imputation samples that preserve the structure of the original data sets. For now, missing data are inserted in MCAR scenario with 20% missing rate, and the hyperparameters of the \(k\)NN\(\times\)KDE are fixed to their default values: \(h=0.03\), \(1/\tau=50.0\) and \(N_{\text{draws}}=10,000\).

The upper panels of Figure 3 show the imputation with a sub-sampling size \(N_{\text{ss}}=10\). The sub-sampling size is only used for plotting purposes. If \(x_{1}\) is missing, we sample \(N_{\text{ss}}\) possible values given \(x_{2}\) (see the orange horizontal trails of dots), and if \(x_{2}\) is missing, we draw \(N_{\text{ss}}\) possible values given \(x_{1}\) (see the red vertical trails of dots). It is worth mentioning that if we decide to average over the samples returned by the \(k\)NN\(\times\)KDE, then artifacts similar to the ones presented in Figure 2 will arise again. For instance, single point estimates for the 2d_ring data set will fall inside the ring.

Another way to visualize the imputation distribution for each missing value is to look at the univariate density provided by the \(k\)NN\(\times\)KDE algorithm. For each data set, we have selected two observations: one with missing \(x_{1}\) and one with missing \(x_{2}\). The lower panels of Figure 3 show the univariate densities returned by the \(k\)NN\(\times\)KDE algorithm with default hyperparameters. The upper left corner of each panel shows the observed value, and a thick dashed line indicates the (unknown) ground truth to be imputed. We see that the ground truth always falls in one of the modes of the estimated imputation density. For the 2d_sine data set, when \(x_{1}\) is missing (central middle panel of Figure 3), the \(k\)NN\(\times\)KDE returns a multimodal distribution. Indeed, given the observed \(x_{2}=-0.88\), three separate ranges of values could correspond to the missing \(x_{1}\). Similarly, the 2d_ring data set shows bimodal distributions for both \(x_{1}\) and \(x_{2}\), corresponding to the two possible ranges of values allowed by the ring structure.

## 5 Performances on heterogeneous data sets

Now, we assess the practical performances of our method on larger data sets, using both synthetic and real-world data sets from UCI and other repositories. See Appendix B for a comprehensive description of the data sets. We present imputation results using two metrics: Subsection 5.1 presents the normalized root mean square errors (NRMSE) commonly used for comparing numerical data imputation methods; Subsection 5.2 shows the mean log-likelihood score of the (unknown) ground truth under each imputation model, computed over the normalized data in the range \([0,1]\) for fair comparison. In both cases we test four missing data settings: 'Full MCAR', 'MCAR', 'MAR', and 'MNAR', and six missing rates: 10%, 20%, 30%, 40%, 50%, and 60%. While 'Full MCAR' includes missing data from multiple columns as defined in Section 1, 'MCAR' assumes only one column missing, as in Jager et al. (2021). See Appendix C for missing data scenario details. For each data set, each missing data setting, and each missing rate, we repeat the imputation NB_REPEAT=20 times to compute the mean and the standard deviation of the chosen metric.

Figure 3: **Imputation results from the \(k\)NN\(\times\)KDE algorithm on the three synthetic data sets.** Missing data are inserted in MCAR setting with 20% missing rate.
Each missing entry has been imputed by the \(k\)NN\(\times\)KDE with default hyperparameters \(N_{\text{ss}}=10\) times for plotting purposes. The imputed values follow the structure of the original data sets. The histograms in the lower panels have \(N_{\text{draws}}=10000\) samples. Thick dashed lines correspond to the (unobserved) ground truth, and the observed value is shown in the upper-left corner. The \(k\)NN\(\times\)KDE returns a probability distribution for each missing cell which captures the original data multi-modality structure.

### Imputation results with NRMSE

This subsection presents the imputation results evaluated by the NRMSE, as defined in Equation (1). For the \(k\)NN-Imputer, MissForest, MICE, the Mean, and the Median imputation schemes, we use the implementation provided by the Python package sklearn2 (Pedregosa et al., 2011). For GAIN, we use the original GitHub repository3 of the authors of GAIN (Yoon et al., 2018). As the original package for SoftImpute is in R, we use a more recent Python4 implementation provided by Muzellec et al. (2020).

Footnote 2: [https://scikit-learn.org/stable/modules/impute.html](https://scikit-learn.org/stable/modules/impute.html)

Footnote 3: [https://github.com/jsyoon0823/GAIN](https://github.com/jsyoon0823/GAIN)

Footnote 4: [https://github.com/BorisMuzzlec/MissingDataOT](https://github.com/BorisMuzzlec/MissingDataOT)

When minimizing the NRMSE for a given data set, a given missing data scenario, and a given missing rate, we perform a hyperparameter search, except for the MICE, Mean, and Median imputation methods, which do not have hyperparameters. We consider the following lists for the other 5 methods' hyperparameters:

* For the \(k\)NN\(\times\)KDE, the inverse temperature \(1/\tau\in[10,25,50,100,250,500,1000]\)
* For the \(k\)NN-Imputer, the number of neighbors \(k\in[1,2,5,10,20,50,100]\)
* For MissForest, the number of regression trees \(N_{\text{trees}}\in[1,2,3,5,10,15,20]\)
* For SoftImpute, the regularization term \(\lambda\in[0.1,0.2,0.5,1.0,2.0,5.0,10.0]\)
* For GAIN, the number of training epochs \(N_{\text{iter.}}\in[100,200,400,700,1000,2000,4000]\)

When computing the NRMSE for the \(k\)NN\(\times\)KDE, we impute with the imputation sample mean. Tables 1, 2, 3, and 4 show the mean imputation NRMSE for each method and each data set with the missing rate 20%. For each data set, the top three methods that achieve the lowest imputation NRMSE are colored in green, yellow, and orange. We provide the numerical results for the 20% missing rate case as this is often the default missing rate for tabular data imputation benchmarks. The results for all missing rates are available in the online repository for this project.

In order to provide a more concise overview of the imputation NRMSE results, we rank the proposed methods from 1 (best) to 8 (worst) for each data set. For example, looking at the 4th row of Table 1, we have for the geyser data set in Full MCAR setting with 20% missing rate: the \(k\)NN-Imputer (1), the \(k\)NN\(\times\)KDE (2), MICE (3), MissForest (4), SoftImpute (5), GAIN (6), Mean (7) and Median (8). Then, for each missing data setting and each missing rate, we compute the mean and the standard deviation of each method's rank over the 15 data sets. Figure 4 shows the average rank for each method.
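For reference, the evaluation loop can be sketched as below. We assume here the common convention of normalizing the RMSE over the injected missing cells by the standard deviation of the ground-truth values; the exact normalization is the one given in Equation (1). The helper `impute_knnxkde_mean` is hypothetical and stands for imputing with the kNN×KDE sample mean:

```python
import numpy as np

def nrmse(X_true, X_imp, mask):
    # RMSE over the injected missing cells, normalized by the ground-truth
    # standard deviation (see Equation (1) for the paper's exact convention).
    err = X_true[mask] - X_imp[mask]
    return np.sqrt(np.mean(err ** 2)) / np.std(X_true[mask])

# Hyperparameter search over the inverse temperature, as in the list above:
# scores = {t: nrmse(X_true, impute_knnxkde_mean(X_miss, inv_tau=t), mask)
#           for t in [10, 25, 50, 100, 250, 500, 1000]}
# best_inv_tau = min(scores, key=scores.get)
```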
These results reinforce the previous reports that Deep Learning methods do not perform better than traditional methods on tabular numerical data sets (Bertsimas et al., 2018; Poulos and Valle, 2018; Jadhav et al., 2019; Woznica and Biecek, 2020; Jager et al., 2021; Lalande and Doya, 2022; Grinsztajn et al., 2022). The proposed \(k\)NN\(\times\)KDE consistently achieves the best rank, tightly followed by MissForest. The \(k\)NN-Imputer and MICE come next. SoftImpute, GAIN, the column Mean, and the column Median are always in the group of the four last methods.

Rankings for MissForest show large error bars because this method commits confidently to its imputations: its performance can vary a lot depending on the nature of the data set (see the standard deviations in the reported NRMSE results in Tables 1 to 4). Alternatively, the \(k\)NN\(\times\)KDE and the \(k\)NN-Imputer have lower rank error bars, indicating that these methods are more consistent across data sets. This can be seen in the lower NRMSE standard deviation in Tables 1 to 4.

It is worth noting that the \(k\)NN\(\times\)KDE seems to suffer from the curse of dimensionality, especially in the 'Full MCAR' scenario at high missing rate. Looking at Table 1, the \(k\)NN\(\times\)KDE has higher NRMSE compared to other methods for the breast and the sylvine data sets. Indeed, in the 'Full MCAR' scenario at high missing rates for high-dimensional data sets, the subset of potential donors for a specific missing pattern can be very small, or even empty, preventing sampling altogether.

While Figure 4 provides the overall ranks, note that the imputation NRMSE can vary greatly between two consecutive ranks. Using again the abalone data set NRMSE provided in the first row of Table 1 to exemplify, the NRMSE remains below 4.00 for the top four methods, then jumps to 5.29 for GAIN, and finally gets close to 15.00 for the column Mean and Median imputation methods.

Finally, we stress that even though the \(k\)NN\(\times\)KDE overall provides minimal NRMSE, the framework used here computes the distribution mean to return a point estimate. Calculating a point estimate brings back the original problem of choosing a single estimate to impute missing data, depicted in Figure 2. For instance, looking at the 2d_ring data set in Table 1, we see that the \(k\)NN\(\times\)KDE does not perform much better than the \(k\)NN-Imputer or MissForest, which are considered state-of-the-art numerical imputation methods. Therefore, we decide to also measure the performances of the imputation methods with the log-likelihood score.

Figure 4: **Average NRMSE rank for each data imputation method in various missing data settings and missing rates.** Our proposed method, the \(k\)NN\(\times\)KDE, is consistently the best method, regardless of the missing rate or missing data setting. Second comes MissForest. The \(k\)NN-Imputer and MICE come next. Besides the column Mean and Median imputation methods, GAIN and SoftImpute invariably underperform all other methods.

### Performances by log-likelihood score

Next, we look at the log-likelihood of missing values under the probabilistic model provided by each method. For the \(k\)NN\(\times\)KDE, a probability distribution for each missing cell is obtained as described in Section 3 and illustrated in Section 4.
For the \(k\)NN-Imputer, we compute the mean and the standard deviation of the \(k\) selected neighbors, and calculate the log-likelihood of the ground truth assuming a Gaussian distribution. Similarly, for the Mean imputation method, we compute the column mean and standard deviation and assume a Gaussian distribution. For MICE and MissForest, the stochastic nature of these two Iterative Imputer methods allows us to repeat the imputation \(N=5\) times, compute the mean and the standard deviation for each missing value, and calculate the log-likelihood of the ground truth assuming a Gaussian distribution.

Despite being a generative model, GAIN systematically returns a unique value once trained, such that the variability in GAIN's predictions cannot be taken into account. Therefore, we decided not to include GAIN in the likelihood comparative study. We also do not consider the column Median anymore, as we already use the Mean for likelihood computation. We finally discard SoftImpute from this section as well, because it showed mediocre performances in the NRMSE rankings and there is no straightforward way to implement a probabilistic version of the SoftImpute algorithm.

When computing the log-likelihood, we do not perform hyperparameter tuning for MissForest, the \(k\)NN\(\times\)KDE, and the \(k\)NN-Imputer. Instead, we choose the hyperparameter that best minimized the imputation NRMSE in the previous subsection. Following the same approach as Subsection 5.1, we present the average log-likelihood scores (computed over all missing cells) for each data set and each method with 20% missing rates in Tables 5, 6, 7, and 8. The numerical results for other missing rates are available online.

In a similar fashion as before, we compute the ranks of the proposed methods using the mean log-likelihood for each missing data scenario and missing rate. For example, looking at the abalone data set Full MCAR mean log-likelihood provided in the first row of Table 5, we have the following rankings: \(k\)NN-Imputer (1), \(k\)NN\(\times\)KDE (2), MICE (3), Mean (4), and MissForest (5). We average the ranks over all 15 data sets, and present the aggregated results in Figure 5.

The \(k\)NN\(\times\)KDE provides the overall best mean log-likelihood score, and the \(k\)NN-Imputer comes next. In the 'Full MCAR' missing data setting at high missing rates, the \(k\)NN-Imputer model returns a higher likelihood score than the \(k\)NN\(\times\)KDE. A tentative explanation is that high missing rates in the 'Full MCAR' setting create sparse observations, from which sampling with the softmax probabilities of the \(k\)NN\(\times\)KDE can become challenging. In contrast, the \(k\)NN-Imputer uses independent Gaussian distributions for each column, which may lead to better results when a lot of cells are missing. On a similar note, notice how the column Mean provides greater log-likelihood scores than the MICE algorithm at high missing rates in the 'Full MCAR' scenario. As before, the \(k\)NN\(\times\)KDE algorithm can suffer from high missing rates in the 'Full MCAR' scenario for high-dimensional data sets (see Table 5 for instance), as the subset of potential donors can be small, or even empty. But contrary to the NRMSE case, the log-likelihood score is not as severely affected. Data sets that exhibit a multi-modality structure tend to have much better log-likelihood score results under the \(k\)NN\(\times\)KDE probability distribution.
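The per-cell Gaussian scoring used above for the stochastic methods can be sketched as follows (a minimal version; the function name is ours):

```python
import numpy as np

def gaussian_loglik_score(truth, imputations, eps=1e-9):
    # Mean log-likelihood of the ground truth under a per-cell Gaussian
    # fitted to repeated imputations (e.g. N=5 runs of MICE or MissForest).
    # truth: (n_missing,); imputations: (N, n_missing).
    mu = imputations.mean(axis=0)
    sd = imputations.std(axis=0) + eps  # guard against zero variance
    ll = -0.5 * ((truth - mu) / sd) ** 2 - np.log(np.sqrt(2.0 * np.pi) * sd)
    return ll.mean()
```

Note that a very small per-cell standard deviation makes this score extremely sensitive to bias, which foreshadows the MissForest behavior discussed next.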
Such multimodal data sets can be identified by looking at the average Dip test \(p\)-value for the test of unimodality (see Appendix B).

Despite MissForest showing interesting results with the NRMSE as performance metric, it now always scores last when averaging over multiple data sets. This is because the estimates provided by MissForest have a low variability over different runs. As a consequence, the standard deviation used for the Gaussian distribution modeling the probability distribution of each missing cell is small, and the resulting shape of the probability distribution is very narrow. In the rare cases where the ground truth falls within \(1\sigma\) or \(2\sigma\) of the mean provided by MissForest, the likelihood will be high; but in most cases, the ground truth is more than \(3\sigma\) away from the MissForest mean, leading to a small likelihood of the (unknown) ground truth under the MissForest model.

Looking at Tables 5 to 8, we see that for 20% missing rate, the \(k\)NN\(\times\)KDE provides the best log-likelihood score, especially for data sets with smaller dimension. As mentioned earlier, both the \(k\)NN\(\times\)KDE and the \(k\)NN-Imputer can suffer from the curse of dimensionality because of the computation of the Euclidean distance in high-dimensional spaces.

## 6 Discussion

This work proposes the \(k\)NN\(\times\)KDE, a new approach using a soft version of the \(k\)NN algorithm to derive weights for the kernel density estimation method. The \(k\)NN\(\times\)KDE has been developed for numerical data imputation, especially for low-dimensional data sets in the presence of multimodality or complex dependencies. Here, we discuss the limits and strengths of the \(k\)NN\(\times\)KDE, conclude our work, and provide directions for subsequent works.

Figure 5: **Average log-likelihood score rank for each data imputation method in various missing data settings and missing rates. The \(k\)NN\(\times\)KDE ranks best in all cases, except for the Full MCAR scenario at high missing rates, where the \(k\)NN-Imputer is best. MissForest consistently returns the lowest log-likelihood score, because its predictions do not allow for much variability.**

### Limits & Strengths

A substantial drawback is that the \(k\)NN\(\times\)KDE becomes computationally expensive with large data sets. However, it remains faster than MissForest in practice, since it works with missing patterns instead of looping through the data set column by column. This strategy makes it possible to compute only the necessary pairwise distances. See Appendix E for quantitative results on computation time.

Another drawback of the \(k\)NN\(\times\)KDE is that it cannot impute certain data sets with too many features in 'Full MCAR' when the missing rate is high. Indeed, in the 'Full MCAR' scenario with a 60% missing rate for instance, the subset of potential donors (see Algorithm 1) may be empty. In such cases, working on a column-by-column basis, like the standard \(k\)NN-Imputer, may be an interesting solution.

Now, the great advantage of the \(k\)NN\(\times\)KDE is that it preserves the original data structure, which is of major importance when working with multimodal data sets. Our method returns an imputation sample that provides information about the missing data distribution, which is better than a point estimate.
By working with missing patterns and imputing all missing features at the same time, the \(k\)NN\(\times\)KDE provides a sample of entirely imputed observations that are consistent with the original data set, which is not the case with Iterative Imputation methods (like MissForest and MICE) or the \(k\)NN-Imputer.

Finally, even though our method consistently achieves the best average imputation NRMSE in all missing data scenarios and at all considered missing rates (see Figure 4), using the sample mean of the returned imputation samples brings back the original problem with multimodal distributions. Looking at the 2d_ring data set in Table 2, we see that the \(k\)NN\(\times\)KDE does not perform better than other methods because of the imputation sample mean. However, we see in Table 6 that the \(k\)NN\(\times\)KDE is the only method capable of providing a good density estimation (and therefore a high log-likelihood score) for the 2d_ring data set.

This problem essentially boils down to asking why imputation is needed in the first place: are we interested in subsequent downstream regression or classification tasks, or are we solely interested in estimating missing values? The common approach of first imputing and then performing downstream tasks may be sub-optimal depending on the chosen imputation strategy (Le Morvan et al., 2021). Instead, the conditional probability distributions returned by the \(k\)NN\(\times\)KDE make it possible to postpone the decision of imputing or not to a later stage. Imputation can subsequently be performed freely: with the mean (to minimize the root mean square error), with the mode (to minimize the mean absolute error), by random sampling (which prevents artifacts in the presence of multimodal data sets), or with any other relevant statistic.

### Future work

We decided to derive a kernel version of the traditional \(k\)NN-Imputer, and developed the proposed \(k\)NN\(\times\)KDE. Alternatively, it could be interesting to look into another kernel method (or at least any other way to perform density estimation) using Random Forests, since MissForest achieves good results even in its current form.

Another possible extension of this work would be to include an end-to-end treatment of categorical variables within the framework of the \(k\)NN\(\times\)KDE. As this study makes use of numerical imputation methods that cannot handle categorical features (e.g. GAIN or SoftImpute), we decided to exclude categorical variables from the scope of this paper. However, tabular data imputation can include numerical and categorical variables in practice, and further work may be needed in this direction.

Finally, the NaN-std-Euclidean metric appears to yield better results than the commonly used NaN-Euclidean metric. A possible explanation is that this new metric penalizes sparse observations (with a large number of missing values) by using the feature standard deviation when the entry is missing, therefore preventing the use of artificially close neighbours for imputation (see Appendix D). Further investigation of this metric, and experimental results with the standard \(k\)NN-Imputer, may yield interesting insights.

### Conclusion

The motivation behind this work was to design an algorithm capable of imputing numerical values in data sets with heterogeneous structures. In particular, multimodality makes imputation ambiguous, as distinct values may be valid imputations.
While minimizing the imputation RMSE is an intuitive objective for numerical data imputation, it does not capture the complexity of multimodal data sets. Instead of averaging over several possible imputed values like traditional methods, the \(k\)NN\(\times\)KDE offers to look at the probability density of the missing values and to choose how to perform the imputation: sampling, mean, median, etc. Ultimately, this work advocates for a qualitative approach to numerical data imputation, rather than the current quantitative one. The online repository for this work5 provides all algorithms, all data, and a few Jupyter Notebooks to test the proposed method, and we recommend trying it for practical numerical data imputation in various domains.

Footnote 5: [https://github.com/DeltaFofflo/knnxkde](https://github.com/DeltaFofflo/knnxkde)

## Acknowledgments

We would like to thank the three anonymous referees for their time and helpful remarks during the review of our manuscript. In addition, we would like to express our gratitude to Alain Celisse and the good people at the SAMM (Statistique, Analyse et Modelisation Multidisciplinaire) Seminar of the University Paris 1 Pantheon-Sorbonne, for their insightful comments during the development of the \(k\)NN\(\times\)KDE. This research was supported by internal funding from the Okinawa Institute of Science and Technology Graduate University to K. D.

## References
2303.02175
Bayesian uncertainty quantification of perturbative QCD input to the neutron-star equation of state
The equation of state of neutron-star cores can be constrained by requiring a consistent connection to the perturbative Quantum Chromodynamics (QCD) calculations at high densities. The constraining power of the QCD input depends on uncertainties from missing higher-order terms, the choice of the unphysical renormalization scale, and the reference density where QCD calculations are performed. Within a Bayesian approach, we discuss the convergence of the perturbative QCD series, quantify its uncertainties at high densities, and present a framework to systematically propagate the uncertainties down to neutron-star densities. We find that the effect of the QCD input on the neutron-star inference is insensitive to the various unphysical choices made in the uncertainty estimation.
Tyler Gorda, Oleg Komoltsev, Aleksi Kurkela, Aleksas Mazeliauskas
2023-03-03T19:00:06Z
http://arxiv.org/abs/2303.02175v2
# Bayesian uncertainty quantification of perturbative QCD input to the neutron-star equation of state

###### Abstract

The equation of state of neutron-star cores can be constrained by requiring a consistent connection to the perturbative Quantum Chromodynamics (QCD) calculations at high densities. The constraining power of the QCD input depends on uncertainties from missing higher-order terms, the choice of the unphysical renormalization scale, and the reference density where QCD calculations are performed. Within a Bayesian approach, we discuss the convergence of the perturbative QCD series, quantify its uncertainties at high densities, and present a framework to systematically propagate the uncertainties down to neutron-star densities. We find that the effect of the QCD input on the neutron-star inference is insensitive to the various unphysical choices made in the uncertainty estimation.

## I Introduction

The determination of the equation of state (EoS) of neutron-star (NS) cores is one of the grand questions of nuclear astrophysics [1; 2]. The EoS determines many of the macroscopic properties of neutron stars, and its features may give a unique inroad into determining the phase structure of Quantum Chromodynamics (QCD) at large baryon number densities [3; 4; 5; 6]. In the past years there has been an extremely rapid evolution in NS observations, e.g. [7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19], combined with maturing theoretical and statistical techniques, e.g. [20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35], to constrain and infer the EoS using a variety of observational and theoretical inputs.

Among the theoretical inputs are the _ab-initio_ calculations determining the EoS directly from the Lagrangian of QCD using perturbation theory [36; 37; 38; 39; 40; 41; 42; 43]. These calculations rely on the asymptotic freedom of QCD, dictating that at high densities the EoS can be expanded in powers of the strong coupling constant \(\alpha_{s}\). At sufficiently high densities, well above the density range reached in stable NSs, perturbation theory gives a good approximation of the true EoS. It has furthermore been recently shown that these calculations--combined with the requirement that the EoS be mechanically stable, causal, and thermodynamically consistent (SCC) at all densities--give robust constraints on the EoS down to a few saturation densities \(n\sim 2.3n_{s}\) [44], with \(n_{s}\approx 0.16\,\mathrm{fm}^{-3}\). The interaction between the astrophysical and the QCD constraints has also been studied, showing that the QCD input leads to a softening of the EoS at the highest densities reached inside the cores of stable NSs [3; 4; 45; 46; 47; 48; 49] (cf. [50]). This feature has been interpreted as a sign of loss of hadronic structure, and a phase change to quark matter [3; 4; 5; 34].

The importance of the theoretical inputs in the EoS inference necessitates reliable and statistically interpretable uncertainty estimation of the calculations. In the low-density nuclear regime, theoretical uncertainty estimation includ

Figure 1: The pressure normalized by that of free Fermi gas of quarks as a function of chemical potential. The green bands correspond to the N\({}^{2}\)LO pQCD calculations whose uncertainties are estimated by the _abc_ model using the scale-marginalization prescription for the renormalization scale \(X\); the darker and the lighter bands represent \(68\%\) and \(95\%\)-credible intervals, respectively.
The relative confidence in the _abc_ model is quantified by the marginalized likelihood, illustrated by the black dashed line (and the fading of the green bands). The hatched purple band represents the standard error estimation of pQCD results obtained by renormalization-scale variation by a factor of 2. Colored lines are the sample from the ensemble of NS EoSs used in [33], conditioned with astrophysical observations and QCD input for the scale-marginalization prescription for \(X\) in the range \([1/2,2]\) and \(\mu_{\text{QCD}}\) in the range \([2.2,3]\) GeV. The coloring of individual EoSs corresponds to the posterior likelihood; higher likelihood is associated with darker shades of red.
2310.01544
GW190521: tracing imprints of spin-precession on the most massive black hole binary
GW190521 is a remarkable gravitational-wave signal on multiple fronts: its source is the most massive black hole binary identified to date and could have spins misaligned with its orbit, leading to spin-induced precession -- an astrophysically consequential property linked to the binary's origin. However, due to its large mass, GW190521 was only observed during its final 3-4 cycles, making precession constraints puzzling and giving rise to alternative interpretations, such as eccentricity. Motivated by these complications, we trace the observational imprints of precession on GW190521 by dissecting the data with a novel time domain technique, allowing us to explore the morphology and interplay of the few observed cycles. We find that precession inference hinges on a quiet portion of the pre-merger data that is suppressed relative to the merger-ringdown. Neither pre-merger nor post-merger data alone are the sole driver of inference, but rather their combination: in the quasi-circular scenario, precession emerges as a mechanism to accommodate the lack of a stronger pre-merger signal in light of the observed post-merger. In terms of source dynamics, the pre-merger suppression arises from a tilting of the binary with respect to the observer. Establishing such a consistent picture between the source dynamics and the observed data is crucial for characterizing the growing number of massive binary observations and bolstering the robustness of ensuing astrophysical claims.
Simona J. Miller, Maximiliano Isi, Katerina Chatziioannou, Vijay Varma, Ilya Mandel
2023-10-02T18:37:30Z
http://arxiv.org/abs/2310.01544v2
# GW190521: tracing imprints of spin-precession on the most massive black hole binary

###### Abstract

GW190521 is a remarkable gravitational-wave signal on multiple fronts: its source is the most massive black hole binary identified to date and could have spins misaligned with its orbit, leading to spin-induced precession--an astrophysically consequential property linked to the binary's origin. However, due to its large mass, GW190521 was only observed during its final 3-4 cycles, making precession constraints puzzling and giving rise to alternative interpretations, such as eccentricity. Motivated by these complications, we trace the observational imprints of precession on GW190521 by dissecting the data with a novel time domain technique, allowing us to explore the morphology and interplay of the few observed cycles. We find that precession inference hinges on a quiet portion of the pre-merger data that is suppressed relative to the merger-ringdown. Neither pre-merger nor post-merger data alone are the sole driver of inference, but rather their combination: in the quasi-circular scenario, precession emerges as a mechanism to accommodate the lack of a stronger pre-merger signal in light of the observed post-merger. In terms of source dynamics, the pre-merger suppression arises from a tilting of the binary with respect to the observer. Establishing such a consistent picture between the source dynamics and the observed data is crucial for characterizing the growing number of massive binary observations and bolstering the robustness of ensuing astrophysical claims.

_Introduction_ -- At a total mass of \(\sim\)150 \(M_{\odot}\), GW190521 [1; 2] is the current record-holder among massive black hole binaries confidently detected through gravitational waves by LIGO [3] and Virgo [4]. Such high-mass systems are essential probes of the role of hierarchical mergers [5; 6; 7; 8; 9] and pair-instability physics [10; 11; 12; 13; 14; 15] in binary formation and evolution. Observationally, massive binaries merge toward the low edge of the detectors' bandwidth and are only detectable for a short time. One third of the binaries in the latest gravitational-wave catalog have a median detector-frame total mass \(>100\) \(M_{\odot}\)[16], corresponding to \(\sim\)5 signal cycles at more than half a standard deviation above the noise. The short duration makes characterizing these signals and inferring astrophysical properties such as spin challenging.

Spin is a key signature of the physics behind angular momentum transport in stellar interiors, black-hole formation, black-hole retention in dense environments, and more [17; 18; 19; 20; 21; 22]. Gravitational waves provide one of the few ways to measure spins for stellar-mass black holes directly. Spin components _parallel_ to the binary's orbital angular momentum affect the signal duration [23] and are approximately conserved during the inspiral as the "effective spin" [24; 25]. Spin components _perpendicular_ to the orbital angular momentum, i.e., _in the orbital plane_, cause the binary to precess, leading to signal modulations as the emission pattern varies relative to the line of sight [26; 27]. Although typically weak [28; 29; 30], this effect is highly sought-after: spin-induced precession and the associated in-plane spins could differentiate between dynamical and field binary formation, e.g., [31; 32; 17]. The elusiveness of precession is exacerbated for heavy systems.
The precession timescale can be longer than the observed inspiral for large masses [33], making modulations difficult to identify. The exact imprint of precession on the ensuing merger and ringdown remains poorly understood and analytically intractable, although numerical-relativity and data-analysis studies suggest that imprints do exist [34; 35; 36; 37; 38; 39]; for example, Ref. [35] suggests, based on simulations, that high-frequency data typically associated with the merger-ringdown can constrain precession. This uncertainty makes it difficult to distinguish precession from eccentricity, another highly-valuable binary property [40; 41; 42]. Interpretation is further complicated by high sensitivity to the system's true parameters [34; 35; 43; 44] and the priors [45]. In light of this, the massive GW190521 system stands out for its informative precession constraints. Precession is quantified by the effective precessing spin \(\chi_{\rm p}\)[46; 47; 48] that is motivated by inspiral dynamics. A value of zero (one) indicates no (maximal) precession. Under the assumption of a quasicircular orbit, GW190521 has \(\chi_{\rm p}=0.68^{+0.25}_{-0.37}\) at 90% credibility [1; 2], the largest inferred \(\chi_{\rm p}\) and the one whose posterior is the most informative to date [49; 50; 16];1 similar conclusions are reached under alternative parametrizations for precession [54; 55].2 The combined high mass and large in-plane spin make GW190521 an essential probe of hierarchical black hole mergers [56; 7; 57], dense stellar environments such as nuclear star clusters [58; 19; 59], active galactic nuclei disks [60; 61; 62], and more [63; 64; 2; 65; 66; 67]. Footnote 2: These and further ways to quantify precession are elaborated upon in the Supplementary Material. Footnote 3: Frequency truncation enables consistency checks [81; 82], investigations of data-quality issues [53; 53], or alternative studies of the measurability of precession in simulated data [35], but this is not equivalent to cuts in time. The high mass of GW190521 and its few observable cycles open the door to competing astrophysical interpretations. Romero-Shaw _et al._[40] and Gayathri _et al._[68] find that the data are consistent with eccentricity, though this interpretation is not supported by Iglesias _et al._[69] and Ramos-Buades _et al._[70]. Gamba _et al._[71] propose a hyperbolic capture scenario. Nitz and Capano [67] suggest a highly asymmetric binary interpretation. More exotic explanations include boson stars [72] and cosmic strings [73]. Any of these alternatives would have important implications if confirmed [74; 75]. Additionally, random detector noise can have an outsized impact on the inference of poorly-constrained effects, although Biscoveanu _et al._[35] and Xu and Hamilton [76] show that the inference of \(\chi_{\rm p}\) away from zero in GW190521-like systems cannot be due to Gaussian noise alone. The fact that full-scale parameter estimation allows for competing interpretations suggests that different physical effects can result in similar observational imprints over GW190521's few cycles. Similarly to precession and eccentricity, these imprints are often not analytically tractable. Toward bolstering the interpretation of massive binaries, it is essential to gain intuition about the observable imprint of physical effects of interest and how their measurability is affected by mismodeling. 
Lacking analytical equations for precession in the merger phase, we introduce a novel approach that traces its imprint along the signal and identifies the role of each _cycle_ on the \(\chi_{\rm p}\) constraint. We dissect the data in the _time-domain_ and compare inference between different data subsets. We provide a cycle-by-cycle physical picture of source dynamics and explore the interplay of different data regions.

_Methods --_ Gravitational-wave parameter estimation is typically conducted in the frequency-domain to leverage the stationarity of detector noise for computational efficiency [77; 78]. However, frequency-domain methods are non-local in time; thus isolating temporal features of source dynamics and their imprint on the data requires nontrivial likelihood modifications [79; 80].3 We instead adopt direct time-domain inference to isolate different signal cycles, an approach originally conceived for black hole ringdowns [84; 85; 86; 87]. We truncate data from LIGO Livingston, LIGO Hanford, and Virgo at different times ranging from \(t=-50\,M_{\rm ref}\) to \(50\,M_{\rm ref}\) with respect to coalescence.4 We independently infer the signal properties solely from data before and after each cutoff as well as the full span of data.

Footnote 4: We define \(t\) with respect to geocenter GPS time 1242442967.405764 s. Under geometric units we adopt the median detector-frame remnant mass scale \(M_{\rm ref}=1.27\,\)ms [2]; in standard units \(M_{\rm ref}=258.3\,M_{\odot}\). The choice of remnant rather than total mass was inspired by ringdown analyses [88].

We model the signal with the numerical relativity surrogate model NRSur7dq4 [51], which assumes quasi-circular orbits and includes precession and higher-order modes. Within its region of validity, NRSur7dq4 displays the lowest mismatches against numerical relativity among existing models [51]. We adapt the time-domain inference code from Isi _et al._[86] and sample the multidimensional posterior for the binary masses, spin magnitudes and tilt angles, azimuthal inter-spin angle, azimuthal precession cone angle, inclination, luminosity distance, and phase of coalescence. The time of coalescence, right ascension, declination, and polarization angle are fixed for computational efficiency.5

Footnote 5: We have verified that these choices do not affect our conclusions. All parameter estimation settings, priors, and consistency checks are given in the Supplementary Material.

We report precession constraints using the canonical effective precessing spin, \(\chi_{\rm p}\)[46; 47; 48]:

\[\chi_{\rm p}=\max\left[\chi_{1}\sin\theta_{1},\left(\frac{3+4q}{4+3q}\right)q \chi_{2}\sin\theta_{2}\right]\in\left[0,1\right). \tag{1}\]

Here \(\chi_{i}\in[0,1)\) are the dimensionless spin magnitudes and \(\theta_{i}\) are the tilt angles between the spin and orbital angular momentum vectors. Subscripts \(i\in\{1,2\}\) denote each black hole with mass \(m_{i}\) and \(q\equiv m_{2}/m_{1}\leq 1\).

_Results --_ Figure 1 shows inferred GW190521 properties from data before (blue) and after (orange) five representative cutoff times (vertical lines) as well as the full signal (black) for comparison. Results for further cutoff times are included in the Supplementary Material. Insets in the left panels visualize the truncation in LIGO Livingston, selected as the detector in which GW190521 is the loudest. The left column shows the posterior for \(\chi_{\rm p}\) at a reference frequency of \(11\,\)Hz [1; 2].
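Before walking through the panels of Figure 1, note that Equation (1) maps directly to code; a minimal sketch, vectorized over posterior samples (the function name is ours):

```python
import numpy as np

def chi_p(chi1, theta1, chi2, theta2, q):
    # Effective precessing spin of Eq. (1), with q = m2/m1 <= 1.
    term1 = chi1 * np.sin(theta1)
    term2 = (3.0 + 4.0 * q) / (4.0 + 3.0 * q) * q * chi2 * np.sin(theta2)
    return np.maximum(term1, term2)
```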
Figure 1: Evolution of GW190521 inference for representative cutoff times \(t\in\{-40,-20,-10,20,30\}\,M_{\rm ref}\) from the time of coalescence (top to bottom; vertical black dashed lines where applicable). _Left:_ Posterior for \(\chi_{\rm p}\) from the pre- (blue), post-cutoff (orange), and full (black solid) analysis and prior (gray dotted). The inset shows the whitened maximum-posterior waveform (with \(\chi_{\rm p}=0.62\), detector-frame total mass \(M=267\,M_{\odot}\), and \(q=0.89\)) from the full analysis (black) along with the whitened LIGO Livingston data (gray). The blue/orange shaded regions highlight the data informing the same-color \(\chi_{\rm p}\) posterior. _Center and Right:_ Waveform reconstruction draws for LIGO Livingston from the pre- (center, blue) and post-cutoff (right, orange) analyses and maximum-posterior waveform for the full analysis (black). Median and 90% credible intervals for the matched-filter network SNRs are given in-figure. Gray shading denotes data excluded from each analysis. See Ref. [89] for an animation of this figure including more cutoff times.

For the earliest cutoff time (top row), the post-\(t=-40\,M_{\rm ref}\) \(\chi_{\rm p}\) posterior is almost identical to that of the full analysis, while the pre-cutoff one is identical to the prior. This is due to the fact that the post-cutoff analysis includes the full available signal, while none of it is contained in the pre-cutoff data, see inset. As the cutoff moves to later times (top to bottom), the pre- and post-cutoff posteriors exchange places as the data preceding each cutoff become more informative and the data following become less so.

_The \(\sim\!\!40\,\)ms between \(t=-40\,M_{\rm ref}\) and \(-10\,M_{\rm ref}\) are crucial to constrain precession for GW190521_. This region roughly corresponds to the final cycle before the onset of merger. The \(\chi_{\rm p}\) posterior obtained from data after \(t=-40\,M_{\rm ref}\) (first row, orange) is consistent with that from the full analysis, i.e., precession _is_ constrained. On the other hand, data after \(t=-10\,M_{\rm ref}\) (third row, orange) result in a \(\chi_{\rm p}\) posterior that is nearly identical to the prior, i.e., uninformative. Between these times, the posterior shifts smoothly between the full measurement and the prior; e.g., the post-\(t=-20\,M_{\rm ref}\) analysis (second row, orange). The reduction in the signal-to-noise ratio (SNR) from excluding data is negligible between \(-40\,M_{\rm ref}\) and \(-10\,M_{\rm ref}\), suggesting this qualitative change in precession inference is not due to an SNR drop, see Fig. 8 in the Supplementary Material.

_Neither the inspiral nor the merger/ringdown data alone are fully responsible for precession constraints in GW190521_. The data both pre- and post-\(t=-10\,M_{\rm ref}\) alone are uninformative about precession (third row, orange and blue). Moreover, the pre-\(t=30\,M_{\rm ref}\) analysis that excludes the final ringdown cycle (fifth row, blue) is consistent with the full analysis. It is therefore not solely the final pre-merger cycle that informs precession, but rather its combination with the subsequent 2 merger and early-ringdown cycles. This does not rule out ringdown imprints of precession that are too weak to discern at this SNR or with this waveform.

The center and right columns of Fig. 1 investigate _features_ of the waveforms. The blue and orange waveforms are informed only by data in the unshaded regions and extended coherently into the shaded regions.
As progressively less data are analyzed (center bottom to top, right top to bottom), the waveform reconstructions agree less with the full-analysis waveform, eventually becoming incoherent. The right column reveals the morphological imprint of precession on the signal during the transition from an informative \(\chi_{\rm p}\) posterior (first row) to the prior (third row). When the \(\chi_{\rm p}\) inference returns (close to) the prior, the final pre-merger cycle is extrapolated to be _larger_ than when \(\chi_{\rm p}\) is constrained to take higher values, cf. the waveform peak at \(t\sim-30\,M_{\rm ref}\) and trough at \(t\sim-15\,M_{\rm ref}\). Again there is a progression: the post-\(t=-40\,M_{\rm ref}\) inferred waveforms (orange) are consistent with the full analysis (black), while the final pre-merger cycle subtly increases in strength toward post-\(t=-10\,M_{\rm ref}\) (top row to third row).

To further explore the pre-merger waveform suppression, we compare in Fig. 2 the full analysis (black), in which precession _is_ constrained, and the post-\(t=-10\,M_{\rm ref}\) analysis, where the data are uninformative about precession. In order to focus on waveform features that are informative compared to the noise, we plot the whitened waveform.6 In the top panel the grayed-out region denotes data available to the full analysis but not the post-\(t=-10\,M_{\rm ref}\) one. The inset focuses around \(t=-15\,M_{\rm ref}\), an extremum of the final pre-merger cycle. The reconstructions are inconsistent at the 50% credible level, with the post-\(t=-10\,M_{\rm ref}\) analysis resulting in a larger amplitude (in absolute value). This inconsistency only occurs at the extrema of the final pre-merger cycle, i.e. the peak around \(t\sim-30\,M_{\rm ref}\) and trough at \(t\sim-15\,M_{\rm ref}\), see Fig. 9 in the Supplementary Material.

Figure 2: Results from the full (black) and from the post-\(t=-10\,M_{\rm ref}\) analysis (orange) that are informative and uninformative about precession, respectively. _Top:_ 50% credible intervals for the whitened waveforms in LIGO Livingston in units of standard deviations of the noise. Data are plotted in gray. Gray shading denotes data excluded from the post-\(t=-10\,M_{\rm ref}\) analysis. The inset zooms in around \(t=-15\,M_{\rm ref}\) (blue dashed line), the minimum of the final pre-merger cycle. _Bottom:_ Posteriors for \(\chi_{\rm p}\), the absolute value of the whitened strain \(|\hat{h}|\), and the inclination angle relative to edge-on configurations \(|\iota-\pi/2|\). Quantities labeled in blue are plotted at \(t=-15\,M_{\rm ref}\). Contours denote 50% and 90% credible regions. The whitened strain is anticorrelated with \(\chi_{\rm p}\) and correlated with \(|\iota-\pi/2|\). Large \(\chi_{\rm p}\) is paired with a smaller pre-merger signal and more edge-on configurations.

Footnote 6: Whitened waveforms are obtained by dividing the Fourier-domain waveform by the noise amplitude spectral density and then inverse Fourier-transforming. Its value depends on the sampling rate. Though we sample the data at 2048 Hz for inference, we use 1024 Hz when plotting for comparison to Fig. 1 of Ref. [1].
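The whitening described in Footnote 6 can be sketched in a few lines. This is a schematic version, assuming `asd` is a callable returning the one-sided noise amplitude spectral density on the FFT frequency grid; the sampling-rate factor below is one common convention and illustrates why the whitened amplitude depends on the sampling rate:

```python
import numpy as np

def whiten(strain, asd, fs):
    # Divide the Fourier-domain data by the noise ASD, then invert.
    n = len(strain)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    hf = np.fft.rfft(strain)
    # The sqrt(fs/2) factor makes the output roughly unit-variance
    # (i.e. in units of noise standard deviations) for stationary noise.
    white_hf = hf / (asd(freqs) * np.sqrt(fs / 2.0))
    return np.fft.irfft(white_hf, n=n)
```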
The bottom panel shows marginal posteriors for select quantities: the effective precessing spin \(\chi_{\rm p}\), the absolute value of the whitened strain \(|\hat{h}|\) (units of standard deviations \(\sigma\) of the noise), and the difference between \(\iota\)--the angle between the direction of maximum signal emission and the line of sight [91]--and \(\pi/2\) at \(t=-15\,M_{\rm ref}\). As expected from Fig. 1, \(\chi_{p}\) and \(|\hat{h}|\) are anticorrelated (albeit weakly): when the full data are analyzed (black), the final cycle is constrained to be weaker, resulting in a larger \(\chi_{\rm p}\); when this cycle is excluded from the analysis (orange), the extrapolated waveform is not required to have such a small value, obviating the need for a higher \(\chi_{\rm p}\).

In summary, _precession in GW190521 is informed by a suppression of the gravitational-wave signal in the observed waveform's last cycle before merger_. This is also the region in which the waveform is overall quietest: the whitened signal is less than 1-\(\sigma\) above the noise, compared to the subsequent merger cycles that are over 2-\(\sigma\). The origin of the signal suppression can be attributed to the evolution of the emission direction. As a binary precesses, the angle between the dominant emission direction and the line-of-sight evolves, changing the amplitude of the observed signal. Systems with \(\iota\sim\pi/2\) are quieter than those with \(\iota\sim 0\) or \(\sim\pi\). To explore these dynamics, we plot the absolute difference between the inclination angle and \(\pi/2\) at the time \(t=-15\,M_{\rm ref}\) in Fig. 2. As expected, \(\chi_{\rm p}\) and \(|\iota-\pi/2|\) are anti-correlated, while \(|\hat{h}|\) and \(|\iota-\pi/2|\) are correlated. When precession is constrained, \(\iota\) is found to be closer to edge-on at the last pre-merger cycle than when precession is unconstrained, leading to a suppressed cycle and smaller \(|\hat{h}|\).

We investigate the impact of data truncation on other source parameters in Fig. 3. Information about \(\chi_{\rm p}\), \(M\) and \(q\) is not lost or gained in lockstep as a function of cutoff time. At post-\(t=-20\,M_{\rm ref}\) (dark orange), the \(\chi_{\rm p}\) posterior shifts away from the full analysis posterior; however, for \(M\) and \(q\), this does not happen until multiple cycles later. Post-\(t=-10\,M_{\rm ref}\) (shaded light orange), i.e., at the end of the final pre-merger cycle, the \(\chi_{\rm p}\) posterior is close to the prior, while \(M\) and \(q\) both resemble the posteriors from the full analysis. Thus, the lack of an informative \(\chi_{\rm p}\) posterior post-\(t=-10\,M_{\rm ref}\) does not simply arise from poor parameter constraints overall due to lower SNR; rather, the suppression of the final pre-merger cycle is informative _specifically_ about precession. We also confirm that the \(\chi_{\rm p}\) inference is not driven by a conditional measurement based on the typically better-measured aligned spins in the Supplementary Material.

_Conclusions_ -- Precession inference for the massive, distant binary black hole signal GW190521 is subtle. It originates from contrasting a \(\sim\)40 ms slice of data from the final pre-merger cycle between \(t=-40\,M_{\rm ref}\) and \(-10\,M_{\rm ref}\) with the loud merger cycles following it. The merger of GW190521 consists of 2 loud cycles that reach 2.5\(\sigma\) above the noise and is informative about the source's masses.
However, precession is only constrained away from the prior when the merger is observed in tandem with the final pre-merger cycle, which does not rise more than 1\(\sigma\) above the noise. The measurement is linked to a relative suppression of the aforementioned final cycle, caused by the binary tilting toward an edge-on configuration due to precession. This picture qualitatively agrees with the interpretation posited in Ref. [1] by comparing precessing and spin-aligned waveform reconstructions for the full signal, and is supported by simulations [42].

Figure 3: Posteriors for \(\chi_{\rm p}\), \(M\), and \(q\) for the same cutoff times as Fig. 1. Posteriors from the full analysis and priors are plotted in black solid and dotted respectively. Contours denote 50% credible regions. Information about spin-precession is lost post-\(t=-10\,M_{\rm ref}\) (shaded light orange), while the total mass and mass ratio posteriors are informative at post-\(t=20\,M_{\rm ref}\) (green). See Ref. [90] for an animation showing corner plots for mass and spin parameters for the pre- and post-cutoff analyses at more cutoff times.

Siegel _et al._[88] carried out a complementary study seeking a description of the GW190521 ringdown consistent with the NRSur7dq4 full analysis.7 Ringdown mode content encodes information about the preceding binary dynamics [95, 96, 38], meaning it is (in theory) possible to identify signatures of precession in the ringdown. Siegel _et al._[88] found support for the presence of at least two modes; consistency with NRSur7dq4 suggests a configuration including the 220 and 210 fundamental modes. A large 210 mode amplitude could be expected under strong precession [88]. The fact that past GW190521 ringdown-only analyses cannot unequivocally infer precession is consistent with our finding that a post-peak analysis is not sufficient to constrain precession.

Footnote 7: Capano _et al._[80, 92] provided an alternative interpretation based on waveform models of the Phenom family [94, 93].

Our study highlights the delicate nature of precessional imprints on observed signals, providing a new view of spins in massive systems beyond the frequency domain [1, 76, 35]. For GW190521, the difference between the most informative precession inference to date and the prior boils down to a single, quiet pre-merger cycle that needs to be measured to better than half a standard deviation, _cf._, the difference between the black and orange waveforms in Fig. 2. This is quantitatively in agreement with the conclusions of Payne _et al._[53], who explored the impact of data quality on our ability to obtain an unbiased measurement at that level. Our novel time-domain approach of tracing the observational imprint of interesting physical effects cycle-by-cycle can provide physical intuition about how key source properties are inferred in relation to observed data features. In anticipation of further massive observed signals, such a correspondence between source dynamics and observed data can help pinpoint the most informative data in order to assess data quality and waveform systematics, and enable us to morphologically study competing physical interpretations that are likely to keep arising.

_Data Release --_ Posterior samples can be found on Zenodo at Ref. [97]. Scripts to generate the waveform reconstructions and inclination angles are on Github at Ref. [98], as are notebooks to generate all figures.
_Acknowledgements --_ We thank Harrison Siegel for helpful discussions on ringdown analyses of GW190521, Sylvia Biscoveanu for insights about inference on high-mass gravitational-wave sources, Sophie Hourihane for assistance whitening gravitational-wave signals, Davide Gerosa for insight on alternative measures of precession, and Will Farr for insights about time-domain inference. We also extend thanks to Jacob Lange, Christopher Berry, Carlos Lousto, Juan Calderon Bustillo, and Salvatore Vitale for their helpful comments on our manuscript. This material is based upon work supported by NSF's LIGO Laboratory which is a major facility fully funded by the National Science Foundation. S.M. and K.C. were supported by NSF Grant PHY-2110111. K.C. acknowledges support from the Sloan Foundation. The Flatiron Institute is funded by the Simons Foundation. V.V. acknowledges support from NSF Grant No. PHY-2309301, and the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No. 896869. I.M. is a recipient of the Australian Research Council Future Fellowships (FT190100574). The authors are grateful for computational resources provided by Cardiff University and supported by STFC grant ST/I006285/1. Software: emcee[99], LALSuite[100], numpy[101], scipy[102], h5py[103], matplotlib[104], seaborn[105], ringdown[84, 85], gvtools[106].
2308.03119
Autonomous Choreography of WebAssembly Workloads in the Federated Cloud-Edge-IoT Continuum
The concept of the federated Cloud-Edge-IoT continuum promises to alleviate many woes of current systems, improving resource use, energy efficiency, quality of service, and more. However, this continuum is still far from being realized in practice, with no comprehensive solutions for developing, deploying, and managing continuum-native applications. Breakthrough innovations and novel system architectures are needed to cope with the ever-increasing heterogeneity and the multi-stakeholder nature of computing resources. This work proposes a novel architecture for choreographing workloads in the continuum, attempting to address these challenges. The architecture tackles this issue comprehensively, spanning from the workloads themselves, through networking and data exchange, up to the orchestration and choreography mechanisms. The concept emphasizes the use of varied AI techniques, enabling autonomous and intelligent management of resources and workloads. Open standards are also a key part of the proposition, making it possible to fully engage third parties in multi-stakeholder scenarios. Although the presented architecture is promising, much work is required to realize it in practice. To this end, the key directions for future research are outlined.
Piotr Sowinski, Ignacio Lacalle, Rafael Vano, Carlos E. Palau
2023-08-06T13:57:01Z
http://arxiv.org/abs/2308.03119v1
# Autonomous Choreography of WebAssembly Workloads in the Federated Cloud-Edge-IoT Continuum ###### Abstract The concept of the federated Cloud-Edge-IoT continuum promises to alleviate many woes of current systems, improving resource use, energy efficiency, quality of service, and more. However, this continuum is still far from being realized in practice, with no comprehensive solutions for developing, deploying, and managing continuum-native applications. Breakthrough innovations and novel system architectures are needed to cope with the ever-increasing heterogeneity and the multi-stakeholder nature of computing resources. This work proposes a novel architecture for choreographing workloads in the continuum, attempting to address these challenges. The architecture tackles this issue comprehensively, spanning from the workloads themselves, through networking and data exchange, up to the orchestration and choreography mechanisms. The concept emphasizes the use of varied AI techniques, enabling autonomous and intelligent management of resources and workloads. Open standards are also a key part of the proposition, making it possible to fully engage third parties in multi-stakeholder scenarios. Although the presented architecture is promising, much work is required to realize it in practice. To this end, the key directions for future research are outlined. computing continuum, scheduler, orchestration, WebAssembly ## I Introduction The concept of the computing continuum has gained popularity in the last few years [1]. It embraces the idea of assembling a wide variety of heterogeneous computing resources as a single manageable entity (spanning IoT devices, edge nodes, private or public clouds, etc.). Technologically, the continuum approach should achieve better management of the widespread resources to simplify the execution of workloads (applications, services, workflows...), leveraging aspects such as network virtualization, energy management, performance, dynamic demand by ongoing services, and more. This would have many benefits, including increased efficiency, longer useful lifespan of computing equipment, effective collaboration among stakeholders, new use cases, and improved privacy and sovereignty, among others. On top of that, addressing all of the above with intelligence/AI mechanisms is what gives rise to the so-called cognitive computing continuum [2]. However, nowadays, such a continuum is far from being seamlessly realized in practice. It appears complicated for code developers, system owners, and end users alike. Underlying heterogeneity in such environments remains unresolved, with devices running different operating systems and having varied computing capabilities. Existing technologies do not fit together seamlessly or do not cover all complexities, requiring deep knowledge and expertise to barely grasp their surface. Also, applications depend heavily on the underlying libraries, CPU architectures, and resource management frameworks. As long as there is a lack of _continuum-native_ tools and mechanisms, organizations will struggle to use the continuum, due to the high associated costs and risks. Furthermore, current applications are unnecessarily heavy, due to bloated container images and ineffective use of available resources, resulting in high energy and infrastructure costs. Yet, overcoming these realities seems quite challenging.
First, whenever a service needs to work with data, deep knowledge of data formats and models is required, hindering fast and actionable use cases from taking place dynamically. Second, current mechanisms and solutions are not prepared for scalability and adaptability, requiring deep and complex updates whenever new innovations emerge. Third, seamless cooperation among nodes is still unrealized, as only a few of them can autonomously apply AI capabilities in a controlled manner, while the continuum does not stretch well to the vast majority of them. Their variety (unaddressed from an abstraction perspective) prevents the formation of a truly decentralized swarm. This challenge is very tough due to varied operating systems and their restrictions, limited compute capabilities, and diverse compilation mechanisms. This work proposes an architecture for autonomous choreography (an advanced form of orchestration) of workloads, network, and data in a federated cognitive continuum. A vision is presented in which the orchestration of containerized and WebAssembly workloads is managed autonomously by AI-driven schedulers, enabling smooth choreography in nodes across the continuum, optimizing parameters such as energy use, carbon footprint, or QoS. The rest of the paper is organized as follows: Section II outlines the background of the various technical fields involved, covering the current challenges and ongoing initiatives. Section III describes the proposed architecture, how the different ideas fit together, and a theoretical implementation approach. Finally, Section IV draws early conclusions on the viability and usefulness of the architecture and envisions the next steps to materialize it. ## II Background Coping with heterogeneous computing resources and frameworks when creating applications is not a new field of research. For instance, developers around the globe have widely made use of the Java Virtual Machine (JVM) [3] as an execution environment that can be used in the continuum, supporting servers and some embedded devices. However, JVM's programming language support is limited, and it has a significant performance overhead. On another note, containers are a very good tool for abstracting underlying resources and delivering generic applications [4]. However, they can still be large, do not support the smallest of devices, and are built for specific architectures and operating systems. The emerging WebAssembly (Wasm) promises to enable truly portable software that can execute with minimal performance losses and small memory footprints, even on IoT devices. The Wasm ecosystem is developing rapidly [5], with initiatives like WASI1 (portable system interface) and WAMR2 (lightweight and portable runtime). However, Wasm is a very new technology and its potential in the continuum is yet to be realized [6]. Footnote 1: [https://wasi.dev/](https://wasi.dev/) Footnote 2: [https://github.com/bytecodealliance/wasm-micro-runtime](https://github.com/bytecodealliance/wasm-micro-runtime) Another relevant field for this work is the orchestrated deployment of workloads in a computing continuum across various environments and stakeholders. Existing cloud and edge paradigms are mainly monolithic, siloed, and often constrained to a single vendor's ecosystem. Here, cloud providers are suggesting new multi-clustering strategies [7]. Also, serverless solutions (that address scaling, boot time, and over-management concerns) came out with Serverless4IoT [8] and OpenWolf [9], among others.
However, they either do not cover IoT devices, or are limited in the types of workloads they can manage. New platforms are now emerging with enhanced capabilities to support edge clusters, such as RedHat OpenShift3, OKD4, Kubernetes-based deployments (k0s5 and K36), and multi-cluster management such as Nuvla/NuvlaBox7, OCM8, Fleet9, and LIQO10. In the context of cloud computing, related work also considers the need for operating systems specifically designed to dynamically manage datacenter resources, with some software solutions already in existence, such as Apache Mesos11 or Mesosphere DC/OS12. All in all, there is currently no available solution for supporting hyper-distributed, heterogeneous, collaborative systems able to deploy services in IoT devices, edge nodes, and cloud providers alike, spanning multiple management domains (hybrid cloud). Footnote 3: [https://www.redhat.com/en/technologies/cloud-computing/openshift](https://www.redhat.com/en/technologies/cloud-computing/openshift) Footnote 4: [https://okd.io/](https://okd.io/) Footnote 5: [https://k0sproject.io/](https://k0sproject.io/) Footnote 6: [https://k3s.io/](https://k3s.io/) Footnote 7: [https://sixsq.com/](https://sixsq.com/) Footnote 8: [https://open-cluster-management.io/](https://open-cluster-management.io/) Footnote 9: [https://fleet.rancher.io/](https://fleet.rancher.io/) Footnote 10: [https://docs.liqo.io/en/v0.7.2/](https://docs.liqo.io/en/v0.7.2/) The capacity to dynamically orchestrate such deployed workloads is also pursued in the literature. It seems no longer possible to optimize energy efficiency, cost, carbon footprint, and other factors manually or with a rigid algorithm deciding where to deploy workloads [10]. In a multi-stakeholder, federated environment, assuming hierarchical control over every node in the continuum is unfeasible; therefore, autonomy is needed [11]. Thus, recent works indicate that flexible, robust, and intelligent solutions are needed that can promptly and autonomously manage workloads. Here is where AI-driven workload scheduling emerges as a very active research area, but with practical implementations still in their infancy. Narrow formulations of the task were proposed, based on the Function as a Service (FaaS) paradigm [12]. Otherwise, small lab demonstrators [13] or mostly theoretical proposals of dynamic scheduling of workloads to optimize energy are the main explored topics [14]. However, for the solution to orchestration and deployment to be applicable in a truly federated environment, it must be based on open standards and allow anyone to bring their own scheduler implementation that would still be compatible with the rest of the continuum. These capabilities are currently not present in state-of-the-art research. Managing data in a unified way across the continuum is also an open research topic. Diverse data types, formats, computing capabilities, or the usage of diverse tools exacerbate the complexity, requiring developers to have extensive knowledge of the underlying data sources, formats, APIs, permissions, reliability, and other details. The application must also be aware of the network environment, which can vary greatly between platforms. Commercial solutions for creating a "data fabric" exist, but focus on vendor-locked, cloud-only environments (e.g., IBM, K2View, Talend). In the literature, MEDAL [15] is a promising concept with no implementation yet, where data applications are managed with "Data Fibers"; however, it does not employ AI.
Some research projects such as RE4DY13 and aerOS14 propose "Virtual Data Containers" and "Data Fabric" concepts that are, for now, only theoretical and do not promise to support hyper-distributed, multi-stakeholder systems with all outlined requirements. Eclipse Zenoh15 is a relevant open-source project with a data management suite that includes its own networking layer and was tested in IoT use cases [16]. Regarding network abstraction for data services, PuzzleMesh [17] and SDFog-Mesh [18] can be found in the literature. The most significant advances in this aspect are in the field of cloud-native open solutions like Open Service Mesh16, Calico17, Cilium18, or flannel19, but those are limited to specific container frameworks (K8s), and do not address data, privacy, or governance concerns. There is currently no solution that supports data management and sovereignty in the continuum while acknowledging the privacy, networking, and holistic coverage needs. Footnote 14: [https://aeros-project.eu](https://aeros-project.eu) Footnote 15: [https://newsroom.eclipse.org/eclipse-newsletter/2021/july/eclipse-zenoh-edge-data-fabric](https://newsroom.eclipse.org/eclipse-newsletter/2021/july/eclipse-zenoh-edge-data-fabric) Footnote 16: [https://www.cncf.io/projects/open-service-mesh/](https://www.cncf.io/projects/open-service-mesh/) Footnote 17: [https://www.cncf.io/online-programs/calico-networking-with-ebpf/](https://www.cncf.io/online-programs/calico-networking-with-ebpf/) Footnote 18: [https://cilium.io/](https://cilium.io/) Footnote 19: [https://github.com/flannel-io/flannel](https://github.com/flannel-io/flannel) Footnote 20: [https://renode.io/](https://renode.io/) Lastly, growing attention is also being paid to making the continuum more secure and actionable by developers and users. Developing a cloud application is radically different from developing for IoT and everything in between, as are the execution environments, available software, network stacks, hardware capabilities, and developer tools. There exist several attempts at solving these problems, with the most prominent trend being to apply the highly successful cloud-native computing principles to the edge. However, this approach cannot scale to the smallest of devices due to hardware limitations. To counter this issue, the use of WebAssembly throughout the continuum is often cited as the most promising solution, but the technology is still in its early stages [5]. Realizing continuous integration / continuous deployment (CI/CD) in such a continuum is similarly challenging, with current solutions limited to only a small part of it (such as the plentiful cloud-oriented DevOps solutions [19] or Renode20 focusing on IoT devices), or functioning only within a single closed ecosystem (e.g., Azure DevOps21 for Azure IoT Edge). Security-wise, a promising trend is trusted execution environments (TEEs), which isolate code execution in hardware. However, the different proprietary TEE solutions pose portability challenges. Scontain [20] and Azure Sphere22 have proposed using confidential containers, but these solutions are platform-specific and can be resource-intensive.
Footnote 21: [https://learn.microsoft.com/en-us/azure/iot-edge/how-to-continuous-integration-continuous-deployment-classic?view=iotedge-1.4](https://learn.microsoft.com/en-us/azure/iot-edge/how-to-continuous-integration-continuous-deployment-classic?view=iotedge-1.4) Footnote 22: [https://azure.microsoft.com/en-us/products/azure-sphere/](https://azure.microsoft.com/en-us/products/azure-sphere/) Ideally, there should be a single streamlined execution environment with consistent interfaces enabling the same code to be run on any machine in the cognitive computing continuum, and that is what this work aims at proposing. ## III Proposed Architecture In this section a novel architecture is proposed, one that can comprehensively tackle the aforementioned challenges. The base idea for the architecture is "any code, anywhere", where computational workloads can be flexibly and intelligently scheduled on almost every device in the Cloud-Edge-IoT continuum, including the usually neglected resource-constrained devices. Figure 1 presents an overview of the proposed architecture.

Fig. 1: Overview of the proposed architecture.

The base building block of applications in the proposed concept is the unified compute module - a platform-agnostic software package that can run anywhere in the federated continuum. The modules can either utilize "classic" containerization, or the more portable, secure, and lightweight alternative of WebAssembly modules. The modules are then deployed in sandboxed environments using a variety of compatible runtimes. The continuum, naturally, consists of a wide variety of device types and execution environments, ranging from tiny IoT devices up to hyperscale data centres, and multiple CPU architectures (x86, Arm, RISC-V) [6, 21]. To efficiently manage and scale such overwhelmingly heterogeneous deployments, the architecture employs a decentralized approach to workload choreography and scheduling, using autonomous smart schedulers. The schedulers are responsible for managing the workloads in their domain (e.g., a data centre, an IoT deployment, an edge server) and use state-of-the-art AI techniques to optimize energy use, QoS, latency, etc. The schedulers can use AI techniques such as reinforcement learning to effectively adapt to environments where access to resources is determined by sophisticated and unpredictable factors. Hybrid AI techniques can also be used to fuse machine learning with semantic, robust knowledge about the environment [22, 23]. The schedulers choreograph the workloads and resources (compute, storage, sensors...) in their domain, assigning workloads to specific compute resources while ensuring the consistency of the application. A scheduler can either run the workload in its domain or offload it to peer nodes. The schedulers act and communicate using the open scheduler protocol, which enables third-party implementations for any current and future platforms. The nodes are in a network with a dynamically changing topology (hierarchical or peer-to-peer), forming the Cognitive Continuum Federation \(-\) capable of adapting its structure to the changing requirements and available resources. Communication between the workloads is handled by the end-to-end network and data fabric, enabling effortless communication with the rest of the system. The fabric abstracts away the underlying network and data exchange mechanisms.
Here, the unified API of the unified compute modules plays a crucial role, providing the developer with a consistent interface that works the same in the entire continuum. The data fabric has built-in provenance and active metadata tracking capabilities to cater for the requirements of data spaces. ### _Unified Compute Modules_ The proposition for composing and deploying workloads across the continuum revolves around the idea of the unified compute module (Fig. 2) - a software package that, in principle, can run on any device and platform in the continuum, irrespective of the CPU architecture or the operating system. The key technology behind this innovation is WebAssembly, a lightweight, universal binary format for application code that is then executed within isolated runtimes, giving much better security guarantees out-of-the-box, as compared to traditional containers. Two workload packaging formats are supported: WebAssembly and Open Container Initiative (OCI) containers23 (e.g., Docker), catering for a wider range of workloads. While WebAssembly is a much lighter paradigm, it is also limited in terms of supported interfaces and capabilities, and thus not all workloads can be easily converted to it, hence the need for containers. On the other hand, WebAssembly is improving rapidly and can support a much wider range of platforms [5, 6, 24, 25]. Footnote 23: [https://opencontainers.org/](https://opencontainers.org/) The unified API gives the applications robust access to the rest of the platform's capabilities (e.g., the network and data fabric), regardless of where in the continuum the workload is deployed. The unified API should be ported or given bindings for several programming languages, to make it applicable to as many applications as possible. ### _Network and Data Fabric_ The end-to-end network and data fabric (Fig. 3) is to be capable of connecting data services, abstracting the underlying complexity of trust, sovereignty, network connectivity, data types, and formats. To achieve the latter, the nodes in the fabric shall seamlessly exchange representations of metadata, data producers, and data consumers, in the form of knowledge graphs, using portable protocols (e.g., Jelly [26]). The graphs will be explicitly aligned with active metadata (description of the data) that will be governed by flexible rules and standards (e.g., the NGSI-LD API). Active metadata implies that self-description of data will be facilitated by AI mechanisms, ontologies, and annotations, introducing automated data integration. Achieving seamless knowledge sharing between nodes (both peer-to-peer and in hierarchical domains) will allow producers and consumers (services) to be decoupled from specific formats, types, conversions, and topologies. In addition, data exchange will follow publish/subscribe mechanisms (similar to the MQTT protocol) supporting reliability, dynamic discovery, data cataloguing, and ownership. The Context Broker (assisted by intermediate data aggregation) will expose data to consumers. This will also make it possible to deploy data spaces principles in the data fabric. Data will, by default, remain in every node's scope if this is what the application requires. Shared metadata will also be used to facilitate connections between consumers and producers' data storage facilities. The data fabric will then enable on-demand data retrieval based on policies and security credentials.
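To make the producer-consumer decoupling described above more concrete, the following is a minimal sketch of how active metadata could let a consumer discover a producer by semantic description rather than by format or address. All names (`ProducerRecord`, `MetadataRegistry`, the ontology terms) are hypothetical illustrations, not an API defined by this proposal:

```python
# Minimal sketch of active-metadata matchmaking in a data fabric.
# All class and field names here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ProducerRecord:
    node: str                                  # continuum node hosting the producer
    topic: str                                 # pub/sub topic (MQTT-style)
    terms: set = field(default_factory=set)    # ontology terms annotating the data
    fmt: str = "json"                          # wire format, hidden from consumers

class MetadataRegistry:
    """Registry of producer self-descriptions (in-memory stand-in for a
    distributed, knowledge-graph-based registry)."""
    def __init__(self):
        self._records = []

    def announce(self, rec: ProducerRecord):
        self._records.append(rec)              # a real fabric would gossip this to peers

    def discover(self, wanted_terms: set):
        # Consumers ask for *meaning* (ontology terms), not formats or endpoints.
        return [r for r in self._records if wanted_terms <= r.terms]

registry = MetadataRegistry()
registry.announce(ProducerRecord("edge-7", "plant/line1/temp",
                                 {"Temperature", "Sensor", "Celsius"}))
registry.announce(ProducerRecord("iot-42", "plant/line1/vib",
                                 {"Vibration", "Sensor"}))

# A consumer service looking for any temperature stream, wherever it lives:
for rec in registry.discover({"Temperature", "Sensor"}):
    print(f"subscribe to {rec.topic} on {rec.node}")   # conversion is the fabric's job
```

In the envisioned fabric, the registry would be distributed among the nodes, the records would be knowledge-graph fragments, and the subscription itself would be routed through the secure tunnels of the network fabric.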
On another note, active metadata will be connected to the governance solution in the data fabric, including data source and endpoint mapping to security credentials and privacy labels. This directly links with another feature included in the data fabric: trust and governance. A self-sovereign identity management structure will be channeled through federated IdMs that can reside on different domains and be owned by different stakeholders. The end-to-end network fabric for data services is also needed to realize the presented vision. Inspiration can be drawn from current trends such as eBPF24 connectivity of services (tied to K8s deployments) and the Istio ambient mesh25. The proposed concept relies on the creation of secure tunnels (waypoints) connecting data services among themselves, based on service names. The network fabric can also include the automated management of pipelines, drawing from previous concepts and the potential transfer of Apache StreamPipes26 or Kiali's approaches27. Together, the network and data fabric will enable data plane management across the continuum, focusing on simplified, automated operations, compatibility, agility, and reduced costs.

Fig. 2: The unified compute module concept.

Footnote 27: [https://kiali.io/](https://kiali.io/) ### _Smart Schedulers_ The proposed architecture aims to choreograph the resources of continuum ecosystems formed by the cloud, edge computing infrastructures (far, near), and IoT devices with different CPU architectures (including RISC-V), as well as operating systems (Fig. 4). Here, choreography diverges from classic orchestration, as modern applications should not only rely on a central orchestrator to be deployed and to function [27]. They must have the capacity to act independently and thus be able to better adapt to the changing resources and requirements. The technical proposal here is two-fold: (1) to establish a federation choreography framework that will manage resource and service descriptions along with their proposed allocation, and (2) to rely on open, smart schedulers living on each node that will be responsible for deciding whether to run or offload workloads, in a decentralized way. Through the first, the architecture will handle complex applications as workflows of stateless and stateful workloads, aiming to meet user-defined KPIs. It will implement an abstraction layer to describe the characteristics of custom applications composed of unified compute modules. Similarly, all available resources belonging to the continuum and their characteristics (including current performance, availability, trustworthiness, etc.) will be registered together with the data sources (e.g., sensors). This continuously updated registry will be dynamic and distributed among the nodes so that the federated entities (nodes) will have information about each other (relaxing the need for centralization). Regarding deployment, the framework will propose an initial deployment based on policy-like KPIs tailored to the application (eco-efficiency, performance, reliability, quality of service, etc.), aligning the application's requirements with available resources. AI will make the framework cognitive in three ways: (i) by interpreting and predictively managing the KPIs of the applications, (ii) by optimizing workload distribution based on KPIs, prior knowledge, and heuristics, and (iii) by predicting application behaviour based on trends to make adjustments pre-emptively.
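As an illustration of the kind of decision logic a smart scheduler could embody, here is a deliberately simplified run-or-offload sketch. The KPI weights, node attributes, and greedy scoring rule are invented for illustration; under the proposed protocol, any AI policy (reinforcement learning, rule engines, hybrid methods) could take the place of the `score` function:

```python
# Toy run-or-offload decision for a smart scheduler node (illustrative only).
from dataclasses import dataclass

@dataclass
class NodeState:
    name: str
    free_cpu: float        # fraction of CPU currently free (0..1)
    energy_cost: float     # relative energy cost at this node
    latency_ms: float      # network latency to the workload's data source

@dataclass
class Workload:
    cpu_demand: float      # fraction of one node's CPU needed
    latency_budget_ms: float

def score(node: NodeState, wl: Workload) -> float:
    """Lower is better: penalize energy and latency; reject infeasible placements."""
    if node.free_cpu < wl.cpu_demand or node.latency_ms > wl.latency_budget_ms:
        return float("inf")
    return 0.6 * node.energy_cost + 0.4 * node.latency_ms / wl.latency_budget_ms

def run_or_offload(me: NodeState, peers: list, wl: Workload) -> str:
    best = min([me] + peers, key=lambda n: score(n, wl))
    return "run locally" if best is me else f"offload to {best.name}"

me = NodeState("edge-3", free_cpu=0.2, energy_cost=0.9, latency_ms=5)
peers = [NodeState("cloud-a", 0.8, 0.4, 40), NodeState("edge-9", 0.6, 0.7, 8)]
print(run_or_offload(me, peers, Workload(cpu_demand=0.5, latency_budget_ms=20)))
# -> "offload to edge-9": edge-3 lacks free CPU, cloud-a misses the latency budget.
```

The point of the open scheduler protocol is precisely that the federation only standardizes the interactions (announce, negotiate, offload), while each stakeholder remains free to implement its own scoring or learning strategy.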
At runtime, all monitored data will be used by the choreography framework, through AI, to learn the relationship between the behaviour of applications and their resources, and between an application's characteristics and its KPIs. The AI will then issue recommendations and imperative requests to adjust the deployment. The second part of the technical proposition starts with designing a protocol for the open schedulers that will define the types of interactions the schedulers can engage in. The protocol will make few assumptions on the topology and authority structure, focusing on formulating an open specification that developers (including third parties) may use to build their own implementations. The schedulers will be able to use any AI methods, classical or machine learning (from deep neural networks to rule engines), to decide whether to take over the workload or to offload it to a peer within the federation. The open and decentralized-by-design protocol will facilitate the development of novel AI methods for continuum governance. To help accelerate adoption, a compatibility test kit will be provided that will check whether a given scheduler implementation is compatible with the specification. The creation of the protocol should follow a process that involves the broader community (research and industry included), eliciting crucial feedback. ## IV Conclusion and Future Work The proposed architecture attempts to tackle the very relevant problem of fully exploiting the potential of the federated Cloud-Edge-IoT continuum. The ever-increasing heterogeneity of computational resources (emerging CPU architectures, different operating systems, networking capabilities, etc.) requires a fundamental change in how modern applications are designed. In the presented concept, this is embodied in the use of lightweight and portable WebAssembly modules, which can be easily deployed anywhere in the continuum. This flexibility is enabled in part by the unified API for modules.

Fig. 3: The network and data fabric.

Fig. 4: Smart schedulers cooperating via the open scheduler protocol.

In this setting, the applications communicate via the cross-tier network and data fabric, greatly simplifying building the application, data governance, schema management, and more. Finally, to allow the workloads to be scheduled and adjusted, the smart schedulers are proposed, along with the open scheduler protocol. The protocol and the schedulers are what enables the vision of truly decentralized, autonomous choreography in the federated continuum. Although the concept is promising, a significant amount of work is required to fully realize it. The unified compute module and unified API vision is already partially realized, with the community's work underway on integrating WebAssembly with orchestration frameworks, container runtimes, and standardized interfaces such as WASI. However, these efforts are still at an early stage. For the network and data fabric, several of the required components were already demonstrated in some form in past research. However, the solutions are not integrated yet and largely not production-ready. For the smart schedulers, although a lot of research was published on the subject of AI-driven orchestration, few of these designs were implemented in practice. More work is especially required to standardize the scheduler's interfaces, aiming for an approach such as the proposed open scheduler protocol.
In all of these efforts, the practice of open standards-first, implementation-second should be followed, to ensure the developed solutions are modular and can be extended by third parties to suit their needs.
2304.14968
Motional effects in dynamics of fluorescence of cold atomic ensembles excited by resonance pulse radiation
We report the investigation of the influence of atomic motion on the fluorescence dynamics of a dilute atomic ensemble driven by resonant pulse radiation. We show that even for sub-Doppler temperatures, the motion of atoms can significantly affect the nature of both superradiance and subradiation. We also demonstrate that, in the case of an ensemble of moving scatterers, it is possible to observe a nonmonotonic time dependence of the fluorescence rate. This leads to the fact that, in certain time intervals, an increase in temperature causes not a decrease but an increase of the fluorescence intensity in the cone of coherent scattering. We have analyzed the role of the frequency diffusion of secondary radiation as a result of multiple light scattering in an optically dense medium. It is shown that spectrum broadening is the main factor which determines radiation trapping upon resonant excitation. At later times, after the trapping stage, the dynamics is dominated by close pairs of atoms (dimers). The dynamics of the excited states of these dimers has been studied in detail. It is shown that the change in the lifetime of a given adiabatic term of the diatomic quasi-molecule induced by the change in the interatomic distance, as well as possible non-adiabatic transitions between sub- and superradiant states caused by atomic motion, can lead not to the anticipated weakening of the subradiation effect but to its enhancement.
A. S. Kuraptsev, I. M. Sokolov
2023-04-28T16:43:45Z
http://arxiv.org/abs/2304.14968v1
Motional effects in dynamics of fluorescence of cold atomic ensembles excited by resonance pulse radiation ###### Abstract We report the investigation of the influence of atomic motion on the fluorescence dynamics of a dilute atomic ensemble driven by resonant pulse radiation. We show that even for sub-Doppler temperatures, the motion of atoms can significantly affect the nature of both superradiance and subradiation. We also demonstrate that, in the case of an ensemble of moving scatterers, it is possible to observe a nonmonotonic time dependence of the fluorescence rate. This leads to the fact that, in certain time intervals, an increase in temperature causes not a decrease but an increase of the fluorescence intensity in the cone of coherent scattering. We have analyzed the role of the frequency diffusion of secondary radiation as a result of multiple light scattering in an optically dense medium. It is shown that spectrum broadening is the main factor which determines radiation trapping upon resonant excitation. At later times, after the trapping stage, the dynamics is dominated by close pairs of atoms (dimers). The dynamics of the excited states of these dimers has been studied in detail. It is shown that the change in the lifetime of a given adiabatic term of the diatomic quasi-molecule induced by the change in the interatomic distance, as well as possible non-adiabatic transitions between sub- and superradiant states caused by atomic motion, can lead not to the anticipated weakening of the subradiation effect but to its enhancement. pacs: 31.70.Hq, 32.70.Jz, 42.50.Ct, 42.50.Nn ## I Introduction Atomic ensembles cooled to sub-Doppler temperatures in special traps are currently of great interest both because of the number of their unique physical properties, and because of the wide range of their possible practical applications in problems of quantum metrology, frequency standardization, and quantum information applications [1; 2; 3]. Almost all proposed schemes for the use of cold atomic ensembles, as well as most diagnostic methods, are based on the interaction of these ensembles with electromagnetic radiation. This interaction has a number of features associated with collective polyatomic effects due to the low speed of the atoms. These effects are due, first, to the large resonant cross sections for light scattering by each separate atom and, consequently, to the large optical depth of the ensembles even at low atomic densities. The second reason is random spatial disorder, which makes possible the formation of atomic clusters, or quasi-molecules, consisting of several atoms randomly located at distances of the order of the resonance radiation wavelength from each other. Dipole-dipole interatomic interaction causes the formation of collective sub- and superradiant states, which can essentially affect the optical properties of cold gases. The main approach to the description of collective effects is now the so-called method of coupled oscillators. To date, several variants of this method have been developed [4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20]. The main difficulty in using this method is accounting for the motion of atoms in real physical systems. Therefore, in the overwhelming majority of works, the approximation of fixed scatterers is used. The displacement of atoms is taken into account by averaging the observables over a random spatial distribution of atoms. In papers [21; 22; 23] an attempt was made to refine the immobile atom approximation.
In the refined model, the Doppler shift was modeled by introducing a random shift in the frequencies of atomic transitions, different for different atoms. The effect of continuous displacement of atoms in dilute media was considered in the framework of the scalar approximation in [24]. A more detailed experimental and theoretical analysis is given in [25]. The main result of this work was the assertion that subradiative states are sufficiently resistant to thermal decoherence at the temperatures of a magneto-optical trap (MOT). Similar stability is predicted up to temperatures on the order of a millikelvin. We came to a different conclusion in our group when considering dense atomic ensembles with a strong dipole-dipole interatomic interaction [26]. For clouds in which the average interatomic distance is comparable with the wavelength of resonant radiation, we observed the destruction of subradiative states even at temperatures several times lower than the typical MOT temperatures. The essential influence of motion on another collective effect, the effect of single-photon superradiance, was discovered in the framework of the study of the flash effect in the works [27; 28]. Here, in particular, it was shown that the superradiance rate in the direction of the exciting pulse increases upon heating. For a flat layer of atoms, for an infinitesimal time interval after the end of the excitation pulse, it was even possible to obtain analytical expressions confirming this growth. At the same time, theoretical studies of superradiance outside the cone of coherent forward scattering, carried out in [29], led to opposite conclusions. Heating manifests itself in a negative way, weakening the superradiance in these directions. Thus, the available data indicate the complex nature of the influence of atomic motion on collective optical effects. This influence depends both on the nature of the effect and on the conditions of observation. The main purpose of this work is to study the unexplored case of dilute atomic ensembles cooled to sub-Doppler temperatures and excited by resonance pulse radiation. Within the framework of a unified approach based on the coupled oscillator method accounting for continuous displacement, we will consider atomic fluorescence in a wide time interval and study the features of both superradiance and subradiation at different temperatures. We will show that even when the characteristic Doppler frequency shifts are smaller than the natural width of atomic transitions, motion can significantly affect the fluorescence dynamics. ## II Basic assumptions and approach In our theoretical description of time-dependent fluorescence we use the coupled dipoles model, which is traditional for this class of problems [4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20]. We consider a disordered atomic ensemble of \(N\) identical two-level atoms. All atoms have a ground state \(\left|g\right\rangle\) with the total angular momentum \(J_{g}=0\), an excited state \(\left|e\right\rangle\) with \(J_{e}=1\), and a transition frequency \(\omega_{0}\); the natural lifetime of all excited Zeeman sublevels (\(m=-1,0,1\)) is \(\tau_{0}=1/\gamma\). Our specific calculations are based on the approach developed earlier in [12; 26]. In accordance with this approach we study the properties of a closed system consisting of all atoms and an electromagnetic field, including a vacuum reservoir.
We seek the wave function \(\psi\) of this system as an expansion over the eigenfunctions \(\psi_{l}\) of the Hamiltonian of noninteracting atoms and light, \(\psi=\sum_{l}\beta_{l}\psi_{l}\). Assuming that the exciting radiation is weak, which is typical in experiments [30; 31], we take into account only states with no more than one photon in the field. In such a case, for the amplitudes \(\beta_{e}\) of one-fold excited atomic states \(\psi_{e}=\left|g\cdots e\cdots g\right\rangle\) we have the following differential equations \[\frac{\partial\beta_{e}}{\partial t}=\left(i\delta-\frac{\gamma}{2}\right) \beta_{e}-\frac{i\Omega_{e}}{2}+\frac{i\gamma}{2}\sum_{e^{\prime}\neq e}V_{ ee^{\prime}}\beta_{e^{\prime}}. \tag{1}\] Here, the index \(e\) shows both the number of the atom which is excited in the state \(\psi_{e}=\left|g\cdots e\cdots g\right\rangle\) and the specific Zeeman sublevel populated in this state; \(\Omega_{e}\) is the Rabi frequency of the external laser field at the point where atom \(e\) is located, and \(\delta\) is the detuning of the field from the resonance atomic frequency. The last term in Eq. (1) corresponds to the dipole-dipole interatomic interaction and is responsible for collective effects in the considered atomic ensemble. The matrix \(V_{ee^{\prime}}\) is \[\begin{split} V_{ee^{\prime}}&=-\frac{2}{\gamma} \sum_{\mu,\nu}\mathbf{d}_{eg}^{\mu}\mathbf{d}_{ge^{\prime}}^{\nu}\frac{e^{ik_{ 0}r_{ij}}}{\hbar r_{ij}^{3}}\\ &\times\left\{\delta_{\mu\nu}\left[1-ik_{0}r_{ij}-\left(k_{0}r_{ ij}\right)^{2}\right]\right.\\ &-\left.\frac{\mathbf{r}_{ij}^{\mu}\mathbf{r}_{ij}^{\nu}}{r_{ij}^ {2}}\left[3-3ik_{0}r_{ij}-\left(k_{0}r_{ij}\right)^{2}\right]\right\}.\end{split} \tag{2}\] Here we assume that in the states \(e\) and \(e^{\prime}\) atoms \(i\) and \(j\) are excited; \(\mathbf{d}_{eg}\) is the matrix element of the dipole moment operator for the transition \(g\to e\), \(\mathbf{r}_{ij}=\mathbf{r}_{i}-\mathbf{r}_{j}\), \(r_{ij}=\left|\mathbf{r}_{i}-\mathbf{r}_{j}\right|\), and \(k_{0}=\omega_{0}/c\) is the wavenumber associated to the transition, with \(c\) the vacuum speed of light. The indexes \(\mu\) and \(\nu\) denote projections of vectors on the axes of the reference frame. From the values of \(\beta_{e}(t)\) computed on the basis of Eq. (1) we can find the amplitudes of all other states which determine the wave function \(\psi\) (for more detail see [12]), which in its turn gives us information about the properties of the secondary radiation as well as about the properties of the atomic ensemble. In particular, the intensity \(I_{\alpha}(\mathbf{\Omega},t)\) of the light polarization component \(\alpha\) that the atoms scatter in a unit solid angle around the direction of the wave vector \(\mathbf{k}\) determined by the radius-vector \(\mathbf{r}\) (\(\mathbf{\Omega}=\theta,\varphi\)) reads \[I_{\alpha}(\mathbf{\Omega},t) =\frac{c}{4\pi}\left\langle\psi\right|E_{\alpha}^{(-)}(\mathbf{r} )E_{\alpha}^{(+)}(\mathbf{r})\left|\psi\right\rangle r^{2} \tag{3}\] \[=\frac{c}{4\pi}\left|k_{0}^{2}\sum_{e}\left(\mathbf{u}_{\alpha}^ {*}\mathbf{d}_{ge}\right)\beta_{e}(t)\exp\left(-i\mathbf{k}\mathbf{r}_{i} \right)\right|^{2}.\] Here \(E_{\alpha}^{(\pm)}(\mathbf{r})\) are the positive and negative frequency parts of the electric field operator; \(\mathbf{u}_{\alpha}\) is the unit polarization vector of the scattered light. In this paper, while analyzing the role of atomic motion, we will not conduct a detailed study of the angular distribution of fluorescence.
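Before turning to temperature effects, the structure of the model is easy to see in code. Below is a minimal numerical sketch of a scalar analogue of Eqs. (1)-(2), which drops the vector and Zeeman structure of Eq. (2) and replaces the full kernel by the scalar one \(\exp(ik_{0}r_{ij})/(k_{0}r_{ij})\); all parameter values are illustrative, units are \(\gamma=1\), \(k_{0}=1\), and the atoms are kept immobile (motion would enter by updating the positions, and hence \(V\), along the classical trajectories defined in the next paragraph):

```python
# Scalar coupled-dipole sketch: a simplified analogue of Eqs. (1)-(2).
# Units: gamma = 1, k0 = 1; atoms immobile; parameters illustrative.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(1)
N, L, delta = 50, 20.0, 0.0             # atoms, cube edge (units of 1/k0), detuning
pos = rng.uniform(0, L, size=(N, 3))    # random homogeneous positions
r = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
np.fill_diagonal(r, np.inf)             # no self-interaction
V = np.exp(1j * r) / r                  # scalar dipole-dipole kernel exp(i k0 r)/(k0 r)
Omega = 0.01 * np.exp(1j * pos[:, 2])   # weak resonant plane wave along z

def rhs(t, beta, driven):
    drive = -0.5j * Omega if driven else 0.0
    return (1j * delta - 0.5) * beta + drive + 0.5j * (V @ beta)

# Drive for T = 50/gamma, then follow the free decay of the excited population.
sol_on = solve_ivp(rhs, (0, 50), np.zeros(N, complex), args=(True,), rtol=1e-8)
sol_off = solve_ivp(rhs, (0, 30), sol_on.y[:, -1], args=(False,),
                    t_eval=np.linspace(0, 30, 301), rtol=1e-8)
P_ex = np.sum(np.abs(sol_off.y) ** 2, axis=0)   # total excited population, cf. Eq. (4) below
Gamma = -np.gradient(np.log(P_ex), sol_off.t)   # instantaneous decay rate
print(Gamma[:5])   # values > 1 at early times indicate superradiant decay
```

Averaging such runs over random positions (and, for moving atoms, over velocities) yields the ensemble-averaged fluorescence quantities discussed below.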
The main attention will be paid to the study of the influence of temperature on the dynamics of the total radiation of the ensemble. This value can be obtained by integrating the expression (3) over the total solid angle and summing the contributions of the various polarization components. It can also be calculated on the basis of the law of energy conservation, taking into account that the total radiation energy is equal to the decrease in the excitation energy of the atomic system, i.e. it can be determined from the rate of decrease in the total population \(P_{ex}(t)\) of the excited states of all atoms. The latter can be found as follows \[P_{ex}(t)=\sum_{e}\left|\beta_{e}(t)\right|^{2}. \tag{4}\] In the next section, based on the relations (1)-(4), we will calculate the rate of decay of the total radiation intensity of an ensemble of moving atoms at different temperatures. We will look for a non-stationary solution to Eq. (1), taking into account the displacement of atoms with time explicitly. We will consider the temperature ranges typical for a MOT and higher, at which the momenta of the atoms are much greater than the momentum of a photon. For this reason, and also taking into account the weakness of the excitation, we will not take into account recoil effects and will describe the motion as classical uniform and rectilinear motion, \(\mathbf{r}_{i}=\mathbf{r}_{i0}+\mathbf{v}_{i}(t-t_{0})\). In order not to take into account the departure of atoms from the considered volume and the associated change in their density, we will assume that the volume of the cloud is surrounded by imaginary surfaces that scatter the atoms elastically without modification of their internal states. For simplicity, we consider an ensemble having the shape of a cube with edge equal to \(L\). The distribution of atoms at the initial time \(t=t_{0}\) is considered random, but spatially homogeneous on average. The atomic medium is optically dense but dilute. The density of atoms \(n\) in all calculations will be the same, \(nk_{0}^{-3}=0.005\). The average distance between atoms in this case exceeds the wavelength of quasi-resonant radiation. The velocities of the atoms at \(t=t_{0}\) are also considered as random variables. All their projections \(v_{\mu}\) are assumed to be distributed according to the Gaussian law \[f(v_{\mu})=1/\sqrt{2\pi v_{0}^{2}}\exp(-v_{\mu}^{2}/2v_{0}^{2}). \tag{5}\] The dispersion of the velocities \(v_{0}\) and the wave number \(k_{0}\) determine the Doppler broadening of the line, \(\Delta_{D}=2\sqrt{2\ln 2}k_{0}v_{0}\). All fluorescence parameters calculated in this paper will be obtained by averaging over the random variables \(\mathbf{r}_{i0}\) and \(\mathbf{v}_{i}\). The radiation pulse that excites fluorescence will be considered rectangular; its carrier frequency is resonant with the transition in a free atom (\(\delta=0\)). For definiteness, we choose it to be right-circularly polarized. ## III Results As the main quantity characterizing the dynamics of the fluorescence, we will use the current (instantaneous) radiation delay time \(\tau(t)=1/\Gamma(t)\), where \(\Gamma(t)=-d\ln(I(t))/dt\), and \(I(t)\) is the total intensity of the secondary radiation of the atomic ensemble. The dependence \(\tau(t)\) after the end of the excitation pulse with the duration \(\gamma T=50\) at different temperatures of the ensemble containing \(N=625\) atoms is shown in Fig. 1. As for immobile atoms [32; 33], several characteristic stages of fluorescence can be distinguished.
First, at times \(t<1/\gamma\) after the end of the excitation pulse, the superradiance effect is observed. The dependence \(\tau(t)\) at this stage is shown on an enlarged scale in the inset to Fig. 1. The decay rate \(\Gamma(t)\) here is greater than the natural width \(\gamma\), and \(\gamma\tau(t)<1\). Then comes the stage of radiation trapping, which is due to the diffusion of photons in an optically dense medium. It can be divided into two parts. Initially, the decay rate decreases, and the trapping time increases. Here, radiation diffusion is described by multimode dynamics. Further, the diffusion regime becomes single-mode, when the afterglow decay is described with good accuracy by a single-exponential law. This regime corresponds to rectilinear, almost horizontal segments on the \(\tau(t)\) curves. Finally, after the one-exponential phase, a noticeable increase in the trapping time \(\tau(t)\) and a decrease in the decay rate are observed. Here we are dealing with the radiation of clusters randomly formed in the considered disordered ensemble. These clusters have long-lived states that are responsible for the "classical" subradiation process predicted by Dicke [34]. Next, we consider in more detail those features of the fluorescence dynamics that result from taking into account the motion of atoms at each of these main stages.

Figure 1: Dynamics of instantaneous fluorescence delay time at various \(k_{0}v_{0}\) (temperatures). The number of atoms is \(N=625\). Excitation pulse duration is \(\gamma T=50\).

### Influence of motion on the nature of single-photon superradiance As already mentioned, the effect of single-photon superradiance has been studied in sufficient detail. The dynamics of fluorescence in the cone of coherent forward scattering has been studied in particular detail. In particular, in the experiment [27] it was found that the rate of superradiance in this direction increases with heating. This unexpected effect has been explained as the result of the dephasing effect from the motion of the atoms [28]. In the model of a flat layer, infinite in the transverse direction, for the rate of the initial stage corresponding to the time \(t=0^{+}\) immediately after the abrupt switching off of the excitation, an analytical expression is obtained, which in the case of resonant excitation has the form \[\Gamma(0^{+})=\frac{b_{0}\gamma}{2(1-\exp(-b(v_{0})/2))}. \tag{6}\] Here \(b(v_{0})\) and \(b_{0}\) are the optical thicknesses of the medium at a given temperature and at a temperature tending to zero, respectively. Fig. 2 shows the results of our numerical calculation for a cloud of finite size. Here, the dependence on temperature (more precisely, on the value of \(k_{0}v_{0}\)) of the average fluorescence rate \(\Gamma\) is shown. Averaging was carried out for a finite time interval of duration \(0.01\,\gamma\) after the end of the excitation pulse. The black solid line was obtained for the total radiation intensity over all directions and polarizations. On a different scale, this curve corresponds to the region of small times in Fig. 1. The temperature dependence of \(\Gamma\) for radiation in the forward scattering lobe is shown in Fig. 2 by the red dotted line. In the considered range of parameters, this fluorescence rate is more than one and a half times higher than the average in all directions. The red dash-dotted line is calculated based on Eq. (6).
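The flat-layer prediction of Eq. (6) is simple enough to check numerically. The short sketch below (with purely illustrative thickness values, \(\gamma=1\)) evaluates \(\Gamma(0^{+})\) for a fixed zero-temperature thickness \(b_{0}\) while the thermal thickness \(b(v_{0})\) decreases, mimicking the effect of Doppler broadening on heating:

```python
# Quick numerical check of Eq. (6): the t = 0+ forward decay rate for a flat
# layer under resonant excitation. Units: gamma = 1; thickness values illustrative.
import numpy as np

def gamma_flash(b0, b_v):
    """Eq. (6): Gamma(0+)/gamma for zero-T thickness b0 and thermal thickness b(v0)."""
    return b0 / (2.0 * (1.0 - np.exp(-b_v / 2.0)))

b0 = 10.0
for b_v in (10.0, 8.0, 6.0):       # Doppler broadening reduces b(v0) on heating
    print(f"b(v0) = {b_v}: Gamma(0+) = {gamma_flash(b0, b_v):.2f} gamma")
# The rate grows as b(v0) shrinks: the forward flash speeds up on heating.
```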
It can be seen that the flat layer model reproduces the qualitative temperature dependence of \(\Gamma(t)\) quite well, but leads to noticeable quantitative differences for the scatterer ensemble with finite transverse dimensions. Numerical analysis of the forward fluorescence process revealed an important feature of the time dependence of \(\Gamma(t)\) on time intervals of the order of the natural lifetime of an atomic excited state. Accounting for motion leads to a qualitative change in the dynamics of fluorescence. Its rate is not maximal at \(t=0^{+}\). After the excitation is turned off, it changes nonmonotonically (see Fig. 3). It can even change sign, i.e., at certain time intervals, the intensity of secondary radiation in the coherent forward lobe does not decrease, but increases. The rate \(\Gamma\) reaches its maximal negative values at \(k_{0}v_{0}\sim\gamma\). To the best of our knowledge, this effect has not been observed for immobile atoms. In our opinion, the oscillation in the afterglow of the atomic ensemble is connected with quantum beats and is caused by the interference of light scattered through different collective states [26; 35].

Figure 2: Dependence of the average fluorescence rate \(\Gamma\) in the time interval \(\Delta t=0-0.01\gamma\) on temperature (\(k_{0}v_{0}\)). Number of atoms is \(N=625\), density is \(nk_{0}^{-3}=0.005\). The black solid line corresponds to the total radiation intensity in all directions and polarizations, the red dotted line corresponds to forward radiation, in the direction of the wave vector of the exciting light. The red dash-dotted line is calculated based on Eq. (6).

Figure 3: Time dependence of the fluorescence rate in the forward direction \(\Gamma(t)\) at different temperatures (\(k_{0}v_{0}\)).

### Influence of motion on diffusion trapping As follows from Fig. 3, for the ensembles under study, the transient processes end at the time \(\gamma t\sim 5-6\) after the superradiance stage. Then the trapping stage begins. Here the trapping time changes slightly with temperature, which, in our opinion, is precisely what was observed in the experiment [25]. This result seems quite natural, since, as is known, the trapping time \(\tau_{d}\), given by the horizontal segment in Fig. 1, is proportional to the square of the optical thickness \(b\). For immobile atoms and clouds of large optical thickness \(b_{0}\gg 1\) this time is well described by the following simple relation \[\tau_{d}=\frac{3b_{0}^{2}}{\alpha\pi^{2}}\tau_{0}, \tag{7}\] where the parameter \(\alpha\) depends on the shape of the cloud. For a cubic volume \(\alpha=3\). The decrease in \(\tau_{d}\) is associated with a change in the mean free path due to the Doppler effect. The role of this effect is relatively small at sub-Doppler temperatures. However, numerical calculations show that this decrease turns out to be more significant than relation (7) predicts if \(b_{0}\) is replaced by \(b(v)\) for moving atoms. The solid and dotted black lines in Fig. 4 show the calculated dependences of \(\tau_{d}\) on the atomic velocity for two sizes of the atomic ensemble, \(k_{0}L=50\) and \(k_{0}L=60\). The density of atoms in both cases is the same and equal to \(nk_{0}^{-3}=0.005\). The red dash-dotted lines show how the time \(\tau_{d}\) would change if it were calculated using formula (7), taking into account the dependence of the optical thickness on temperature. For the convenience of comparison, the results calculated by Eq.
(7) were renormalized so that they coincided with the results of numerical calculations for immobile atoms. The need for renormalization is due to the fact that for not very large optical thicknesses, it is necessary to substitute into formula (7) not \(b_{0}\), but a slightly larger value. The difference arises because of the extrapolation length in the boundary conditions of the radiative diffusion equation [36]. Figure 4 demonstrates a noticeable discrepancy between the results of the two calculations, which increases with an increase in the size of the ensemble. With heating, the discrepancy decreases, which, however, does not indicate a better applicability of relation (7). On the contrary, with heating, the optical thickness decreases and the diffusion approximation ceases to work. It formally predicts that \(\tau_{d}\) tends to zero, while for optically thin media \(\tau_{d}\) tends to the natural lifetime of atoms. Our analysis shows that the detected discrepancy can be explained by the photon frequency drift during multiple scattering inside the cloud [37; 38]. In the multiple scattering regime, a photon acquires a random frequency shift of order \(k_{0}v_{0}\) at each scattering and its frequency performs a random walk in the frequency space. This frequency drift leads to the appearance of nonresonant photons, which have a large mean free path and, consequently, a shorter lifetime in the ensemble. We analyzed the role of frequency diffusion by calculating the shape of the secondary radiation spectrum. The broadening of the spectrum of the secondary radiation was determined by a short-term Fourier transform [39] with a rectangular window of duration \(\gamma\Delta t=30\). The center of the window was at times \(\gamma t=20\) after the end of the excitation pulse. The calculation results are shown in Fig. 5. After the end of the excitation, the atoms begin to radiate at their own frequency. At times when the main mechanism is diffusion radiation trapping, there is a noticeable broadening of the spectrum due to multiple scattering. An increase in the size of the cloud and an increase in temperature enhance the effect of frequency drift, which explains the observed acceleration of fluorescence. ### Influence of motion on subradiation of dimers The role of motion manifests itself most unexpectedly at the stage of subradiation of diatomic clusters. As can be seen from Fig. 1, for all considered temperatures the motion reduces the duration of the trapping stage, and also, at certain time intervals, leads not to a weakening, but to an increase in the subradiation effect.

Figure 4: Dependence of \(\tau_{d}\) on \(k_{0}v_{0}\) (temperature) for two sizes of the atomic ensemble, \(k_{0}L=50\) and \(k_{0}L=60\), respectively. The solid and dotted black lines give the results of the numerical calculation. The red dash-dotted lines are drawn on the basis of Eq. (7).

The influence of dimers begins to dominate when the diffusion stage is completed. For the conditions for which Fig. 1 is drawn, this is the case for comparatively long times. The relative role of clusters can be enhanced if the diffusion effect is weakened. This can be done by reducing the optical thickness, since the influence of dimers then does not need to be revealed against the background of diffusion trapping. This is well demonstrated by Fig.
6, which shows the dependence \(\tau=\tau(t)\) for a fixed temperature corresponding to \(k_{0}v_{0}=0.025\gamma\) and a fixed atomic density \(nk_{0}^{-3}=0.005\), but for ensembles of different sizes having small optical thickness. Figure 6 shows another effect that appears when the motion is taken into account. For small systems, a nonmonotonic time dependence of the decay rate of the total fluorescence intensity is observed. At large times the curves \(\tau=\tau(t)\) for ensembles of different sizes go to the same asymptote. This is due to the fact that the characteristic lifetime of long-lived excited states of atomic clusters depends on the average distance between atoms in them and does not depend on the size of the ensemble itself. The main features of the influence of motion on cluster subradiation can be understood if we consider the temporal evolution of the excited state of a specific pair of atoms with a change in the distance between them. It is known that a system of two two-level atoms has six one-fold excited collective states. Two pairs of states are degenerate. The frequency shifts \(\Delta_{c}\) and widths \(\Gamma_{c}\) of the four distinct states of a stationary dimer can be found as follows \[\frac{\Delta_{c}}{\gamma}=\frac{3\epsilon}{4}\left(q\,\left(\frac{\cos(kr)}{( kr)^{3}}+\frac{\sin(kr)}{(kr)^{2}}\right)-\frac{p\cos(kr)}{kr}\right), \tag{8}\] \[\frac{\Gamma_{c}}{\gamma}=1-\frac{3\epsilon}{2}\left(q\,\left(\frac{\sin(kr)} {(kr)^{3}}-\frac{\cos(kr)}{(kr)^{2}}\right)-\frac{p\sin(kr)}{kr}\right),\] where \(\epsilon=\pm 1\); \(p_{0}=0\); \(q_{0}=-2\); \(p_{\pm 1}=1\); \(q_{\pm 1}=1\). Let us consider how the total intensity as well as the population of the excited state of a diatomic quasimolecule change with time if the atoms move and the dimer is excited when the interatomic distance is equal to a given \(r_{0}\). Figure 7 shows the evolution of the considered system for two cases. Curves 1 and 2 correspond to the excitation of the longest-lived state and the shortest-lived one at \(r_{0}\), respectively. For comparison, curve 3 in Fig. 7a depicts the decay of noninteracting atoms at a rate of \(\gamma\). The curves are calculated for \(r_{0}=3.5k_{0}^{-1}\). The distance of closest approach is \(r_{m}=0.1k_{0}^{-1}\), and the relative velocity of the atoms is \(k_{0}v=0.2\gamma\). Note that for the chosen conditions, the initially short-lived state (curve 2) becomes subradiant upon approach. At small interatomic distances the population of the excited state practically does not change. The radiation intensity decreases significantly. This manifests itself as a dip in curve 2 in Fig. 7b. After passing the point of closest approach, the radiation intensity increases. For the initially subradiant state, the picture is reversed. It decays very quickly when the atoms approach each other.

Figure 5: Broadening of the fluorescence emission spectrum upon heating. The ensemble size is \(k_{0}L=50\). The spectrum was calculated using the short-term Fourier transform. The center of the window corresponds to the time \(\gamma t=20\).

Figure 6: Dynamics of instantaneous fluorescence delay time for various numbers of atoms. \(k_{0}v_{0}=0.2\gamma\); \(nk_{0}^{-3}=0.005\).

For motionless atoms each eigenstate of a quasimolecule decays independently of the others. Therefore, when any one of them is excited, other states are not populated during the further evolution of the system. The subradiant state remains subradiant. This is not the case for moving atoms.
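Since Eq. (8) fully determines the shift and width of each branch of the stationary dimer, the distance dependence discussed here is easy to tabulate. The sketch below (units \(\gamma=1\), \(k=k_{0}\); the sample distance is illustrative) evaluates the four distinct branches and classifies each as sub- or superradiant:

```python
# Evaluating Eq. (8): shift Delta_c/gamma and width Gamma_c/gamma of the four
# distinct collective states of a stationary dimer. Units: gamma = 1, k = k0.
import numpy as np

def dimer_branch(kr, eps, p, q):
    """Return (Delta_c/gamma, Gamma_c/gamma) for one branch of Eq. (8)."""
    c, s = np.cos(kr), np.sin(kr)
    shift = 0.75 * eps * (q * (c / kr**3 + s / kr**2) - p * c / kr)
    width = 1.0 - 1.5 * eps * (q * (s / kr**3 - c / kr**2) - p * s / kr)
    return shift, width

kr = 0.5   # a close pair, k0 r < 1 (illustrative)
for eps, p, q in [(+1, 0, -2), (-1, 0, -2), (+1, 1, 1), (-1, 1, 1)]:
    d, g = dimer_branch(kr, eps, p, q)
    tag = "subradiant" if g < 1 else "superradiant"
    print(f"eps={eps:+d}, p={p}, q={q:+d}: Delta={d:+.2f}, Gamma={g:.3f} ({tag})")
# Gamma_c of a fixed branch oscillates about gamma as kr grows, so a state that
# is subradiant at one interatomic distance can be superradiant at another.
```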
The relation (8) describes the possibility of a subradiant state becoming superradiant even in the absence of transitions between different collective states. The decay rate of a given state, i.e., of a state with given \(\epsilon\), \(p\), and \(q\), varies nonmonotonically with \(r\) and can be either greater or less than \(\gamma\). This means that a change in the fluorescence rate of a cluster can be observed even in the absence of nonadiabatic transitions between its different states.

When atoms move, transitions between different collective states are also possible, which have an additional effect on the radiation dynamics. Such transitions are shown in Fig. 8. Shown here are the relative populations of the four distinct states for different geometries of a diatomic quasi-molecule. It is assumed that one of the atoms is immobile and located at the origin of coordinates. The second atom moves parallel to the \(z\) axis. At the initial moment of time, it was at the point \(k_{0}z_{0}=-3\), \(k_{0}x_{0}=1\), \(k_{0}y_{0}=0\). At this moment, the system is excited to the state which has the shift and width given by the formula (8) with \(\epsilon=1\), \(p=0\) and \(q=2\). The speed of the atom is \(k_{0}v=0.05\gamma\). The transitions between different eigenstates of a diatomic quasimolecule are clearly visible. Note that for the parameters under consideration, after the scattering of the atoms, the most populated state is the longest-lived one, with \(\epsilon=-1\), \(p=0\), and \(q=2\). We checked that this last result is preserved regardless of which state was excited before the interatomic approach. In a real multiatomic cloud, laser radiation excites not one state, but a superposition of all possible states, and the nature of subradiation is determined by those of them that are subradiant at small interatomic distances. The other states decay rapidly and their population turns out to be low, which is manifested in the fluorescence of the ensemble as a whole.

Figure 7: a) Dynamics of the population of the excited state of a diatomic quasi-molecule with a change in the distance between atoms. b) Time dependence of the total radiation intensity. Curve 1 corresponds to the initial excitation of the longest-lived state at \(r_{0}=3.5k_{0}^{-1}\), curve 2 corresponds to the shortest-lived state. Curve 3 depicts the decay with a rate of \(\gamma\). The relative velocity of the atoms is \(k_{0}v=0.2\gamma\). The distance of closest approach is \(r_{m}=0.1k_{0}^{-1}\). The vertical line corresponds to the moment of closest approach of the atoms.

Figure 8: Dynamics of the relative population of various collective states of a diatomic quasi-molecule with a change in the distance between atoms. One of the atoms is stationary, the second one moves with the speed \(k_{0}v=0.05\gamma\) parallel to the \(z\) axis. 1 corresponds to \(\epsilon=1\), \(p=1\) and \(q=1\); 2: \(\epsilon=-1\), \(p=1\) and \(q=1\); 3: \(\epsilon=-1\), \(p=0\) and \(q=2\); 4: \(\epsilon=1\), \(p=0\) and \(q=2\).

## IV Conclusion

In the present work, we study the effect of atomic motion on the dynamics of fluorescence of dilute atomic ensembles excited by resonant pulsed radiation. This effect is analyzed for the three main stages of the fluorescence evolution: the stage of superradiance, the stage of diffuse trapping of radiation, and the stage when subradiance is determined by the emission of atomic clusters randomly formed in the considered disordered atomic medium.
It is shown that already for ensembles cooled to sub-Doppler temperatures, motion can significantly affect the nature of the considered collective effects. It is found that, in addition to an increase in the subradiation rate in the coherent forward scattering cone, heating leads to the appearance of a nonmonotonic time dependence of the radiation rate: at certain time intervals, the decay of fluorescence in this direction can be replaced by its increase.

At the trapping stage, the main factor affecting the fluorescence rate is the diffusion of the secondary radiation frequency as a result of multiple scattering of light in an optically dense medium. We studied the fluorescence spectrum and revealed its significant broadening upon heating of the ensemble.

The most interesting results were found for the subradiation of diatomic quasimolecules. In the temperature range corresponding to the MOT, the subradiation effect is enhanced for moving atoms. This effect is explained by the action of two factors: first, a change in the decay rate of each of the eigenstates of a quasimolecule with a change in the distance between atoms, and, second, possible nonadiabatic transitions between different sub- and superradiant states due to the motion of atoms.

## V Acknowledgments

The research was supported by a grant from the Foundation for the Development of Theoretical Physics and Mathematics "BASIS". The results of the work were obtained using the computing resources of the supercomputer center of Peter the Great St. Petersburg Polytechnic University (http://www.spbstu.ru).
2306.02973
Sign changing bubble tower solutions to a slightly subcritical elliptic problem with non-power nonlinearity
We study the following elliptic problem involving slightly subcritical non-power nonlinearity $$\left\{\begin{array}{lll} -\Delta u =\frac{|u|^{2^*-2}u}{[\ln(e+|u|)]^\epsilon}\ \ &{\rm in}\ \Omega, \\[2mm] u= 0 \ \ & {\rm on}\ \partial\Omega, \end{array} \right.$$ where $\Omega$ is a bounded smooth domain in $\mathbb{R}^n$, $n\geq 3$, $2^*=\frac{2n}{n-2}$ is the critical Sobolev exponent, $\epsilon>0$ is a small parameter. By the finite dimensional Lyapunov-Schmidt reduction method, we construct a sign changing bubble tower solution with the shape of a tower of bubbles as $\epsilon$ goes to zero.
Shengbing Deng, Fang Yu
2023-06-05T15:36:24Z
http://arxiv.org/abs/2306.02973v1
Sign changing bubble tower solutions to a slightly subcritical elliptic problem with non-power nonlinearity ###### Abstract. We study the following elliptic problem involving a slightly subcritical non-power nonlinearity \[\left\{\begin{aligned} &-\Delta u=\frac{|u|^{2^{*}-2}u}{|\ln(e+|u|)|^{\varepsilon}}&\text{in }\Omega,\\ & u=0&\text{on }\partial\Omega,\end{aligned}\right.\] where \(\Omega\) is a bounded smooth domain in \(\mathbb{R}^{n}\), \(n\geq 3\), \(2^{*}=\frac{2n}{n-2}\) is the critical Sobolev exponent, and \(\varepsilon>0\) is a small parameter. By the finite dimensional Lyapunov-Schmidt reduction method, we construct a sign changing bubble tower solution with the shape of a tower of bubbles as \(\varepsilon\) goes to zero. Key words and phrases:non-power nonlinearity; sign changing bubble tower solutions; Lyapunov-Schmidt reduction 2020 Mathematics Subject Classification: 35B33; 35B40; 35J15

## 1. Introduction

In this paper, we consider the following elliptic problem involving a slightly subcritical non-power nonlinearity \[\left\{\begin{aligned} &-\Delta u=\frac{|u|^{2^{*}-2}u}{|\ln(e+|u|)|^{\varepsilon}}&\text{in }\Omega,\\ & u=0&\text{on }\partial\Omega,\end{aligned}\right. \tag{1.1}\] where \(\Omega\) is a bounded smooth domain in \(\mathbb{R}^{n}\), \(n\geq 3\), \(2^{*}=\frac{2n}{n-2}\) is the critical Sobolev exponent for the embedding \(H^{1}_{0}(\Omega)\hookrightarrow L^{2^{*}}(\Omega)\), and \(\varepsilon>0\) is a small parameter. The main feature of problem (1.1) is the non-power type nonlinearity, which was first proposed by Castro and Pardo [12], who proved the existence of a priori \(L^{\infty}\) bounds for positive solutions of the Laplacian problem involving the nonlinearity \(f(u)=\frac{u^{\frac{n+2}{n-2}}}{\ln(2+u)^{\alpha}}\) with \(\alpha>\frac{2}{n-2}\). Then, Mavinga and Pardo [36] obtained a priori estimates for positive classical solutions to the following Hamiltonian elliptic system \[\left\{\begin{aligned} &-\Delta u=\frac{v^{p}}{|\ln(e+v)|^{\alpha}}&\text{in }\Omega,\\ &-\Delta v=\frac{u^{q}}{|\ln(e+u)|^{\beta}}&\text{in }\Omega,\\ & u=v=0&\text{on }\partial\Omega,\end{aligned}\right.\] where \(\Omega\) is a bounded convex domain with smooth boundary in \(\mathbb{R}^{n}\) for \(n>2\), \(1<p,q<\infty\), \(\alpha,\beta>0\), and \(\frac{1}{p+1}+\frac{1}{q+1}=\frac{n-2}{n}\). For more results on non-power nonlinearities, we refer to [39, 22, 13] for slightly subcritical problems and to [26] for supercritical problems. On the one hand, problem (1.1) is related to the following slightly subcritical elliptic problem \[\left\{\begin{aligned} &-\Delta u=|u|^{2^{*}-2-\varepsilon}u&\text{in }\Omega,\\ & u=0&\text{on }\partial\Omega.\end{aligned}\right. \tag{1.2}\] When \(\varepsilon=0\), Pohozaev [41] proved the nonexistence of nontrivial solutions if \(\Omega\) is a star-shaped domain. When \(\Omega\) is an annulus, Kazdan and Warner [33] obtained the existence of a positive radial solution. Bahri and Coron [1] obtained a positive solution provided that \(\Omega\) has nontrivial topology. For the existence of sign changing solutions, there are few results. When \(\Omega=\mathbb{R}^{n}\), Ding [27] showed the existence of infinitely many sign changing solutions by Ljusternik-Schnirelmann category theory. In specific cases such as tori, Hebey and Vaugon [32] investigated the existence and multiplicity of sign changing solutions. The existence and multiplicity of sign changing solutions were also treated in some contractible domains with an involution symmetry by Clapp and Weth [19].
When \(\varepsilon\) is a positive parameter, problem (1.2) has a positive least energy solution \(u_{\varepsilon}\), that is, \(u_{\varepsilon}\) is a solution of the variational problem \[\inf\Big{\{}\|u\|^{2}=\int_{\Omega}|\nabla u|^{2}dx:u\in H^{1}_{0}(\Omega),\int_{\Omega}|u|^{2^{*}-\varepsilon}dx=1\Big{\}}.\] The blow-up phenomenon for positive and sign changing solutions to (1.2) has been studied extensively. When \(\varepsilon\) goes to \(0\), Rey [42] and Han [31] proved that the solution to (1.2) blows up and concentrates at a critical point of the Robin function. Moreover, Flucher and Wei [28] proved that the concentration point is the minimum point of the Robin function. Furthermore, if \(\xi^{*}\) is a stable critical point of the Robin function, then (1.2) has a positive solution which blows up at \(\xi^{*}\); this result was obtained in [37, 42]. In the case of multiple concentration points, Rey [43] constructed solutions with two blow-up and concentration points \((\xi^{*}_{1},\xi^{*}_{2})\), which form a critical point of a function involving the Robin function and the Green's function. If the domain is convex, Grossi and Takahashi [30] proved that (1.2) does not admit any positive solution blowing up at more than two points. Positive solutions to (1.2) concentrating simultaneously at different points \(\xi_{1},\cdots,\xi_{k}\in\Omega\), \(k\geq 2\), have been established in [37, 2]. If each \(\xi_{i}\), \(i=1,\cdots,k\), is a simple blow up point, Li [34] characterized the form of the solution \(u_{\varepsilon}\) near each blow up point \(\xi_{i}\) as \[u_{\varepsilon}(x)\sim\frac{\mu_{i}\sqrt{\varepsilon}}{(\mu_{i}^{2}\varepsilon+|x-\xi_{i}|^{2})^{\frac{n-2}{2}}},\quad\text{with }\mu_{i}>0.\] On the other hand, the existence of one sign changing solution to (1.2) was first proved in [5, 11], and multiple sign changing solutions, together with their nodal properties, were treated in [6, 7] for \(\varepsilon\in(0,\frac{4}{n-2})\). Moreover, they proved that (1.2) has a least energy nodal solution with two nodal domains. Ben Ayed et al. [8] showed that the low energy sign-changing solutions blow up at two points, and the energy converges to the value \(2S^{\frac{n}{2}}\), where \(S\) is the Sobolev constant for the embedding of \(H^{1}_{0}(\Omega)\) into \(L^{\frac{2n}{n-2}}(\Omega)\). Bartsch et al. [4] proved that (1.2) has \(k\) pairs of sign changing solutions \(\pm u_{\varepsilon}^{(i)}\), \(i=1,\cdots,k\), such that \(u_{\varepsilon}^{(i)}\) blows up positively at a point \(\xi_{1}^{(i)}\in\Omega\) and \(-u_{\varepsilon}^{(i)}\) blows up negatively at a point \(\xi_{2}^{(i)}\in\Omega\) with \(\xi_{1}^{(i)}\neq\xi_{2}^{(i)}\). Bartsch et al. [3] constructed a sign changing four-bubble solution with two positive and two negative blow-up points provided that \(\Omega\) is convex and satisfies some symmetry conditions. In addition to the results on positive and sign changing solutions to (1.2), there are several papers on bubble tower solutions. If \(\Omega\) is a smooth bounded domain in \(\mathbb{R}^{n}\), symmetric with respect to \(x_{1},\cdots,x_{n}\) and containing the origin, Pistoia and Weth [40] constructed a sign changing bubble tower solution \(u_{\varepsilon}\) concentrating at the center of symmetry of \(\Omega\). The same result in an arbitrary bounded smooth domain was obtained in [38], where the assumption of non-degeneracy of the critical point of the Robin function was removed.
If the domain has holes, like \(\Omega\setminus\big{(}B(a,\varepsilon)\cup B(b,\varepsilon)\big{)}\) with centers at the points \(a\), \(b\) and radius \(\varepsilon>0\), Ge et al. [29] constructed sign changing solutions blowing up both at \(a\) and \(b\). For other bubble tower results for elliptic problems, see [14, 15, 17, 20, 24] and the references therein. In particular, we refer to the papers [21, 23] for fractional and biharmonic operators involving almost critical Sobolev exponents. Recently, by the Lyapunov-Schmidt reduction method, Clapp et al. [9, 18] constructed solutions to problem (1.1). Before stating the results, let us introduce some definitions and notations. For \(\xi\in\Omega\) and \(\mu>0\), let \[U(x)=\alpha_{n}\frac{1}{(1+|x|^{2})^{\frac{n-2}{2}}},\quad U_{\mu,\xi}(x)=\frac{\alpha_{n}\mu^{\frac{n-2}{2}}}{(\mu^{2}+|x-\xi|^{2})^{\frac{n-2}{2}}},\quad\text{with }\alpha_{n}=(n(n-2))^{\frac{n-2}{4}}, \tag{1.3}\] which are the only solutions of the equation \[-\Delta u=u^{2^{*}-1},\ \ u>0\quad\text{in}\ \ \mathbb{R}^{n}. \tag{1.4}\] Let us denote by \(G(x,y)\) the Green's function of \(-\Delta\) in \(\Omega\) with Dirichlet boundary condition, and by \(H(x,y)\) its regular part, so that \[H(x,y)=\frac{1}{(n-2)|\partial B|}\Big{(}\frac{1}{|x-y|^{n-2}}-G(x,y)\Big{)},\quad\text{for every}\ \ x,y\in\Omega,\] where \(|\partial B|\) denotes the surface area of the unit sphere in \(\mathbb{R}^{n}\). The Robin function is defined as \(\varphi(x)=H(x,x)\) for every \(x\in\Omega\). Let \(\xi_{*}\) be a non-degenerate critical point of the Robin function; Clapp et al. [18] constructed a single bubble solution of the form \[u_{\varepsilon}=U_{\mu_{\varepsilon},\xi_{\varepsilon}}+\phi_{\varepsilon},\] with \(\mu_{\varepsilon}\Big{(}\frac{|\ln\varepsilon|}{\varepsilon}\Big{)}^{\frac{1}{n-2}}\to d>0\), \(\xi_{\varepsilon}\to\xi_{*}\), and \(\phi_{\varepsilon}\in H^{1}_{0}(\Omega)\) such that \(\int_{\Omega}|\nabla\phi_{\varepsilon}|^{2}dx=O\Big{(}\frac{\varepsilon}{|\ln\varepsilon|}\Big{)}\) as \(\varepsilon\to 0\). Liu et al. [35] established a solution concentrating at the origin for a critical Hénon problem with non-power nonlinearity. Ben Ayed et al. [9] obtained positive as well as sign changing solutions concentrating at several points, whose locations involve the Robin function and the Green's function. In the present paper, motivated by the results of [9, 18, 38, 40], we construct a solution with the shape of a tower of sign changing bubbles to problem (1.1) by the finite dimensional Lyapunov-Schmidt reduction procedure. Our result can be stated as follows.
**Theorem 1.1**.: _Assume that \(n\geq 3\). For any integer \(k\geq 1\), there exists \(\varepsilon_{0}>0\) such that for every \(\varepsilon\in(0,\varepsilon_{0})\), there are points \(\xi_{i_{\varepsilon}}\in\Omega\) and positive constants \(d_{i_{\varepsilon}}\), \(i=1,\cdots,k\), such that problem (1.1) has a solution \(u_{\varepsilon}\) of the form_ \[u_{\varepsilon}(x)=\alpha_{n}\sum_{i=1}^{k}(-1)^{i}\Big{(}\frac{d_{i_{\varepsilon}}(\frac{\varepsilon}{|\ln\varepsilon|^{2}})^{\frac{2i-1}{n-2}}}{\Big{(}d_{i_{\varepsilon}}(\frac{\varepsilon}{|\ln\varepsilon|^{2}})^{\frac{2i-1}{n-2}}\Big{)}^{2}+|x-\xi_{i_{\varepsilon}}|^{2}}\Big{)}^{\frac{n-2}{2}}+\Theta_{\varepsilon}(x),\] _where \(\|\Theta_{\varepsilon}\|\to 0\) as \(\varepsilon\to 0\), \(\varphi(\xi_{i_{\varepsilon}})\to\min_{z\in\Omega}\varphi(z)\) and \(d_{i_{\varepsilon}}\to d_{i}>0\) for \(i=1,\cdots,k\)._ Observe that in the above construction, the solution behaves like a superposition of bubbles of different blow-up orders centered around the minimum point of the Robin function; thus, it is called a bubble tower solution. This type of solution was first studied by del Pino et al. [25] for a slightly supercritical Brezis-Nirenberg problem in a ball, and it has since been constructed for many problems, see [21, 23, 29, 38, 40, 16] and the references therein. The paper is organized as follows. In Section 2, we give the scheme of the proof of Theorem 1.1. We carry out the finite dimensional reduction process in Section 3. Proposition 2.2 is proved in Section 4. Finally, some estimates are collected in the Appendix. We will use \(C>0\) to denote various positive constants.

## 2. Scheme of the proof

In this section, we give the sketch of the proof of Theorem 1.1. We first introduce some notations. The Sobolev space \(H^{1}_{0}(\Omega)\) is endowed with the inner product \(\langle\cdot,\cdot\rangle\) defined by \[\langle u,v\rangle=\int_{\Omega}\nabla u\nabla vdx,\] for all \(u\), \(v\in H^{1}_{0}(\Omega)\), and \(L^{q}(\Omega)\) is the Lebesgue space with the norm \(|u|_{q}=\Big{(}\int_{\Omega}|u|^{q}dx\Big{)}^{\frac{1}{q}}\), for \(1<q<\infty\). Let \(i^{*}:L^{\frac{2n}{n+2}}(\Omega)\hookrightarrow H^{1}_{0}(\Omega)\) be the adjoint operator of the embedding \(i:H^{1}_{0}(\Omega)\hookrightarrow L^{\frac{2n}{n-2}}(\Omega)\), that is, for \(v\in L^{\frac{2n}{n+2}}(\Omega)\), \(u=i^{*}(v)\) if and only if \[-\Delta u=v\quad\text{in }\Omega,\ \ u=0\quad\text{on }\partial\Omega.\] Then, it holds that \[\|i^{*}(v)\|\leq c|v|_{\frac{2n}{n+2}}, \tag{2.1}\] for some constant \(c>0\) depending only on \(\Omega\) and \(n\). Using these definitions and notations, problem (1.1) is equivalent to the following equation \[u=i^{*}[f_{\varepsilon}(u)],\quad u\in H^{1}_{0}(\Omega),\] where \(f_{\varepsilon}(u)=\frac{|u|^{2^{*}-2}u}{[\ln(e+|u|)]^{\varepsilon}}\). In order to describe the shape of the solutions to problem (1.1), we fix an integer \(k\), and define the positive parameters \(\mu_{i}\) as \[\mu_{i}=\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{2i-1}{n-2}}d_{i},\quad\text{with}\quad d_{i}>0,\quad i=1,\cdots,k. \tag{2.2}\] Let \(\xi\) be a point in \(\Omega\); the points \(\xi_{i}\in\Omega\), \(i=1,\cdots,k\), are given by \[\xi_{i}=\xi+\mu_{i}\sigma_{i},\quad\text{for some points }\sigma_{i}\in\mathbb{R}^{n}, \tag{2.3}\] where \(\sigma_{k}=0\).
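To get a concrete feeling for the scales in (2.2) and for the ansatz of Theorem 1.1, the following minimal Python sketch (all numerical values of \(n\), \(\varepsilon\), \(k\) and \(d_{i}\) are illustrative choices of ours, not taken from the paper) evaluates the concentration parameters \(\mu_{i}\) and the leading-order tower profile \(\sum_{i}(-1)^{i}U_{\mu_{i},\xi}\), with \(U_{\mu,\xi}\) as in (1.3), along a ray from the common concentration point \(\xi\).

```python
import numpy as np

n, k = 3, 3                  # dimension and number of bubbles (illustrative)
eps = 1e-3                   # the small parameter epsilon (illustrative)
d = [1.0] * k                # the constants d_i of (2.2), here simply 1
alpha_n = (n * (n - 2)) ** ((n - 2) / 4)

s = eps / np.log(eps) ** 2   # the quantity eps/|ln eps|^2 appearing in (2.2)
mu = [d[i] * s ** ((2 * (i + 1) - 1) / (n - 2)) for i in range(k)]

def tower(r):
    """Leading-order profile sum_i (-1)^i U_{mu_i, xi} at distance r from xi."""
    return sum((-1) ** (i + 1) * alpha_n * mu[i] ** ((n - 2) / 2)
               / (mu[i] ** 2 + r ** 2) ** ((n - 2) / 2) for i in range(k))

print("mu_i =", mu)          # widely separated concentration scales
for r in [0.0, mu[2], mu[1], mu[0]]:
    print(f"r = {r:10.3e}   u(r) = {tower(r):+.3e}")
```

The printout exhibits the alternating signs of successive bubbles and the extreme separation of the concentration scales \(\mu_{1}\gg\mu_{2}\gg\cdots\gg\mu_{k}\); this separation is the geometric picture behind the annuli \(\mathcal{A}_{i}\) used in Section 3.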
We will assume the following bounds on the parameters and points appearing in (2.2) and (2.3): given \(\eta>0\) small, \[\text{dist}(\xi,\partial\Omega)>\eta,\quad\eta<d_{i}<\frac{1}{\eta},\quad|\sigma_{i}|\leq\frac{1}{\eta},\quad i=1,\cdots,k. \tag{2.4}\] It is an immediate observation that \[\mu_{1}=\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{1}{n-2}}d_{1}\quad\text{and}\quad\frac{\mu_{i+1}}{\mu_{i}}=\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{2}{n-2}}\frac{d_{i+1}}{d_{i}}.\] We denote by \(PU_{\mu,\xi}\) the projection onto \(H^{1}_{0}(\Omega)\) of \(U_{\mu,\xi}\), that is, \[\left\{\begin{aligned} -\Delta PU_{\mu,\xi}=-\Delta U_{\mu,\xi}&\text{in }\Omega,\\ PU_{\mu,\xi}=0&\text{on }\partial\Omega.\end{aligned}\right.\] Let \(k\geq 1\); the approximate solutions are given by \[u(x)=V(x)+\phi(x),\quad V(x)=V_{\bar{d},\bar{\sigma},\xi}(x)=\sum_{i=1}^{k}(-1)^{i}PU_{\mu_{i},\xi_{i}}(x), \tag{2.5}\] where \[\bar{d}=(d_{1},\cdots,d_{k})\in\mathbb{R}^{k}_{+},\ \bar{\sigma}=(\sigma_{1},\cdots,\sigma_{k})\in(\mathbb{R}^{n}_{+})^{k}. \tag{2.6}\] The term \(\phi\) is small in a suitable sense; let us describe it. As is shown in [10], any solution of \[-\Delta\psi=f_{0}^{{}^{\prime}}(U_{\mu,\xi})\psi\quad\text{in }\mathbb{R}^{n}, \tag{2.7}\] can be expressed as a linear combination of \[\psi^{0}(y)=\frac{(n-2)\alpha_{n}}{2}\frac{|y|^{2}-1}{(1+|y|^{2})^{\frac{n}{2}}},\quad\psi^{h}(y)=(n-2)\alpha_{n}\frac{y_{h}}{(1+|y|^{2})^{\frac{n}{2}}},\quad\text{for }h=1,\cdots,n. \tag{2.8}\] Moreover, we set \[\psi^{0}_{\mu,\xi}(x)=\mu^{-\frac{n-2}{2}}\psi^{0}(\frac{x-\xi}{\mu})=\frac{n-2}{2}\alpha_{n}\mu^{\frac{n-2}{2}}\frac{|x-\xi|^{2}-\mu^{2}}{(\mu^{2}+|x-\xi|^{2})^{\frac{n}{2}}}, \tag{2.9}\] \[\psi^{h}_{\mu,\xi}(x)=\mu^{-\frac{n-2}{2}}\psi^{h}(\frac{x-\xi}{\mu})=(n-2)\alpha_{n}\mu^{\frac{n}{2}}\frac{x_{h}-\xi_{h}}{(\mu^{2}+|x-\xi|^{2})^{\frac{n}{2}}},\text{ for }h=1,\cdots,n,\] then \[\psi^{0}_{\mu,\xi}(x)=\mu\frac{\partial U_{\mu,\xi}}{\partial\mu},\quad\psi^{h}_{\mu,\xi}(x)=\mu\frac{\partial U_{\mu,\xi}}{\partial\xi_{h}}. \tag{2.10}\] We denote by \(P\psi^{h}_{\mu_{i},\xi_{i}}\) the projection of \(\psi^{h}_{\mu_{i},\xi_{i}}\), \(h=0,\cdots,n\), and define the subspaces of \(H^{1}_{0}(\Omega)\) \[E_{\mu,\xi}=\operatorname{span}\left\{P\psi^{h}_{\mu_{i},\xi_{i}}:h=0,1,\cdots,n,\ i=1,\cdots,k\right\},\] \[E^{\perp}_{\mu,\xi}=\left\{\phi\in H^{1}_{0}(\Omega):\langle\phi,P\psi^{h}_{\mu_{i},\xi_{i}}\rangle=0,\ h=0,1,\cdots,n,\ i=1,\cdots,k\right\}.\] Let \[\Pi_{\mu,\xi}:H^{1}_{0}(\Omega)\to E_{\mu,\xi}\quad\text{and}\quad\Pi^{\perp}_{\mu,\xi}:H^{1}_{0}(\Omega)\to E^{\perp}_{\mu,\xi},\] be the corresponding projections. Solving (1.1) is equivalent to solving the following pair of equations: \[\Pi^{\perp}_{\mu,\xi}\Big{(}V+\phi-i^{*}[f_{\varepsilon}(V+\phi)]\Big{)}=0, \tag{2.11}\] and \[\Pi_{\mu,\xi}\Big{(}V+\phi-i^{*}[f_{\varepsilon}(V+\phi)]\Big{)}=0. \tag{2.12}\] We solve equation (2.11) in the following result, whose proof can be found in Section 3. **Proposition 2.1**.: _There exists \(\varepsilon_{0}>0\) such that for any \(\xi\in\Omega\), \(\bar{d}\in\mathbb{R}^{k}_{+}\), \(\bar{\sigma}\in(\mathbb{R}^{n}_{+})^{k}\) satisfying (2.4), and for \(\varepsilon\in(0,\varepsilon_{0})\), there is a unique function \(\phi\in E^{\perp}_{\mu,\xi}\) which solves (2.11).
Moreover_ \[\|\phi\|=\begin{cases}O\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{|}\ln\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{|}\ln\Big{|}\ln\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{|}\Big{)}&\text{if }\,3\leq n\leq 6,\\ O\Big{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n+2}{2(n-2)}}\Big{)}&\text{if }\,n\geq 7.\end{cases} \tag{2.13}\] From Proposition 2.1, there is a unique \(\phi\in E^{\perp}_{\mu,\xi}\) such that (2.11) holds; this means that there exist constants \(c_{il}\) (\(i=1,\cdots,k\), \(l=0,\cdots,n\)) such that \[V+\phi-i^{*}[f_{\varepsilon}(V+\phi)]=\sum_{i=1}^{k}\sum_{l=0}^{n}c_{il}P\psi^{l}_{\mu_{i},\xi_{i}}, \tag{2.14}\] and it remains to solve equation (2.12); that is, the following result holds, whose proof is postponed to Section 4. **Proposition 2.2**.: _The following facts hold. Part a. If \((\bar{d}_{\varepsilon},\bar{\sigma}_{\varepsilon},\xi_{\varepsilon})\) satisfies_ \[\Big{\langle}V+\phi-i^{*}[f_{\varepsilon}(V+\phi)],P\psi^{h}_{\mu_{j_{\varepsilon}},\xi_{j_{\varepsilon}}}\Big{\rangle}=0,\quad\text{for}\ \ h=0,\cdots,n, \tag{2.15}\] _and \(j=1,\cdots,k\), then \(V+\phi\) is a solution of problem (1.1). Part \(b\). For \(\xi\in\Omega\), \(\bar{d}=(d_{1},\cdots,d_{k})\in\mathbb{R}^{k}_{+},\bar{\sigma}=(\sigma_{1},\cdots,\sigma_{k})\in(\mathbb{R}^{n}_{+})^{k}\), there holds_ \[\Big{\langle}V+\phi-i^{*}[f_{\varepsilon}(V+\phi)],P\psi^{h}_{\mu_{j},\xi_{j}}\Big{\rangle}=\begin{cases}\frac{\varepsilon}{\left|\ln\varepsilon\right|^{2}}G^{\varepsilon}_{0}(\bar{d},\bar{\sigma},\xi)-\frac{2k^{2}}{(n-2)^{2}}a_{4}\varepsilon\Big{|}\ln\frac{\varepsilon}{\left|\ln\varepsilon\right|^{2}}\Big{|}\ \ \text{for}\ h=0,\\ \frac{\varepsilon}{\left|\ln\varepsilon\right|^{2}}G^{\varepsilon}_{h}(\bar{d},\bar{\sigma},\xi)\ \ \text{for}\ h=1,\cdots,n,\end{cases}\] _where \(j=1,\cdots,k\), and \(G^{\varepsilon}=(G^{\varepsilon}_{0},G^{\varepsilon}_{h})\) is given by_ \[\begin{cases}G^{\varepsilon}_{0}(\bar{d},\bar{\sigma},\xi)=\alpha_{n}a_{1}d_{1}^{n-2}\varphi(\xi)+a_{3}\sum\limits_{i=1}^{k-1}\Big{(}\frac{d_{i+1}}{d_{i}}\Big{)}^{\frac{n-2}{2}}g(\sigma_{i})-a_{4}\sum\limits_{i=1}^{k}\frac{2}{2i-1}\lvert\ln d_{i}\rvert+o(1),\\ G^{\varepsilon}_{h}(d_{1},\xi)=\frac{\alpha_{n}}{2}a_{2}\partial_{\xi_{h}}\varphi(\xi)d_{1}^{n-1}\ \ \text{for}\ h=1,\cdots,n,\end{cases}\] _with \(G^{\varepsilon}_{0}:[0,+\infty]\times[0,+\infty]\times\Omega\to\mathbb{R}\times\mathbb{R}\times\mathbb{R}^{n}\), \(G^{\varepsilon}_{h}:[0,+\infty]\times\Omega\to\mathbb{R}\times\mathbb{R}^{n}\) and_ \[a_{1} =(2^{*}-1)\int_{\mathbb{R}^{n}}U^{2^{*}-2}(y)\psi^{0}(y)dy,\] \[a_{2} =\int_{\mathbb{R}^{n}}U^{2^{*}-1}(y)dy,\] \[a_{3} =\frac{n-2}{2}\alpha_{n}^{p+1},\] \[a_{4} =\int_{\mathbb{R}^{n}}\bigg{|}\frac{1}{(1+\left|y-\sigma_{i}\right|^{2})^{\frac{n+2}{2}}}\ln\Big{(}\frac{1}{(1+\left|y-\sigma_{i}\right|^{2})^{\frac{n+2}{2}}}\Big{)}\psi^{0}(y)\bigg{|}dy>0,\] \[g(\sigma) =\int_{\mathbb{R}^{n}}\frac{|y|^{2-n}}{(1+\left|y-\sigma\right|^{2})^{\frac{n+2}{2}}}dy.\] From Propositions 2.1 and 2.2, we see that \(V+\phi\) is a solution to problem (1.1) if there are \(d_{\varepsilon}>0\), \(\sigma_{\varepsilon}>0\) and \(\xi_{\varepsilon}\in\Omega\) such that the constants \(c_{il}(d_{\varepsilon},\sigma_{\varepsilon},\xi_{\varepsilon})\) vanish when \(\varepsilon\) is small enough. The rest of this section is devoted to the proof of the main result. _Proof of Theorem 1.1_.
By Proposition 2.2, equation (2.15) is equivalent to \[\begin{cases}\alpha_{n}a_{1}d_{1}^{n-2}\varphi(\xi)+a_{3}\sum\limits_{i=1}^{k-1}\Big{(}\frac{d_{i+1}}{d_{i}}\Big{)}^{\frac{n-2}{2}}g(\sigma_{i})-a_{4}\sum\limits_{i=1}^{k}\frac{2}{2i-1}\lvert\ln d_{i}\rvert=o(1),\\ \frac{\alpha_{n}}{2}a_{2}\partial_{\xi_{h}}\varphi(\xi)d_{1}^{n-1}=0,\end{cases} \tag{2.16}\] for \(h=1,\cdots,n\). We note that \(G^{\varepsilon}\to G\) uniformly on compact sets of \([0,+\infty]\times[0,+\infty]\times\Omega\), where the vector functional \(G(\bar{d},\bar{\sigma},\xi)=\Big{(}G_{0}(\bar{d},\bar{\sigma},\xi),G_{h}(d_{1},\xi)\Big{)}\), the principal part of \(G^{\varepsilon}\), is defined by \[G_{0}(\bar{d},\bar{\sigma},\xi)=\alpha_{n}a_{1}d_{1}^{n-2}\varphi(\xi)+a_{3}\sum\limits_{i=1}^{k-1}\Big{(}\frac{d_{i+1}}{d_{i}}\Big{)}^{\frac{n-2}{2}}g(\sigma_{i})-a_{4}\sum\limits_{i=1}^{k}\frac{2}{2i-1}\lvert\ln d_{i}\rvert,\] \[G_{h}(d_{1},\xi)=\frac{\alpha_{n}}{2}a_{2}\partial_{\xi_{h}}\varphi(\xi)d_{1}^{n-2},\quad\text{for}\ \ h=1,\cdots,n.\] Let us set \(s_{1}=d_{1}\), \(s_{i}=\frac{d_{i}}{d_{i-1}}\), \(i=2,\cdots,k\); then in the new variables \(\bar{s}=(s_{1},\cdots,s_{k})\), the functions \(G_{h}(d_{1},\xi)\) and \(G_{0}(\bar{d},\bar{\sigma},\xi)\) can be rewritten as \[\bar{G}_{h}(s_{1},\xi)= \frac{\alpha_{n}}{2}a_{2}\partial_{\xi_{h}}\varphi(\xi)s_{1}^{n-2},\quad\text{for}\quad h=1,\cdots,n,\] \[\bar{G}_{0}(\bar{s},\bar{\sigma},\xi)= \alpha_{n}a_{1}s_{1}^{n-2}\varphi(\xi)+a_{3}\sum\limits_{i=2}^{k}s_{i}^{\frac{n-2}{2}}g(\sigma_{i})-a_{4}\sum\limits_{i=1}^{k}\frac{2}{2i-1}|\ln s_{i}|.\] We denote \(\bar{G}=\left(\bar{G}_{0}(\bar{s},\bar{\sigma},\xi),\bar{G}_{h}(s_{1},\xi)\right)\). Let \(\xi^{0}\in\Omega\) be a strict minimum point of the Robin function \(\varphi\), which is a zero of the functions \(\bar{G}_{h}\) for \(h=1,\cdots,n\). Observe that \(\sigma_{i}=0\) is a strict minimum point of \(g\). On the other hand, when \(s_{i}\) is close to \(0\), the function \(\bar{G}_{0}\) tends to \(-\infty\), while \(\bar{G}_{0}>0\) when \(s_{i}>0\) is large enough; thus, by the intermediate value theorem, there exists \(\bar{s}_{0}\) such that \(\bar{G}(\bar{s}_{0},0,\xi^{0})=0\). Moreover, \((\bar{s}_{0},0,\xi^{0})\) is an isolated zero of \(\bar{G}\) whose Brouwer degree is not zero. Therefore, if \(\varepsilon\) is small enough, (2.16) has a solution \((\bar{s}_{\varepsilon},\bar{\sigma}_{\varepsilon},\xi_{\varepsilon})\) near \((\bar{s}_{0},0,\xi^{0})\). We conclude that the right hand side of (2.14) is zero, i.e., \[\sum\limits_{i=1}^{k}\sum\limits_{l=0}^{n}c_{il}\Big{\langle}P\psi_{\mu_{i},\xi_{i}}^{l},P\psi_{\mu_{j},\xi_{j}}^{h}\Big{\rangle}=0.\] Moreover, by Lemma 5.3, we conclude that the \(c_{il}\) are zero. This finishes the proof of the theorem.

## 3. The finite dimensional reduction

In this section, we prove Proposition 2.1. Let \(L_{\mu,\xi}:E_{\mu,\xi}^{\perp}\to E_{\mu,\xi}^{\perp}\) be the linear operator defined by \[L_{\mu,\xi}(\phi)=\phi-\Pi_{\mu,\xi}^{\perp}\Big{(}i^{*}[f_{\varepsilon}^{{}^{\prime}}(V)\phi]\Big{)}, \tag{3.1}\] where \(V\) is defined in (2.5). In the following lemma, we establish the invertibility of \(L_{\mu,\xi}\) on \(E_{\mu,\xi}^{\perp}\). **Lemma 3.1**.: _There exist \(\varepsilon_{0}>0\) and \(C>0\) such that for any \(\xi\in\Omega\), \(\bar{d}\in\mathbb{R}_{+}^{k}\), \(\bar{\sigma}\in(\mathbb{R}_{+}^{n})^{k}\) satisfying (2.4), and for \(\varepsilon\in(0,\varepsilon_{0})\), it holds_ \[\|L_{\mu,\xi}(\phi)\|\geq C\|\phi\|,\quad\forall\phi\in E_{\mu,\xi}^{\perp}. \tag{3.2}\] Proof.: We argue by contradiction.
Assume there exist sequences \(\varepsilon_{m}\to 0\), \(\xi_{m}\in\Omega\), \(\bar{\sigma}_{m}\in(\mathbb{R}_{+}^{n})^{k}\) and \(\bar{d}_{m}=(d_{1m},\cdots,d_{km})\in\mathbb{R}_{+}^{k}\) with \(\xi_{m}\to\xi\in\Omega\), \(\sigma_{im}\to\sigma_{i}\) and \(d_{im}\to d_{i}>0\), \(i=1,\cdots,k\), and \(\phi_{m}\), \(h_{m}\in E_{\mu_{m},\xi_{m}}^{\perp}\) such that \[L_{\mu_{m},\xi_{m}}(\phi_{m})=h_{m},\quad\|\phi_{m}\|=1\quad\text{and}\quad\|h_{m}\|\to 0. \tag{3.3}\] From (3.1), there exists \(\omega_{m}\in E_{\mu_{m},\xi_{m}}\) such that \[\phi_{m}-i^{*}[f_{\varepsilon}^{{}^{\prime}}(V_{m})\phi_{m}]=h_{m}+\omega_{m}, \tag{3.4}\] where \(V_{m}=V(\bar{d}_{m},\bar{\sigma}_{m},\xi_{m})=\sum\limits_{i=1}^{k}(-1)^{i}PU_{\mu_{im},\xi_{im}}.\) **Step 1.** We prove that \[\|\omega_{m}\|\to 0. \tag{3.5}\] Let \(\omega_{m}=\sum\limits_{i=1}^{k}\sum\limits_{l=0}^{n}c_{m}^{il}P\psi_{\mu_{m}^{i},\xi_{m}^{i}}^{l}\). We multiply (3.4) by \(P\psi_{\mu_{m}^{j},\xi_{m}^{j}}^{h}\) and integrate over \(\Omega\); then \[\sum\limits_{i=1}^{k}\sum\limits_{l=0}^{n}c_{m}^{il}\langle P\psi_{\mu_{m}^{i},\xi_{m}^{i}}^{l},P\psi_{\mu_{m}^{j},\xi_{m}^{j}}^{h}\rangle=\int_{\Omega}f_{\varepsilon}^{{}^{\prime}}(V_{m})\phi_{m}P\psi_{\mu_{m}^{j},\xi_{m}^{j}}^{h}dx. \tag{3.6}\] From Lemma 5.3, we obtain \[\sum\limits_{i=1}^{k}\sum\limits_{l=0}^{n}c_{m}^{il}\langle P\psi_{\mu_{m}^{i},\xi_{m}^{i}}^{l},P\psi_{\mu_{m}^{j},\xi_{m}^{j}}^{h}\rangle= c_{m}^{jh}\Big{(}c_{h}(1+o(1))\Big{)}+O(1)\sum_{l=0,l\neq h}^{n}c_{m}^{jl}+o\bigg{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n}{n-2}}\bigg{)}\sum_{i=1,i\neq j}^{k}\sum_{l=0}^{n}c_{m}^{il}. \tag{3.7}\] On the other hand, by (5.4), (5.6), (5.7), (5.9), (5.11), (5.13) and the orthogonality condition \(\langle\phi_{m},P\psi^{h}_{\mu_{m}^{j},\xi_{m}^{j}}\rangle=0\), we have \[\int_{\Omega}f^{{}^{\prime}}_{\varepsilon}(V_{m})\phi_{m}P\psi^{h}_{\mu_{m}^{j},\xi_{m}^{j}}dx\] \[= \int_{\Omega}\Big{(}f^{{}^{\prime}}_{\varepsilon}(V_{m})-f^{{}^{\prime}}_{0}(V_{m})\Big{)}\phi_{m}\Big{(}P\psi^{h}_{\mu_{m}^{j},\xi_{m}^{j}}-\psi^{h}_{\mu_{m}^{j},\xi_{m}^{j}}\Big{)}dx+\int_{\Omega}\Big{(}f^{{}^{\prime}}_{\varepsilon}(V_{m})-f^{{}^{\prime}}_{0}(V_{m})\Big{)}\phi_{m}\psi^{h}_{\mu_{m}^{j},\xi_{m}^{j}}dx\] \[+\int_{\Omega}\Big{(}f^{{}^{\prime}}_{0}(V_{m})-\sum_{i=1}^{k}(-1)^{i}f^{{}^{\prime}}_{0}(PU_{\mu_{m}^{i},\xi_{m}^{i}})\Big{)}\phi_{m}P\psi^{h}_{\mu_{m}^{j},\xi_{m}^{j}}dx\] \[+\sum_{i=1}^{k}(-1)^{i}\int_{\Omega}\Big{(}f^{{}^{\prime}}_{0}(PU_{\mu_{m}^{i},\xi_{m}^{i}})-f^{{}^{\prime}}_{0}(U_{\mu_{m}^{i},\xi_{m}^{i}})\Big{)}\phi_{m}P\psi^{h}_{\mu_{m}^{j},\xi_{m}^{j}}dx\] \[\leq \Big{|}f^{{}^{\prime}}_{\varepsilon}(V_{m})-f^{{}^{\prime}}_{0}(V_{m})\Big{|}_{\frac{n}{2}}|\phi_{m}|_{\frac{2n}{n-2}}\Big{|}P\psi^{h}_{\mu_{m}^{j},\xi_{m}^{j}}-\psi^{h}_{\mu_{m}^{j},\xi_{m}^{j}}\Big{|}_{\frac{2n}{n-2}}\] \[+\Big{|}f^{{}^{\prime}}_{0}(V_{m})-\sum_{i=1}^{k}(-1)^{i}f^{{}^{\prime}}_{0}(PU_{\mu_{m}^{i},\xi_{m}^{i}})\Big{|}_{\frac{n}{2}}|P\psi^{h}_{\mu_{m}^{j},\xi_{m}^{j}}|_{\frac{2n}{n-2}}|\phi_{m}|_{\frac{2n}{n-2}} \tag{3.8}\] \[+\Big{|}f^{{}^{\prime}}_{0}(PU_{\mu_{m}^{i},\xi_{m}^{i}})-f^{{}^{\prime}}_{0}(U_{\mu_{m}^{i},\xi_{m}^{i}})\Big{|}_{\frac{n}{2}}|P\psi^{h}_{\mu_{m}^{j},\xi_{m}^{j}}|_{\frac{2n}{n-2}}|\phi_{m}|_{\frac{2n}{n-2}}=O\Big{(}\varepsilon\ln\Big{|}\ln\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{|}\Big{)}.\] Consequently, from (3.6)-(3.8), we obtain (3.5).
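Let us make the last implication explicit (this is only a bookkeeping remark, under the assumption, used in the proof of Theorem 1.1 above, that Lemma 5.3 provides the invertibility of the Gram system in (3.6) for small \(\varepsilon\)): the functions \(\psi^{l}\) have scale invariant \(D^{1,2}\) norms and the projection does not increase the \(H^{1}_{0}\) norm, so \(\|P\psi^{l}_{\mu_{m}^{i},\xi_{m}^{i}}\|=O(1)\) uniformly in \(m\). Inverting (3.6)-(3.7) and using the bound (3.8) on the right-hand side gives \(c_{m}^{il}\to 0\), whence \[\|\omega_{m}\|\leq\sum\limits_{i=1}^{k}\sum\limits_{l=0}^{n}|c_{m}^{il}|\,\|P\psi^{l}_{\mu_{m}^{i},\xi_{m}^{i}}\|=O\Big{(}\max_{i,l}|c_{m}^{il}|\Big{)}\to 0,\] which is exactly (3.5).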
**Step 2.** We prove that \[\liminf_{m\to\infty}\int_{\Omega}f^{{}^{\prime}}_{\varepsilon}(V_{m})u_{m}^{2}dx=C>0, \tag{3.9}\] where \(u_{m}\) satisfies \[\left\{\begin{aligned} -\Delta u_{m}=f^{{}^{\prime}}_{\varepsilon}(V_{m})u_{m}+f^{{}^{\prime}}_{\varepsilon}(V_{m})(h_{m}+\omega_{m})&\text{in }\Omega,\\ u_{m}=0&\text{on }\partial\Omega,\end{aligned}\right. \tag{3.10}\] with \[u_{m}=\phi_{m}-h_{m}-\omega_{m},\ \ \ \ \|u_{m}\|\to 1. \tag{3.11}\] First, we prove that \[\liminf_{m\to\infty}\|u_{m}\|=C>0. \tag{3.12}\] From (3.10), there holds \[u_{m}=i^{*}\Big{[}f^{{}^{\prime}}_{\varepsilon}(V_{m})u_{m}+f^{{}^{\prime}}_{\varepsilon}(V_{m})(h_{m}+\omega_{m})\Big{]}. \tag{3.13}\] Moreover, by (2.1), (5.9), (5.11) and (5.13), we get \[|u_{m}|_{\frac{2n}{n+2}}\leq C\Big{(}|f^{{}^{\prime}}_{\varepsilon}(V_{m})u_{m}|_{\frac{2n}{n+2}}+|f^{{}^{\prime}}_{\varepsilon}(V_{m})(h_{m}+\omega_{m})|_{\frac{2n}{n+2}}\Big{)}\] \[\leq C|f^{{}^{\prime}}_{\varepsilon}(V_{m})-f^{{}^{\prime}}_{0}(V_{m})|_{\frac{n}{2}}|u_{m}|_{\frac{2n}{n-2}}+C\Big{|}f^{{}^{\prime}}_{0}(V_{m})-\sum_{i=1}^{k}(-1)^{i}f^{{}^{\prime}}_{0}(PU_{\mu_{m}^{i},\xi_{m}^{i}})\Big{|}_{\frac{n}{2}}|u_{m}|_{\frac{2n}{n-2}}\] \[\quad+C|f^{{}^{\prime}}_{\varepsilon}(V_{m})-f^{{}^{\prime}}_{0}(V_{m})|_{\frac{n}{2}}|h_{m}+\omega_{m}|_{\frac{2n}{n-2}}\] \[\quad+C\Big{|}f^{{}^{\prime}}_{0}(V_{m})-\sum\limits_{i=1}^{k}(-1)^{i}f^{{}^{\prime}}_{0}(PU_{\mu_{m}^{i},\xi_{m}^{i}})\Big{|}_{\frac{n}{2}}|h_{m}+\omega_{m}|_{\frac{2n}{n-2}} \tag{3.14}\] \[\quad+C\Big{|}\sum\limits_{i=1}^{k}(-1)^{i}\Big{(}f^{{}^{\prime}}_{0}(PU_{\mu_{m}^{i},\xi_{m}^{i}})-f^{{}^{\prime}}_{0}(U_{\mu_{m}^{i},\xi_{m}^{i}})\Big{)}\Big{|}_{\frac{n}{2}}|h_{m}+\omega_{m}|_{\frac{2n}{n-2}}\leq C\|u_{m}\|+o(1).\] It follows that \(|u_{m}|_{\frac{2n}{n+2}}\to 0\) provided that \(\|u_{m}\|\to 0\), which contradicts (3.11). Therefore, (3.12) holds. We multiply (3.13) by \(u_{m}\); that is, \[\|u_{m}\|^{2}=\int_{\Omega}f^{{}^{\prime}}_{\varepsilon}(V_{m})u_{m}^{2}dx+\int_{\Omega}f^{{}^{\prime}}_{\varepsilon}(V_{m})(h_{m}+\omega_{m})u_{m}dx. \tag{3.15}\] By (3.3) and (3.5), one has \[\int_{\Omega}f^{{}^{\prime}}_{\varepsilon}(V_{m})(h_{m}+\omega_{m})u_{m}dx\leq |f^{{}^{\prime}}_{\varepsilon}(V_{m})|_{\frac{n}{2}}|h_{m}+\omega_{m}|_{\frac{2n}{n-2}}|u_{m}|_{\frac{2n}{n-2}} \tag{3.16}\] \[\leq C\|h_{m}+\omega_{m}\|\|u_{m}\|=o(1).\] Therefore, (3.9) follows from (3.11), (3.12), (3.15) and (3.16). **Step 3.** Let us prove that a contradiction arises, by showing that \[\int_{\Omega}f^{{}^{\prime}}_{\varepsilon}(V_{m})u_{m}^{2}dx=o(1). \tag{3.17}\] To prove this, we decompose \(B(\xi,\rho)\) into the union of non-overlapping annuli, that is, \(B(\xi,\rho)=\bigcup\limits_{i=1}^{k}\mathcal{A}_{i}\), where \[\mathcal{A}_{i}=B(\xi,\sqrt{\mu_{i}\mu_{i-1}})\setminus B(\xi,\sqrt{\mu_{i}\mu_{i+1}}),\quad i=1,\cdots,k, \tag{3.18}\] with \(\mu_{0}=\frac{\rho^{2}}{\mu_{1}}\) and \(\mu_{k+1}=0\). We choose smooth cut-off functions \(\chi_{m}^{i}\) such that \[\chi_{m}^{i}(x)=\begin{cases}1&\text{if }\sqrt{\mu_{m}^{i}\mu_{m}^{i+1}}\leq|x-\xi_{m}|\leq\sqrt{\mu_{m}^{i}\mu_{m}^{i-1}},\\ 0&\text{if }|x-\xi_{m}|\leq\frac{\sqrt{\mu_{m}^{i}\mu_{m}^{i+1}}}{2}\text{ or }|x-\xi_{m}|\geq 2\sqrt{\mu_{m}^{i}\mu_{m}^{i-1}},\end{cases} \tag{3.19}\] and \[|\nabla\chi_{m}^{i}(x)|\leq\frac{2}{\sqrt{\mu_{m}^{i}\mu_{m}^{i-1}}}\quad\text{and}\quad|\nabla^{2}\chi_{m}^{i}(x)|\leq\frac{4}{\mu_{m}^{i}\mu_{m}^{i-1}},\quad\text{for any }i=1,\cdots,k.
\tag{3.20}\] We define \[\tilde{u}_{m}^{i}(y)=(\mu_{m}^{i})^{\frac{n-2}{2}}u_{m}(\mu_{m}^{i}y+\xi_{m})\chi_{m}^{i}(\mu_{m}^{i}y+\xi_{m}). \tag{3.21}\] First, the following results will be shown in Step 4: \[\tilde{u}_{m}^{i}\to 0\quad\text{weakly in }D^{1,2}(\mathbb{R}^{n}),\quad\tilde{u}_{m}^{i}\to 0\quad\text{strongly in }L^{q}_{loc}(\mathbb{R}^{n})\text{ for any }q\in[2,2^{*}). \tag{3.22}\] Let us prove (3.17). There holds \[\int_{\Omega}f^{{}^{\prime}}_{\varepsilon}(V_{m})u_{m}^{2}dx=\int_{\Omega\setminus B(\xi,\rho)}f^{{}^{\prime}}_{\varepsilon}(V_{m})u_{m}^{2}dx+\sum\limits_{i=1}^{k}\int_{\mathcal{A}_{i}}f^{{}^{\prime}}_{\varepsilon}(V_{m})u_{m}^{2}dx,\] where \[\int_{\Omega\setminus B(\xi,\rho)}f^{{}^{\prime}}_{\varepsilon}(V_{m})u_{m}^{2}dx\leq C\sum_{i=1}^{k}\mu_{i}^{2}\int_{\Omega\setminus B(\xi,\rho)}u_{m}^{2}dx=o(1).\] Since \((\frac{1}{1+|x|^{2}})^{2}\in L^{\frac{n}{2}}(\mathbb{R}^{n})\) and (3.22) holds, we conclude that \(\int_{(\mathcal{A}^{i}_{m}-\xi_{m})/\mu_{m}^{i}}(\frac{1}{1+|y-\sigma_{im}|^{2}})^{\frac{n-2}{2}(p-1)}(\tilde{u}_{m}^{i})^{2}dy\to 0\). On the other hand, we set \(x-\xi_{m}=\mu_{m}^{i}y\) and use the following fact: if \(h\in L^{1}_{rad}(\mathbb{R}^{n})\), then, performing the proper change of variables, for any \(i\neq l\), \[\int_{\frac{\mathcal{A}^{l}_{m}-\xi_{m}}{\mu_{m}^{i}}}h(|x|)dx=\begin{cases}O\Big{(}(\frac{\mu_{l}}{\mu_{i}})^{\frac{n}{2}}\Big{)}&\text{if $i\leq l-1$,}\\ O\Big{(}(\frac{\mu_{i}}{\mu_{l}})^{\frac{n}{2}}\Big{)}&\text{if $i\geq l+1$.}\end{cases} \tag{3.23}\] By (3.23) and the choice of \(\mu_{i}\) in (2.2), we deduce that \[\text{if $h\in L^{1}_{rad}(\mathbb{R}^{n})$, $i\neq l$,}\quad\int_{\frac{\mathcal{A}^{l}_{m}-\xi_{m}}{\mu_{m}^{i}}}h(|x|)dx=O\bigg{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n}{n-2}}\bigg{)}. \tag{3.24}\] Then, there holds \[\int_{\frac{\mathcal{A}^{i}_{m}-\xi_{m}}{\mu_{m}^{i}}}U_{\mu_{im},\xi_{im}}^{(p-1)\frac{n}{2}}dx=O\bigg{(}\int_{\frac{\sqrt{\mu_{m}^{i}\mu_{m}^{i+1}}}{\mu_{m}^{i}}\leq|y|\leq\frac{\sqrt{\mu_{m}^{i}\mu_{m}^{i-1}}}{\mu_{m}^{i}}}\frac{1}{(1+|y-\sigma_{im}|^{2})^{n}}dy\bigg{)}.\] Consequently, from (5.13), we have \[\int_{\mathcal{A}^{i}_{m}}f^{{}^{\prime}}_{\varepsilon}(V_{m})u_{m}^{2}dx\] \[= \int_{\mathcal{A}^{i}_{m}}\Big{(}f^{{}^{\prime}}_{\varepsilon}(V_{m})-f^{{}^{\prime}}_{0}(V_{m})\Big{)}u_{m}^{2}dx+\int_{\mathcal{A}^{i}_{m}}f^{{}^{\prime}}_{0}(V_{m})u_{m}^{2}dx\] \[\leq C|f^{{}^{\prime}}_{\varepsilon}(V_{m})-f^{{}^{\prime}}_{0}(V_{m})|_{\frac{n}{2}}|u_{m}|_{\frac{2n}{n-2}}^{2}+C\sum_{i=1}^{k}\int_{\mathcal{A}^{i}_{m}}U_{\mu_{im},\xi_{im}}^{p-1}u_{m}^{2}dx\] \[\leq C\varepsilon\ln\Big{|}\ln\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{|}+C(\mu_{m}^{i})^{2-\frac{n-2}{2}(p-1)}\int_{\frac{\mathcal{A}^{i}_{m}-\xi_{m}}{\mu_{m}^{i}}}\Big{(}\frac{1}{1+|y-\sigma_{im}|^{2}}\Big{)}^{\frac{n-2}{2}(p-1)}(\tilde{u}_{m}^{i})^{2}dy\] \[+C\sum_{j=1,j\neq i}^{k}\Big{(}\int_{\mathcal{A}^{i}_{m}}U_{\mu_{jm},\xi_{jm}}^{(p-1)\frac{n}{2}}dx\Big{)}^{\frac{2}{n}}|u_{m}|_{\frac{2n}{n-2}}^{2}=o(1).\] **Step 4.** We prove (3.22).
From the definition of \(\tilde{u}_{m}^{i}\), \(i=1,\cdots,k\), in (3.21), when \(x-\xi_{m}=\mu_{m}^{i}y\), we get \[\nabla\tilde{u}_{m}^{i}(y)=(\mu_{m}^{i})^{\frac{n}{2}}\bigg{[}\Big{(}\nabla u_{m}(x)\Big{)}\chi_{m}^{i}(x)+u_{m}(x)\Big{(}\nabla\chi_{m}^{i}(x)\Big{)}\bigg{]}, \tag{3.25}\] and \[\Delta\tilde{u}_{m}^{i}(y)=(\mu_{m}^{i})^{\frac{n+2}{2}}\bigg{[}\Big{(}\Delta u_{m}(x)\Big{)}\chi_{m}^{i}(x)+2\nabla u_{m}(x)\nabla\chi_{m}^{i}(x)+u_{m}(x)\Big{(}\Delta\chi_{m}^{i}(x)\Big{)}\bigg{]}. \tag{3.26}\] Then, from (3.19), (3.20) and (3.25), it holds that \(\|\tilde{u}_{m}^{i}\|_{D^{1,2}(\mathbb{R}^{n})}\leq C\). It follows that, up to a subsequence, \[\tilde{u}_{m}^{i}\to\tilde{u}^{i}\quad\text{weakly in $D^{1,2}(\mathbb{R}^{n})$,}\quad\tilde{u}_{m}^{i}\to\tilde{u}^{i}\quad\text{strongly in $L^{q}_{loc}(\mathbb{R}^{n})$ for any $q\in[2,2^{*})$.}\] Next, we show that \(\tilde{u}^{i}\) is a solution of the following problem \[-\Delta\tilde{u}^{i}=f^{{}^{\prime}}_{0}(U_{1,\sigma_{i}})\tilde{u}^{i}\quad\text{in $\mathbb{R}^{n}$,} \tag{3.27}\] and satisfies the orthogonality conditions \[\int_{\mathbb{R}^{n}}\nabla\psi^{h}_{1,\sigma_{i}}\nabla\tilde{u}^{i}dx=0,\quad h=0,1,\cdots,n. \tag{3.28}\] By the result of [10], \(\tilde{u}^{i}\) is a linear combination of the functions \(\psi^{h}_{1,\sigma_{i}}\); together with (3.28), it follows that \(\tilde{u}^{i}=0\), which concludes the proof of (3.22). **Step 5.** We prove (3.27) and (3.28). (1) Let us prove (3.27). By (3.25) and (3.26), if \(x-\xi_{m}=\mu^{i}_{m}y\), \(y\in\Omega^{i}_{m}=\frac{\Omega-\xi_{m}}{\mu^{i}_{m}}\), we have \[\left\{\begin{aligned} -\Delta\tilde{u}^{i}_{m}(y)&=(\mu^{i}_{m})^{2}f^{{}^{\prime}}_{\varepsilon}\Big{(}V_{m}(x)\Big{)}\tilde{u}^{i}_{m}(y)+(\mu^{i}_{m})^{\frac{n+2}{2}}f^{{}^{\prime}}_{\varepsilon}\Big{(}V_{m}(x)\Big{)}\Big{(}h_{m}(x)+\omega_{m}(x)\Big{)}\chi^{i}_{m}(x)\\ &\qquad\qquad+2(\mu^{i}_{m})^{\frac{n+2}{2}}\nabla u_{m}(x)\nabla\chi^{i}_{m}(x)+2(\mu^{i}_{m})^{\frac{n+2}{2}}u_{m}(x)\Delta\chi^{i}_{m}(x),\\ \tilde{u}^{i}_{m}&=0\qquad\text{ on }\partial\Omega^{i}_{m}.\end{aligned}\right.\] Therefore, if \(\varpi\in C^{\infty}_{0}(\mathbb{R}^{n})\), one has \[\int_{\mathbb{R}^{n}}\nabla\tilde{u}^{i}_{m}(y)\nabla\varpi(y)dy\] \[= \int_{\mathbb{R}^{n}}(\mu^{i}_{m})^{2}f^{{}^{\prime}}_{\varepsilon}\Big{(}V_{m}(\mu^{i}_{m}y+\xi_{m})\Big{)}\tilde{u}^{i}_{m}(y)\varpi(y)dy+\int_{\mathbb{R}^{n}}(\mu^{i}_{m})^{\frac{n+2}{2}}\] \[\times f^{{}^{\prime}}_{\varepsilon}\Big{(}V_{m}(\mu^{i}_{m}y+\xi_{m})\Big{)}\Big{(}h_{m}(\mu^{i}_{m}y+\xi_{m})+\omega_{m}(\mu^{i}_{m}y+\xi_{m})\Big{)}\chi^{i}_{m}(\mu^{i}_{m}y+\xi_{m})\varpi(y)dy \tag{3.29}\] \[+2(\mu^{i}_{m})^{\frac{n+2}{2}}\int_{\mathbb{R}^{n}}\Big{(}\nabla u_{m}(\mu^{i}_{m}y+\xi_{m})\nabla\chi^{i}_{m}(y)+u_{m}(\mu^{i}_{m}y+\xi_{m})\Delta\chi^{i}_{m}(\mu^{i}_{m}y+\xi_{m})\Big{)}\varpi(y)dy.\] From (3.19) and (3.20), we deduce that the second and the third terms tend to \(0\).
For the first term, if \(\frac{\sqrt{\mu^{i}_{m}\mu^{i+1}_{m}}}{2}\leq|\mu^{i}_{m}y|\leq 2\sqrt{\mu^{i}_{m}\mu^{i-1}_{m}}\), there holds \[f^{{}^{\prime}}_{\varepsilon}\Big{(}V_{m}(\mu^{i}_{m}y+\xi_{m})\Big{)}= f^{{}^{\prime}}_{\varepsilon}\Big{(}PU_{\mu_{im},\xi_{im}}(\mu^{i}_{m}y+\xi_{m})+\sum_{j=1,j\neq i}^{k}PU_{\mu_{jm},\xi_{jm}}(\mu^{i}_{m}y+\xi_{m})\Big{)}\] \[= f^{{}^{\prime}}_{\varepsilon}\Big{(}(\mu^{i}_{m})^{-\frac{n-2}{2}}U(y-\sigma_{im})+U_{\mu_{jm},\xi_{jm}}(\mu^{i}_{m}y+\xi_{m})+o(1)\Big{)}, \tag{3.30}\] where \[U_{\mu_{jm},\xi_{jm}}(\mu^{i}_{m}y+\xi_{m})=\left\{\begin{aligned} & O\Big{(}(\mu^{j}_{m})^{-\frac{n-2}{2}}\Big{)}&\text{if }i>j,\\ & O\Big{(}\frac{(\mu^{j}_{m})^{\frac{n-2}{2}}}{(\mu^{i}_{m})^{n-2}}\Big{|}y-\frac{\mu^{j}_{m}}{\mu^{i}_{m}}\sigma^{j}_{m}\Big{|}^{-(n-2)}\Big{)}&\text{if }i<j.\end{aligned}\right. \tag{3.31}\] Moreover, by (3.30), (3.31) and Lebesgue's dominated convergence theorem, it holds that \[\int_{\mathbb{R}^{n}}(\mu^{i}_{m})^{2}f^{{}^{\prime}}_{\varepsilon}\Big{(}V_{m}(\mu^{i}_{m}y+\xi_{m})\Big{)}\tilde{u}^{i}_{m}(y)\varpi(y)dy\to\int_{\mathbb{R}^{n}}f^{{}^{\prime}}_{0}\Big{(}U(y-\sigma_{i})\Big{)}\tilde{u}^{i}(y)\varpi(y)dy.\] Then, (3.27) follows by passing to the limit in (3.29). (2) Let us prove (3.28). We set \(x-\xi_{m}=\mu^{i}_{m}y\); then \[\int_{\mathbb{R}^{n}}\nabla\psi^{h}_{1,\sigma^{i}_{m}}(y)\nabla\tilde{u}^{i}_{m}(y)dy= \int_{\mathbb{R}^{n}}f^{{}^{\prime}}_{0}\Big{(}U_{1,\sigma^{i}_{m}}(y)\Big{)}\psi^{h}_{1,\sigma^{i}_{m}}(y)\tilde{u}^{i}_{m}(y)dy\] \[= \mu^{i}_{m}\int_{\frac{\sqrt{\mu^{i}_{m}\mu^{i+1}_{m}}}{2}\leq|x-\xi_{m}|\leq 2\sqrt{\mu^{i}_{m}\mu^{i-1}_{m}}}f^{{}^{\prime}}_{0}\Big{(}U_{\mu^{i}_{m},\xi^{i}_{m}}(x)\Big{)}\psi^{h}_{\mu^{i}_{m},\xi^{i}_{m}}(x)u_{m}(x)\chi^{i}_{m}(x)dx\] \[= \mu^{i}_{m}\Big{(}\int_{\mathcal{A}^{i}_{m}}f^{{}^{\prime}}_{0}\Big{(}U_{\mu^{i}_{m},\xi^{i}_{m}}(x)\Big{)}\psi^{h}_{\mu^{i}_{m},\xi^{i}_{m}}(x)u_{m}(x)dx+o(1)\Big{)}. \tag{3.32}\] Now, we show that \[\mu_{m}^{i}\int_{\mathcal{A}_{m}^{i}}f_{0}^{{}^{\prime}}\Big{(}U_{\mu_{m}^{i},\xi_{m}^{i}}(x)\Big{)}\psi_{\mu_{m}^{i},\xi_{m}^{i}}^{h}(x)u_{m}(x)dx=o(1). \tag{3.33}\] Therefore, (3.28) follows by (3.32) and (3.33), taking into account that \(\sigma_{m}^{i}\to\sigma_{i}\). From (3.5) and (3.11), one has \[\mu_{m}^{i}\int_{\Omega}\nabla P\psi_{\mu_{m}^{i},\xi_{m}^{i}}^{h}(x)\nabla u_{m}(x)dx=o(1). \tag{3.34}\] On the other hand, \[\int_{\Omega}\nabla P\psi_{\mu_{m}^{i},\xi_{m}^{i}}^{h}(x)\nabla u_{m}(x)dx= \int_{\Omega}f_{0}^{{}^{\prime}}\Big{(}U_{\mu_{m}^{i},\xi_{m}^{i}}(x)\Big{)}\psi_{\mu_{m}^{i},\xi_{m}^{i}}^{h}(x)u_{m}(x)dx\] \[= \int_{\Omega\setminus B(\xi_{m},\rho)}\cdots dx+\sum_{l=1,\ l\neq i}^{k}\int_{\mathcal{A}_{m}^{l}}\cdots dx+\int_{\mathcal{A}_{m}^{i}}\cdots dx\] \[= \int_{\mathcal{A}_{m}^{i}}\cdots dx+o\Big{(}\frac{1}{\mu_{m}^{i}}\Big{)},\] where \[\int_{\Omega\setminus B(\xi_{m},\rho)}\Big{|}f_{0}^{{}^{\prime}}\Big{(}U_{\mu_{m}^{i},\xi_{m}^{i}}(x)\Big{)}\psi_{\mu_{m}^{i},\xi_{m}^{i}}^{h}(x)u_{m}(x)\Big{|}dx\] \[\leq C|\psi_{\mu_{m}^{i},\xi_{m}^{i}}^{h}|_{\frac{2n}{n-2}}|u_{m}|_{\frac{2n}{n-2}}\Big{(}\int_{\Omega\setminus B(\xi_{m},\rho)}U_{\mu_{m}^{i},\xi_{m}^{i}}^{\frac{2n}{n-2}}dx\Big{)}^{\frac{2}{n}}=O(\mu_{m}^{i}).
\tag{3.35}\] If \(l\neq i\), by (3.24), we use the fact that \(\int_{\mathcal{A}_{m}^{l}}U_{\mu_{m}^{i},\xi_{m}^{i}}^{\frac{2n}{n-2}}dx=O\Big{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n}{n-2}}\Big{)}\); then \[\int_{\mathcal{A}_{m}^{l}}\Big{|}f_{0}^{{}^{\prime}}\Big{(}U_{\mu_{m}^{i},\xi_{m}^{i}}(x)\Big{)}\psi_{\mu_{m}^{i},\xi_{m}^{i}}^{h}(x)u_{m}(x)\Big{|}dx\] \[\leq C|\psi_{\mu_{m}^{i},\xi_{m}^{i}}^{h}|_{\frac{2n}{n-2}}|u_{m}|_{\frac{2n}{n-2}}\Big{(}\int_{\mathcal{A}_{m}^{l}}U_{\mu_{m}^{i},\xi_{m}^{i}}^{\frac{2n}{n-2}}dx\Big{)}^{\frac{2}{n}}=O\Big{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n}{n-2}}\Big{)}.\] This finishes the proof of the lemma. Now, by means of the previous result, we can give the following proof. _Proof of Proposition 2.1_: First of all, we point out that \(\phi\) solves equation (2.11) if and only if \(\phi\) is a fixed point of the map \(T_{\mu,\xi}:E_{\mu,\xi}^{\perp}\to E_{\mu,\xi}^{\perp}\) defined by \[T_{\mu,\xi}(\phi)= L_{\mu,\xi}^{-1}\Pi_{\mu,\xi}^{\perp}i^{*}\Big{[}\Big{(}f_{\varepsilon}(V+\phi)-f_{\varepsilon}(V)-f_{\varepsilon}^{{}^{\prime}}(V)\phi\Big{)}\] \[+\Big{(}f_{\varepsilon}^{{}^{\prime}}(V)-\sum_{i=1}^{k}(-1)^{i}f_{0}^{{}^{\prime}}(PU_{\mu_{i},\xi_{i}})\Big{)}\phi+\Big{(}\sum_{i=1}^{k}(-1)^{i}f_{0}^{{}^{\prime}}(PU_{\mu_{i},\xi_{i}})-\sum_{i=1}^{k}(-1)^{i}f_{0}^{{}^{\prime}}(U_{\mu_{i},\xi_{i}})\Big{)}\phi\] \[+\Big{(}f_{\varepsilon}(V)-\sum_{i=1}^{k}(-1)^{i}f_{0}(PU_{\mu_{i},\xi_{i}})\Big{)}+\Big{(}\sum_{i=1}^{k}(-1)^{i}f_{0}(PU_{\mu_{i},\xi_{i}})-\sum_{i=1}^{k}(-1)^{i}f_{0}(U_{\mu_{i},\xi_{i}})\Big{)}\Big{]}.\] From Lemma 3.1 and the Sobolev inequality, we have \[\|T_{\mu,\xi}(\phi)\|\leq C\Big{|}f_{\varepsilon}(V+\phi)-f_{\varepsilon}(V)-f_{\varepsilon}^{{}^{\prime}}(V)\phi\Big{|}_{\frac{2n}{n+2}}\] \[+C\Big{|}\Big{(}f_{\varepsilon}^{{}^{\prime}}(V)-\sum_{i=1}^{k}(-1)^{i}f_{0}^{{}^{\prime}}(PU_{\mu_{i},\xi_{i}})\Big{)}\phi\Big{|}_{\frac{2n}{n+2}}+C\Big{|}\Big{(}f_{0}^{{}^{\prime}}(V)-\sum_{i=1}^{k}(-1)^{i}f_{0}^{{}^{\prime}}(U_{\mu_{i},\xi_{i}})\Big{)}\phi\Big{|}_{\frac{2n}{n+2}}\] \[+C\Big{|}f_{\varepsilon}(V)-\sum_{i=1}^{k}(-1)^{i}f_{0}(PU_{\mu_{i},\xi_{i}})\Big{|}_{\frac{2n}{n+2}}+C\Big{|}\sum_{i=1}^{k}(-1)^{i}\Big{(}f_{0}(PU_{\mu_{i},\xi_{i}})-f_{0}(U_{\mu_{i},\xi_{i}})\Big{)}\Big{|}_{\frac{2n}{n+2}}\] \[= H_{1}+\cdots+H_{5}.\] _Estimate of \(H_{1}\)_: By the mean value theorem, choosing \(t=t(x)\in(0,1)\), we have \[H_{1}=\Big{|}f_{\varepsilon}(V+\phi)-f_{\varepsilon}(V)-f_{\varepsilon}^{{}^{\prime}}(V)\phi\Big{|}_{\frac{2n}{n+2}}=\Big{|}\Big{(}f_{\varepsilon}^{{}^{\prime}}(V+t\phi)-f_{\varepsilon}^{{}^{\prime}}(V)\Big{)}\phi\Big{|}_{\frac{2n}{n+2}}.
\tag{3.36}\] When \(n<6\), it follows from Lemma 5.5 that \[H_{1}\leq C\Big{(}|\phi|_{\frac{2n}{n+2}}^{p}+|U_{\mu_{i},\xi_{i}}^{p-2}\phi^{2}|_{\frac{2n}{n+2}}\Big{)}\leq C\Big{(}|\phi|_{\frac{2n}{n-2}}^{p-2}+|U_{\mu_{i},\xi_{i}}|_{p-2}^{p-2}\Big{)}|\phi|_{\frac{2n}{n-2}}^{2}\leq C\Big{(}\|\phi\|^{p-2}+1\Big{)}\|\phi\|^{2}.\] When \(n=6\), by the Sobolev inequality, one has \[H_{1}\leq C\Big{(}\Big{|}|\phi|^{p}\Big{|}_{\frac{2n}{n+2}}+|\phi^{2}|_{\frac{2n}{n+2}}\Big{)}=C\Big{(}|\phi|_{\frac{2n}{n-2}}^{p}+\Big{(}\int\limits_{\Omega}|\phi|^{\frac{2n}{n-2}}dx\Big{)}^{\frac{n+2}{2n}}\Big{)}=2C|\phi|_{p+1}^{p}\leq 2C\|\phi\|^{2}.\] When \(n>6\), there holds \[H_{1}\leq C\Big{(}|\phi|_{\frac{2n}{n+2}}^{p}+\varepsilon|U_{\mu_{i},\xi_{i}}^{p-1}\phi|_{\frac{2n}{n+2}}\Big{)}=C\Big{(}|\phi|_{\frac{2n}{n-2}}^{p}+\Big{(}\int\limits_{\Omega}(U_{\mu_{i},\xi_{i}}^{p-1}|\phi|)^{\frac{2n}{n+2}}dx\Big{)}^{\frac{n+2}{2n}}\Big{)}\] \[\leq C\Big{(}|\phi|_{\frac{2n}{n-2}}^{p}+\varepsilon|U_{\mu_{i},\xi_{i}}|_{\frac{2n}{n-2}}^{p-1}|\phi|_{\frac{2n}{n-2}}\Big{)}=C\Big{(}|\phi|_{\frac{2n}{n-2}}^{p-1}+\varepsilon|U_{\mu_{i},\xi_{i}}|_{\frac{2n}{n-2}}^{p-1}\Big{)}|\phi|_{\frac{2n}{n-2}}\] \[\leq C(\|\phi\|^{p-1}+\varepsilon)\|\phi\|.\] Summing up these estimates, we have \[H_{1}\leq\begin{cases}C(\|\phi\|^{p-2}+1)\|\phi\|^{2}&\text{if }3\leq n\leq 5,\\ C\|\phi\|^{2}&\text{if }n=6,\\ C(\|\phi\|^{p-1}+\varepsilon)\|\phi\|&\text{if }n\geq 7.\end{cases} \tag{3.37}\] _Estimate of \(H_{2}\)_: From Hölder's inequality and (5.12), we get \[H_{2}= \Big{|}\Big{(}f_{\varepsilon}^{{}^{\prime}}(V)-\sum_{i=1}^{k}(-1)^{i}f_{0}^{{}^{\prime}}(PU_{\mu_{i},\xi_{i}})\Big{)}\phi\Big{|}_{\frac{2n}{n+2}}\] \[\leq \Big{|}f_{\varepsilon}^{{}^{\prime}}(V)-\sum_{i=1}^{k}(-1)^{i}f_{0}^{{}^{\prime}}(PU_{\mu_{i},\xi_{i}})\Big{|}_{\frac{n}{2}}|\phi|_{\frac{2n}{n-2}}\leq C\varepsilon\ln\Big{|}\ln\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{|}\|\phi\|.\] _Estimate of \(H_{3}\)_: By Hölder's inequality, (5.9) and (5.11), there holds \[H_{3} = \Big{|}\Big{(}f_{0}^{{}^{\prime}}(V)-\sum_{i=1}^{k}(-1)^{i}f_{0}^{{}^{\prime}}(U_{\mu_{i},\xi_{i}})\Big{)}\phi\Big{|}_{\frac{2n}{n+2}}\] \[\leq \Big{|}\Big{(}f_{0}^{{}^{\prime}}(V)-\sum_{i=1}^{k}(-1)^{i}f_{0}^{{}^{\prime}}(PU_{\mu_{i},\xi_{i}})\Big{)}\phi\Big{|}_{\frac{2n}{n+2}}+\Big{|}\sum_{i=1}^{k}(-1)^{i}\Big{(}f_{0}^{{}^{\prime}}(PU_{\mu_{i},\xi_{i}})-f_{0}^{{}^{\prime}}(U_{\mu_{i},\xi_{i}})\Big{)}\phi\Big{|}_{\frac{2n}{n+2}}\] \[\leq \Big{|}f_{0}^{{}^{\prime}}(V)-\sum_{i=1}^{k}(-1)^{i}f_{0}^{{}^{\prime}}(U_{\mu_{i},\xi_{i}})\Big{|}_{\frac{n}{2}}|\phi|_{\frac{2n}{n-2}}+k\Big{|}f_{0}^{{}^{\prime}}(PU_{\mu_{i},\xi_{i}})-f_{0}^{{}^{\prime}}(U_{\mu_{i},\xi_{i}})\Big{|}_{\frac{n}{2}}|\phi|_{\frac{2n}{n-2}}\] \[\leq \begin{cases}O\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\|\phi\|\Big{)}&\text{if }\,3\leq n\leq 5,\\ O\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{|}\ln\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{|}\|\phi\|\Big{)}&\text{if }\,n=6,\\ O\Big{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{-n+8}{n-2}}\|\phi\|\Big{)}&\text{if }\,n\geq 7.\end{cases}\] _Estimate of \(H_{4}\) and \(H_{5}\)_: From (5.12) and (5.8), one has \[H_{4}+H_{5}=O(R_{\varepsilon}),\] where \(R_{\varepsilon}\) satisfies \[R_{\varepsilon}=\begin{cases}O\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{|}\ln\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{|}\ln\Big{|}\ln\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{|}\Big{)}&\text{if }\,3\leq n\leq 6,\\
O\Big{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n+2}{2(n-2)}}\Big{)}&\text{if }\,n\geq 7.\end{cases}\] From \(H_{1}\)-\(H_{5}\), there exist constants \(C^{*}>0\) and \(\mu_{0}>0\) such that for each \(\mu\in(0,\mu_{0})\), we obtain \[\|T_{\mu,\xi}(\phi)\|\leq C^{*}R_{\varepsilon}\quad\text{for every }\phi\in\tilde{B}=\{\phi\in E_{\mu,\xi}^{\perp}:\|\phi\|\leq C^{*}R_{\varepsilon}\}.\] Next, we prove that \(T_{\mu,\xi}\) is a contraction map. If \(\phi_{1}\), \(\phi_{2}\in\tilde{B}\), then \[\|T_{\mu,\xi}(\phi_{2})-T_{\mu,\xi}(\phi_{1})\|\] \[\leq C\Big{|}f_{\varepsilon}(V+\phi_{2})-f_{\varepsilon}(V+\phi_{1})-f_{\varepsilon}^{{}^{\prime}}(V)(\phi_{2}-\phi_{1})\Big{|}_{\frac{2n}{n+2}}\] \[+C\Big{|}\Big{(}f_{\varepsilon}^{{}^{\prime}}(V)-\sum_{i=1}^{k}(-1)^{i}f_{0}^{{}^{\prime}}(PU_{\mu_{i},\xi_{i}})\Big{)}(\phi_{2}-\phi_{1})\Big{|}_{\frac{2n}{n+2}}\] \[+C\Big{|}\Big{(}\sum_{i=1}^{k}(-1)^{i}f_{0}^{{}^{\prime}}(PU_{\mu_{i},\xi_{i}})-\sum_{i=1}^{k}(-1)^{i}f_{0}^{{}^{\prime}}(U_{\mu_{i},\xi_{i}})\Big{)}(\phi_{2}-\phi_{1})\Big{|}_{\frac{2n}{n+2}}=K_{1}+K_{2}+K_{3}.\] _Estimate of \(K_{1}\)_: This is similar to the computations of \(H_{1}\)-\(H_{3}\). By the mean value theorem, we choose \(\varrho=\varrho(x)\in(0,1)\) and \(\phi_{\varrho}=(1-\varrho)\phi_{1}+\varrho\phi_{2}\); then \[K_{1}= \Big{|}f_{\varepsilon}(V+\phi_{2})-f_{\varepsilon}(V+\phi_{1})-f_{\varepsilon}^{{}^{\prime}}(V)(\phi_{2}-\phi_{1})\Big{|}_{\frac{2n}{n+2}}\] \[= \Big{|}\Big{(}f_{\varepsilon}^{{}^{\prime}}(V+\phi_{\varrho})-f_{\varepsilon}^{{}^{\prime}}(V)\Big{)}(\phi_{2}-\phi_{1})\Big{|}_{\frac{2n}{n+2}}\,.\] When \(n<6\), by Lemma 5.5 and Hölder's inequality, we get \[K_{1}\leq C\Big{(}\Big{|}|\phi_{\varrho}|^{p-1}(\phi_{2}-\phi_{1})\Big{|}_{\frac{2n}{n+2}}+\Big{|}\Big{(}\sum_{i=1}^{k}U_{\mu_{i},\xi_{i}}\Big{)}^{p-2}\phi_{\varrho}(\phi_{2}-\phi_{1})\Big{|}_{\frac{2n}{n+2}}\Big{)}\] \[\leq C\Big{(}|\phi_{\varrho}|^{p-1}_{\frac{2n}{n-2}}|\phi_{2}-\phi_{1}|_{\frac{2n}{n-2}}+\bigg{(}\int_{\Omega}\Big{[}\Big{(}\sum_{i=1}^{k}U_{\mu_{i},\xi_{i}}\Big{)}^{p-2}\phi_{\varrho}(\phi_{2}-\phi_{1})\Big{]}^{\frac{2n}{n+2}}dx\bigg{)}^{\frac{n+2}{2n}}\bigg{)}\] \[\leq C\Big{(}|\phi_{\varrho}|^{p-1}_{\frac{2n}{n-2}}+\sum_{i=1}^{k}|U_{\mu_{i},\xi_{i}}|_{\frac{2n}{n-2}}^{p-2}|\phi_{\varrho}|_{\frac{2n}{n-2}}\Big{)}|\phi_{2}-\phi_{1}|_{\frac{2n}{n-2}}\,.\] When \(n=6\), we have \[K_{1}\leq C|\phi_{\varrho}|^{p-1}_{\frac{2n}{n-2}}|\phi_{2}-\phi_{1}|_{\frac{2n}{n-2}}.\] When \(n>6\), there holds \[K_{1}\leq C\Big{(}\Big{|}|\phi_{\varrho}|^{p-1}(\phi_{2}-\phi_{1})\Big{|}_{\frac{2n}{n+2}}+\varepsilon\Big{|}\Big{(}\sum\limits_{i=1}^{k}U_{\mu_{i},\xi_{i}}\Big{)}^{p-1}\phi_{\varrho}(\phi_{2}-\phi_{1})\Big{|}_{\frac{2n}{n+2}}\Big{)}\] \[\leq C\Big{[}|\phi_{\varrho}|^{p-1}_{\frac{2n}{n-2}}|\phi_{2}-\phi_{1}|_{\frac{2n}{n-2}}+\Big{(}\varepsilon\int_{\Omega}\Big{[}\Big{(}\sum\limits_{i=1}^{k}U_{\mu_{i},\xi_{i}}\Big{)}^{p-1}\phi_{\varrho}(\phi_{2}-\phi_{1})\Big{]}^{\frac{2n}{n+2}}dx\Big{)}^{\frac{n+2}{2n}}\Big{]}\] \[\leq C\Big{(}|\phi_{\varrho}|^{p-1}_{\frac{2n}{n-2}}+\varepsilon\Big{)}|\phi_{2}-\phi_{1}|_{\frac{2n}{n-2}}.\] Hence, by the Sobolev inequality, we obtain \[K_{1}\leq C\Big{(}\|\phi_{\varrho}\|^{p-1}+\max\{\|\phi_{\varrho}\|,\varepsilon\}\Big{)}\|\phi_{2}-\phi_{1}\|.\] _Estimate of \(K_{2}\)_: Similar to the proof of \(H_{2}\) and \(H_{3}\), from (5.13) and (5.11), there holds \[K_{2}= \Big{|}\Big{(}f_{\varepsilon}^{{}^{\prime}}(V)-\sum\limits_{i=1}^
{k}(-1)^{i}f_{0}^{{}^{\prime}}(PU_{\mu_{i},\xi_{i}})\Big{)}(\phi_{1}-\phi_{2})\Big{|}_{\frac{2n}{n+2}}\] \[\leq\Big{|}f_{\varepsilon}^{{}^{\prime}}(V)-f_{0}^{{}^{\prime}}(V)\Big{|}_{\frac{n}{2}}|\phi_{2}-\phi_{1}|_{\frac{2n}{n-2}}+\Big{|}f_{0}^{{}^{\prime}}(V)-\sum\limits_{i=1}^{k}(-1)^{i}f_{0}^{{}^{\prime}}(PU_{\mu_{i},\xi_{i}})\Big{|}_{\frac{n}{2}}|\phi_{2}-\phi_{1}|_{\frac{2n}{n-2}}\] \[\leq C\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{-n+8}{n-2}}\ln\Big{|}\ln\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{|}\|\phi_{2}-\phi_{1}\|.\] _Estimate of \(K_{3}\)_: By (5.9), one has \[K_{3} = \Big{|}\Big{(}\sum\limits_{i=1}^{k}(-1)^{i}f_{0}^{{}^{\prime}}(PU_{\mu_{i},\xi_{i}})-\sum\limits_{i=1}^{k}(-1)^{i}f_{0}^{{}^{\prime}}(U_{\mu_{i},\xi_{i}})\Big{)}(\phi_{2}-\phi_{1})\Big{|}_{\frac{2n}{n+2}}\] \[\leq k\Big{|}f_{0}^{{}^{\prime}}(PU_{\mu_{i},\xi_{i}})-f_{0}^{{}^{\prime}}(U_{\mu_{i},\xi_{i}})\Big{|}_{\frac{n}{2}}|\phi_{2}-\phi_{1}|_{\frac{2n}{n-2}}\] \[\leq \begin{cases}O\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\|\phi_{2}-\phi_{1}\|\Big{)}&\text{if }3\leq n\leq 5,\\ O\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{|}\ln\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{|}^{\frac{1}{2}}\|\phi_{2}-\phi_{1}\|\Big{)}&\text{if }n=6,\\ O\Big{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{2}{n-2}}\|\phi_{2}-\phi_{1}\|\Big{)}&\text{if }n\geq 7.\end{cases}\] From \(K_{1}\)-\(K_{3}\), if \(\varepsilon\) is sufficiently small, there exists a constant \(L^{*}\in(0,1)\) such that \[\|T_{\mu,\xi}(\phi_{2})-T_{\mu,\xi}(\phi_{1})\|\leq L^{*}\|\phi_{2}-\phi_{1}\|.\] It follows that \(T_{\mu,\xi}\) is a contraction mapping from \(\tilde{B}\) to \(\tilde{B}\); hence it has a unique fixed point \(\phi\in\tilde{B}\). This concludes the proof.

## 4. Proof of Proposition 2.2

This section is devoted to proving Proposition 2.2. _Proof of Part a_. We consider the following perturbation problem \[\begin{cases}-\Delta(V+\phi)=f_{\varepsilon}(V+\phi)+\sum\limits_{i=1}^{k}\sum\limits_{l=0}^{n}c_{il}U_{\mu_{\varepsilon i},\xi_{\varepsilon i}}^{p-1}P\psi_{\mu_{\varepsilon i},\xi_{\varepsilon i}}^{l}&\text{in }\Omega,\\ \sum\limits_{i=1}^{k}\int_{\Omega}U_{\mu_{\varepsilon i},\xi_{\varepsilon i}}^{p-1}P\psi_{\mu_{\varepsilon i},\xi_{\varepsilon i}}^{l}\phi dx=0&\text{for }l=0,1,\cdots,n.\end{cases} \tag{4.1}\] From (2.15), we have \[\int_{\Omega}\Delta(V+\phi)P\psi^{h}_{\mu_{j},\xi_{j}}dx+\int_{\Omega}f_{\varepsilon}(V+\phi)P\psi^{h}_{\mu_{j},\xi_{j}}dx=0. \tag{4.2}\] Thus, by (4.1) and (4.2), we obtain \[\sum\limits_{i=1}^{k}\sum\limits_{l=0}^{n}c_{il}\int_{\Omega}U^{p-1}_{\mu_{\varepsilon i},\xi_{\varepsilon i}}P\psi^{l}_{\mu_{\varepsilon i},\xi_{\varepsilon i}}P\psi^{h}_{\mu_{j},\xi_{j}}dx=0,\] which means that \(c_{il}=0\) for \(i=1,\cdots,k\) and \(l=0,1,\cdots,n\). Then \(V+\phi\) is a solution of problem (1.1). _Proof of Part b_.
There holds \[\Big{\langle}V+\phi-i^{*}[f_{\varepsilon}(V+\phi)],P\psi^{h}_{\mu_{j},\xi_{j}}\Big{\rangle}\] \[= \sum_{i=1}^{k}(-1)^{i}\langle PU_{\mu_{i},\xi_{i}},P\psi^{h}_{\mu_{j},\xi_{j}}\rangle-\int_{\Omega}f_{\varepsilon}(V+\phi)P\psi^{h}_{\mu_{j},\xi_{j}}dx\] \[= \sum_{i=1}^{k}\int_{\Omega}(-1)^{i}f_{0}(U_{\mu_{i},\xi_{i}})P\psi^{h}_{\mu_{j},\xi_{j}}dx-\int_{\Omega}f_{\varepsilon}(V+\phi)P\psi^{h}_{\mu_{j},\xi_{j}}dx\] \[= \sum_{i=1}^{k}\int_{\Omega}(-1)^{i}\Big{[}f_{0}(U_{\mu_{i},\xi_{i}})-f_{0}(PU_{\mu_{i},\xi_{i}})\Big{]}\psi^{h}_{\mu_{j},\xi_{j}}dx\] \[+\sum_{i=1}^{k}\int_{\Omega}(-1)^{i}\Big{[}f_{0}(U_{\mu_{i},\xi_{i}})-f_{0}(PU_{\mu_{i},\xi_{i}})\Big{]}(P\psi^{h}_{\mu_{j},\xi_{j}}-\psi^{h}_{\mu_{j},\xi_{j}})dx\] \[+\int_{\Omega}\Big{[}\sum_{i=1}^{k}(-1)^{i}f_{0}(PU_{\mu_{i},\xi_{i}})-f_{\varepsilon}(V)\Big{]}\psi^{h}_{\mu_{j},\xi_{j}}dx\] \[+\int_{\Omega}\Big{[}\sum_{i=1}^{k}(-1)^{i}f_{0}(PU_{\mu_{i},\xi_{i}})-f_{\varepsilon}(V)\Big{]}(P\psi^{h}_{\mu_{j},\xi_{j}}-\psi^{h}_{\mu_{j},\xi_{j}})dx\] \[-\int_{\Omega}\Big{[}f_{\varepsilon}(V+\phi)-f_{\varepsilon}(V)-f_{\varepsilon}^{{}^{\prime}}(V)\phi\Big{]}P\psi^{h}_{\mu_{j},\xi_{j}}dx-\int_{\Omega}[f_{\varepsilon}^{{}^{\prime}}(V)-f_{0}^{{}^{\prime}}(V)]\phi P\psi^{h}_{\mu_{j},\xi_{j}}dx\] \[-\int_{\Omega}\Big{[}f_{0}^{{}^{\prime}}(V)-\sum_{i=1}^{k}(-1)^{i}f_{0}^{{}^{\prime}}(U_{\mu_{i},\xi_{i}})\Big{]}\phi P\psi^{h}_{\mu_{j},\xi_{j}}dx\] \[-\sum_{i=1}^{k}\int_{\Omega}(-1)^{i}f_{0}^{{}^{\prime}}(U_{\mu_{i},\xi_{i}})\phi(P\psi^{h}_{\mu_{j},\xi_{j}}-\psi^{h}_{\mu_{j},\xi_{j}})dx-\sum_{i=1}^{k}\int_{\Omega}(-1)^{i}f_{0}^{{}^{\prime}}(U_{\mu_{i},\xi_{i}})\phi\psi^{h}_{\mu_{j},\xi_{j}}dx\] \[= P_{1}+\cdots+P_{9}.\] _Estimate of \(P_{1}\)_: It holds \[P_{1}= \sum_{i=1}^{k}\int_{\Omega}(-1)^{i}\Big{[}f_{0}(U_{\mu_{i},\xi_{i}})-f_{0}(PU_{\mu_{i},\xi_{i}})\Big{]}\psi^{h}_{\mu_{j},\xi_{j}}dx\] \[= -\sum_{i=1}^{k}\int_{\Omega}(-1)^{i}f_{0}^{{}^{\prime}}(U_{\mu_{i},\xi_{i}})(PU_{\mu_{i},\xi_{i}}-U_{\mu_{i},\xi_{i}})\psi^{h}_{\mu_{j},\xi_{j}}dx\] \[-\sum_{i=1}^{k}\int_{\Omega}(-1)^{i}\Big{[}f_{0}(PU_{\mu_{i},\xi_{i}})-f_{0}(U_{\mu_{i},\xi_{i}})-f_{0}^{{}^{\prime}}(U_{\mu_{i},\xi_{i}})(PU_{\mu_{i},\xi_{i}}-U_{\mu_{i},\xi_{i}})\Big{]}\psi^{h}_{\mu_{j},\xi_{j}}dx.\]
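The second sum in this splitting is a second-order Taylor remainder. Behind its smallness is the following elementary pointwise inequality for \(f_{0}(u)=|u|^{p-1}u\), recorded here for the reader's convenience (it is the standard fact underlying the norm estimate (5.10)): for all real \(a\) and \(b\), \[\Big{|}f_{0}(a+b)-f_{0}(a)-f_{0}^{{}^{\prime}}(a)b\Big{|}\leq C\begin{cases}|a|^{p-2}b^{2}+|b|^{p}&\text{if }p\geq 2,\text{ that is, }3\leq n\leq 6,\\ |b|^{p}&\text{if }1<p<2,\text{ that is, }n\geq 7,\end{cases}\] applied with \(a=U_{\mu_{i},\xi_{i}}\) and \(b=PU_{\mu_{i},\xi_{i}}-U_{\mu_{i},\xi_{i}}\), whose size is given by (5.1).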
If \(h=0\), by (5.1) and (2.9), we have \[-\int_{\Omega}f^{{}^{\prime}}_{0}(U_{\mu_{i},\xi_{i}})(PU_{\mu_{i},\xi_{i}}-U_{\mu_{i},\xi_{i}})\psi^{0}_{\mu_{j},\xi_{j}}dx\] \[= p\alpha_{n}\mu_{i}^{\frac{n-2}{2}}\int_{\Omega}U^{p-1}_{\mu_{i},\xi_{i}}H(x,\xi_{i})\psi^{0}_{\mu_{j},\xi_{j}}dx\] \[= p\alpha_{n}\mu_{i}^{n-2}\Big{(}H(\xi_{i},\xi_{i})+O(\mu_{i})\Big{)}\int_{\frac{\Omega-\xi_{i}}{\mu_{i}}}\frac{1}{(1+|y|^{2})^{2}}\psi^{0}\Big{(}\frac{\mu_{i}y+\xi_{i}-\xi_{j}}{\mu_{j}}\Big{)}dy\] \[= \begin{cases}p\alpha_{n}\mu_{i}^{n-2}\Big{(}H(\xi_{i},\xi_{i})+O(\mu_{i})\Big{)}\int_{\mathbb{R}^{n}}U^{p-1}(y)\psi^{0}(y)dy&\text{if $j=i$},\\ \frac{(n-2)\alpha_{n}^{2}p}{2}\mu_{i}^{n-2}\Big{(}H(\xi_{i},\xi_{i})+O(\mu_{i})\Big{)}\int_{\frac{\Omega-\xi_{i}}{\mu_{i}}}\frac{1}{(1+|y|^{2})^{2}}\frac{|\mu_{i}y+\xi_{i}-\xi_{j}|^{2}-\mu_{j}^{2}}{(\mu_{j}^{2}+|\mu_{i}y+\xi_{i}-\xi_{j}|^{2})^{\frac{n}{2}}}dy&\text{if $j>i$},\end{cases}\] \[= \begin{cases}\alpha_{n}a_{1}H(\xi,\xi)\mu_{i}^{n-2}+O(\mu_{i}^{n-1})&\text{if $j=i$ and $h=0$},\\ CH(\xi,\xi)+O(\mu_{i}^{n-1})&\text{if $j>i$ and $h=0$}.\end{cases}\] If \(h=1,\cdots,n\) and \(j=i\), we set \(\partial_{\xi_{i}^{h}}\varphi(\xi)=\frac{\partial\varphi(\xi)}{\partial\xi_{i}^{h}}\), and by (2.10), one has \[-\int_{\Omega}f^{{}^{\prime}}_{0}(U_{\mu_{i},\xi_{i}})(PU_{\mu_{i},\xi_{i}}-U_{\mu_{i},\xi_{i}})\psi^{h}_{\mu_{j},\xi_{j}}dx\] \[= p\alpha_{n}\mu_{i}^{\frac{n-2}{2}}\int_{\Omega}U^{p-1}_{\mu_{i},\xi_{i}}H(x,\xi_{i})\psi^{h}_{\mu_{i},\xi_{i}}dx\] \[= \alpha_{n}\mu_{i}^{\frac{n}{2}}\int_{\Omega}H(x,\xi_{i})\frac{\partial U^{p}_{\mu_{i},\xi_{i}}}{\partial\xi_{i}^{h}}dx\] \[= \alpha_{n}\mu_{i}^{\frac{n}{2}}\Big{[}\frac{\partial}{\partial\xi_{i}^{h}}\int_{\Omega}U^{p}_{\mu_{i},\xi_{i}}H(x,\xi_{i})dx-\int_{\Omega}U^{p}_{\mu_{i},\xi_{i}}\frac{\partial H(x,\xi_{i})}{\partial\xi_{i}^{h}}dx\Big{]}\] \[= \alpha_{n}\mu_{i}^{\frac{n}{2}}\Big{[}\mu_{i}^{\frac{n-2}{2}}\frac{\partial}{\partial\xi_{i}^{h}}\int_{\frac{\Omega-\xi_{i}}{\mu_{i}}}U^{p}(y)H(\mu_{i}y+\xi_{i},\xi_{i})dy-\mu_{i}^{\frac{n-2}{2}}\int_{\frac{\Omega-\xi_{i}}{\mu_{i}}}U^{p}(y)\frac{\partial H(\mu_{i}y+\xi_{i},\xi_{i})}{\partial\xi_{i}^{h}}dy\Big{]}\] \[= \alpha_{n}a_{2}\mu_{i}^{n-1}\Big{[}\frac{\partial(H(\xi_{i},\xi_{i}))}{\partial\xi_{i}^{h}}-\frac{\partial H(\xi_{i},\xi_{i})}{\partial\xi_{i}^{h}}+O(\mu_{i})\Big{]}\] \[= \alpha_{n}a_{2}\mu_{i}^{n-1}\Big{(}\frac{1}{2}\partial_{\xi_{h}}\varphi(\xi)+O(\mu_{i})\Big{)}.\] If \(h=1,\cdots,n\) and \(j>i\), by Lemma 5.1 and (2.9), one has \[\int_{\Omega}f^{{}^{\prime}}_{0}(U_{\mu_{i},\xi_{i}})(PU_{\mu_{i},\xi_{i}}-U_{\mu_{i},\xi_{i}})\psi^{h}_{\mu_{j},\xi_{j}}dx\] \[= p\alpha_{n}\mu_{i}^{\frac{n-2}{2}}\int_{\Omega}U^{p-1}_{\mu_{i},\xi_{i}}H(x,\xi_{i})\psi^{h}_{\mu_{j},\xi_{j}}dx\] \[= (n-2)p\alpha_{n}^{2}\mu_{i}^{\frac{n-2}{2}}\mu_{j}^{\frac{n}{2}}\int_{\Omega}\frac{\mu_{i}^{2}}{(\mu_{i}^{2}+|x-\xi_{i}|^{2})^{2}}H(x,\xi_{i})\frac{(x-\xi_{j})_{h}}{(\mu_{j}^{2}+|x-\xi_{j}|^{2})^{\frac{n}{2}}}dx\] \[= O(\mu_{i}^{\frac{n-2}{2}}\mu_{j}^{\frac{n}{2}})=O\Big{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n+1}{n-2}}\Big{)}.\] On the other hand, by (5.6), (5.7) and (5.10), we get \[\int_{\Omega}\Big{[}f_{0}(PU_{\mu_{i},\xi_{i}})-f_{0}(U_{\mu_{i},\xi_{i}})-f_{0}^{{}^{\prime}}(U_{\mu_{i},\xi_{i}})(PU_{\mu_{i},\xi_{i}}-U_{\mu_{i},\xi_{i}})\Big{]}\psi^{h}_{\mu_{j},\xi_{j}}dx\] \[\leq \Big{|}f_{0}(PU_{\mu_{i},\xi_{i}})-f_{0}(U_{\mu_{i},\xi_{i}})-f_{0}^{{}^{\prime}}(U_{\mu_{i},\xi_{i}})(PU_{\mu_{i},\xi_{i}}-U_{\mu_{i},\xi_{i}})\Big{|}_{\frac{n}{2}}|\psi^{h}_{\mu_{j},\xi_{j}}|_{\frac{n}{n-2}}\] \[= o\Big{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n-1}{n-2}}\Big{)}\quad\text{for $h=0,\cdots,n$.}\] Therefore, we get \[P_{1} = \left\{\begin{array}{ll}\alpha_{n}a_{1}\sum\limits_{i=1}^{k}\mu_{i}^{n-2}\Big{(}H(\xi_{i},\xi_{i})+O(\mu_{i})\Big{)}+o(\mu_{i}^{n-2})&\text{if $j=i$ and $h=0$,}\\ \sum\limits_{i=1}^{k}O\Big{(}H(\xi_{i},\xi_{i})+O(\mu_{i})\Big{)}+o(\mu_{i}^{n-2})&\text{if $j>i$ and $h=0$,}\\ \alpha_{n}a_{2}\sum\limits_{i=1}^{k}\mu_{i}^{n-1}\Big{(}\frac{1}{2}\partial_{\xi_{h}}\varphi(\xi)+O(\mu_{i})\Big{)}+o(\mu_{i}^{n-1})&\text{if $j=i$ and $h=1,\cdots,n$,}\\ O(\sum\limits_{i=1}^{k}{\mu_{i}}^{\frac{n-2}{2}}\mu_{j}^{\frac{n}{2}})+o(\mu_{i}^{n-1})&\text{if $j>i$ and $h=1,\cdots,n$,}\end{array}\right.\] \[= \left\{\begin{array}{ll}\alpha_{n}a_{1}H(\xi,\xi)\frac{\varepsilon}{|\ln\varepsilon|^{2}}d_{1}^{n-2}+o\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}&\text{if $j=i$ and $h=0$,}\\ CH(\xi,\xi)+o\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}&\text{if $j>i$ and $h=0$,}\\ \frac{1}{2}\alpha_{n}a_{2}\partial_{\xi_{h}}\varphi(\xi)\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n-1}{n-2}}d_{1}^{n-2}+o\Big{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n-1}{n-2}}\Big{)}&\text{if $j=i$ and $h=1,\cdots,n$,}\\ O\Big{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n-1}{n-2}}\Big{)}&\text{if $j>i$ and $h=1,\cdots,n$.}\end{array}\right.\]
_Estimate of \(P_{2}\)_: By (5.8) and (5.4), we deduce \[P_{2} = \sum\limits_{i=1}^{k}\int_{\Omega}\Big{|}(-1)^{i}\Big{[}f_{0}(U_{\mu_{i},\xi_{i}})-f_{0}(PU_{\mu_{i},\xi_{i}})\Big{]}(P\psi^{h}_{\mu_{j},\xi_{j}}-\psi^{h}_{\mu_{j},\xi_{j}})\Big{|}dx\] \[\leq C\sum\limits_{i=1}^{k}\Big{|}f_{0}(U_{\mu_{i},\xi_{i}})-f_{0}(PU_{\mu_{i},\xi_{i}})\Big{|}_{\frac{2n}{n+2}}\Big{|}P\psi^{h}_{\mu_{j},\xi_{j}}-\psi^{h}_{\mu_{j},\xi_{j}}\Big{|}_{\frac{2n}{n-2}}\] \[= \left\{\begin{array}{ll}o\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}&\text{if $h=0$,}\\ o\Big{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n-1}{n-2}}\Big{)}&\text{if $h=1,\cdots,n$.}\end{array}\right.\] _Estimate of \(P_{3}\)_: The main work for \(P_{3}\) is carried out in Lemma 5.7; the final result is \[P_{3} = \int_{\Omega}\Big{[}\sum\limits_{i=1}^{k}(-1)^{i}f_{0}(PU_{\mu_{i},\xi_{i}})-f_{\varepsilon}(V)\Big{]}\psi^{h}_{\mu_{j},\xi_{j}}dx\] \[= \left\{\begin{array}{ll}\alpha_{n}a_{1}\frac{\varepsilon}{|\ln\varepsilon|^{2}}H(\xi,\xi)d_{1}^{n-2}+a_{3}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\sum\limits_{i=1}^{k-1}\Big{(}\frac{d_{i+1}}{d_{i}}\Big{)}^{\frac{n-2}{2}}g(\sigma_{i})-\frac{2k^{2}}{(n-2)^{2}}a_{4}\varepsilon\Big{|}\ln\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{|}\\ -a_{4}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\sum\limits_{i=1}^{k}\frac{2}{n-2}|\ln d_{i}|+o\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}&\text{if $h=0$,}\\ \frac{1}{2}\alpha_{n}a_{2}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n-1}{n-2}}d_{1}^{n-1}\partial_{\xi_{h}}\varphi(\xi)+o\Big{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n-1}{n-2}}\Big{)}&\text{if $h=1,\cdots,n$.}\end{array}\right.\] _Estimate of \(P_{4}\)_: From (5.12) and (5.4), we can bound \[P_{4} = \int_{\Omega}\Big{|}\Big{[}\sum\limits_{i=1}^{k}(-1)^{i}f_{0}(PU_{\mu_{i},\xi_{i}})-f_{\varepsilon}(V)\Big{]}(P\psi^{h}_{\mu_{j},\xi_{j}}-\psi^{h}_{\mu_{j},\xi_{j}})\Big{|}dx.\] _Estimate of \(P_{7}\)_: It holds \[P_{7} = \int_{\Omega}\Big{|}\Big{[}f_{0}^{{}^{\prime}}(V)-\sum_{i=1}^{k}(-1)^{i}f_{0}^{{}^{\prime}}(U_{\mu_{i},\xi_{i}})\Big{]}\phi P\psi_{\mu_{j},\xi_{j}}^{h}\Big{|}dx\] \[= O\Big{(}\Big{|}\Big{[}f_{0}^{{}^{\prime}}(V)-\sum_{i=1}^{k}(-1)^{i}f_{0}^{{}^{\prime}}(U_{\mu_{i},\xi_{i}})\Big{]}P\psi_{\mu_{j},\xi_{j}}^{h}\Big{|}_{\frac{2n}{n+2}}\|\phi\|\Big{)}\] \[= O\Big{(}\Big{|}f_{0}^{{}^{\prime}}(V)-\sum_{i=1}^{k}(-1)^{i}f_{0}^{{}^{\prime}}(PU_{\mu_{i},\xi_{i}})\Big{|}_{\frac{n}{2}}|P\psi_{\mu_{j},\xi_{j}}^{h}|_{\frac{2n}{n-2}}\|\phi\|\Big{)}\] \[+ O\Big{(}\sum_{i=1}^{k}\Big{|}f_{0}^{{}^{\prime}}(PU_{\mu_{i},\xi_{i}})-f_{0}^{{}^{\prime}}(U_{\mu_{i},\xi_{i}})\Big{|}_{\frac{n}{2}}|P\psi_{\mu_{j},\xi_{j}}^{h}|_{\frac{2n}{n-2}}\|\phi\|\Big{)}\] \[= \begin{cases}o\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}&\text{if }\,h=0,\\ o\Big{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n-1}{n-2}}\Big{)}&\text{if }\,h=1,\cdots,n.\end{cases}\] _Estimate of \(P_{8}\)_: For \(h=0\), by (5.5), (2.13) and (5.4), it follows that \[P_{8} = \sum_{i=1}^{k}\int_{\Omega}\Big{|}(-1)^{i}f_{0}^{{}^{\prime}}(U_{\mu_{i},\xi_{i}})\phi(P\psi_{\mu_{j},\xi_{j}}^{0}-\psi_{\mu_{j},\xi_{j}}^{0})\Big{|}dx\]
\[= O\bigg{(}\sum_{i=1}^{k}\Big{|}f_{0}^{{}^{\prime}}(U_{\mu_{i},\xi_{i}})\Big{|}_{\frac{n}{2}}|\phi|_{\frac{2n}{n-2}}\Big{|}P\psi_{\mu_{j},\xi_{j}}^{0}-\psi_{\mu_{j},\xi_{j}}^{0}\Big{|}_{\frac{2n}{n-2}}\bigg{)}\] \[= O\Big{(}\sum_{i=1}^{k}\mu_{i}^{\frac{n-2}{2}}\|\phi\|\Big{)}=\begin{cases}o\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}&\text{if }\,n=3,\\ o\Big{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n-1}{n-2}}\Big{)}&\text{if }\,n\geq 4,\end{cases}\] and for \(h=1,\cdots,n\), we obtain \[P_{8} = \sum_{i=1}^{k}\int_{\Omega}\Big{|}(-1)^{i}f_{0}^{{}^{\prime}}(U_{\mu_{i},\xi_{i}})\phi(P\psi_{\mu_{j},\xi_{j}}^{h}-\psi_{\mu_{j},\xi_{j}}^{h})\Big{|}dx\] \[= O\bigg{(}\Big{|}\sum_{i=1}^{k}f_{0}^{{}^{\prime}}(U_{\mu_{i},\xi_{i}})\Big{|}_{\frac{n}{2}}|\phi|_{\frac{2n}{n-2}}\Big{|}P\psi_{\mu_{j},\xi_{j}}^{h}-\psi_{\mu_{j},\xi_{j}}^{h}\Big{|}_{\frac{2n}{n-2}}\bigg{)}\] \[= O\Big{(}\sum_{i=1}^{k}\mu_{i}^{\frac{n}{2}}\|\phi\|\Big{)}=\begin{cases}o\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}&\text{if }\,3\leq n\leq 5,\\ o\Big{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n-1}{n-2}}\Big{)}&\text{if }\,n\geq 6.\end{cases}\] _Estimate of \(P_{9}\)_: We multiply (2.7) by \(\phi\) and integrate over \(\Omega\); then \[\int_{\Omega}f_{0}^{{}^{\prime}}(U_{\mu_{i},\xi_{i}})\phi\psi_{\mu_{j},\xi_{j}}^{h}dx=0.\] From \(P_{1}\)-\(P_{9}\), we complete the proof.

## 5. Appendix

We collect some well-known estimates. **Lemma 5.1**.: _[_42_]_ _Let \(\xi\in\Omega\) and let \(\mu>0\) be small; there holds_ \[PU_{\mu,\xi}(x)=U_{\mu,\xi}(x)-\alpha_{n}\mu^{\frac{n-2}{2}}H(x,\xi)+O(\mu^{\frac{n+2}{2}}), \tag{5.1}\] \[P\psi_{\mu,\xi}^{0}(x)=\psi_{\mu,\xi}^{0}(x)-\frac{n-2}{2}\alpha_{n}\mu^{\frac{n-2}{2}}H(x,\xi)+O(\mu^{\frac{n+4}{2}}), \tag{5.2}\] \[P\psi_{\mu,\xi}^{h}(x)=\psi_{\mu,\xi}^{h}(x)-\alpha_{n}\mu^{\frac{n}{2}}\partial_{\xi_{h}}H(x,\xi)+O(\mu^{\frac{n+2}{2}}), \tag{5.3}\] _as \(\mu\to 0\) uniformly with respect to \(\xi\) in compact subsets of \(\Omega\), where \(h=1,\cdots,n\) and \(\alpha_{n}\) is given in (1.3). 
Moreover,_ \[|P\psi_{\mu_{j},\xi_{j}}^{h}-\psi_{\mu_{j},\xi_{j}}^{h}\big{|}_{\frac{2n}{n-2}} =\begin{cases}O\Big{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{ \frac{1}{2}}\Big{)}&\text{if }\,h=0,\\ O\Big{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n}{2(n-2 )}}\Big{)}&\text{if }\,h=1,\cdots,n,\end{cases} \tag{5.4}\] _for \(j=1,\cdots,k\)._ **Lemma 5.2**.: _[_18_]_ _There holds_ \[\int_{\Omega}U_{\mu,\xi}^{q}(x)dx=\begin{cases}O\Big{(}\Big{(}\frac{\varepsilon} {|\ln\varepsilon|^{2}}\Big{)}^{\frac{q}{2}}\Big{)}&\text{if }\,0<q<\frac{n}{n-2},\\ O\Big{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n}{2(n-2)} }\Big{|}\ln\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{|}\Big{)}&\text{if }\,q= \frac{n}{n-2},\\ O\Big{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n}{n-2} -\frac{q}{2}}\Big{)}&\text{if }\frac{n}{n-2}<q\leq\frac{2n}{n-2},\end{cases} \tag{5.5}\] \[\int_{\Omega}|\psi_{\mu,\xi}^{0}(x)|^{q}dx=\begin{cases}O\Big{(}\Big{(}\frac{ \varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{q}{2}}\Big{)}&\text{if }\,0<q< \frac{n}{n-2},\\ O\Big{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n}{2(n-2 )}}\Big{|}\ln\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{|}\Big{)}&\text{if }\,q= \frac{n}{n-2},\\ O\Big{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n}{n-2 }-\frac{q}{2}}\Big{)}&\text{if }\,\frac{n}{n-2}<q\leq\frac{2n}{n-2},\end{cases} \tag{5.6}\] _and_ \[\int_{\Omega}|\psi_{\mu,\xi}^{h}(x)|^{q}dx=\begin{cases}O\Big{(}\Big{(}\frac{ \varepsilon}{|\ln\varepsilon|^{2}}\Big{)}&\text{if }\,0<q<\frac{n}{n-1},\\ O\Big{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n^{2}}{2(n -1)(n-2)}}\Big{|}\ln\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{|}\Big{)}& \text{if }\,q=\frac{n}{n-1},\\ O\Big{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n}{n-2 }-\frac{q}{2}}\Big{)}&\text{if }\,\frac{n}{n-1}<q\leq\frac{2n}{n-2},\end{cases} \tag{5.7}\] _for \(h=1,\cdots,n\). 
Moreover,_ \[\Big{|}f_{0}(PU_{\mu,\xi})-f_{0}(U_{\mu,\xi})\Big{|}_{\frac{2n}{n+2}}=\begin{cases}O\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}&\text{if }\,3\leq n\leq 5,\\ O\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{|}\ln\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{|}^{\frac{2}{3}}\Big{)}&\text{if }\,n=6,\\ O\Big{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n+2}{2(n-2)}}\Big{)}&\text{if }\,n\geq 7,\end{cases} \tag{5.8}\] \[\Big{|}f_{0}(PU_{\mu,\xi})-f_{0}(U_{\mu,\xi})-f_{0}^{{}^{\prime}}(U_{\mu,\xi})(PU_{\mu,\xi}-U_{\mu,\xi})\Big{|}_{\frac{n}{2}} \tag{5.9}\] \[= \begin{cases}O\Big{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{5}{2}}\Big{)}&\text{if }\,n=3,\\ O\Big{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{3}{2}}\Big{|}\ln\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{|}^{\frac{1}{2}}\Big{)}&\text{if }\,n=4,\\ O\Big{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n+2}{2(n-2)}}\Big{)}&\text{if }\,n\geq 5.\end{cases} \tag{5.10}\] **Lemma 5.3**.: _It holds_ \[\left\langle P\psi^{l}_{\mu_{i},\xi_{i}},P\psi^{h}_{\mu_{j},\xi_{j}}\right\rangle=\begin{cases}o\bigg{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n}{n-2}}\bigg{)}&\mathrm{if}\ j>i,\\ O(1)&\mathrm{if}\ i=j,\ l\neq h,\\ c_{h}(1+o(1))&\mathrm{if}\ i=j,\ l=h,\end{cases}\] _for some positive constants \(c_{0}\) and \(c_{1},\cdots,c_{n}\), where \(i\), \(j=1,\cdots,k\) and \(h\), \(l=0,\cdots,n\)._ Proof.: We have \[\left\langle P\psi^{l}_{\mu_{i},\xi_{i}},P\psi^{h}_{\mu_{j},\xi_{j}}\right\rangle=\int_{\Omega}f^{{}^{\prime}}_{0}(U_{\mu_{i},\xi_{i}})\psi^{l}_{\mu_{i},\xi_{i}}\psi^{h}_{\mu_{j},\xi_{j}}dx+\int_{\Omega}f^{{}^{\prime}}_{0}(U_{\mu_{i},\xi_{i}})\psi^{l}_{\mu_{i},\xi_{i}}(P\psi^{h}_{\mu_{j},\xi_{j}}-\psi^{h}_{\mu_{j},\xi_{j}})dx.\] From (5.5), (5.7) and (5.4), there holds \[\int_{\Omega}f^{{}^{\prime}}_{0}(U_{\mu_{i},\xi_{i}})\psi^{l}_{\mu_{i},\xi_{i}}(P\psi^{h}_{\mu_{j},\xi_{j}}-\psi^{h}_{\mu_{j},\xi_{j}})dx\] \[\leq|f^{{}^{\prime}}_{0}(U_{\mu_{i},\xi_{i}})|_{\frac{n}{2}}|\psi^{l}_{\mu_{i},\xi_{i}}|_{\frac{2n}{n-2}}|P\psi^{h}_{\mu_{j},\xi_{j}}-\psi^{h}_{\mu_{j},\xi_{j}}|_{\frac{2n}{n-2}}=o\bigg{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\bigg{)}.\] On the other hand, if \(l\), \(h=1,\cdots,n\), the change of variables \(x-\xi_{i}=\mu_{i}y\) shows that \[\int_{\Omega}f^{{}^{\prime}}_{0}(U_{\mu_{i},\xi_{i}})\psi^{l}_{\mu_{i},\xi_{i}}\psi^{h}_{\mu_{j},\xi_{j}}dx\] \[= (n-2)^{2}\alpha_{n}^{\frac{2n}{n-2}}\mu_{i}^{\frac{n+4}{2}}\mu_{j}^{\frac{n}{2}}\int_{\Omega}\frac{(x-\xi_{i})_{l}}{(\mu_{i}^{2}+|x-\xi_{i}|^{2})^{\frac{n+4}{2}}}\frac{(x-\xi_{j})_{h}}{(\mu_{j}^{2}+|x-\xi_{j}|^{2})^{\frac{n}{2}}}dx\] \[= (n-2)^{2}\alpha_{n}^{\frac{2n}{n-2}}\mu_{i}^{\frac{n-2}{2}}\mu_{j}^{\frac{n}{2}}\int_{\frac{\Omega-\xi_{i}}{\mu_{i}}}\frac{y_{l}}{(1+|y|^{2})^{\frac{n+4}{2}}}\frac{(\mu_{i}y+\xi_{i}-\xi_{j})_{h}}{(\mu_{j}^{2}+|\mu_{i}y+\xi_{i}-\xi_{j}|^{2})^{\frac{n}{2}}}dy\] \[= \begin{cases}O\Big{(}(\frac{\mu_{j}}{\mu_{i}})^{\frac{n}{2}}\Big{)}&\mathrm{if}\ j>i,\\ O(1)&\mathrm{if}\ i=j,\ l\neq h,\\ c_{h}(1+o(1))&\mathrm{if}\ i=j,\ l=h.\end{cases}\] If \(l=1,\cdots,n\) and \(h=0\), we have
\[\int_{\Omega}f^{{}^{\prime}}_{0}(U_{\mu_{i},\xi_{i}})\psi^{l}_{\mu_{i},\xi_{i}}\psi^{0}_{\mu_{j},\xi_{j}}dx\] \[= \frac{(n-2)^{2}}{2}\alpha_{n}^{\frac{2n}{n-2}}\mu_{i}^{\frac{n+4}{2}}\mu_{j}^{\frac{n-2}{2}}\int_{\Omega}\frac{(x-\xi_{i})^{l}}{(\mu_{i}^{2}+|x-\xi_{i}|^{2})^{\frac{n+4}{2}}}\frac{|x-\xi_{j}|^{2}-\mu_{j}^{2}}{(\mu_{j}^{2}+|x-\xi_{j}|^{2})^{\frac{n}{2}}}dx\] \[= (n-2)^{2}\alpha_{n}^{\frac{2n}{n-2}}\mu_{i}^{\frac{n-2}{2}}\mu_{j}^{\frac{n-2}{2}}\int_{\frac{\Omega-\xi_{i}}{\mu_{i}}}\frac{y^{l}}{(1+|y|^{2})^{\frac{n+4}{2}}}\frac{|\mu_{i}y+\xi_{i}-\xi_{j}|^{2}-\mu_{j}^{2}}{(\mu_{j}^{2}+|\mu_{i}y+\xi_{i}-\xi_{j}|^{2})^{\frac{n}{2}}}dy\] \[= o\bigg{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\bigg{)}.\] Finally, if \(l=0\) and \(h=0\), one has \[\int_{\Omega}f^{{}^{\prime}}_{0}(U_{\mu_{i},\xi_{i}})\psi^{0}_{\mu_{i},\xi_{i}}\psi^{0}_{\mu_{j},\xi_{j}}dx\] \[= \frac{(n-2)^{2}}{4}\alpha_{n}^{\frac{2n}{n-2}}\mu_{i}^{\frac{n-2}{2}}\mu_{j}^{\frac{n-2}{2}}\int_{\Omega}\frac{|x-\xi_{i}|^{2}-\mu_{i}^{2}}{(\mu_{i}^{2}+|x-\xi_{i}|^{2})^{\frac{n}{2}}}\frac{|x-\xi_{j}|^{2}-\mu_{j}^{2}}{(\mu_{j}^{2}+|x-\xi_{j}|^{2})^{\frac{n}{2}}}dx\] \[= (n-2)^{2}\alpha_{n}^{\frac{2n}{n-2}}\mu_{i}^{-\frac{n-2}{2}}\mu_{j}^{\frac{n-2}{2}}\int_{\frac{\Omega-\xi_{i}}{\mu_{i}}}\frac{(|y-\sigma_{i}|^{2}-1)}{(1+|y-\sigma_{i}|^{2})^{\frac{n}{2}}}\frac{|\mu_{i}y+\xi_{i}-\xi_{j}|^{2}-\mu_{j}^{2}}{(\mu_{j}^{2}+|\mu_{i}y+\xi_{i}-\xi_{j}|^{2})^{\frac{n}{2}}}dy\] \[= \begin{cases}o\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}&\text{if }j>i,\\ c_{0}(1+o(1))&\text{if }i=j.\end{cases}\] Therefore, this lemma follows from the above estimates. \(\square\) **Lemma 5.4**.: _[_37_]_ _Let \(\xi\in\Omega\), there holds_ \[P\psi_{\mu,\xi}^{0}(x)=\frac{n-2}{2}a_{2}\mu^{\frac{n-2}{2}}G(x,\xi)+o\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)},\quad x\in\Omega,\] _and_ \[P\psi_{\mu,\xi}^{h}(x)=a_{2}\mu^{\frac{n}{2}}\frac{\partial G}{\partial\xi_{h}}(x,\xi)+o\Big{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n-1}{n-2}}\Big{)}\ \ \text{if }h=1,\cdots,n,\quad x\in\Omega,\] _as \(\varepsilon\to 0\) uniformly on compact sets of \(\Omega\setminus\{\xi\}\), where \(a_{2}\) is given in Proposition 2.2._ **Lemma 5.5**.: _[_18_]_ _Let \(\theta>0\) and \(u=\sum\limits_{i=1}^{k}u_{i}\). If \(\varepsilon>0\) is small enough, then for \(u\), \(u_{i}\), \(v\in\mathbb{R}\) and \(p=2^{*}-1\), it holds that_ (1)_\(|f_{\varepsilon}(u)-f_{0}(u)|\leq\varepsilon|u|^{p}\ln\ln(e+|u|)\),_ (2)_\(|f_{\varepsilon}^{{}^{\prime}}(u)|\leq C|u|^{p-1}\),_ (3)_\(|f_{\varepsilon}^{{}^{\prime}}(u)-f_{0}^{{}^{\prime}}(u)|\leq\varepsilon|u|^{p-1}\Big{(}p\ln\ln(e+|u|)+\frac{1}{\ln(e+|u|)}\Big{)}\),_ \[|f_{\varepsilon}^{{}^{\prime}}(u+v)-f_{\varepsilon}^{{}^{\prime}}(u)|\leq\begin{cases}C(|u|^{p-2}+|v|^{p-2})|v|&\text{if }n\leq 6,\\ C(|v|^{p-1}+\varepsilon|u|^{p-1})&\text{if }n>6,\end{cases} \tag{4}\] (5)_\(\ln\ln(e+\mu^{-\theta}u)=\ln\ln(\mu^{-\theta})+\ln\Big{(}1+\frac{\ln(e^{1-\theta|\ln\mu|}+u)}{\theta|\ln\mu|}\Big{)}\),_ (6)_\(\lim\limits_{\mu\to 0}\bigg{(}|\ln\mu|\ln\Big{(}1+\frac{\ln(e^{1-\theta|\ln\mu|}+u)}{\theta|\ln\mu|}\Big{)}\bigg{)}=\frac{1}{\theta}\ln u\), where \(C\) is a positive constant._ **Lemma 5.6**.: _There holds_ \[\Big{|}f_{0}^{{}^{\prime}}(V)-\sum\limits_{i=1}^{k}(-1)^{i}f_{0}^{{}^{\prime}}(PU_{\mu_{i},\xi_{i}})\Big{|}_{\frac{n}{2}}=\begin{cases}O\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}&\text{if }3\leq n\leq 5,\\ O\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{|}\ln\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{|}\Big{)}&\text{if }n=6,\\ O\Big{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{-n+8}{n-2}}\Big{)}&\text{if }n\geq 7,\end{cases} \tag{5.11}\]
\[\Big{|}f_{\varepsilon}(V)-\sum\limits_{i=1}^{k}(-1)^{i}f_{0}(PU_{\mu_{i},\xi_ {i}})\Big{|}_{\frac{2n}{n+2}}=\begin{cases}O\Big{(}\varepsilon\ln\Big{|}\ln \frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{|}\Big{)}&\text{if }3\leq n\leq 6,\\ O\Big{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n+2}{2(n-2 )}}\Big{)}&\text{if }n\geq 7,\end{cases} \tag{5.12}\] \[|f_{\varepsilon}^{{}^{\prime}}(V)-f_{0}^{{}^{\prime}}(V)|_{\frac{n}{2}}=O\Big{(} \varepsilon\ln\Big{|}\ln\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{|}\Big{)}. \tag{5.13}\] Proof.: Let us estimate (5.11). One has \[\int_{\Omega}\left|f_{0}^{{}^{\prime}}(V)-\sum_{i=1}^{k}(-1)^{i}f_{0 }^{{}^{\prime}}(PU_{\mu_{i},\xi_{i}})\right|^{\frac{n}{2}}dx\] \[= \int_{\Omega\setminus B(\xi,\rho)}\left|V^{p-1}-\sum_{i=1}^{k}(-1 )^{i}(PU_{\mu_{i},\xi_{i}})^{p-1}\right|^{\frac{n}{2}}dx+\sum_{l=1}^{k}\int_{ \mathcal{A}_{l}}\left|V^{p-1}-\sum_{i=1}^{k}(-1)^{i}(PU_{\mu_{i},\xi_{i}})^{p- 1}\right|^{\frac{n}{2}}dx.\] We estimate the first term \[\int_{\Omega\setminus B(\xi,\rho)}\left|V^{p-1}-\sum_{i=1}^{k}(-1 )^{i}(PU_{\mu_{i},\xi_{i}})^{p-1}\right|^{\frac{n}{2}}dx\] \[\leq \sum_{i=1}^{k}\int_{\Omega\setminus B(\xi,\rho)}U_{\mu_{i},\xi_{ i}}^{(p-1)^{\frac{n}{2}}}dx\leq C\sum_{i=1}^{k}\mu_{i}^{n}=O\bigg{(}\Big{(} \frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n}{n-2}}\bigg{)}.\] For any \(l\), by the mean value theorem, there exists \(t=t(x)\in[0,1]\) such that \[\int_{\mathcal{A}_{l}}\left|V^{p-1}-\sum_{i=1}^{k}(-1)^{i}(PU_{ \mu_{i},\xi_{i}})^{p-1}\right|^{\frac{n}{2}}dx\] \[= \int_{\mathcal{A}_{l}}\left|\Big{(}(-1)^{l}PU_{\mu_{l},\xi_{l}}+ \sum_{i\neq l}^{k}(-1)^{i}PU_{\mu_{i},\xi_{i}}\Big{)}^{p-1}-(-1)^{l}(PU_{\mu_ {l},\xi_{l}})^{p-1}-\sum_{i\neq l}^{k}(-1)^{i}(PU_{\mu_{i},\xi_{i}})^{p-1} \right|^{\frac{n}{2}}dx\] \[\leq C\int_{\mathcal{A}_{l}}\left|\Big{(}(-1)^{l}PU_{\mu_{l},\xi_{l}}+ t\sum_{i\neq l}^{k}(-1)^{i}PU_{\mu_{i},\xi_{i}}\Big{)}^{p-2}\sum_{i\neq l}^{k}(-1 )^{i}PU_{\mu_{i},\xi_{i}}\right|^{\frac{n}{2}}dx+C\sum_{i\neq l}^{k}\int_{ \mathcal{A}_{l}}\left|PU_{\mu_{i},\xi_{i}}\right|^{(p-1)\frac{n}{2}}dx\] \[\leq C\int_{\mathcal{A}_{l}}\left|(-1)^{l+i}(PU_{\mu_{l},\xi_{l}})^{p -2}\sum_{i\neq l}^{k}PU_{\mu_{i},\xi_{i}}\right|^{\frac{n}{2}}dx+C\sum_{i\neq l }^{k}\int_{\mathcal{A}_{l}}\left|PU_{\mu_{i},\xi_{i}}\right|^{(p-1)\frac{n}{2} }dx\] \[\leq C\sum_{i\neq l}^{k}\int_{\mathcal{A}_{l}}\left|U_{\mu_{l},\xi_{l}} ^{p-2}U_{\mu_{i},\xi_{i}}\right|^{\frac{n}{2}}dx+C\sum_{i\neq l}^{k}\int_{ \mathcal{A}_{l}}\left|U_{\mu_{i},\xi_{i}}\right|^{(p-1)\frac{n}{2}}dx.\] If \(i\neq l\), by (3.24), let \(x-\xi=\mu_{i}y\), then \[\int_{\mathcal{A}_{l}}\left|U_{\mu_{i},\xi_{i}}\right|^{(p-1) \frac{n}{2}}dx\leq C\int_{\mathcal{A}_{l}}\bigg{(}\frac{\mu_{i}^{\frac{n-2}{2}}}{( \mu_{i}^{2}+|x-\xi_{i}|^{2})^{\frac{n-2}{2}}}\bigg{)}^{(p-1)\frac{n}{2}}dx\] \[= C\mu_{i}^{n-\frac{n-2}{2}(p-1)\frac{n}{2}}\int_{\frac{\mathcal{A }_{l}}{\mu_{i}}}\frac{1}{(1+|y-\sigma_{i}|^{2})^{n}}dy=O\bigg{(}\Big{(}\frac{ \varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n}{n-2}}\bigg{)}.\] If \(n>6\), by (3.24), one has \[\int_{\mathcal{A}_{l}}\left|U_{\mu_{l},\xi_{l}}^{p-2}U_{\mu_{i}, \xi_{l}}\right|^{\frac{n}{2}}dx\] \[\leq C\int_{\mathcal{A}_{l}}\bigg{(}\frac{\mu_{l}^{\frac{-n+6}{2}}}{ (\mu_{l}^{2}+|x-\xi_{l}|^{2})^{\frac{-n+6}{2}}}\bigg{)}^{\frac{n}{2}}\bigg{(} \frac{\mu_{i}^{\frac{n-2}{2}}}{(\mu_{i}^{2}+|x-\xi_{i}|^{2})^{\frac{n-2}{2}}} \bigg{)}^{\frac{n}{2}}dx\] \[= C\mu_{i}^{n-\frac{n-2}{2}\frac{n}{2}}\mu_{l}^{\frac{-n+6}{2}\frac 
{n}{2}}\int_{\frac{\mathcal{A}_{l}}{\mu_{i}}}\frac{1}{(\mu_{l}^{2}+|\mu_{i}y- \mu_{l}\sigma_{l}|^{2})^{\frac{-n+6}{2}\frac{n}{2}}}\frac{1}{(1+|y-\sigma_{i}| ^{2})^{\frac{n-2}{2}\frac{n}{2}}}dy\] \[= \left\{\begin{array}{ll}O(\mu_{i}^{n-\frac{n-2}{2}\frac{n}{2}+(n-6) \frac{n}{2}}\mu_{l}^{\frac{-n+6}{2}\frac{n}{2}})\int_{\frac{\mathcal{A}_{l}}{ \mu_{l}}}\frac{1}{|y-\frac{\mu_{l}}{\mu_{l}}|^{\frac{n(-n+6)}{2}}}\frac{1}{(1+| y-\sigma_{i}|^{2})^{\frac{n-2}{2}\frac{n}{2}}}dy&\mbox{if }l>i,\\ O(\mu_{i}^{n-\frac{n-2}{2}\frac{n}{2}}\mu_{l}^{\frac{-n+6}{2}\frac{n}{2}})\int_ {\frac{\mathcal{A}_{l}}{\mu_{l}}}\frac{1}{(1+|y-\sigma_{i}|^{2})^{\frac{n-2}{2 }\frac{n}{2}}}dy&\mbox{if }l<i,\\ O(\mu_{i}^{n-\frac{n-2}{2}\frac{n}{2}}(\frac{\mu_{l}}{\mu_{i}})^{\frac{-n+6}{2 }\frac{n}{2}}(\frac{\varepsilon}{|\ln\varepsilon|^{2}})^{\frac{n}{n-2}})&\mbox{ if }l>i,\\ O\Big{(}(\frac{\mu_{i}}{\mu_{l}})^{\frac{-n+6}{2}\frac{n}{2}}(\frac{ \varepsilon}{|\ln\varepsilon|^{2}})^{\frac{n}{n-2}}\Big{)}&\mbox{if }l<i, \end{array}\right.\] \[= O\Big{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{ \frac{-n+6}{2}\frac{n}{2}+\frac{n}{n-2}}\Big{)}.\] If \(n<6\), it holds \[\int_{\mathcal{A}_{l}}\left|U_{\mu_{l},\xi_{l}}^{p-2}U_{\mu_{i}, \xi_{l}}\right|^{\frac{n}{2}}dx\] \[\leq C\int_{\mathcal{A}_{l}}\Big{(}\frac{\mu_{l}^{\frac{-n+6}{2}}}{( \mu_{l}^{2}+|x-\xi_{l}|^{2})^{\frac{-n+6}{2}}}\Big{)}^{\frac{n}{2}}\Big{(} \frac{\mu_{i}^{\frac{n-2}{2}}}{(\mu_{i}^{2}+|x-\xi_{i}|^{2})^{\frac{n-2}{2}}} \Big{)}^{\frac{n}{2}}dx\] \[= C\mu_{l}^{n-\frac{-n+6}{2}\frac{n}{2}}\mu_{i}^{\frac{n-2}{2}\frac {n}{2}}\int_{\frac{\mathcal{A}_{l}}{\mu_{l}}}\frac{1}{(1+|y-\sigma_{l}|^{2})^ {\frac{-n+6}{2}\frac{n}{2}}}\frac{1}{(\mu_{i}^{2}+|\mu_{l}y-\mu_{i}\sigma_{i} |^{2})^{\frac{n-2}{2}\frac{n}{2}}}dy\] \[= \left\{\begin{array}{ll}O(\mu_{l}^{n-\frac{-n+6}{2}\frac{n}{2}} \mu_{i}^{-\frac{n-2}{2}\frac{n}{2}})&\mbox{if }l>i,\\ O(\mu_{l}^{n-\frac{-n+6}{2}\frac{n}{2}-(n-2)\frac{n}{2}\frac{n-6}{2}})\int_{ \frac{\mathcal{A}_{l}}{\mu_{l}}}\frac{1}{|y-\frac{\mu_{l}}{\mu_{l}}|^{\frac{n( n-2)}{2}}}\frac{1}{(1+|y-\sigma_{l}|^{2})^{\frac{-n+6}{2}\frac{n}{2}}}dy&\mbox{if }l<i, \end{array}\right.\] \[= \left\{\begin{array}{ll}O((\frac{\mu_{l}}{\mu_{l}})^{\frac{n-2} {2}\frac{n}{2}})&\mbox{if }l>i,\\ O((\frac{\mu_{l}}{\mu_{l}})^{\frac{n-2}{2}\frac{n}{2}})&\mbox{if }l<i, \end{array}\right.\] \[= O\Big{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{ \frac{n}{2}}\Big{)}.\] A similar estimate can be obtained for \(n=6\), we prove that \[\int_{\mathcal{A}_{l}}\left|U_{\mu_{l},\xi_{l}}^{p-2}U_{\mu_{i},\xi_{l}}\right| ^{3}dx=O\Big{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{3}\Big{|} \ln\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{|}^{3}\Big{)}.\] Thus, (5.11) holds. 
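In the estimates of (5.12) and (5.13) below, items (5) and (6) of Lemma 5.5 are used repeatedly; for the reader's convenience, we recall the one-line computation behind item (5). For \(0<\mu<1\) and \(u\geq 0\), writing \(e+\mu^{-\theta}u=\mu^{-\theta}\big{(}e\mu^{\theta}+u\big{)}\) and \(e\mu^{\theta}=e^{1-\theta|\ln\mu|}\), one gets \[\ln\ln(e+\mu^{-\theta}u)=\ln\Big{(}\theta|\ln\mu|+\ln\big{(}e^{1-\theta|\ln\mu|}+u\big{)}\Big{)}=\ln\ln(\mu^{-\theta})+\ln\Big{(}1+\frac{\ln(e^{1-\theta|\ln\mu|}+u)}{\theta|\ln\mu|}\Big{)},\] and item (6) follows by letting \(\mu\to 0\) and using \(\ln(1+s)=s(1+o(1))\) as \(s\to 0\).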
Let us now estimate (5.12). One has \[\Big{|}f_{\varepsilon}(V)-\sum_{i=1}^{k}(-1)^{i}f_{0}(PU_{\mu_{i},\xi_{i}})\Big{|}_{\frac{2n}{n+2}} \tag{5.14}\] \[\leq \Big{|}f_{\varepsilon}(V)-f_{0}(V)\Big{|}_{\frac{2n}{n+2}}+\Big{|}f_{0}(V)-\sum_{i=1}^{k}(-1)^{i}f_{0}(PU_{\mu_{i},\xi_{i}})\Big{|}_{\frac{2n}{n+2}}.\] Similar to the proof of (5.11), we obtain \[\Big{|}f_{0}(V)-\sum_{i=1}^{k}(-1)^{i}f_{0}(PU_{\mu_{i},\xi_{i}})\Big{|}_{\frac{2n}{n+2}}=\begin{cases}O\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}&\text{if }3\leq n\leq 5,\\ O\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{|}\ln\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{|}\Big{)}&\text{if }n=6,\\ O\Big{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n+2}{2(n-2)}}\Big{)}&\text{if }n\geq 7.\end{cases} \tag{5.15}\] On the other hand, by Lemma 5.5, there holds \[\int_{\Omega}\Big{|}f_{\varepsilon}(V)-f_{0}(V)\Big{|}^{\frac{2n}{n+2}}dx\leq \varepsilon\int_{\Omega}|V^{p}\ln\ln(e+V)|^{\frac{2n}{n+2}}dx\] \[\leq \varepsilon\int_{\Omega}\Big{|}\Big{(}\sum_{i=1}^{k}(-1)^{i}U_{\mu_{i},\xi_{i}}\Big{)}^{p}\ln\ln\Big{(}e+\sum_{i=1}^{k}(-1)^{i}U_{\mu_{i},\xi_{i}}\Big{)}\Big{|}^{\frac{2n}{n+2}}dx\] \[\leq \varepsilon\int_{\Omega\setminus B(\xi,\rho)}\Big{|}\Big{(}\sum_{i=1}^{k}(-1)^{i}U_{\mu_{i},\xi_{i}}\Big{)}^{p}\ln\ln\Big{(}e+\sum_{i=1}^{k}(-1)^{i}U_{\mu_{i},\xi_{i}}\Big{)}\Big{|}^{\frac{2n}{n+2}}dx\] \[+\varepsilon\sum_{l=1}^{k}\int_{\mathcal{A}_{l}}\Big{|}\Big{(}\sum_{i=1}^{k}(-1)^{i}U_{\mu_{i},\xi_{i}}\Big{)}^{p}\ln\ln\Big{(}e+\sum_{i=1}^{k}(-1)^{i}U_{\mu_{i},\xi_{i}}\Big{)}\Big{|}^{\frac{2n}{n+2}}dx. \tag{5.16}\] We now observe that \[\int_{\Omega\setminus B(\xi,\rho)}\Big{|}\Big{(}\sum_{i=1}^{k}(-1)^{i}U_{\mu_{i},\xi_{i}}\Big{)}^{p}\ln\ln\Big{(}e+\sum_{i=1}^{k}(-1)^{i}U_{\mu_{i},\xi_{i}}\Big{)}\Big{|}^{\frac{2n}{n+2}}dx\] \[\leq C\sum_{i=1}^{k}\int_{\Omega\setminus B(\xi,\rho)}\Big{|}U_{\mu_{i},\xi_{i}}^{p}\ln\ln\Big{(}e+\sum_{i=1}^{k}(-1)^{i}U_{\mu_{i},\xi_{i}}\Big{)}\Big{|}^{\frac{2n}{n+2}}dx\] \[\leq C\sum_{i=1}^{k}\mu_{i}^{n}\Big{|}\ln\ln\Big{(}e+\sum_{i=1}^{k}(-1)^{i}\mu_{i}^{\frac{n-2}{2}}\Big{)}\Big{|}^{\frac{2n}{n+2}}\] \[\leq C\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n}{n-2}}\Big{(}\ln\Big{|}\ln\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{|}\Big{)}^{\frac{2n}{n+2}}.\] For the second integral in (5.16), from (3.18), letting \(x-\xi=\mu_{l}y\), we get \[\int_{\mathcal{A}_{l}}\Big{|}\Big{(}\sum_{i=1}^{k}(-1)^{i}U_{\mu_{i},\xi_{i}}\Big{)}^{p}\ln\ln\Big{(}e+\sum_{i=1}^{k}(-1)^{i}U_{\mu_{i},\xi_{i}}\Big{)}\Big{|}^{\frac{2n}{n+2}}dx\] \[= \int_{\mathcal{A}_{l}}\Big{|}\Big{(}(-1)^{l}U_{\mu_{l},\xi_{l}}+\sum_{i\neq l}^{k}(-1)^{i}U_{\mu_{i},\xi_{i}}\Big{)}^{p}\ln\ln\Big{(}e+(-1)^{l}U_{\mu_{l},\xi_{l}}+\sum_{i\neq l}^{k}(-1)^{i}U_{\mu_{i},\xi_{i}}\Big{)}\Big{|}^{\frac{2n}{n+2}}dx\] \[= \int_{\mathcal{A}_{l}}\Big{|}(-1)^{l}U_{\mu_{l},\xi_{l}}^{p}\ln\ln\Big{(}e+(-1)^{l}U_{\mu_{l},\xi_{l}}+\sum_{i\neq l}^{k}(-1)^{i}U_{\mu_{i},\xi_{i}}\Big{)}\Big{|}^{\frac{2n}{n+2}}dx\] \[+C\sum_{i\neq l}^{k}\int_{\mathcal{A}_{l}}U_{\mu_{i},\xi_{i}}\Big{|}\ln\ln\Big{(}e+(-1)^{l}U_{\mu_{l},\xi_{l}}+\sum_{i\neq l}^{k}(-1)^{i}U_{\mu_{i},\xi_{i}}\Big{)}\Big{|}^{\frac{2n}{n+2}}dx.
\tag{5.17}\] For \(i>l\), by Lemma 5.5, we have \[\int_{\mathcal{A}_{l}}\Big{|}U^{p}_{\mu_{l},\xi_{l}}\ln\ln\Big{(}e+( -1)^{l}U_{\mu_{l},\xi_{l}}+\sum\limits_{i\neq l}^{k}(-1)^{i}U_{\mu_{i},\xi_{i}} \Big{)}\Big{|}^{\frac{2n}{n+2}}dx\] \[= \int_{\mathcal{A}_{l}}\frac{\alpha_{n}^{\frac{2n}{n-2}}\mu_{l}^{n }}{(\mu_{l}^{2}+|x-\xi_{l}|^{2})^{n}}\Big{|}\ln\ln\Big{(}e+(-1)^{l}\frac{ \alpha_{n}\mu_{l}^{\frac{n-2}{2}}}{(\mu_{l}^{2}+|x-\xi_{l}|^{2})^{\frac{n-2}{2 }}}+\sum\limits_{i\neq l}^{k}(-1)^{i}\frac{\alpha_{n}\mu_{i}^{\frac{n-2}{2}}}{ (\mu_{i}^{2}+|x-\xi_{i}|^{2})^{\frac{n-2}{2}}}\Big{)}\Big{|}^{\frac{2n}{n+2}}dx\] \[= \int_{\frac{\mathcal{A}_{l}}{\mu_{l}}}\frac{\alpha_{n}^{\frac{2n }{n-2}}}{(1+|y-\sigma_{l}|^{2})^{n}}\Big{|}\ln\ln\Big{(}e+(-1)^{l}\mu_{l}^{- \frac{n-2}{2}}\frac{\alpha_{n}}{(1+|y-\sigma_{l}|^{2})^{\frac{n-2}{2}}}+o( \varepsilon)\Big{)}\Big{|}^{\frac{2n}{n+2}}dy\] \[= \int_{\frac{\mathcal{A}_{l}}{\mu_{l}}}\frac{\alpha_{n}^{\frac{2n }{n-2}}}{(1+|y-\sigma_{l}|^{2})^{n}}\Big{|}\ln\ln\mu_{l}^{-\frac{n-2}{2}}+\ln \Big{[}1+\frac{\ln\Big{(}e^{1-\frac{n-2}{2}|\ln\mu_{l}|}+\frac{\alpha_{n}}{1+|y -\sigma_{l}|^{2})^{\frac{n-2}{2}}}\Big{]}\Big{|}^{\frac{2n}{n+2}}dy+o(\varepsilon)\] \[= \int_{\frac{\mathcal{A}_{l}}{\mu_{l}}}\frac{\alpha_{n}^{\frac{2n }{n-2}}}{(1+|y-\sigma_{l}|^{2})^{n}}\Big{|}\ln\ln\mu_{l}^{-\frac{n-2}{2}}+ \frac{1}{|\ln\mu_{l}|}\frac{2}{n-2}\ln\frac{\alpha_{n}}{(1+|y-\sigma_{l}|^{2} )^{\frac{n-2}{2}}}\Big{|}^{\frac{2n}{n+2}}dy+o(\varepsilon)\] \[\leq C(\ln|\ln\mu_{1}|)^{\frac{2n}{n+2}}=O\bigg{(}\Big{(}\ln\Big{|} \ln\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{|}\Big{)}^{\frac{2n}{n+2}} \bigg{)}.\] In a same way, the estimate of second term in (5.17) is \[\sum\limits_{i\neq l}^{k}\int_{\mathcal{A}_{l}}U_{\mu_{i},\xi_{i}} \Big{|}\ln\ln\Big{(}e+(-1)^{l}U_{\mu_{l},\xi_{l}}+\sum\limits_{i\neq l}^{k}(-1) ^{i}U_{\mu_{i},\xi_{i}}\Big{)}\Big{|}^{\frac{2n}{n+2}}dx=O\Big{(}\frac{ \varepsilon}{|\ln\varepsilon|^{2}}\Big{)}.\] Thus, \[\Big{|}f_{\varepsilon}(V)-f_{0}(V)\Big{|}_{\frac{2n}{n+2}}=O\bigg{(} \varepsilon\ln\Big{|}\ln\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{|}\bigg{)}. \tag{5.18}\] Combining (5.14), (5.15) and (5.18), we obtain (5.12). Similar to the proof of (5.16), the estimate (5.13) holds. 
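Before turning to Lemma 5.7, we record the elementary scaling identity that lies behind all of the changes of variables used above and below. With the normalization \(U_{\mu,\xi}(x)=\alpha_{n}\mu^{\frac{n-2}{2}}(\mu^{2}+|x-\xi|^{2})^{-\frac{n-2}{2}}\) employed throughout, one has \(U_{\mu,\xi}(x)=\mu^{-\frac{n-2}{2}}U\big{(}\frac{x-\xi}{\mu}\big{)}\) and \(dx=\mu^{n}dy\) under \(x=\mu y+\xi\), so that for every \(q>0\), \[\int_{B(\xi,\rho)}U_{\mu,\xi}^{q}(x)dx=\mu^{n-\frac{n-2}{2}q}\int_{B(0,\rho/\mu)}U^{q}(y)dy,\] which is the source of the powers of \(\mu\), and hence of \(\frac{\varepsilon}{|\ln\varepsilon|^{2}}\), appearing in Lemma 5.2 and in the computations of Lemma 5.7 below.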
**Lemma 5.7**.: _There holds_ \[P_{3} = \int_{\Omega}\Big{[}\sum\limits_{i=1}^{k}(-1)^{i}f_{0}(PU_{\mu_{i},\xi_{i}})-f_{\varepsilon}(V)\Big{]}\psi^{h}_{\mu_{j},\xi_{j}}dx\] \[= \left\{\begin{array}{l}\alpha_{n}a_{1}\frac{\varepsilon}{|\ln\varepsilon|^{2}}H(\xi,\xi)d_{1}^{n-2}+a_{3}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\sum\limits_{i=1}^{k-1}\Big{(}\frac{d_{i+1}}{d_{i}}\Big{)}^{\frac{n-2}{2}}g(\sigma_{i})-\frac{2k^{2}}{(n-2)^{2}}a_{4}\varepsilon\Big{|}\ln\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{|}\\ -a_{4}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\sum\limits_{i=1}^{k}\frac{2}{n-2}|\ln d_{i}|+o\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}\quad\text{if }h=0,\\ \frac{1}{2}\alpha_{n}a_{2}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n-1}{n-2}}d_{1}^{n-1}\partial_{\xi_{h}}\varphi(\xi)+o\bigg{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n-1}{n-2}}\bigg{)}\quad\text{if }h=1,\cdots,n,\end{array}\right.\] _where \(a_{1}\)-\(a_{4}\) and \(g(\sigma_{i})\) are given in Proposition 2.2 for \(i=1,\cdots,k\)._ Proof.: We decompose \[P_{3}= \int_{\Omega}\Big{[}\sum\limits_{i=1}^{k}(-1)^{i}f_{0}(PU_{\mu_{i},\xi_{i}})-f_{\varepsilon}(V)\Big{]}\psi^{h}_{\mu_{j},\xi_{j}}dx=J_{1}+J_{2}+J_{3},\] where \[J_{1}=\int_{\Omega}\Big{[}\sum\limits_{i=1}^{k}(-1)^{i}f_{0}(U_{\mu_{i},\xi_{i}})-f_{0}(V)\Big{]}\psi^{h}_{\mu_{j},\xi_{j}}dx,\qquad J_{2}=\sum\limits_{i=1}^{k}\int_{\Omega}(-1)^{i}\Big{[}f_{0}(PU_{\mu_{i},\xi_{i}})-f_{0}(U_{\mu_{i},\xi_{i}})\Big{]}\psi^{h}_{\mu_{j},\xi_{j}}dx,\] \[J_{3}=\int_{\Omega}\Big{[}f_{0}(V)-f_{\varepsilon}(V)\Big{]}\psi^{h}_{\mu_{j},\xi_{j}}dx.\] _Estimate of \(J_{1}\)_: We further split \[J_{1}=\sum\limits_{i=1}^{k}(-1)^{i}\int_{\Omega}f_{0}(U_{\mu_{i},\xi_{i}})P\psi^{h}_{\mu_{j},\xi_{j}}dx+\sum\limits_{i=1}^{k}(-1)^{i}\int_{\Omega}f_{0}(U_{\mu_{i},\xi_{i}})(\psi^{h}_{\mu_{j},\xi_{j}}-P\psi^{h}_{\mu_{j},\xi_{j}})dx-\int_{\Omega}f_{0}(V)\psi^{h}_{\mu_{j},\xi_{j}}dx. \tag{5.19}\] If \(h=0\) and \(j=i\), by (5.2), the first term in (5.19) is computed as \[\int_{\Omega}f_{0}(U_{\mu_{i},\xi_{i}})P\psi^{0}_{\mu_{i},\xi_{i}}dx\] \[= \int_{\Omega}U_{\mu_{i},\xi_{i}}^{p}\psi_{\mu_{i},\xi_{i}}^{0}dx-\frac{n-2}{2}\alpha_{n}\mu_{i}^{\frac{n-2}{2}}\int_{\Omega}U_{\mu_{i},\xi_{i}}^{p}\Big{(}H(x,\xi_{i})+O(\mu_{i}^{\frac{n+4}{2}})\Big{)}dx\] \[= \mu_{i}^{n-\frac{n+2}{2}+1+\frac{n-2}{2}-(n-2)}\int_{\frac{\Omega-\xi_{i}}{\mu_{i}}}U^{p}(y)\Big{(}\frac{\partial U}{\partial\mu}\Big{)}_{|_{\mu=1}}dy\] \[-\frac{n-2}{2}\mu_{i}^{n-2}H(\xi_{i},\xi_{i})\int_{\frac{\Omega-\xi_{i}}{\mu_{i}}}U^{p}(y)dy+O(\mu_{i}^{\frac{3n}{2}}) \tag{5.21}\] \[= -\frac{n-2}{2}a_{2}\mu_{i}^{n-2}H(\xi_{i},\xi_{i})+O\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}.\] If \(h=1,\cdots,n\) and \(i\neq j\), from Lemma 5.4, we get \[\int_{\Omega}f_{0}(U_{\mu_{i},\xi_{i}})P\psi_{\mu_{j},\xi_{j}}^{h}dx= \int_{\Omega}U_{\mu_{i},\xi_{i}}^{2^{*}-1}P\psi_{\mu_{j},\xi_{j}}^{h}dx\] \[= a_{2}\mu_{j}^{\frac{n}{2}}\mu_{i}^{\frac{n-2}{2}}\int_{\frac{\Omega-\xi_{i}}{\mu_{i}}}\frac{1}{(1+|y|^{2})^{\frac{n+2}{2}}}\frac{\partial G}{\partial\xi_{j}^{h}}(\mu_{i}y+\xi_{i},\xi_{j})dy+o\Big{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n-1}{n-2}}\Big{)}\] \[= a_{2}^{2}\mu_{j}^{\frac{n}{2}}\mu_{i}^{\frac{n-2}{2}}\frac{\partial G}{\partial\xi_{j}^{h}}(\xi_{i},\xi_{j})+o\bigg{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n-1}{n-2}}\bigg{)}.
\tag{5.22}\] If \(h=1,\cdots,n\) and \(i=j\), from (5.3) and (2.10), we have \[\int_{\Omega}f_{0}(U_{\mu_{i},\xi_{i}})P\psi_{\mu_{i},\xi_{i}}^{h}dx= \int_{\Omega}U_{\mu_{i},\xi_{i}}^{2^{*}-1}P\psi_{\mu_{i},\xi_{i}}^ {h}dx\] \[= \int_{\Omega}U_{\mu_{i},\xi_{i}}^{2^{*}-1}\psi_{\mu,\xi_{i}}^{h}( x)dx-\alpha_{n}\mu_{i}^{\frac{n}{2}}\int_{\Omega}U_{\mu_{i},\xi_{i}}^{2^{*}-1} \partial_{\xi_{i}}H(x,\xi_{i})dx+o\bigg{(}\Big{(}\frac{\varepsilon}{|\ln \varepsilon|^{2}}\Big{)}^{\frac{n-1}{n-2}}\bigg{)}\] \[= \mu_{i}^{n-\frac{n+2}{2}+1-\frac{n-2}{2}}\int_{\frac{\Omega- \xi_{i}}{\mu_{i}}}U^{2^{*}-1}(y)\psi^{h}(y)dy\] \[-\alpha_{n}\mu_{i}^{\frac{n}{2}-\frac{n+2}{2}+n}\partial_{\xi_{i} ^{h}}H(\xi_{i},\xi_{i})\int_{\frac{\Omega-\xi_{i}}{\mu_{i}}}U^{2^{*}-1}(y)dy+ o\bigg{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n-1}{n-2}} \bigg{)}\] \[= \mu_{i}\int_{\frac{\Omega-\xi_{i}}{\mu_{i}}}U^{2^{*}-1}(y)\psi^{h }(y)dy\] \[-\alpha_{n}\mu_{i}^{n-1}\partial_{\xi_{i}^{h}}H(\xi_{i},\xi_{i}) \int_{\frac{\Omega-\xi_{i}}{\mu_{i}}}U^{2^{*}-1}(y)dy+o\bigg{(}\Big{(}\frac{ \varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n-1}{n-2}}\bigg{)}\] \[= -\alpha_{n}a_{2}\mu_{i}^{n-1}\frac{\partial H}{\partial\xi_{i}^{ h}}(\xi_{i},\xi_{i})+o\bigg{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}} \Big{)}^{\frac{n-1}{n-2}}\bigg{)}.\] It remains to estimate the second term in (5.19), if \(h=0\), by (5.2), it holds \[\int_{\Omega}f_{0}(U_{\mu_{i},\xi_{i}})(\psi_{\mu_{j},\xi_{j}}^{0 }-P\psi_{\mu_{j},\xi_{j}}^{0})dx\] \[= \frac{n-2}{2}\alpha_{n}\mu_{i}^{\frac{n-2}{2}}\mu_{j}^{\frac{n-2} {2}}\int_{\frac{\Omega-\xi_{i}}{\mu_{i}}}\frac{1}{(1+|y|^{2})^{\frac{n+2}{2}} }H(\mu_{i}y+\xi_{i},\xi_{j})dy+o(\mu_{i}^{\frac{n-2}{2}}\mu_{j}^{\frac{n+4}{2}})\] \[= \begin{cases}\frac{n-2}{2}a_{2}\mu_{i}^{\frac{n-2}{2}}\mu_{j}^{ \frac{n-2}{2}}H(\xi_{i},\xi_{j})+o\bigg{(}\Big{(}\frac{\varepsilon}{|\ln \varepsilon|^{2}}\Big{)}^{\frac{n-1}{n-2}}\bigg{)}&\text{if }i>j,\\ \frac{n-2}{2}a_{2}\mu_{i}^{n-2}H(\xi_{i},\xi_{i})+o\bigg{(}\Big{(}\frac{ \varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n-1}{n-2}}\bigg{)}&\text{if }i=j.\end{cases} \tag{5.24}\] If \(h=1,\cdots,n\), by (5.3), we obtain \[\int_{\Omega}f_{0}(U_{\mu_{i},\xi_{i}})(\psi_{\mu_{j},\xi_{j}}^{h }-P\psi_{\mu_{j},\xi_{j}}^{h})dx\] \[= \alpha_{n}\mu_{i}^{\frac{n-2}{2}}\mu_{j}^{\frac{n}{2}}\int_{\frac {\Omega-\xi_{i}}{\mu_{i}}}\frac{1}{(1+|y|^{2})^{\frac{n+2}{2}}}\partial_{\xi_ {j}^{h}}H(\mu_{i}y+\xi_{i},\xi_{j})dy+o(\mu_{i}^{\frac{n-2}{2}}\mu_{j}^{\frac{n +2}{2}})\] \[= \begin{cases}a_{2}\mu_{i}^{\frac{n-2}{2}}\mu_{j}^{\frac{n}{2}} \partial_{\xi_{j}^{h}}H(\xi_{i},\xi_{j})+o\bigg{(}\Big{(}\frac{\varepsilon}{|\ln \varepsilon|^{2}}\Big{)}^{\frac{n-1}{n-2}}\bigg{)}&\text{if }i>j,\\ a_{2}\mu_{i}^{n-1}\partial_{\xi_{i}^{h}}H(\xi_{i},\xi_{i})+o\bigg{(}\Big{(}\frac{ \varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n-1}{n-2}}\bigg{)}&\text{if }i=j.\end{cases} \tag{5.25}\] For the last integral in (5.19), \[\int_{\Omega}f_{0}(V)\psi_{\mu_{j},\xi_{j}}^{h}dx=\int_{\Omega\setminus B(\xi, \rho)}f_{0}(V)\psi_{\mu_{j},\xi_{j}}^{h}dx+\sum_{l=1}^{k}\int_{\mathcal{A}_{l}}f_ {0}(V)\psi_{\mu_{j},\xi_{j}}^{h}dx. 
\tag{5.26}\] If \(h=0\), by (2.9), a direct computation shows that \[\int_{\Omega\setminus B(\xi,\rho)}\Big{|}f_{0}(V)\psi^{0}_{\mu_{j}, \xi_{j}}\Big{|}dx\leq C\frac{n-2}{2}\alpha_{n}\mu_{j}^{\frac{n-2}{2}}\sum_{i=1}^{k} \int_{\Omega\setminus B(\xi,\rho)}\Big{|}U^{p}_{\mu_{i},\xi_{i}}\frac{|x-\xi_{ j}|^{2}-\mu_{j}^{2}}{(\mu_{j}^{2}+|x-\xi_{j}|^{2})^{\frac{n}{2}}}\Big{|}dx\] \[\leq C\frac{n-2}{2}\alpha_{n}\mu_{j}^{\frac{n-2}{2}}\sum_{i=1}^{k} \int_{\Omega\setminus B(\xi,\rho)}\Big{|}\frac{\mu_{i}^{\frac{n+2}{2}}}{(\mu_ {i}^{2}+|x-\xi_{i}|^{2})^{\frac{n+2}{2}}}\frac{|x-\xi_{j}|^{2}-\mu_{j}^{2}}{( \mu_{j}^{2}+|x-\xi_{j}|^{2})^{\frac{n}{2}}}\Big{|}dx\] \[= o\bigg{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)} ^{\frac{n-1}{n-2}}\bigg{)}.\] If \(h=1,\cdots,n\), we obtain \[\int_{\Omega\setminus B(\xi,\rho)}\Big{|}f_{0}(V)\psi^{0}_{\mu_{j },\xi_{j}}\Big{|}dx\leq C(n-2)\alpha_{n}\mu_{j}^{\frac{n}{2}}\sum_{i=1}^{k}\int_{ \Omega\setminus B(\xi,\rho)}\Big{|}(-1)^{i}U^{p}_{\mu_{i},\xi_{i}}\frac{x^{h} -\xi_{j}^{h}}{(\mu_{j}^{2}+|x-\xi_{j}|^{2})^{\frac{n}{2}}}\Big{|}dx\] \[\leq C\frac{n-2}{2}\alpha_{n}\mu_{j}^{\frac{n}{2}}\sum_{i=1}^{k}\int_ {\Omega\setminus B(\xi,\rho)}\Big{|}\frac{\mu_{i}^{\frac{n+2}{2}}}{(\mu_{i}^{2 }+|x-\xi_{i}|^{2})^{\frac{n+2}{2}}}\frac{x^{h}-\xi_{j}^{h}}{(\mu_{j}^{2}+|x- \xi_{j}|^{2})^{\frac{n}{2}}}\Big{|}dx\] \[= o\bigg{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)} ^{\frac{n-1}{n-2}}\bigg{)}.\] On the other hand, we estimate the second term in (5.26), \[\int_{\mathcal{A}_{l}}\Big{|}f_{0}(V)\psi^{h}_{\mu_{j},\xi_{j}} \Big{|}dx\] \[= \int_{\mathcal{A}_{l}}\Big{|}\Big{(}(-1)^{l}PU_{\mu_{l},\xi_{l}}+ \sum_{i\neq l}^{k}(-1)^{i}PU_{\mu_{i},\xi_{i}}\Big{)}^{p}\psi^{h}_{\mu_{j},\xi _{j}}\Big{|}dx\] \[= \int_{\mathcal{A}_{l}}\Big{|}(-1)^{l}(PU_{\mu_{l},\xi_{l}})^{p} \psi^{h}_{\mu_{j},\xi_{j}}\Big{|}dx+O\bigg{(}\int_{\mathcal{A}_{l}}\Big{|} \Big{(}\sum_{i\neq l}^{k}(-1)^{i}PU_{\mu_{i},\xi_{i}}\Big{)}\psi^{h}_{\mu_{j}, \xi_{j}}\Big{|}dx\bigg{)}\] \[= \int_{\mathcal{A}_{l}}\Big{|}(-1)^{l}\Big{(}(PU_{\mu_{l},\xi_{l}} )^{p}-U^{p}_{\mu_{l},\xi_{l}}\Big{)}\psi^{h}_{\mu_{j},\xi_{j}}\Big{|}dx+\int_{ \mathcal{A}_{l}}|(-1)^{l}U^{p}_{\mu_{l},\xi_{l}}\psi^{h}_{\mu_{j},\xi_{j}}|dx \tag{5.27}\] On the fixed annulus \(\mathcal{A}_{l}\), by (5.1), for \(h=0\), let \(x-\xi=\mu_{l}y\), then \[\int_{\mathcal{A}_{l}}\Big{|}\Big{(}(PU_{\mu_{l},\xi_{l}})^{p}-U^ {p}_{\mu_{l},\xi_{l}}\Big{)}\psi^{0}_{\mu_{j},\xi_{j}}\Big{|}dx\] \[= \int_{\mathcal{A}_{l}}\Big{|}\bigg{[}\Big{(}U_{\mu_{l},\xi_{l}}- \alpha_{n}\mu_{l}^{\frac{n-2}{2}}H(x,\xi_{l})+O(\mu_{l}^{\frac{n+2}{2}})\Big{)} ^{p}-U^{p}_{\mu_{l},\xi_{l}}\bigg{]}\psi^{0}_{\mu_{j},\xi_{j}}\Big{|}dx\] \[= \frac{(n-2)\alpha_{n}}{2}\mu_{l}^{\frac{n-2}{2}+n}\mu_{j}^{\frac {n-2}{2}}O\left(-\alpha_{n}H(\xi_{l},\xi_{l})+O(\mu_{l}^{2})\right)\int_{ \frac{\mathcal{A}_{l}}{\mu_{l}}}\Bigg{|}\frac{\Big{[}|y-\sigma_{j}\frac{\mu_{ j}}{\mu_{l}}|^{2}-(\frac{\mu_{j}}{\mu_{l}})^{2}\Big{]}\mu_{l}^{2}}{\Big{(}( \frac{\mu_{j}}{\mu_{l}})^{2}+|y-\sigma_{j}\frac{\mu_{j}}{\mu_{l}}|^{2}\Big{)}^ {\frac{n}{2}}\mu_{l}^{n}}\Bigg{|}dy\] \[= \begin{cases}O(\mu_{j}^{\frac{n+2}{2}}\mu_{l}^{\frac{n-2}{2}}H( \xi_{l},\xi_{l}))&\text{if }j>l,\\ O(\mu_{l}^{n}H(\xi_{l},\xi_{l}))&\text{if }j=l,\end{cases}\] \[= o\bigg{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{ \frac{n-1}{n-2}}\bigg{)}.\] Similarly, for \(h=1,\cdots,n\), we get \[\int_{\mathcal{A}_{l}}\Big{|}\Big{(}(PU_{\mu_{l},\xi_{l}})^{p}-U_{ \mu_{l},\xi_{l}}^{p}\Big{)}\psi_{\mu_{j},\xi_{j}}^{h}\Big{|}dx\] \[= 
\int_{\mathcal{A}_{l}}\Big{|}\Big{[}\Big{(}U_{\mu_{l},\xi_{l}}- \alpha_{n}\mu_{l}^{\frac{n-2}{2}}H(x,\xi_{l})+O(\mu_{l}^{\frac{n+2}{2}})\Big{)} ^{p}-U_{\mu_{l},\xi_{l}}^{p}\Big{]}\psi_{\mu_{j},\xi_{j}}^{h}\Big{|}dx\] \[= -(n-2)\alpha_{n}\mu_{j}^{\frac{n}{2}}\int_{\mathcal{A}_{l}}\Big{|} O\Big{(}-\alpha_{n}\mu_{l}^{\frac{n-2}{2}}H(x,\xi_{l})+O(\mu_{l}^{\frac{n+2}{2}}) \Big{)}\frac{(x-\xi_{j})_{h}}{(\mu_{j}^{2}+|x-\xi_{j}|^{2})^{\frac{n}{2}}} \Big{|}dx\] \[= \begin{cases}O\Big{(}\mu_{j}^{\frac{n}{2}}\mu_{l}^{\frac{n}{2}}H (\xi_{l},\xi_{l})\Big{)}&\text{if $j>l$,}\\ O\Big{(}\mu_{l}^{n}H(\xi_{l},\xi_{l})\Big{)}&\text{if $j=l$,}\end{cases}\] \[= o\bigg{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^ {\frac{n-1}{n-2}}\bigg{)}.\] The second term in (5.27). By (5.1), for \(h=0\), if \(j>l\), let \(x-\xi=\mu y\), then \[\int_{\mathcal{A}_{l}}|U_{\mu_{l},\xi_{l}}^{p}\psi_{\mu_{j},\xi_{ j}}^{0}|dx\] \[=\] \[=\] \[=\] \[=\] \[=\] \[=\] \[=\] \[=\] \[=\] For \(h=1,\cdots,n\), if \(j>l\), let \(x-\xi=\mu y\), there holds \[\int_{\mathcal{A}_{l}}|U_{\mu_{l},\xi_{l}}^{p}\psi_{\mu_{j},\xi_{ j}}^{h}|dx=\] \[= o\bigg{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^ {\frac{n-1}{n-2}}\bigg{)}.\] The last term in (5.27). For \(h=0\) and \(j>i\), \[\int_{\mathcal{A}_{l}}\Big{|}\Big{(}\sum_{i\neq l}^{k}(-1)^{i}PU_ {\mu_{i},\xi_{i}}\Big{)}\psi_{\mu_{j},\xi_{j}}^{0}\Big{|}dx\] \[\leq \sum_{i\neq l}^{k}\int_{\mathcal{A}_{l}}|U_{\mu_{i},\xi_{i}}\psi_ {\mu_{j},\xi_{j}}^{0}|dx\] \[= \frac{n-2}{2}\alpha_{n}^{2}\sum_{i\neq l}^{k}\mu_{i}^{-\frac{n-2} {2}}\mu_{j}^{\frac{n-2}{2}}\int_{\frac{\mathcal{A}_{l}}{\mu_{i}}}\bigg{|} \frac{1}{(1+|y-\sigma_{i}|^{2})^{\frac{n-2}{2}}}\frac{\Big{(}|y-\sigma_{j}\frac {\mu_{j}}{\mu_{i}}|^{2}-(\frac{\mu_{j}}{\mu_{i}})^{2}\Big{)}\mu_{l}^{2}}{ \Big{(}(\frac{\mu_{j}}{\mu_{i}})^{2}+|y-\sigma_{j}\frac{\mu_{j}}{\mu_{i}}|^{2} \Big{)}^{\frac{n}{2}}}\Big{|}dy\] \[= o\bigg{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{ \frac{n-1}{n-2}}\bigg{)}.\] For \(h=1,\cdots,n\) and \(j>i\), \[\int_{\mathcal{A}_{l}}\big{|}\Big{(}\sum_{i\neq l}^{k}(-1)^{i}PU_{ \mu_{i},\xi_{i}}\Big{)}\psi_{\mu_{j},\xi_{j}}^{h}\Big{|}dx\] \[\leq \sum_{i\neq l}^{k}\int_{\mathcal{A}_{l}}|U_{\mu_{i},\xi_{i}}\psi_{ \mu_{j},\xi_{j}}^{h}|dx\] \[= (n-2)\alpha_{n}^{2}\sum_{i\neq l}^{k}\mu_{j}^{\frac{n}{2}}\int_{ \frac{\mathcal{A}_{l}}{\mu_{l}}}\bigg{|}\frac{\mu_{i}^{-\frac{n-2}{2}}}{(1+|y -\sigma_{i}|^{2})^{\frac{n-2}{2}}}\frac{(y^{h}-\sigma_{j}\frac{\mu_{j}}{\mu_{ l}})\mu_{i}}{\Big{(}(\frac{\mu_{j}}{\mu_{i}})^{2}+|y-\sigma_{j}\frac{\mu_{j}}{\mu_{ i}}|^{2}\Big{)}^{\frac{n}{2}}\mu_{i}^{n}}\bigg{|}\mu_{i}^{n}dy\] \[= o\bigg{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)} ^{\frac{n-1}{n-2}}\bigg{)}.\] Therefore, we have \[\int_{\Omega}f_{0}(V)\psi_{\mu_{j},\xi_{j}}^{h}dx\] \[= \begin{cases}a_{3}\Big{(}\frac{\mu_{l+1}}{\mu_{l}}\Big{)}^{\frac {n-2}{2}}g(\sigma_{l})+o\bigg{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2} }\Big{)}^{\frac{n-1}{n-2}}\bigg{)}&\text{if }l=1,\cdots,k-1,\ h=0,\\ o\bigg{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n-1}{n- 2}}\bigg{)}&\text{if }l=k,\end{cases} \tag{5.28}\] where \(a_{3}\) and \(g(\sigma_{l})\) are defined in Proposition 2.2. 
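In passing from (5.28) to (5.29) below, and in all similar substitutions, the concentration parameters are expressed through the \(d_{i}\). Assuming, as the ratios appearing in (5.28)-(5.29) indicate, the tower scaling \(\mu_{i}=d_{i}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{2i-1}{n-2}}\), one checks directly that \[\Big{(}\frac{\mu_{l+1}}{\mu_{l}}\Big{)}^{\frac{n-2}{2}}=\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{(}\frac{d_{l+1}}{d_{l}}\Big{)}^{\frac{n-2}{2}}\quad\text{and}\quad\mu_{1}^{n-2}=d_{1}^{n-2}\frac{\varepsilon}{|\ln\varepsilon|^{2}},\] which is exactly the bookkeeping behind the powers of \(\frac{\varepsilon}{|\ln\varepsilon|^{2}}\) in (5.29) and in the estimate of \(J_{2}\) below.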
From (5.19)-(5.25) and (5.28), we obtain \[J_{1}=\begin{cases}-a_{3}\frac{\varepsilon}{|\ln\varepsilon|^{2} \sum_{l=1}^{k-1}\Big{(}\frac{d_{l+1}}{d_{l}}\Big{)}^{\frac{n-2}{2}}}g(\sigma_{ l})+o\bigg{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n-1}{n- 2}}\bigg{)}&\text{if }l=1,\cdots,k-1,\ h=0,\\ o\bigg{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n-1}{n- 2}}\bigg{)}&\text{if }l=k,\ h=1,\cdots,n.\end{cases} \tag{5.29}\] _Estimate of \(J_{2}\)_: By Lemma 5.1, there holds \[J_{2}= \sum_{i=1}^{k}\int_{\Omega}(-1)^{i}\Big{[}f_{0}(PU_{\mu_{i},\xi_{ i}})-f_{0}(U_{\mu_{i},\xi_{i}})\Big{]}\psi_{\mu_{j},\xi_{j}}^{h}dx\] \[= \sum_{i=1}^{k}\int_{\Omega}(-1)^{i}f_{0}^{{}^{\prime}}(U_{\mu_{i},\xi_{i}})(PU_{\mu_{i},\xi_{i}}-U_{\mu_{i},\xi_{i}})\psi_{\mu_{j},\xi_{j}}^{h}dx\] \[+\sum_{i=1}^{k}\int_{\Omega}(-1)^{i}\Big{[}f_{0}(PU_{\mu_{i},\xi_{ i}})-f_{0}(U_{\mu_{i},\xi_{i}})-f_{0}^{{}^{\prime}}(U_{\mu_{i},\xi_{i}})(PU_{\mu_{i}, \xi_{i}}-U_{\mu_{i},\xi_{i}})\Big{]}\psi_{\mu_{j},\xi_{j}}^{h}dx\] \[= (2^{*}-1)\sum_{i=1}^{k}\mu_{i}^{\frac{n-2}{2}}\int_{\Omega}U_{\mu _{i},\xi_{i}}^{2^{*}-2}\Big{(}-\alpha_{n}H(x,\xi_{i})+O(\mu_{i}^{2})\Big{)}\psi_ {\mu_{j},\xi_{j}}^{h}dx\] \[-\int_{\Omega}(-1)^{i}\Big{[}f_{0}(PU_{\mu_{i},\xi_{i}})-f_{0}(U_{ \mu_{i},\xi_{i}})-f_{0}^{{}^{\prime}}(U_{\mu_{i},\xi_{i}})(PU_{\mu_{i},\xi_{i}} -U_{\mu_{i},\xi_{i}})\Big{]}\psi_{\mu_{j},\xi_{j}}^{h}dx.\] Moreover, for \(l=0,\cdots,n\), by Holder inequality, (5.6), (5.7) and (5.10), one has \[\int_{\Omega}\Big{|}(-1)^{i}\Big{[}f_{0}(PU_{\mu_{i},\xi_{i}})-f_{0}(U_{\mu_{i},\xi_{i}})-f_{0}^{{}^{\prime}}(U_{\mu_{i},\xi_{i}})(PU_{\mu_{i},\xi_{i}}-U_{\mu _{i},\xi_{i}})\Big{]}\psi_{\mu_{j},\xi_{j}}^{h}\Big{|}dx\] \[\leq \Big{|}f_{0}(PU_{\mu_{i},\xi_{i}})-f_{0}(U_{\mu_{i},\xi_{i}})-f_{0}^{ {}^{\prime}}(U_{\mu_{i},\xi_{i}})(PU_{\mu_{i},\xi_{i}}-U_{\mu_{i},\xi_{i}})\Big{|} _{\frac{n}{2}}|\psi_{\mu_{j},\xi_{j}}^{h}|_{\frac{n}{n-2}}=o\bigg{(}\Big{(} \frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n-1}{n-2}}\bigg{)}.\] If \(h=0\) and \(j=i\), by (2.9), we get \[(2^{*}-1)\alpha_{n}\sum_{i=1}^{k}\mu_{i}^{\frac{n-2}{2}}\int_{ \Omega}U_{\mu_{i},\xi_{i}}^{2^{*}-2}H(x,\xi_{i})\psi_{\mu_{i},\xi_{i}}^{0}dx\] \[= (2^{*}-1)\alpha_{n}\sum_{i=1}^{k}\mu_{i}^{\frac{3(n-2)}{2}}\mu_{i }^{-\frac{n-2}{2}}\int_{\frac{\Omega-\xi_{i}}{\mu_{i}}}U^{2^{*}-2}(y)H(\mu_{i }y+\xi_{i},\xi_{i})\psi^{0}\Big{(}\frac{x-\xi_{i}}{\mu_{i}}\Big{)}dy\] \[= \alpha_{n}a_{1}\sum_{i=1}^{k}\mu_{i}^{n-2}H(\xi_{i},\xi_{i})+o \Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)},\] where \(a_{1}\) is given in Proposition 2.2. 
If \(h=1,\cdots,n\) and \(j=i\), by (2.10), one has \[(2^{*}-1)\alpha_{n}\sum_{i=1}^{k}\mu_{i}^{\frac{n-2}{2}}\int_{ \Omega}U_{\mu_{i},\xi_{i}}^{2^{*}-2}(x)H(x,\xi_{i})\psi_{\mu_{i},\xi_{i}}^{h}dx\] \[= \alpha_{n}\sum_{i=1}^{k}\mu_{i}^{\frac{n}{2}}\int_{\Omega}H(x,\xi _{i})\frac{\partial}{\partial\xi_{i}^{h}}U_{\mu_{i},\xi_{i}}^{2^{*}-1}(x)dx\] \[= \alpha_{n}\sum_{i=1}^{k}\mu_{i}^{\frac{n}{2}}\Big{(}\mu_{i}^{ \frac{n-2}{2}}\frac{\partial}{\partial\xi_{i}^{h}}\int_{\frac{\Omega-\xi_{i}} {\mu_{i}}}U_{\mu_{i},\xi_{i}}^{2^{*}-1}(y)H(\mu_{i}y+\xi_{i},\xi_{i})dy-\mu_{i} ^{\frac{n-2}{2}}\int_{\frac{\Omega-\xi_{i}}{\mu_{i}}}U_{\mu_{i},\xi_{i}}^{2^{ *}-1}\frac{\partial H(\mu_{i}y+\xi_{i},\xi_{i})}{\partial\xi_{i}^{h}}dy\Big{)}\] \[= \alpha_{n}a_{2}\sum_{i=1}^{k}\mu_{i}^{n-1}\Big{(}\frac{\partial(H (\xi_{i},\xi_{i}))}{\partial\xi_{i}^{h}}-\frac{\partial H(\xi_{i},\xi_{i})}{ \partial\xi_{i}^{h}}+O(\mu_{i})\Big{)}\] \[= \frac{1}{2}\alpha_{n}a_{2}\sum_{i=1}^{k}\mu_{i}^{n-1}\partial_{ \xi_{i}^{h}}\rho(\xi_{i})+o\bigg{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon| ^{2}}\Big{)}^{\frac{n-1}{n-2}}\bigg{)}.\] In a same way, if \(h=1,\cdots,n\) and \(j\neq i\), we have \[(2^{*}-1)\alpha_{n}\sum_{i=1}^{k}\mu_{i}^{\frac{n-2}{2}}\int_{ \Omega}U_{\mu_{i},\xi_{i}}^{2^{*}-2}(x)H(x,\xi_{i})\psi_{\mu_{j},\xi_{j}}^{h}dx= o\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}.\] If \(h=0\) and \(j\neq i\), \[(2^{*}-1)\alpha_{n}\sum_{i=1}^{k}\mu_{i}^{\frac{n-2}{2}}\int_{ \Omega}U_{\mu_{i},\xi_{i}}^{2^{*}-2}H(x,\xi_{i})\psi_{\mu_{j},\xi_{j}}^{0}dx=o \Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}.\] As a consequence, there holds \[J_{2}=\left\{\begin{aligned} &\alpha_{n}a_{1}d_{1}\frac{ \varepsilon}{|\ln\varepsilon|^{2}}H(\xi,\xi)+o\Big{(}\frac{\varepsilon}{|\ln \varepsilon|^{2}}\Big{)}&\text{if }h=0,\\ &\frac{1}{2}\alpha_{n}a_{2}\Big{(}\frac{\varepsilon}{|\ln \varepsilon|^{2}}\Big{)}^{\frac{n-1}{n-2}}\partial_{\xi_{h}}\rho(\xi)+o\bigg{(} \frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n-1}{n-2}}\bigg{)}& \text{if }h=1,\cdots,n.\end{aligned}\right.\] _Estimate of \(J_{3}\)_: By Taylor expansion with respect to \(\varepsilon\), we have \[J_{3}= \int_{\Omega}\Big{[}f_{0}(V)-f_{\varepsilon}(V)\Big{]}\psi_{\mu_{ j},\xi_{j}}^{h}dx\] \[= \varepsilon\int_{\Omega}V^{p}\ln\ln(e+V)\psi_{\mu_{j},\xi_{j}}^{h} dx-\varepsilon^{2}\int_{\Omega}V^{p}\Big{(}\ln\ln(e+V)\Big{)}^{2}\psi_{\mu_{j},\xi_{j}}^{h}dx. 
\tag{5.30}\] For the second term in (5.30), from Lemma 5.5 and the annulus given in (3.18), it holds \[\int_{\Omega}\Big{|}V^{p}\Big{(}\ln\ln(e+V)\Big{)}^{2}\psi^{h}_{\mu_ {j},\xi_{j}}\Big{|}dx\] \[\leq \int_{\Omega}\Big{|}\Big{(}\sum_{i=1}^{k}(-1)^{i}U_{\mu_{i},\xi_{i }}\Big{)}^{p}\Big{[}\ln\ln\Big{(}e+\sum_{i=1}^{k}(-1)^{i}U_{\mu_{i},\xi_{i}} \Big{)}\Big{]}^{2}\psi^{h}_{\mu_{j},\xi_{j}}\Big{|}dx\] \[= \int_{B(\xi,\rho)}\Big{|}\Big{(}\sum_{i=1}^{k}(-1)^{i}U_{\mu_{i}, \xi_{i}}\Big{)}^{p}\Big{[}\ln\ln\Big{(}e+\sum_{i=1}^{k}(-1)^{i}U_{\mu_{i},\xi_ {i}}\Big{)}\Big{]}^{2}\psi^{h}_{\mu_{j},\xi_{j}}\Big{|}dx+o\Big{(}\frac{ \varepsilon}{|\ln\varepsilon|^{2}}\Big{)}\] \[= \sum_{i=1}^{k}\int_{\mathcal{A}_{i}}\Big{|}\Big{(}(-1)^{i}U_{\mu _{i},\xi_{i}}+(-1)^{j}\sum_{j\neq i}^{k}U_{\mu_{j},\xi_{j}}\Big{)}^{p} \tag{5.31}\] \[\quad\times\Big{[}\ln\ln\Big{(}e+(-1)^{i}U_{\mu_{i},\xi_{i}}+(-1)^ {j}\sum_{j\neq i}^{k}U_{\mu_{j},\xi_{j}}\Big{)}\Big{]}^{2}\psi^{h}_{\mu_{j}, \xi_{j}}\Big{|}dx+o\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)},\] and on each annulus \(\mathcal{A}_{i}\), we change variable setting \(\mu_{i}y=x-\xi\), for \(h=0\), by (2.9), then \[\int_{\mathcal{A}_{i}}\Big{|}\Big{(}(-1)^{i}U_{\mu_{i},\xi_{i}}+( -1)^{j}\sum_{j\neq i}^{k}U_{\mu_{j},\xi_{j}}\Big{)}^{p}\Big{[}\ln\ln\Big{(}e+( -1)^{i}U_{\mu_{i},\xi_{i}}+(-1)^{j}\sum_{j\neq i}^{k}U_{\mu_{j},\xi_{j}}\Big{)} \Big{]}^{2}|\psi^{0}_{\mu_{j},\xi_{j}}|\Big{|}dx\] \[= \frac{(n-2)\alpha_{n}^{p+1}}{2}\mu_{j}^{\frac{n-2}{2}}\int_{ \frac{\mathcal{A}_{i}}{\mu_{i}}}\Big{|}\frac{1}{(1+|y-\sigma_{i}|^{2})^{\frac{n -2}{2}}}+\mu_{i}^{\frac{n-2}{2}}\sum_{j\neq i}^{k}\frac{\mu_{j}^{\frac{n-2}{2}} }{(\mu_{j}^{2}+|\mu_{i}y-\mu_{j}\sigma_{j}|^{2})^{\frac{n-2}{2}}}\Big{|}^{p}\] \[\times\Big{|}\ln\ln\Big{[}\alpha_{n}\mu_{i}^{-\frac{n-2}{2}}\Big{(} e+\frac{1}{(1+|y-\sigma_{i}|^{2})^{\frac{n-2}{2}}}+\mu_{i}^{\frac{n-2}{2}}\sum_{j \neq i}^{k}\frac{\mu_{j}^{\frac{n-2}{2}}}{(\mu_{j}^{2}+|\mu_{i}y-\mu_{j}\sigma_ {j}|^{2})^{\frac{n-2}{2}}}\Big{)}\Big{]}\Big{|}^{2}\] \[\times\Big{|}\frac{|\mu_{i}y-\mu_{j}\sigma_{j}|^{2}-\mu_{j}^{2}}{ (\mu_{j}^{2}+|\mu_{i}y-\mu_{j}\sigma_{j}|^{2})^{\frac{n}{2}}}\Big{|}dy\] \[= O\Big{(}\Big{|}\ln|\ln\mu_{i}|\Big{|}^{2}+\Big{(}\frac{\mu_{i}}{ \mu_{i-1}}\Big{)}^{\frac{n}{2}}+\Big{(}\frac{\mu_{i+1}}{\mu_{i}}\Big{)}^{\frac {n}{2}}\Big{)}=O(\ln|\ln\mu_{i}|)=O\Big{(}\ln\Big{|}\ln\frac{\varepsilon}{|\ln \varepsilon|^{2}}\Big{|}\Big{)}.\] Similarly, for \(h=1,\cdots,n\), we have \[\int_{\mathcal{A}_{i}}\Big{(}(-1)^{i}U_{\mu_{i},\xi_{i}}+(-1)^{j }\sum_{j\neq i}^{k}U_{\mu_{j},\xi_{j}}\Big{)}^{p}\Big{[}\ln\ln\Big{(}e+(-1)^{i} U_{\mu_{i},\xi_{i}}+(-1)^{j}\sum_{j\neq i}^{k}U_{\mu_{j},\xi_{j}}\Big{)}\Big{]}^{2}| \psi^{h}_{\mu_{j},\xi_{j}}|dx\] \[= O\Big{(}\ln|\ln\mu_{i}|+\Big{(}\frac{\mu_{i}}{\mu_{i-1}}\Big{)}^{ \frac{n+2}{2}}+\Big{(}\frac{\mu_{i+1}}{\mu_{i}}\Big{)}^{\frac{n+2}{2}}\Big{)}= O(\ln|\ln\mu_{i}|)=O\Big{(}\ln\Big{|}\ln\frac{\varepsilon}{|\ln\varepsilon|^{2}} \Big{|}\Big{)}.\] Thus the second term in (5.30) becomes \[\int_{\Omega}\Big{|}V^{p}\Big{(}\ln\ln(e+V)\Big{)}^{2}\psi^{h}_{\mu_{j},\xi_{j}} \Big{|}dx=O((\ln|\ln\mu_{i}|)^{2})=O\Big{(}\Big{(}\ln|\ln\frac{\varepsilon}{| \ln\varepsilon|^{2}}\Big{|}\Big{)}^{2}\Big{)}.\] Consequently, \[J_{3}= \int_{\Omega}\Big{(}f_{0}(V)-f_{\varepsilon}(V)\Big{)}\psi^{h}_{\mu _{j},\xi_{j}}dx \tag{5.32}\] \[= \varepsilon\int_{\Omega}V^{p}\Big{(}\ln\ln(e+V)\Big{)}\psi^{h}_{\mu _{j},\xi_{j}}dx+O\Big{(}\varepsilon^{2}\Big{(}\ln\Big{|}\ln\frac{\varepsilon}{| \ln\varepsilon|^{2}}\Big{|}\Big{)}^{2}\Big{)}.\] 
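Note that the quadratic error in (5.32) is negligible: since \(\varepsilon|\ln\varepsilon|^{2}\Big{(}\ln\Big{|}\ln\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{|}\Big{)}^{2}\to 0\) as \(\varepsilon\to 0\), one has \[\varepsilon^{2}\Big{(}\ln\Big{|}\ln\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{|}\Big{)}^{2}=\frac{\varepsilon}{|\ln\varepsilon|^{2}}\cdot\varepsilon|\ln\varepsilon|^{2}\Big{(}\ln\Big{|}\ln\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{|}\Big{)}^{2}=o\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)},\] so only the first term in (5.32) contributes to the leading order expansion of \(J_{3}\) when \(h=0\).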
Moreover, \[\int_{\Omega}V^{p}\Big{(}\ln\ln(e+V)\Big{)}\psi^{h}_{\mu_{j},\xi_{j} }dx\] \[= \int_{\Omega}\Big{(}\sum_{i=1}^{k}(-1)^{i}U_{\mu_{i},\xi_{i}}\Big{)} ^{p}\Big{[}\ln\ln\Big{(}e+\sum_{i=1}^{k}(-1)^{i}U_{\mu_{i},\xi_{i}}\Big{)} \Big{]}\psi^{h}_{\mu_{j},\xi_{j}}dx \tag{5.33}\] \[-\Big{[}\int_{\Omega}\Big{(}\sum_{i=1}^{k}(-1)^{i}U_{\mu_{i},\xi_ {i}}\Big{)}^{p}\Big{[}\ln\ln\Big{(}e+\sum_{i=1}^{k}(-1)^{i}U_{\mu_{i},\xi_{i}} \Big{)}\Big{]}-V^{p}\ln\ln(e+V)\Big{]}\psi^{h}_{\mu_{j},\xi_{j}}dx.\] Let us set \(h(u)=u^{p}\ln\ln(e+u)\), by the mean value theorem, one has \[0\leq h(u)-h(v)\leq Cu^{p-1}\Big{(}\ln\ln(e+u)+1\Big{)}(u-v)\quad\text{for }0 \leq v\leq u.\] Then \[\int_{\Omega}\Big{(}\sum_{i=1}^{k}(-1)^{i}U_{\mu_{i},\xi_{i}} \Big{)}^{p-1}\bigg{(}\ln\ln\Big{(}e+\sum_{i=1}^{k}(-1)^{i}U_{\mu_{i},\xi_{i}} \Big{)}+1\bigg{)}\bigg{(}\sum_{i=1}^{k}(-1)^{i}U_{\mu_{i},\xi_{i}}-V\bigg{)} \psi^{h}_{\mu_{j},\xi_{j}}dx\] \[= \sum_{i=1}^{k}\int_{\mathcal{A}_{i}}\Big{(}\sum_{i=1}^{k}(-1)^{ i}U_{\mu_{i},\xi_{i}}\Big{)}^{p-1}\bigg{(}\ln\ln\Big{(}e+\sum_{i=1}^{k}(-1)^{i}U_{ \mu_{i},\xi_{i}}\Big{)}+1\bigg{)}\sum_{i=1}^{k}(U_{\mu_{i},\xi_{i}}-PU_{\mu_{i },\xi_{i}})\psi^{h}_{\mu_{j},\xi_{j}}dx\] \[+o\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}.\] Moreover, on each annulus \(\mathcal{A}_{i}\), \(i=1,\cdots,k\), if \(h=0\), by Lemma 5.1, (2.9) and the Lemma 5.5, we change variable setting \(\mu_{i}y=x-\xi\), then \[\int_{\mathcal{A}_{i}}\Big{|}\Big{(}(-1)^{i}U_{\mu_{i},\xi_{i}}+( -1)^{j}\sum_{j\neq i}^{k}U_{\mu_{j},\xi_{j}}\Big{)}^{p-1}\bigg{(}\ln\ln\Big{(} e+(-1)^{i}U_{\mu_{i},\xi_{i}}+(-1)^{j}\sum_{j\neq i}^{k}U_{\mu_{j},\xi_{j}} \Big{)}+1\bigg{)}\] \[\times\sum_{i=1}^{k}(-1)^{i}(U_{\mu_{i},\xi_{i}}-PU_{\mu_{i},\xi_ {i}})\psi^{0}_{\mu_{j},\xi_{j}}\Big{|}dx\] \[= \alpha_{n}\sum_{i=1}^{k}\mu_{i}^{\frac{n-2}{2}}\int_{\mathcal{A}_ {i}}\Big{|}(-1)^{i}U^{p-1}_{\mu_{i},\xi_{i}}\Big{(}\ln\ln(e+(-1)^{i}U_{\mu_{i },\xi_{i}})+1\Big{)}(H(x,\xi_{i})+O(\mu_{i}^{2}))\psi^{0}_{\mu_{j},\xi_{j}} \Big{|}dx\] \[+o\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}\] \[= \alpha_{n}\sum_{i=1}^{k}\mu_{i}^{\frac{n-2}{2}}\int_{\frac{ \mathcal{A}_{i}}{\mu_{i}}}\Big{|}\frac{\mu_{i}^{-2}}{(1+|y-\sigma_{i}|^{2})^{2 }}\Big{(}\ln\ln\Big{(}e+\frac{\mu_{i}^{-\frac{n-2}{2}}}{(1+|y-\sigma_{i}|^{2}) ^{\frac{n-2}{2}}}\Big{)}+1\Big{)}\] \[\times \Big{(}H(\xi_{i}+\mu_{i}y-\mu_{i}\sigma_{i},\xi_{i})+O(\mu_{i}^{2 })\Big{)}\frac{|\mu_{i}y-\mu_{j}\sigma_{j}|^{2}-\mu_{j}^{2}}{\Big{(}\mu_{j}^{2 }+|\mu_{i}y-\mu_{j}\sigma_{j}|^{2}\Big{)}^{\frac{n}{2}}}\Big{|}dy+o\Big{(} \frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}\] \[\leq \left\{\begin{array}{l}\alpha_{n}\sum\limits_{i=1}^{k}\mu_{i}^{ \frac{n-2}{2}}\Big{(}H(\xi_{i},\xi_{i})+O(\mu_{i}^{2})\Big{)}\mu_{j}^{-(n-2)}( \ln|\ln\mu_{i}|+1)\\ \times\int_{\frac{\mathcal{A}_{i}}{\mu_{i}}}\frac{1}{(1+|y-\sigma_{i}|^{2})^{ 2}}\frac{1}{|y|^{n-2}}dy+o\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)} \ \ \text{if}\ j<i,\\ \alpha_{n}\sum\limits_{i=1}^{k}\mu_{i}^{-\frac{n-2}{2}}(\ln|\ln\mu_{i}|+1) \Big{(}H(\xi_{i},\xi_{i})+O(\mu_{i}^{2})\Big{)}\\ \times\int_{\frac{\mathcal{A}_{i}}{\mu_{i}}}\frac{1}{(1+|y-\sigma_{i}|^{2})^{ 2}}\frac{1}{|y-\frac{\rho_{i}}{\mu_{j}}\sigma_{j}|^{n-2}}dy+o\Big{(}\frac{ \varepsilon}{|\ln\varepsilon|^{2}}\Big{)}\ \ \text{if}\ j>i,\end{array}\right.\] \[= \begin{cases}O\Big{(}(\frac{\mu_{i}}{\mu_{j}})^{\frac{n-2}{2}}\mu_{j}^{ -\frac{n-2}{2}}\ln|\ln\mu_{i}|\Big{)}&\text{if }h=0,\\ O\Big{(}\mu_{i}^{-\frac{n-2}{2}}\ln|\ln\mu_{i}|\Big{)}&\text{if }h=1, \cdots,n,\end{cases}\] \[= 
O\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\ln\Big{|}\ln\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{|}\Big{)}.\] By the same argument, for \(h=1,\cdots,n\), we have \[\int_{\mathcal{A}_{i}}\Big{|}\Big{(}(-1)^{i}U_{\mu_{i},\xi_{i}}+(-1)^{j}\sum_{j\neq i}^{k}U_{\mu_{j},\xi_{j}}\Big{)}^{p-1}\bigg{(}\ln\ln\Big{(}e+(-1)^{i}U_{\mu_{i},\xi_{i}}+(-1)^{j}\sum_{j\neq i}^{k}U_{\mu_{j},\xi_{j}}\Big{)}+1\Big{)}\] \[\quad\times\sum_{i=1}^{k}(-1)^{i}(U_{\mu_{i},\xi_{i}}-PU_{\mu_{i},\xi_{i}})\psi_{\mu_{j},\xi_{j}}^{h}\Big{|}dx\] \[= O\Big{(}\sum_{i=1}^{k}\mu_{i}^{\frac{n-2}{2}}\ln|\ln\mu_{i}|\Big{)}=O\Big{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{1}{2}}\ln\Big{|}\ln\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{|}\Big{)}.\] Therefore, the second term in (5.33) becomes \[\int_{\Omega}\Big{|}\Big{(}\sum_{i=1}^{k}(-1)^{i}U_{\mu_{i},\xi_{i}}\Big{)}^{p}\bigg{(}\ln\ln\Big{(}e+\sum_{i=1}^{k}(-1)^{i}U_{\mu_{i},\xi_{i}}\Big{)}\bigg{)}\psi_{\mu_{j},\xi_{j}}^{h}\Big{|}dx-\int_{\Omega}\Big{|}V^{p}\ln\ln(e+V)\psi_{\mu_{j},\xi_{j}}^{h}\Big{|}dx\] \[= O\Big{(}\sum_{i=1}^{k}\mu_{i}^{\frac{n-2}{2}}\ln|\ln\mu_{i}|\Big{)}=O\bigg{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{1}{2}}\ln\Big{|}\ln\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{|}\bigg{)}.\] For the first term in (5.33), if \(j=i\), then by Lemma 5.5 and (2.9), letting \(x-\xi=\mu_{i}y\), we get \[\int_{\Omega}\Big{|}\Big{(}\sum_{i=1}^{k}(-1)^{i}U_{\mu_{i},\xi_{i}}\Big{)}^{p}\Big{[}\ln\ln\Big{(}e+\sum_{i=1}^{k}(-1)^{i}U_{\mu_{i},\xi_{i}}\Big{)}\Big{]}\psi_{\mu_{i},\xi_{i}}^{h}\Big{|}dx\] \[= \sum_{i=1}^{k}\int_{\mathcal{A}_{i}}\bigg{|}\Big{(}(-1)^{i}U_{\mu_{i},\xi_{i}}+(-1)^{j}\sum_{j\neq i}^{k}U_{\mu_{j},\xi_{j}}\Big{)}^{p}\] \[\times\Big{[}\ln\ln\Big{(}e+(-1)^{i}U_{\mu_{i},\xi_{i}}+(-1)^{j}\sum_{j\neq i}^{k}U_{\mu_{j},\xi_{j}}\Big{)}\Big{]}\psi_{\mu_{i},\xi_{i}}^{h}\Big{|}dx+o\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}\] \[= \mu_{1}^{n-\frac{n+2}{2}-\frac{n-2}{2}}\int_{\frac{\mathcal{A}_{i}}{\mu_{i}}}\frac{1}{(1+|y-\sigma_{i}|^{2})^{\frac{n+2}{2}}}\bigg{|}\ln\ln\Big{(}e+(-1)^{i}\mu_{i}^{-\frac{n-2}{2}}\frac{1}{(1+|y-\sigma_{i}|^{2})^{\frac{n-2}{2}}}\] \[+(-1)^{j}\sum_{j\neq i}^{k}\frac{\mu_{j}^{\frac{n-2}{2}}}{\big{(}(\frac{\mu_{j}}{\mu_{i}})^{2}+|y-\frac{\sigma_{i}}{\mu_{i}}|^{2}\big{)}^{\frac{n+2}{2}}\mu_{i}^{n-2}}\Big{)}\psi^{h}(y)\Big{|}dy+o\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}\] \[= \sum_{i=1}^{k}\ln\Big{|}\ln\mu_{i}^{-\frac{n-2}{2}}\Big{|}\int_{\frac{\mathcal{A}_{i}}{\mu_{i}}}\frac{1}{(1+|y-\sigma_{i}|^{2})^{\frac{n+2}{2}}}\psi^{h}(y)dy+\sum_{i=1}^{k}\frac{1}{|\ln\mu_{i}|}\int_{\frac{\mathcal{A}_{i}}{\mu_{i}}}\frac{1}{(1+|y-\sigma_{i}|^{2})^{\frac{n+2}{2}}} \tag{5.34}\] \[\times\bigg{[}|\ln\mu_{i}|\ln\bigg{(}1+\frac{\ln\Big{[}e^{1-\frac{n-2}{2}|\ln\mu_{i}|}+\frac{1}{(1+|y-\sigma_{i}|^{2})^{\frac{n+2}{2}}}\Big{]}}{\frac{n-2}{2}|\ln\mu_{i}|}\bigg{)}\bigg{]}\psi^{h}(y)dy+o\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}.\] Moreover, let \(\Lambda(y)=\frac{1}{\left(1+|y-\sigma_{i}|^{2}\right)^{\frac{n+2}{2}}}|\ln\mu_{i}|\ln\bigg{(}1+\frac{\ln\left(e^{1-\frac{n-2}{2}|\ln\mu_{i}|}+\frac{1}{(1+|y-\sigma_{i}|^{2})^{\frac{n+2}{2}}}\right)}{\frac{n-2}{2}|\ln\mu_{i}|}\bigg{)}\psi^{h}(y)\) for \(h=1,\cdots,n\). Since \(\psi(y)\) is an odd function, we deduce \(\int_{\mathbb{R}^{n}}\Lambda(y)dy=0\).
Further, by Lemma 5.5, there holds \[\int_{\mathbb{R}^{n}}\Lambda(y)dy-\int_{\frac{\mathcal{A}_{i}}{\mu_{i}}}\Lambda(y)dy=\int_{\mathbb{R}^{n}\setminus\frac{\mathcal{A}_{i}}{\mu_{i}}}\Lambda(y)dy\] \[\leq C|\ln\mu_{i}|\ln\bigg{(}1+\frac{\ln\left(e^{1-\frac{n-2}{2}|\ln\mu_{i}|}+\alpha_{n}\right)}{\frac{n-2}{2}|\ln\mu_{i}|}\bigg{)}\int_{\mathbb{R}^{n}\setminus\frac{\mathcal{A}_{i}}{\mu_{i}}}\frac{1}{(1+|y-\sigma_{i}|^{2})^{\frac{n+2}{2}}}|\psi^{h}(y)|dy\] \[\leq C\Big{(}\frac{2}{n-2}\ln\alpha_{n}+o(1)\Big{)}\mu_{i}^{n+1}=O(\mu_{i}^{n+1})=o\bigg{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n-1}{n-2}}\bigg{)}.\] Hence, for \(\mu_{i}\) small enough, we conclude that \[\int_{\Omega}\Big{(}\sum_{i=1}^{k}(-1)^{i}U_{\mu_{i},\xi_{i}}\Big{)}^{p}\ln\ln\Big{(}e+\sum_{i=1}^{k}(-1)^{i}U_{\mu_{i},\xi_{i}}\Big{)}\psi^{h}_{\mu_{j},\xi_{j}}dx \tag{5.35}\] \[= O\Big{(}\sum_{i=1}^{k}\mu_{i}^{n+1}\ln\Big{|}\ln\mu_{i}^{-\frac{n-2}{2}}\Big{|}\Big{)}=o\bigg{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n-1}{n-2}}\bigg{)},\quad\text{for }h=1,\cdots,n.\] When \(h=0\), since \(U_{\mu_{i},\xi_{i}}\) is the unique positive solution of problem (1.4) and \(\psi^{0}_{\mu_{i},\xi_{i}}\) solves (2.7), we have \[\int_{\mathbb{R}^{n}}U^{p}_{\mu_{i},\xi_{i}}\psi^{0}_{\mu_{i},\xi_{i}}dx=\int_{\mathbb{R}^{n}}\nabla U_{\mu_{i},\xi_{i}}\nabla\psi^{0}_{\mu_{i},\xi_{i}}dx=p\int_{\mathbb{R}^{n}}U^{p}_{\mu_{i},\xi_{i}}\psi^{0}_{\mu_{i},\xi_{i}}dx,\] from which it follows that \[\langle U_{\mu_{i},\xi_{i}},\psi^{0}_{\mu_{i},\xi_{i}}\rangle=\int_{\mathbb{R}^{n}}U^{p}_{\mu_{i},\xi_{i}}\psi^{0}_{\mu_{i},\xi_{i}}dx=0.\] Thus \(\int_{\mathbb{R}^{n}}\frac{1}{\left(1+|y-\sigma_{i}|^{2}\right)^{\frac{n+2}{2}}}\psi^{0}(y)dy=0\) holds; also, \(\int_{\mathbb{R}^{n}\setminus\frac{\mathcal{A}_{i}}{\mu_{i}}}\frac{1}{(1+|y-\sigma_{i}|^{2})^{\frac{n+2}{2}}}\psi^{0}dy=0.\) From (5.34) and Lemma 5.5, there holds \[\int_{\Omega}\bigg{|}\Big{(}\sum_{i=1}^{k}(-1)^{i}U_{\mu_{i},\xi_{i}}\Big{)}^{p}\ln\ln\Big{(}e+\sum_{i=1}^{k}(-1)^{i}U_{\mu_{i},\xi_{i}}\Big{)}\psi^{0}_{\mu_{i},\xi_{i}}\Big{|}dx\] \[= \sum_{i=1}^{k}\frac{1}{|\ln\mu_{i}|}\frac{2}{n-2}\int_{\mathbb{R}^{n}}\bigg{|}\frac{1}{(1+|y-\sigma_{i}|^{2})^{\frac{n+2}{2}}}\ln\Big{(}\frac{1}{(1+|y-\sigma_{i}|^{2})^{\frac{n+2}{2}}}\Big{)}\psi^{0}(y)\bigg{|}dy+o\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}.\] On the other hand, \[\int_{\Omega}\bigg{|}\Big{(}\sum_{i=1}^{k}(-1)^{i}U_{\mu_{i},\xi_{i}}\Big{)}^{p}\ln\ln\Big{(}e+\sum_{i=1}^{k}(-1)^{i}U_{\mu_{i},\xi_{i}}\Big{)}\psi^{h}_{\mu_{i},\xi_{i}}\Big{|}dx\] \[= \begin{cases}\frac{2}{n-2}a_{4}\sum\limits_{i=1}^{k}\frac{\varepsilon}{|\ln\mu_{i}|}+O\Big{(}\frac{1}{|\ln\mu_{1}|}\Big{)}&\text{if }h=0,\\ o\bigg{(}\big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n-1}{n-2}}\bigg{)}&\text{if }h=1,\cdots,n.\end{cases}\] By the same argument, if \(j\neq i\), one has \[\int_{\Omega}\Big{|}\Big{(}\sum_{i=1}^{k}(-1)^{i}U_{\mu_{i},\xi_{i}}\Big{)}^{p}\ln\ln\Big{(}e+\sum_{i=1}^{k}(-1)^{i}U_{\mu_{i},\xi_{i}}\Big{)}\psi_{\mu_{i},\xi_{i}}^{h}\Big{|}dx=o\Big{(}\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}^{\frac{n-1}{n-2}}\Big{)}.\] Consequently, \[J_{3} = \int_{\Omega}\Big{(}f_{0}(V)-f_{\varepsilon}(V)\Big{)}\psi_{\mu_{j},\xi_{j}}^{h}dx\] \[= \begin{cases}-\frac{2}{n-2}a_{4}\frac{\varepsilon}{|\ln\varepsilon|^{2}}-\frac{2}{n-2}a_{4}\sum\limits_{i=1}^{k}|\ln d_{i}|+o\Big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{)}&\text{ if }\,h=0,\\ O\Big{(}\big{(}\frac{\varepsilon}{|\ln\varepsilon|^{2}}\big{)}^{\frac{1}{2}}
\ln\Big{|}\ln\frac{\varepsilon}{|\ln\varepsilon|^{2}}\Big{|}\Big{)}&\text{ if }\,h=1,\cdots,n.\end{cases}\] Combining \(J_{1}\)-\(J_{3}\), the proof of this lemma is completed. Finally, let us state that \(a_{4}\) is a positive constant. From (2.8), polar coordinates, integrating by parts, changing variables (\(s=r^{2}\), \(dr=\frac{1}{2}s^{-\frac{1}{2}}ds\)), and a fact that \(|\partial B_{1}(0)|=\frac{2\pi^{\frac{n}{2}}}{\Gamma(\frac{n}{2})}\), we obtain \[a_{4}= -\int_{\mathbb{R}^{n}}\frac{1}{(1+|y-\sigma_{i}|^{2})^{\frac{n+2} {2}}}\ln\Big{(}\frac{1}{(1+|y-\sigma_{i}|^{2})^{\frac{n+2}{2}}}\Big{)}\psi^{0 }(y)dy\] \[= -\int_{B_{1}(0)}\frac{1}{(1+|y-\sigma_{i}|^{2})^{\frac{n+2}{2}}} \ln\Big{(}\frac{1}{(1+|y-\sigma_{i}|^{2})^{\frac{n+2}{2}}}\Big{)}\psi^{0}(y)dy\] \[= -\frac{(n-2)\alpha_{n}^{2^{*}}}{2}|\partial B_{1}(0)|\int_{0}^{ \infty}\frac{r^{n-1}}{(1+r^{2})^{\frac{n+2}{2}}}\ln\Big{(}\frac{1}{(1+r^{2})^ {\frac{n+2}{2}}}\Big{)}\frac{r^{2}-1}{(1+r^{2})^{\frac{n}{2}}}dr\] \[= \frac{(n-2)^{2}\alpha_{n}^{2^{*}}}{4}\frac{2\pi^{\frac{n}{2}}}{ \Gamma(\frac{n}{2})}\int_{0}^{\infty}\frac{r^{n-1}(r^{2}-1)}{(1+r^{2})^{n+1}} \ln(1+r^{2})dr\] \[= \frac{(n-2)^{2}\alpha_{n}^{2^{*}}}{2}\frac{\pi^{\frac{n}{2}}}{ \Gamma(\frac{n}{2})}\int_{0}^{\infty}\frac{1}{n}\Big{(}\frac{r}{1+r^{2}}\Big{)} ^{n}\frac{2r}{1+r^{2}}dr\] \[= \frac{(n-2)^{2}\alpha_{n}^{2^{*}}}{2n}\frac{\pi^{\frac{n}{2}}}{ \Gamma(\frac{n}{2})}\int_{0}^{\infty}\frac{s^{\frac{n}{2}}}{(1+s)^{n+1}}ds= \frac{(n-2)^{2}\alpha_{n}^{2^{*}}}{2n}\frac{\pi^{\frac{n}{2}}}{\Gamma(\frac{n }{2})}B(\frac{n}{2}+1,\frac{n}{2})ds\] \[= \frac{(n-2)^{2}\alpha_{n}^{2^{*}}}{2n}\frac{\pi^{\frac{n}{2}}}{ \Gamma(\frac{n}{2})}\frac{\Gamma(\frac{n}{2}+1)\Gamma(\frac{n}{2})}{\Gamma(n +1)}=\frac{\Gamma(\frac{n}{2})\pi^{\frac{n}{2}}}{4\Gamma(n+1)}n^{\frac{n}{2}}( n-2)^{\frac{n+4}{2}}.\] **Acknowledgments**.: The authors were supported by National Natural Science Foundation of China 11971392.
2301.09881
Fever: Optimal Responsive View Synchronisation
View synchronisation is an important component of many modern Byzantine Fault Tolerant State Machine Replication (SMR) systems in the partial synchrony model. Roughly, the efficiency of view synchronisation is measured as the word complexity and latency required for moving from being synchronised in a view of one correct leader to being synchronised in the view of the next correct leader. The efficiency of view synchronisation has emerged as a major bottleneck in the efficiency of SMR systems as a whole. A key question remained open: Do there exist view synchronisation protocols with asymptotically optimal quadratic worst-case word complexity that also obtain linear message complexity and responsiveness when moving between consecutive correct leaders? We answer this question affirmatively with a new view synchronisation protocol for partial synchrony assuming minimal clock synchronisation, called \emph{Fever}. If $n$ is the number of processors and $t$ is the largest integer $<n/3$, then Fever has resilience $t$, and in all executions with at most $0\leq f\leq t$ Byzantine parties and network delays of at most $\delta \leq \Delta$ after $GST$ (where $f$ and $\delta$ are unknown), Fever has worst-case word complexity $O(fn+n)$ and worst-case latency $O(\Delta f + \delta)$.
Andrew Lewis-Pye, Ittai Abraham
2023-01-24T09:37:13Z
http://arxiv.org/abs/2301.09881v4
# Fever: Optimal Responsive View Synchronisation ###### Abstract. View synchronisation is an important component of many modern Byzantine Fault Tolerant State Machine Replication (SMR) systems in the partial synchrony model. Roughly, the efficiency of view synchronisation is measured as the word complexity and latency required for moving from being synchronised in a view of one correct leader to being synchronised in the view of the next correct leader. The efficiency of view synchronisation has emerged as a major bottleneck in the efficiency of SMR systems as a whole. A key question remained open: Do there exist view synchronisation protocols with asymptotically optimal quadratic worst-case word complexity that also obtain linear message complexity and responsiveness when moving between consecutive correct leaders? We answer this question affirmatively with a new view synchronisation protocol for partial synchrony assuming minimal clock synchronisation, called _Fever_. If \(n\) is the number of processors and \(t\) is the largest integer \(<n/3\), then Fever has resilience \(t\), and in all executions with at most \(0\leq f\leq t\) Byzantine parties and network delays of at most \(\delta\leq\Delta\) after \(GST\) (where \(f\) and \(\delta\) are unknown), Fever has worst-case word complexity \(O(fn+n)\) and worst-case latency \(O(\Delta f+\delta)\). ## 1. Introduction Recent years have seen interest in developing protocols for Byzantine Agreement and State Machine Replication (SMR) that work efficiently at scale (Krishnam, 2017). In concrete terms, this means looking to minimise the latency and the word complexity per consensus decision as a function of the number of participants \(n\). Most commonly, this analysis takes place in the partial synchrony communication model, first suggested by Dwork, Lynch, and Stockmeyer (Dwork et al., 2012). The partial synchrony model forces the adversary to choose a point in time called the Global Stabilisation Time (\(GST\)) such that any message sent at time \(\mathtt{t}\) must arrive by time \(\max\{GST,\mathtt{t}\}+\Delta\). While \(\Delta\) is known, the value of \(GST\) is unknown to the protocol. This model forms a practical compromise between the synchronous model (where all message delays are bounded by \(\Delta\)), which is too optimistic, and the asynchronous model (where message delays are finite but unbounded), which is too pessimistic. In a recent line of works (Krishnam, 2017; Krishnam, 2017; Dwork et al., 2012) it has been shown that Byzantine Agreement and SMR can be solved with optimal resilience and with worst-case word complexity \(O(n^{2})\) after \(GST\). Here, optimal resilience means being able to handle up to \(t\) many Byzantine faults (Dwork et al., 2012), where \(t\) is the greatest integer less than \(n/3\). Given the lower bound of \(\Omega(n^{2})\) by Dolev and Reischuk (Dolev and Reischuk, 2017), this bound on word complexity is tight. **The optimistic case**. In practical settings, however, one typically cares not only about the worst-case, but also about the complexity and latency in the optimistic case when the actual (and unknown) number of failures \(f\) is less than the given bound \(t\). Indeed, this is one of the principal motivations for considering the partial synchrony model. In the asynchronous model, where randomness is required after the initial cryptographic setup (Dwork et al., 2012), one can already achieve word complexity which is _expected_\(O(n^{2})\) per consensus decision (Blekker, 2017). 
In the partial synchrony model, the hope is that one may be able to define protocols whose worst-case complexity (providing cryptographic assumptions hold) is \(O(fn+n)\). Ideally, such protocols should also be _optimistically responsive_. Roughly, this means that the protocol should function at network speed: The protocol should be live during periods when message delay is less than the given bound \(\Delta\), but latency should be a function of the actual (unknown) message delay \(\delta\). This is important because the actual message delay \(\delta\) may be much smaller than \(\Delta\) when the latter value is conservatively set so as to ensure liveness under a wide range of network conditions. More formally, we can say that a protocol is optimistically responsive if the latency after \(GST\) is \(O(\Delta f+\delta)\) - a precise definition will be given in Section 2. Existing protocols for the partial synchrony model that give optimal resilience and worst-case complexity \(O(n^{2})\) do not satisfy these requirements. For such protocols [9, 14], the worst-case complexity is \(O(n^{2})\) but not \(O(fn+n)\), while latency is \(O(n\Delta)\). **The bottleneck is view synchronisation**. Protocols for Byzantine Agreement and SMR typically divide the instructions into _views_, each with a dedicated leader that coordinates the protocol execution during that view. Since Hotstuff [19] shows how to achieve linear complexity within views, the remaining task is to define an efficient protocol that coordinates processors to execute instructions for the same view at the same time as each other. Accordingly, the task of defining efficient protocols for view synchronisation has become a principal focus [4, 9, 14, 15, 16]: e.g. the protocols mentioned above that achieve worst-case complexity \(O(n^{2})\) for Byzantine Agreement in the partial synchrony model do so by defining an appropriate method of view synchronisation. **Clock assumptions**. There are actually two scenarios in which view synchronisation becomes a non-trivial task. The first is that processors do not begin the protocol with synchronised clocks or that, even if they do, clocks may experience arbitrary drift prior to \(GST\). Even if clocks are initially synchronised and have identical speeds, however, view synchronisation is equally problematic in the case that one requires optimistic responsiveness - the need to produce consensus decisions at 'network speed' means that, during asynchrony prior to \(GST\), some correct processors may progress through the protocol instructions arbitrarily faster than others. The standard approach when defining view synchronisation protocols has been to assume no conditions on initial clock synchronisation and no bound on clock drift prior to \(GST\) whatsoever, beyond the fact that all correct processors begin the protocol prior to \(GST\). In this paper, we take a different approach. We suppose that there may be scenarios in which the participants can form some sort of initial clock synchronisation (e.g. by physically meeting), and where clocks are sufficiently accurate that they will experience bounded drift in any period of asynchrony of realistic length. Atomic clocks can presently be purchased for a few thousand US dollars, for example, and have a typical error of only one second in 100 million years.
In fact, we do not assume clocks are perfectly synchronised, but require that clock drift is bounded during periods of asynchrony and define a notion of _minimal initial clock synchronisation_ (see Section 2) which can realistically be achieved with ease. In this scenario it is then the requirement for optimistic responsiveness which presents a barrier to view synchronisation. Using the assumption of minimal initial clock synchronisation, we are able to define an innovative view synchronisation protocol in which the correct processors send at most \(2n\) messages (combined) per view, and which is efficient in both the worst and optimistic cases. **The result**. All terms in Theorem 1 will be formally defined in Section 2. Roughly, the worst-case word complexity of a view synchronisation protocol is the maximum number of words (each of maximum length determined by a security parameter) that need to be sent by correct processors during synchrony to synchronise all correct processors on a view with a correct leader. Similarly, the worst-case latency is the maximum time one has to wait during synchrony before all correct processors synchronise on a view with a correct leader. **Theorem 1**.: _Consider the partial synchrony model with maximum delay \(\Delta\) after GST and with minimal initial clock synchronisation. If \(t\) is the largest integer less than \(n/3\), there exists a view synchronisation protocol with resilience \(t\), such that for all executions with at most \(0\leq f\leq t\) Byzantine parties and network delays of at most \(\delta\leq\Delta\) after GST (where \(f\) and \(\delta\) are unknown):_ 1. _The worst-case word complexity is_ \(O(fn+n)\)_;_ 2. _The worst-case latency is_ \(O(\Delta f+\delta)\)_._ _In particular, for \(f=0\) this means \(O(n)\) complexity and \(O(\delta)\) latency, and for \(f=t\) this is \(O(n^{2})\) complexity and \(O(\Delta n)\) latency._ Theorem 1 obtains worst-case quadratic communication, constant latency per malicious processor, and responsiveness between consecutive honest leaders. This resolves the main open question raised in Cogsworth [15]. Combined with Hotstuff, Theorem 1 gives an optimally resilient SMR protocol for the partial synchrony model that: 1. In the worst-case, requires \(O(fn+n)\) words to be sent by correct processors after \(GST\) before confirmation of the first block of transactions after \(GST\), and; 2. Produces a first confirmed block of transactions after \(GST\) within time \(O(\Delta f+\delta)\) of \(GST\). Since \(GST\) is unknown to the protocol, note that similar bounds then hold for the word complexity and latency between honestly produced confirmed blocks after \(GST\). The key conceptual contribution of this work is to show that careful use of _local clocks_ can improve view synchronisation. Specifically, under minimal initial clock synchronisation, it is possible to maintain a _weak form of clock synchronisation_ even under complete network asynchrony (before \(GST\)). On the one hand, this weak form of synchronisation guarantees that, despite network asynchrony, the correct processor whose local clock is most advanced is advanced by a bounded amount relative to at least \(t\) other correct processors. On the other hand, this weak form of clock synchronisation enables processors to _algorithmically move local clocks forward_ to obtain responsiveness, which is captured by obtaining \(O(\delta)\) latency when \(f=0\). ### Related work Tendermint [7] showed how to use constant size messages for view-change.
Casper FFG [8] extended this approach to allow pipelining. Hotstuff [19] extended these to define an SMR protocol achieving responsiveness and word complexity \(O(n)\) within views, but did not rigorously establish an efficient technique for view synchronisation. In response to this, a number of papers have described view synchronisation protocols with different trade-offs. Cogsworth [15] and Naor-Keidar [16] consider a setup in which leaders are chosen according to successive random permutations of the set of processors. They consider a static and _oblivious_ adversary, who must choose \(GST\) without knowledge of the sequence of randomly chosen leaders, and which must also choose processors to corrupt at the start of the protocol execution without this knowledge. While we do not need to make use of any randomness (beyond that required for the initial cryptographic setup) to establish Theorem 1, we also consider such a setup for the purpose of apples-to-apples comparisons in Table 1 (in the 'Expected Latency' and 'Expected Complexity' columns). Both Cogsworth and Naor-Keidar achieve expected latency \(O(\Delta)\) for such a static adversary, but this bound increases to \(O(f^{2}\Delta+\delta)\) in the case that the adversary is adaptive, i.e. if the adversary can choose which processors to corrupt as the execution progresses (and with knowledge as to their choice of \(GST\)). The principal improvement of Naor-Keidar over Cogsworth is to decrease the expected complexity from \(O(n^{2})\) in the case of a static and oblivious adversary to \(O(n)\). The expected complexity for Cogsworth becomes \(O(fn^{2}+n)\) in the case of an adaptive adversary, and the worst-case complexity is also \(O(fn^{2}+n)\). The expected complexity for Naor-Keidar becomes \(O(f^{2}n+n)\) in the case of an adaptive adversary, and the worst-case complexity is also \(O(f^{2}n+n)\). For a more detailed discussion of Cogsworth and Naor-Keidar, see the Appendix. While the published version of Hotstuff [19] did not describe any efficient method for view synchronisation, the original version (posted on the arXiv [1]) did roughly outline an approach to meeting the \(O(n^{2})\) worst-case complexity bound of Dolev-Reischuk. This approach was made precise and rigorously proved in [14] and [9]. These papers described view synchronisation protocols with worst-case word complexity \(O(n^{2})\) and worst-case latency \(O(n\Delta)\) (see Table 1).

In Table 1, we assume the _bound_ \(t\) on the number of Byzantine processors is the largest integer less than \(n/3\), so that \(t=\Theta(n)\), while \(0\leq f\leq t\) is the _actual_ number of Byzantine processors. 'Complexity' means 'word complexity'. Both latency and word complexity are defined in Section 2, as is the 'minimal initial clock synchronisation' condition. We only distinguish explicitly between a static and adaptive adversary when this changes the corresponding bound.

\begin{table} \begin{tabular}{c c c c c c} \hline \hline Protocol & Expected & Worst-case & Expected & Worst-case & Minimal Initial \\ & Latency & Latency & Complexity & Complexity & Clock Sync \\ \hline Cogsworth & static adv: \(O(\Delta)\) & \(O(f^{2}\Delta+\delta)\) & static adv: \(O(n^{2})\) & \(O(fn^{2}+n)\) & Not needed \\ & adaptive adv: \(O(f^{2}\Delta+\delta)\) & & adaptive adv: \(O(fn^{2}+n)\) & & \\ \hline Naor-Keidar & static adv: \(O(\Delta)\) & \(O(f^{2}\Delta+\delta)\) & static adv: \(O(n)\) & \(O(f^{2}n+n)\) & Not needed \\ & adaptive adv: \(O(f^{2}\Delta+\delta)\) & & adaptive adv: \(O(f^{2}n+n)\) & & \\ \hline Lewis-Pye & \(O(n\Delta)\) & \(O(n\Delta)\) & \(O(n^{2})\) & \(O(n^{2})\) & Not needed \\ \hline Raresync & \(O(n\Delta)\) & \(O(n\Delta)\) & \(O(n^{2})\) & \(O(n^{2})\) & Not needed \\ \hline Fever & static adv: \(O(\Delta)\) & \(O(f\Delta+\delta)\) & static adv: \(O(n)\) & \(O(fn+n)\) & Needed \\ (this paper) & adaptive adv: \(O(f\Delta+\delta)\) & & adaptive adv: \(O(fn+n)\) & & \\ \hline \hline \end{tabular} \end{table} Table 1: View Synchroniser Comparisons

## 2. The Setup We consider a set \(\Pi=\{p_{0},\ldots,p_{n-1}\}\) of \(n\) processors, and let \(t\) be the largest integer less than \(n/3\). Each processor \(p_{i}\) is told \(i\) as part of its input. For the proof of Theorem 1, we assume an adaptive adversary that is able to choose at most \(t\) processors to corrupt as the execution progresses. A processor that is corrupted by the adversary at any point in the execution is referred to as _Byzantine_, and may behave arbitrarily once corrupted. Processors that are not Byzantine are _correct_. We let \(f\) denote the actual number of Byzantine processors. **Cryptographic assumptions**. Our cryptographic assumptions are standard for papers on this topic. Processors communicate by point-to-point authenticated channels. We use a cryptographic signature scheme, a public key infrastructure (PKI) to validate signatures, and a threshold signature scheme (Brandes, 2002; Boutin et al., 2009).
The threshold signature scheme is used to create a compact signature of \(m\)-of-\(n\) processors, as in other consensus protocols. **Minimal initial clock synchronisation**. Each processor \(p\) has a local clock; at any point in the execution, we write \(c(p)\) for the value of \(p\)'s clock, and \(p\) begins the protocol execution when \(c(p)\) reaches \(0\). The condition of _minimal initial clock synchronisation_ requires that if some correct processor is the first to begin the protocol execution (while other clocks may still be negative, so that those processors are still waiting to start), then at least \(t\) other correct processors begin the protocol execution within time \(\Gamma\). Note that this condition does not place any bound on the maximum difference between the clocks of correct processors. For the sake of simplicity, we will also initially assume that all correct processors have identical clock speeds. Then, in Section 5, we will consider realistic relaxations of this condition that suffice to give our results. **The underlying protocol**. We suppose view synchronisation is required for some underlying protocol (such as Hotstuff) with the following properties: * **Views**. Instructions are divided into views. Each view \(v\) has a designated _leader_, denoted \(\mathsf{lead}(v)\). For some parameter \(k\geq 3\) (which can be chosen to suit the protocol designer's needs), we suppose views are grouped into sets of \(k\), so that the leader1 for view \(v\) is processor \(p_{i}\) where \(i\coloneqq\lfloor v/k\rfloor\bmod n\). If \(v\bmod k=0\), then \(v\) is called 'initial'. Footnote 1: These assumptions are made for the purpose of proving Theorem 1. In verifying the bounds given in Table 1, we will also consider the possibility of random leader selection. * **Quorum certificates**. The successful completion of a view is marked by all processors receiving a _Quorum Certificate_ (QC) for view \(v\). The QC is a threshold signature of length \(O(\kappa)\) (for the security parameter \(\kappa\) that determines the length of signatures and hash values) combining \(n-t\) signatures from different processors testifying that they have completed the instructions for the view. In a chained implementation of Hotstuff, for example, the leader will propose a block, processors will send votes for the block to the leader, who will then combine those votes into a QC and send this to all processors. Alternatively, one could consider a (non-chained) implementation of Hotstuff, in which the relevant QC corresponds to a successful third round of voting. Note that the production of QCs is not a restrictive assumption, since if it is not satisfied one can easily amend the instructions of the protocol so that it is. * **Sufficient time for view completion**. We suppose there exists some known \(x\geq 2\) such that if \(\mathsf{lead}(v)\) is correct, if (the global time) \(\mathsf{t}\geq\mathrm{GST}\), and if at least \(n-t\) correct processors are in view \(v\) from time \(\mathsf{t}\) until either they receive a QC for view \(v\) or until \(\mathsf{t}+x\delta\), then all correct processors will receive a QC for view \(v\) by time \(\mathsf{t}+x\delta\), so long as all messages sent by correct processors while in view \(v\) are received within time \(\delta\leq\Delta\). For the sake of simplicity, we assume \(\Gamma\) from the definition of 'minimal initial clock synchronisation' is equal to \(x\Delta\); if these values differ, then one can just take the maximum of the two values. **The view synchronisation task**. For \(\Gamma\) as above, we must ensure: 1. If a correct processor is in view \(v\) at time \(\mathsf{t}\) and in view \(v^{\prime}\) at \(\mathsf{t}^{\prime}\geq\mathsf{t}\), then \(v^{\prime}\geq v\). 2.
There exists some correct \(\mathsf{lead}(v)\) and \(\mathsf{t}\geq GST\) such that each correct processor is in view \(v\) from time \(\mathsf{t}\) until either it receives a QC for view \(v\) or until \(\mathsf{t}+\Gamma\). Condition (1) above is required by standard view-based SMR protocols to ensure consistency. Since \(GST\) is unknown to the protocol, condition (2) suffices to ensure the successful completion of infinitely many views with correct leaders. By a _view synchronisation protocol_, we mean a protocol which determines when processors enter views and which satisfies conditions (1) and (2) above. **Complexity measures**. Our proofs are quite robust to the precise notions of latency and word complexity considered, and will hold for any of the definitions used in previous papers on the topic such as [4; 15; 16]. For the sake of concreteness, we fix complexity measures which are as strict as possible, and note that if we were to adopt the more relaxed measures used in (Kolmogorov, 1995), for example, then we could weaken the requirement that messages sent before \(GST\) are not lost. By a 'word', we mean a message of length \(O(\kappa)\), where \(\kappa\) is the security parameter determining the length of signatures and hash values. We make the following definitions. Let \(\mathsf{t}^{*}\) be the least time \(>GST\) at which the underlying protocol has some correct \(\mathsf{lead}(v)\) produce a QC for view \(v\). The worst-case word complexity is the maximum number of words sent by correct processors between time \(GST+\Delta\) and \(\mathsf{t}^{*}\). The worst-case latency is the maximum possible value of \(\mathsf{t}^{*}-GST\). **Defining optimistic responsiveness**. We do not need to define optimistic responsiveness to establish Theorem 1. For the sake of concreteness, however, we can define our view synchronisation protocol to be optimistically responsive if the worst-case latency is \(O(f\Delta+\delta)\), where \(f\) is the (unknown) number of Byzantine processors and \(\delta\leq\Delta\) is the actual (unknown) bound on message delay after \(GST\). ## 3. The Protocol Recall that views are grouped into sets of \(k\), so that the leader for view \(v\) is processor \(p_{i}\) where \(i=\lfloor v/k\rfloor\bmod n\). If \(v\bmod k=0\), then \(v\) is called 'initial'. To synchronise processors, we have a predetermined 'clock-time' corresponding to each view: The clock-time corresponding to view \(v\) is \(\mathsf{c}_{v}:=\Gamma v\). The rough idea is that, at certain points in the execution (and to satisfy optimistic responsiveness), we have processors instantaneously forward their clocks to some clock-time \(\mathsf{c}_{v}\) and enter view \(v\). We do this in such a way as to ensure that, if \(p\) is the correct processor whose local clock is most advanced, then there are always at least \(t\) other correct processors whose local clocks are at most \(\Gamma\) behind \(p\)'s clock. This will suffice to ensure correct leaders are able to synchronise all correct processors after \(GST\). The instructions are defined simply as follows: **When processors enter views**. Recall that, at any point in the execution, \(c(p)\) is the value of processor \(p\)'s clock. If \(v\) is initial, then \(p\) enters view \(v\) when \(c(p)=\mathsf{c}_{v}\). If \(v\) is not initial, then \(p\) enters view \(v\) if it is presently in a view \(<v\) and it receives a QC (formed by the underlying protocol) for view \(v-1\).
**View Certificates**. When a correct processor \(p\) enters a view \(v\) which is initial, it sends a \(\mathtt{view}\,v\) message to \(\mathsf{lead}(v)\). This message is just the value \(v\) signed by \(p\). Once \(\mathsf{lead}(v)\) receives \(t+1\) \(\mathtt{view}\,v\) messages from distinct processors, it combines these into a single threshold signature, which is a view certificate (VC) for view \(v\), and sends this VC to all processors.2 Footnote 2: It is convenient throughout to assume that when a leader sends a message to all processors, this includes itself. **When processors forward clocks**. At any point in the execution, if a correct processor \(p\) receives a QC for view \(v-1\) (formed by the underlying protocol) or a VC for view \(v\), and if \(c(p)<\mathsf{c}_{v}\), then \(p\) instantaneously forwards its clock to \(\mathsf{c}_{v}\). Pseudocode for the protocol is given in Algorithm 1. Roughly, if some correct processor enters an initial view \(v\) with correct leader after \(GST\), then the condition on local clocks, described above, means that \(t\) other correct processors will also enter view \(v\) within a short time. Since \(\mathsf{lead}(v)\) only requires \(t+1\) signatures to form a VC for view \(v\), all correct processors will then receive a VC for view \(v\) within a short time. The underlying protocol will then have \(\mathsf{lead}(v)\) put together a QC for view \(v\).

```
Local variables
  c(p), initially 0                      ▷ This is the value of p's clock
  v, initially 0                         ▷ This is the present view of p

Global parameters
  n                                      ▷ Number of processors
  t                                      ▷ Largest integer < n/3
  k := 3                                 ▷ Can take larger values
  c_{v'} := v'Γ for v' ∈ ℕ≥0             ▷ Defines clock times
  lead(v') := p_i for i = ⌊v'/k⌋ mod n   ▷ Specifies leaders

Upon c(p) == c_{v'} for v' initial:
  Set v := v'
  Send a view v message to lead(v)

Upon first seeing a QC for view v' ≥ v:
  Set v := v' + 1
  If c(p) < c_{v'+1}, set c(p) := c_{v'+1}

Upon first seeing a VC for initial view v' > v:
  Set v := v'
  If c(p) < c_{v'}, set c(p) := c_{v'}

If p == lead(v') for v' ≥ v, then:
  Upon first seeing view v' messages from t+1 distinct processors:
    Form a VC for view v' and send to all processors
```
**Algorithm 1** The instructions for processor \(p\).
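To make the event-driven structure of Algorithm 1 concrete, here is a minimal single-processor sketch in Python. It is our own illustration rather than code from the paper: the class and method names are invented, networking and threshold signatures are stubbed out, and the underlying SMR protocol is abstracted behind the QC and VC events.

```
class FeverProcessor:
    """A sketch of Algorithm 1 for one processor (illustrative only)."""

    def __init__(self, my_index: int, n: int, gamma: float, k: int = 3):
        self.my_index = my_index
        self.n = n                        # number of processors
        self.t = (n - 1) // 3             # largest integer < n/3
        self.gamma = gamma                # Gamma (assumed equal to x * Delta)
        self.k = k                        # views per leader, k >= 3
        self.clock = 0.0                  # c(p); only ever moves forward
        self.view = 0                     # present view v
        self.view_msgs = {}               # leader-side tally: view -> senders

    def clock_time(self, v: int) -> float:
        return v * self.gamma             # c_v := Gamma * v

    def leader(self, v: int) -> int:
        return (v // self.k) % self.n     # lead(v) = p_i, i = floor(v/k) mod n

    def is_initial(self, v: int) -> bool:
        return v % self.k == 0

    def on_clock_reaches(self, v: int) -> None:
        # Upon c(p) == c_v for v initial: enter v and send a 'view v' message.
        if self.is_initial(v) and self.clock >= self.clock_time(v):
            self.view = v
            self.send_view_message(self.leader(v), v)

    def on_qc(self, v: int) -> None:
        # Upon first seeing a QC for view v >= current view: enter v + 1,
        # forwarding the clock to c_{v+1} if it is behind.
        if v >= self.view:
            self.view = v + 1
            self.clock = max(self.clock, self.clock_time(v + 1))

    def on_vc(self, v: int) -> None:
        # Upon first seeing a VC for an initial view v > current view.
        if self.is_initial(v) and v > self.view:
            self.view = v
            self.clock = max(self.clock, self.clock_time(v))

    def on_view_message(self, sender: int, v: int) -> None:
        # Leader side: after 'view v' messages from t + 1 distinct
        # processors, form a VC for view v and send it to everyone.
        if self.leader(v) == self.my_index and v >= self.view:
            tally = self.view_msgs.setdefault(v, set())
            tally.add(sender)
            if len(tally) == self.t + 1:
                self.broadcast_vc(v)

    def send_view_message(self, leader_index: int, v: int) -> None:
        pass  # stub: a signed point-to-point message in a real system

    def broadcast_vc(self, v: int) -> None:
        pass  # stub: a threshold signature combining t + 1 'view v' messages
```

Note that the clock updates use `max`, so local clocks only ever move forward; this monotonicity is exactly what the proofs in Section 4 rely on.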
## 4. The Proofs It is immediate from the instructions that if a correct processor enters a view \(v\) then it cannot subsequently enter any lower view. Recall that, at any point \(\mathsf{t}\) in an execution, \(T(\mathsf{t}):=\{c(p):\ p\text{ is correct}\}\). Our condition for 'minimal initial clock synchronisation' required that a certain condition \((\dagger_{\Gamma,0})\) holds at the start of the protocol execution. This condition requires that if \(p\) is the correct processor whose local clock is most advanced, then at least \(t\) other correct processors have clocks that are at most \(\Gamma\) behind \(p\)'s clock. The key to the proof is to show that an analogous condition then holds at all times. **Lemma 4.1**.: _For all \(\mathsf{t}\) the following condition \((\dagger_{\Gamma,\mathsf{t}})\) holds: for any_ \(\mathsf{c}\in T(\mathsf{t})\)_:_ \[|\{c^{\prime}\in T(\mathsf{t}):\ c^{\prime}\geq\mathsf{c}-\Gamma\}|\geq t+1.\] Before proving Lemma 4.1, we note that the lemma does _not_ place any bound on the maximum difference between the local clocks of correct processors. In fact, even if all clocks are initially perfectly synchronised, the local clocks of two correct processors can move arbitrarily far apart prior to \(GST\). Nevertheless, the fact that \((\dagger_{\Gamma,\mathsf{t}})\) holds for all \(\mathsf{t}\) will suffice to establish Theorem 1. Proof.: (Lemma 4.1) Since the local clocks of correct processors only ever move forward, it follows that at any point in an execution, if a correct processor \(p\) has already contributed to a QC or a VC for view \(v\), then \(c(p)\geq c_{v}\). To prove that \((\dagger_{\Gamma,\mathsf{t}})\) holds for all \(\mathsf{t}\), suppose towards a contradiction that there is a first point of the execution, \(\mathsf{t}\) say, at which there exists some correct processor \(p\) such that \(|\{c\in T(\mathsf{t}):\;c\geq c(p)-\Gamma\}|<t+1\). Then \(p\) must forward its clock at \(\mathsf{t}\). There are two possibilities: 1. \(p\) forwards its clock because it receives a VC for some view \(v\) with \(c_{v}>c(p)\). In this case, there must exist at least one correct processor \(p^{\prime}\neq p\) which contributed to the VC for view \(v\). By the choice of \(\mathsf{t}\), when \(p^{\prime}\) contributed to the VC at \(\mathsf{t}^{\prime}\leq\mathsf{t}\) we had \(|\{c\in T(\mathsf{t}^{\prime}):c\geq c(p^{\prime})-\Gamma\}|\geq t+1\). Since \(c(p^{\prime})\geq c_{v}\) when it contributed to the VC, and since \(c(p)=c_{v}\) at \(\mathsf{t}\), at \(\mathsf{t}\) we have that \(|\{c\in T(\mathsf{t}):c\geq c(p)-\Gamma\}|\geq t+1\) also, which gives the required contradiction. 2. \(p\) forwards its clock because it sees a QC. In this case, at least \(t+1\) correct processors must have contributed to the QC, which directly gives the required contradiction. **Lemma 4.2**.: _If \(v\) is initial and \(\mathsf{t}\) is the first time any correct processor enters a view \(\geq v\):_ 1. _A correct processor enters view_ \(v\) _at_ \(\mathsf{t}\)_;_ 2. _No correct processor enters any view_ \(v^{\prime}>v\) _at_ \(\mathsf{t}\)_, and;_ 3. \(c(p)\leq c_{v}\) _for all correct_ \(p\) _at_ \(\mathsf{t}\)_._ Proof.: Consider the first time any correct processor \(p\) enters a view \(v^{\prime}\geq v\). It cannot be because \(p\) sees a VC for view \(v^{\prime}\), because some correct processor must then have contributed to that VC and already have been in view \(v^{\prime}\). It cannot be because \(p\) sees a QC for view \(v^{\prime}-1>v-1\), because \(t+1\) correct processors must have already contributed to that QC. It follows that the first view \(v^{\prime}\geq v\) entered by any correct processor is \(v\). When the first correct processor \(p\) enters view \(v\) we have \(c(p)=c_{v}\) (either simply because it reaches this value, or else because \(p\) sees a QC for view \(v-1\)), and \(c(p^{\prime})\leq c_{v}\) for all correct \(p^{\prime}\) at this point. **Definition 4.3**.: Let \(\mathsf{t}(v)\) be the first time at which a correct processor enters view \(v\).
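Operationally, \((\dagger_{\Gamma,\mathsf{t}})\) is just a predicate on the snapshot \(T(\mathsf{t})\) of correct clock values. The following small helper (our own illustration, not from the paper) makes the condition of Lemma 4.1 executable, e.g. for testing an implementation such as the sketch above:

```
def dagger_holds(clocks, gamma, t):
    """Condition (dagger_{Gamma,t}): for every clock value c in the
    snapshot T(t), at least t + 1 clocks (counting c itself) are >= c - Gamma."""
    return all(
        sum(1 for c2 in clocks if c2 >= c - gamma) >= t + 1
        for c in clocks
    )

# With t = 1 and gamma = 2.0: [10.0, 9.0, 3.0] satisfies the condition
# (two clocks lie within 2.0 of the most advanced one), while
# [10.0, 3.0, 3.0] violates it, since only one clock is >= 10.0 - 2.0.
assert dagger_holds([10.0, 9.0, 3.0], 2.0, 1)
assert not dagger_holds([10.0, 3.0, 3.0], 2.0, 1)
```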
Since correct processors enter an unbounded number of views, it follows from Lemma 4.2 that if \(v\) is initial then \(\mathsf{t}(v)\downarrow\) and \(\mathsf{t}(v^{\prime})>\mathsf{t}(v)\) whenever \(v^{\prime}>v\) and \(\mathsf{t}(v^{\prime})\downarrow\). Note also that if \(v\) is initial then, for \(j\in(0,k)\), a QC for view \(v+j\) cannot be formed prior to the formation of a QC for view \(v+j-1\). This follows because (since \(v\) is initial, and for \(j\) in the given range) correct processors do not enter view \(v+j\) without seeing a QC for view \(v+j-1\). The next lemma will be used to show that correct processors spend a sufficiently long time in each view that a correct leader after \(GST\) will be able to produce QCs. **Lemma 4.4**.: _Suppose \(v\) is initial. For each \(j\in[0,k)\), let \(\mathsf{s}_{j}\) be the first time (if there exists such) at which a correct processor sees a QC for view \(v+j\). The first time at which any correct processor enters view \(v+k\) is the minimum amongst the values \(\{\mathsf{t}(v)+k\Gamma\}\cup\{\mathsf{s}_{j}+(k-1-j)\Gamma:\;\mathsf{s}_{j}\downarrow\}\)._ Proof.: By Lemma 4.2, some correct processor \(p\) enters \(v\) at \(\mathsf{t}(v)\), and all correct processors \(p^{\prime}\) have \(c(p^{\prime})\leq c_{v}\) at this point. As we reasoned in the proof of Lemma 4.2, it cannot be the case that the first time any correct processor enters a view \(v^{\prime}\geq v+k\) it is because it sees a VC for the view or a QC for \(v^{\prime}-1\geq v+k\). It follows that the first time a correct processor \(p\) enters view \(v+k\) it is because its local clock has reached \(\mathsf{c}_{v+k}\). This happens either because \(p\) saw a QC for view \(v+j\) (\(j\in[0,k)\)) and then time \((k-1-j)\Gamma\) passed (meaning zero time if \(j=k-1\)), or else because \(p\) was the first correct processor to enter view \(v\) and time \(k\Gamma\) passed since that point. With Lemmas 4.1, 4.2 and 4.4 in place, the basic intuition behind the idea that a correct leader will produce a QC after \(GST\) is clear. Let \(\mathsf{lead}(v)\) be correct and such that no correct processor enters view \(v\) prior to \(GST\). From Lemma 4.2, it follows that no correct processor enters any view \(v^{\prime}>v\) prior to \(\mathsf{t}(v)\). By Lemma 4.1, at least \(t+1\) correct processors will have entered view \(v\) within time \(\Gamma\) of \(\mathsf{t}(v)\) - by Lemma 4.4, no correct processor will be in any view \(>v\) prior to the first of \(\mathsf{t}(v)+k\Gamma\) or else the formation of a QC for view \(v\). The correct processor \(\mathsf{lead}(v)\) will then form a VC for view \(v\), and all correct processors will be in view \(v\) by time \(\mathsf{t}(v)+\Gamma+2\Delta\) unless a QC for view \(v\) has already been formed by this point. This means that all processors will receive a QC for view \(v\) by time \(\mathsf{t}(v)+2\Gamma+2\Delta\). Since \(\Gamma\geq 2\Delta\) and \(k\geq 3\), this suffices.
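The timeline just sketched is easy to check numerically. The following throwaway computation (our own, with an arbitrary example value of \(\Delta\)) confirms that the QC arrives within the \(k\Gamma\) window guaranteed by Lemma 4.4 whenever \(\Gamma\geq 2\Delta\) and \(k\geq 3\):

```
# Timeline from t(v) for a correct lead(v) after GST, as in the paragraph
# above. delta is an arbitrary example value; the final assertion holds for
# any delta > 0, gamma >= 2 * delta and k >= 3, since 2*gamma + 2*delta
# <= 3*gamma <= k*gamma.
delta = 1.0
gamma = 2 * delta
k = 3

t_v = 0.0  # time at which the first correct processor enters view v
timeline = {
    "t + 1 correct processors have entered view v": t_v + gamma,
    "lead(v) has sent a VC for view v": t_v + gamma + delta,
    "all correct processors are in view v": t_v + gamma + 2 * delta,
    "all correct processors have a QC for view v": t_v + 2 * gamma + 2 * delta,
}
# No correct processor can be past view v + k - 1 before t(v) + k*gamma:
assert timeline["all correct processors have a QC for view v"] <= t_v + k * gamma
```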
Now let us see the details. In the below, we prove more than the fact that a correct \(\mathsf{lead}(v)\) will produce a QC for one of the views in \([v,v+k)\). We show that \(\mathsf{lead}(v)\) will produce QCs for multiple successive views if \(k\) is large enough, since this will be useful in some implementations (such as chained implementations of Hotstuff etc). **Lemma 4.5**.: _Suppose \(v\) is initial, \(\mathsf{lead}(v)\) is correct, and that \(\mathsf{t}(v)\geq GST\). Then correct processors will see QCs for all views in \([v,v+k-2)\) before entering view \(v+k\)._ Proof.: By Lemma 4.2, no correct processor has entered any view \(v^{\prime}>v\) at \(\mathsf{t}(v)\). By Lemma 4.1, \((\dagger_{\Gamma,\mathsf{t}(v)})\) is satisfied, which means at least \(t+1\) \(\mathtt{view}\,v\) messages will have been sent to \(\mathsf{lead}(v)\) by \(\mathsf{t}(v)+\Gamma\) - by Lemma 4.4, no correct processor will be in any view \(>v\) prior to the point at which these \(t+1\) \(\mathtt{view}\,v\) messages have been sent to \(\mathsf{lead}(v)\). Then \(\mathsf{lead}(v)\) will have sent out a VC for view \(v\) by \(\mathsf{t}(v)+\Gamma+\Delta\), which will be received by all correct processors by time \(\mathsf{t}(v)+\Gamma+2\Delta\). It then follows from Lemma 4.4, and since \(\Gamma\geq 2\Delta\), that a QC for each view \(v+j\) with \(j\in[0,k-2)\) will be seen by all correct processors by time \(\mathsf{t}(v)+\Gamma+2\Delta+(j+1)\Gamma\), prior to any point at which a correct processor enters view \(v+k\). **Lemma 4.6**.: _The worst-case word complexity is \(O(fn+n)\) and the worst-case latency is \(O(\Delta f+\delta)\)._ Proof.: We deal with the word complexity first. Let \(p\) be the correct processor whose clock is most advanced at \(GST\) (breaking ties arbitrarily). Suppose \(p\) is in view \(v\) at \(GST\). Let \(v_{0}\) be the greatest initial view \(<v\) such that \(\mathsf{lead}(v_{0})\neq\mathsf{lead}(v)\) and \(\mathsf{lead}(v_{0})\) is correct. Let \(v_{1}\) be the least initial view \(>v\) such that \(\mathsf{lead}(v_{1})\) is correct. Since no correct processor will enter any view \(>v_{0}\) prior to the least of \(\mathsf{t}(v_{0})+k\Gamma\) or the first time at which a correct processor sees a QC for view \(v\), and since \((\dagger_{\Gamma,\mathsf{t}(v_{0})})\) is satisfied, \(\mathsf{lead}(v_{0})\) must have sent a VC for view \(v_{0}\) to all processors prior to \(GST\). All correct processors will therefore be in at least view \(v_{0}\) by \(GST+\Delta\). Lemma 4.5 shows that all correct processors will see a QC for view \(v_{1}\) before entering view \(v_{1}+k\). Let \(f^{*}\) be the number of Byzantine leaders for initial views in the interval \((v_{0},v_{1})\). Correct processors will send a maximum of \(2(f^{*}+3)n\) many \(\mathtt{view}\) messages (combined) between \(GST+\Delta\) and the time at which \(\mathsf{lead}(v_{1})\) produces a QC for view \(v_{1}\). If the underlying protocol has correct processors send \(O(n)\) messages per view (e.g. Hotstuff), then the underlying protocol will also have correct processors send \(O((f^{*}+3)n)\) messages during this interval. So the worst-case word complexity is \(O(fn+n)\), as required. Next, we consider the worst-case latency. If \(f>0\), then it suffices to observe that \(\mathsf{lead}(v_{1})\) will produce a QC for view \(v_{1}\) by time \(GST+k(f^{*}+3)\Gamma\). So suppose \(f=0\) and consider the number \(d\) of correct processors in view \(v\) at \(GST\). If \(d\geq t+1\), then \(\mathsf{lead}(v)\) will produce a QC for view \(v\) within time \(O(\delta)\) according to our assumptions on the underlying protocol, unless at least \(t+1\) processors enter view \(v+k\) before this occurs. In the latter case, \(\mathsf{lead}(v+k)\) will produce a QC for view \(v+k\) within time \(O(\delta)\). If \(d<t+1\), then (the previous leader) \(\mathsf{lead}(v_{0})\) will produce a QC for view \(v\) within time \(O(\delta)\), unless at least \(t+1\) processors enter view \(v\) before this occurs.
In the latter case, \(\mathsf{lead}(v)\) will produce a QC for view \(v\) within time \(O(\delta)\). Lemma 4.6 completes the proof of Theorem 1. We finish this section by justifying the entries of Table 1, which state that the expected latency is \(O(\Delta)\) and the expected word complexity is \(O(n)\) with a static adversary according to the model of (Kumar et al., 2017). According to this model, leaders are given by successive random permutations of the set of all processors. The adversary is static and _oblivious_, which means that it must choose which processors to corrupt at the start of the protocol execution without knowledge as to the random sequence of leaders, and must also choose \(GST\) without this knowledge. In this case, the expected value of \(f^{*}\) from the proof of Lemma 4.6 is \(O(1)\), which means we get expected latency \(O(\Delta)\) and expected word complexity \(O(n)\), as required. ## 5. Tying up loose ends ### A note on optimistic responsiveness and Byzantine leaders In the proof of Lemma 4.6, it was only actually the number of Byzantine _leaders_ before the first correct leader after \(GST\) that mattered (rather than the total number of Byzantine parties) in establishing that the worst-case word complexity is \(O(fn+n)\). The proof that the worst-case latency is \(O(f\Delta+\delta)\) was somewhat more subtle, and considered the total number of Byzantine processors. Roughly, the difficulty occurs when none of the relevant leaders are Byzantine, but a correct processor has just entered initial view \(v\) at \(GST\), while all other correct processors are still in previous views. In this scenario, the previous leader cannot produce a QC (if \(t\) parties are Byzantine while not in their role as leader). Meanwhile, the leader for view \(v\) has to wait time \(\Gamma+\delta\) before producing a VC. As we argued in the proof of Lemma 4.6, this is not an issue if we let \(f\) count the total number of Byzantine parties. Once a first correct leader produces a QC after \(GST\), however, this subtlety is no longer relevant. Let \(\mathsf{lead}(v)\) be correct and such that \(\mathsf{t}(v)\geq GST\). Let \(v^{\prime}\) be the least view \(>v\) with a correct leader, and suppose that the number of initial views with Byzantine leader in the interval \((v,v^{\prime})\) is \(f^{*}\). Then the proof of Lemma 4.6 is easily modified to show that correct processors send \(O(f^{*}n+n)\) many words between the times at which \(\mathsf{lead}(v)\) and \(\mathsf{lead}(v^{\prime})\) produce QCs, and that the time between these events is \(O(f^{*}\Delta+\delta)\). ### Revisiting the assumptions regarding clock synchronisation In Section 2 we assumed that all processors have identical clock speeds. We now consider to what extent we can relax this condition. As we do so, we also consider how realistic the required assumptions are in the context of reasonable bounds on network delays, the length of periods of asynchrony etc., and in a context where atomic clocks are available for use by processors. Recall that atomic clocks can reasonably be assumed to have error less than 1 second every 100 million years.3 Footnote 3: See, for example, [https://en.wikipedia.org/wiki/Atomic_clock](https://en.wikipedia.org/wiki/Atomic_clock) In the partial synchrony model it is only for the sake of technical convenience that we consider a single period of asynchrony and then a single period of synchrony after \(GST\).
In reality, we are interested in contexts where network conditions oscillate between synchrony and asynchrony. We require our protocols to maintain consistency during periods of asynchrony, and to be live during periods of synchrony. To ensure that our analysis extends to such a scenario, let us therefore consider our requirements as the network oscillates between periods of synchrony and asynchrony in this fashion. Fix \(k\coloneqq 3\). A similar analysis will also apply for larger values of \(k\). Let us say an open interval \((t,t^{\prime})\) is synchronous if every message sent in this interval arrives within time \(\Delta\). Let \(\ell\coloneqq 3(t+3)\Gamma\), where \(t\) is the bound on the number of Byzantine processors. If \((\dagger_{\Gamma,t^{\prime}})\) holds for all \(t^{\prime}\in I\coloneqq(t,t+\ell)\) and if \(I\) is synchronous, the proofs of Section 4 established that some correct leader will produce a QC during interval \(I\) and send this to all processors. With this in mind, we inductively define a sequence of times \((t_{i})_{i\geq 0}\) as follows: * Let \(t_{0}\) be least such that \((t_{0},t_{0}+\ell)\) is synchronous. * Given \(t_{i}\), let \(t_{i+1}\) be the least \(t\geq t_{i}+\ell\) such that \((t,t+\ell)\) is synchronous. We suppose that every \(t_{i}\) is defined. For the sake of making things concrete, it is also useful to stipulate some specific values - a similar argument will hold for comparable values: * Suppose \(\Delta=1\) second. * Suppose \(t_{0}<10^{5}\) years and the maximum value \(t_{i+1}-t_{i}\) is less than \(10^{5}\) years. * For view \(v\), suppose \(\mathsf{t}\) is the first time at which \(t+1\) correct processors are in view \(v\), and that \(\mathsf{t}^{\prime}\) is the first time at which a correct processor sees a QC for view \(v\). Define \(u_{v}\coloneqq\mathsf{t}^{\prime}-\mathsf{t}\). When \(u_{v}\) is defined, we suppose it always has at least the minimum value \(u\). For the sake of concreteness, we suppose \(u=10^{-2}\) seconds. * Suppose that \((\dagger_{\Delta,0})\) holds, and that \(\Gamma=2\Delta\). Then we claim \((\dagger_{\Gamma,t})\) holds for all \(t\) - this is the condition required to ensure that every interval \((t_{i},t_{i}+\ell)\) has a correct leader produce a QC. Towards a contradiction, suppose there exists a least value \(i^{*}\) such that \((\dagger_{\Gamma,t})\) fails to hold for some \(t^{*}\) in the interval \([t_{i^{*}},t_{i^{*}+1})\). For each \(t\), let \(\Gamma(t)\) be the smallest \(\Gamma^{\prime}\) such that \((\dagger_{\Gamma^{\prime},t})\) holds. Note that: * Every interval \((t_{i},t_{i}+\ell)\) such that \(i<i^{*}\) has at least one correct processor synchronise the clocks of correct processors to within time \(\Delta\), i.e. \((\dagger_{\Delta,t})\) holds for some \(t\) in this interval. * When a correct processor sees a QC and forwards its clock at \(t\), this may cause \(\Gamma(t)\) to increase, e.g. if the leader is Byzantine and only sends the QC to certain processors, or if the QC is not sent during a synchronous interval. In this case, however, the maximum value of \(\Gamma(t)\) is still at most \(\Gamma-u\). * If a processor forwards its clock because it sees a VC at \(t\), this does not increase \(\Gamma(t)\). Define \(t\coloneqq t_{i^{*}-1}+\ell\) if \(i^{*}\neq 0\), and define \(t\coloneqq 0\) if \(i^{*}=0\). Let \(t^{*}\) be defined as above. We conclude that \(\Gamma(t)\) is at most \(\max\{\Delta,\Gamma-u\}\), to within a small error term which is the maximum drift of clocks within an interval of length \(\ell\).
Since we suppose \(u=10^{-2}\) seconds, since our clocks have drift at most \(1\) second every \(100\) million years, and since some clocks may drift slow while others drift fast, this means that \(t^{*}-t>5\times 10^{5}\) years. This gives the required contradiction, since we assumed above that \(t_{0}<10^{5}\) years and \(t_{i+1}-t_{i}\) is less than \(10^{5}\) years for all \(i\). ## 6. Concluding Comments We have defined Fever, which is a novel view synchronisation protocol. If \(n\) is the number of processors and \(t\) is the largest integer \(<n/3\), then Fever has resilience \(t\), and in all executions with at most \(0\leq f\leq t\) Byzantine parties and network delays of at most \(\delta\leq\Delta\) after \(GST\) (where \(f\) and \(\delta\) are unknown), Fever has worst-case word complexity \(O(fn+n)\) and worst-case latency \(O(\Delta f+\delta)\). This improves significantly on the state-of-the-art. The trade-off is that Fever requires stronger assumptions than previous view synchronisation protocols regarding the drift of clocks prior to \(GST\). We have argued in Section 5 that there are scenarios in which our required assumptions are reasonable. Atomic clocks can now be purchased for a few thousand US dollars, and we showed that under reasonable assumptions regarding network latency etc., a system implementing Fever will be able to handle periods of asynchrony of the order of \(10^{5}\) years. Of course, this is more than is reasonably required, and so even the use of less accurate clocks may suffice in many scenarios. Since there will certainly also be scenarios in which these additional assumptions are undesirable, it is a natural question as to whether they are necessary: **Question 1**: _Does there exist a view synchronisation protocol for the partial synchrony model that achieves the same efficiency bounds as Fever, but which can accommodate unbounded clock drift prior to GST?_
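As a closing footnote to the Section 5.2 calculation, the \(5\times 10^{5}\)-year figure is easy to reproduce. The arithmetic below is our own restatement of the argument: the slack \(u\) must be eaten up by two clocks drifting apart at a combined rate of twice the single-clock drift rate.

```
# Reproducing the back-of-the-envelope bound from Section 5.2.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
DRIFT_RATE = 1.0 / (1e8 * SECONDS_PER_YEAR)  # 1 second per 100 million years

u = 1e-2  # minimum time (seconds) between t+1 processors entering a view
          # and the first QC for that view

# Two clocks, one drifting fast and one slow, separate at rate
# 2 * DRIFT_RATE; the time needed for the clock spread to grow by u is:
years_to_lose_u = (u / (2 * DRIFT_RATE)) / SECONDS_PER_YEAR
print(years_to_lose_u)  # 500000.0, i.e. t* - t > 5 * 10^5 years
```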
2308.14465
A note on $UFD$
We search for principal ideals. As a sample, let $R$ be a strongly-normal, almost-factorial, and complete-intersection local ring with a prime ideal $P$ of height one. If $depth(R/ P)\geq dim R-2$, we show $P$ is principal. As an immediate corollary, we apply some easy local cohomology arguments and reprove a celebrated theorem of Auslander-Buchsbaum, simplifying a result of Dao and Samuel. From this, we show the hypersurface property of rings of multiplicity at most three. As another application, we answer affirmatively a question posted by Braun.
Mohsen Asgharzadeh
2023-08-28T10:02:52Z
http://arxiv.org/abs/2308.14465v2
# A note on UFD

###### Abstract.

We search for principal ideals. As a sample, let \(R\) be a strongly-normal, almost-factorial and complete-intersection local ring with a prime ideal \(\mathfrak{p}\) of height one. If \(\operatorname{depth}(R/\mathfrak{p})\geq\dim R-2\), we show \(\mathfrak{p}\) is principal. As an immediate corollary, we apply some easy local cohomology arguments and reprove a celebrated theorem of Auslander-Buchsbaum, simplifying a result of Dao and Samuel. From this, we show the hypersurface property of rings of multiplicity at most three. As another application, we answer affirmatively a question posed by Braun.

Key words and phrases: Complete-intersection; local cohomology; hypersurface; UFD

2010 Mathematics Subject Classification: Primary 13D45

###### Contents

* 1 Introduction
* 2 Dao's 3-dimensional hypersurfaces
* 3 A new proof that regular rings are UFD
* 4 A problem by Samuel
* 5 Relative situations
* 6 A question by Braun

## 1. Introduction

The unique factorization property of regular local rings was asked about by Krull and settled by Auslander-Buchsbaum [10]. Despite a lot in common, there are essential gaps between UFD and regularity. In this regard, Samuel conjectured and Grothendieck proved:

**Theorem 1.1**.: _(Grothendieck 1961) Let \((R,\mathfrak{m})\) be a local complete-intersection domain. If \(R_{P}\) is UFD for all \(P\) of height \(\leq 3\), then \(R\) is UFD._

For another presentation, see e.g. [4]. In fact, Samuel asked the problem for hypersurfaces, see [27]. A natural question arises:

**Problem 1.2**.: _Is the UFD property in co-dimension 3 essential?_

Let \(k\) be a field of characteristic different from \(2\), and \(f\) a non-degenerate quadratic form in \(S_{n}:=k[X_{1},\dots,X_{n}]\). Samuel proved that

* either \(\operatorname{Cl}(S_{3}/(f))=\mathbb{Z}/2\mathbb{Z}\) or else \(R\) is factorial. If \(k\) is algebraically closed then \(\operatorname{Cl}(S_{3}/(f))\) is \(\mathbb{Z}/2\mathbb{Z}\).
* \(\operatorname{Cl}(S_{4}/(f))\) is either infinite cyclic or zero. It is infinite cyclic if \(k\) is algebraically closed.

Here, \(\operatorname{Cl}(-)\) means the divisor class group of \((-)\); a worked instance of the three-variable case is sketched below. The book [19] discusses this. According to these examples, we are forced to impose some additional assumptions. In this regard, we present an elementary proof of:

_Observation 1.3_.: (Dao) Let \((S,\mathfrak{n})\) be an equicharacteristic or unramified regular local ring of dimension four. Let \(R\) be such that \(\widehat{R}=S/(f)\) for some \(f\in\mathfrak{n}\). If \(R\) is almost factorial with isolated singularity, then \(R\) is UFD.

The proof uses some easy properties of local cohomology modules. This enables us to reprove the mentioned theorem of Auslander-Buchsbaum (see §3). Our next goal is to understand the following amazing problem of Samuel:

**Problem 1.4**.: _Let \(R\) be a \(d\)-dimensional Cohen-Macaulay ring with isolated singularity and \(d>3\). When is \(R\) UFD?_

We analyze this. Our investigation has an immediate application:

**Corollary 1.5**.: _Let \((R,\mathfrak{m})\) be a complete ring of depth at least \(3\) containing \(\mathbb{Q}\) with isolated singularity where \(\dim R>4\). If \(e(R)\leq 3\) then \(R\) is hypersurface._

This extends a result of Huneke [22] who investigated \(e(R)<3\). We denote the set of prime ideals of height one by \(\operatorname{Spec}^{1}(R)\).
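To make Samuel's dichotomy concrete, here is the standard worked instance in three variables (well-known, and included only for illustration; it is not taken from this note). Let \(k\) be algebraically closed of characteristic \(\neq 2\) and consider the quadric cone

\[R=k[x,y,z]/(xy-z^{2}),\qquad\mathfrak{p}=(x,z)R\in\operatorname{Spec}^{1}(R).\]

Since \(z^{2}=xy\) in \(R\), one computes \(\mathfrak{p}^{2}=(x^{2},xz,z^{2})=x\cdot(x,y,z)\), so the divisor of \(x\) is \(2\mathfrak{p}\) and hence \(\mathfrak{p}^{(2)}=(x)\). Thus \(2[\mathfrak{p}]=0\) in \(\operatorname{Cl}(R)\), while \(\mathfrak{p}\) itself is not principal; in fact \(\operatorname{Cl}(R)=\mathbb{Z}/2\mathbb{Z}\), matching the first item above after diagonalizing the quadratic form.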
It may be nice to mention that Auslander and Buchsbaum essentially focused on \(\mathfrak{p}\in\operatorname{Spec}^{1}(R)\) of finite projective dimension with \(\operatorname{depth}(R/\mathfrak{p})\geq\dim R-2\) (see [10, Cor 2]). In §5 we deal with the Gorenstein analogue. Also, we show:

**Theorem 1.6**.: _Let \((R,\mathfrak{m})\) be a strongly-normal, almost factorial complete-intersection ring and \(\mathfrak{p}\in\operatorname{Spec}^{1}(R)\). If \(\operatorname{depth}(R/\mathfrak{p})\geq\dim R-2\), then \(\mathfrak{p}\) is principal._

This theorem slightly extends a recent result of Cesnavicius-Scholze [13, Theorem 1.1.3]. Finally, we apply our easy local cohomology arguments, and affirmatively answer the following question posed by Braun:

_Question 1.7_.: (Braun, [11, Question 16]) Let \((R,\mathfrak{m})\) be a normal domain and \(I\lhd R\) a reflexive ideal with \(\operatorname{id}_{R}(I)<\infty\). Is \(I\) isomorphic to a canonical module?

We remark that this is related to our investigation of principal ideals, and is in fact dual to it.

## 2. Dao's 3-dimensional hypersurfaces

In this note \((R,\mathfrak{m},k)\) is a commutative noetherian local ring, and modules are finitely generated, unless otherwise specified. The notation \(\operatorname{pd}_{R}(-)\) stands for the projective dimension of \((-)\).

_Fact 2.1_.: The ring \(R\) is UFD iff every height one prime ideal is principal.

The \(i^{th}\) local cohomology module of \(M\) with respect to an ideal \(\mathfrak{a}\) is defined by \(\operatorname{H}^{i}_{\mathfrak{a}}(M):=\varinjlim_{n}\operatorname{Ext}^{i}_{R}(R/\mathfrak{a}^{n},M)\).

**Lemma 2.2**.: _(See [5, Lemma 3.2]) Assume \(t\) is an integer such that \(2\leq t\leq\operatorname{depth}(N)\) and \(\operatorname{Supp}_{R}(\operatorname{Ext}^{i}_{R}(M,N))\subseteq\{\mathfrak{m}\}\) for all \(i=1,\dots,t-1\). There is an injection \(\operatorname{Ext}^{t-1}_{R}(M,N)\hookrightarrow\operatorname{H}^{t}_{\mathfrak{m}}(\operatorname{Hom}_{R}(M,N))\)._

Recall from Auslander [9] that \(M\) is called tor-rigid provided that the vanishing of a single \(\operatorname{Tor}^{R}_{j}(M,N)\) for some \(N\in mod(R)\) and for some \(j\geq 1\) forces the vanishing of \(\operatorname{Tor}^{R}_{i}(M,N)\) for all \(i\geq j\).

**Lemma 2.3**.: _(Jothilingam, [18, Theorem]) Assume \(R\) is a local ring and let \(M\) be tor-rigid. If \(\operatorname{Ext}_{R}^{n}(M,M)=0\) for some \(n\geq 1\), then \(\operatorname{pd}_{R}(M)<n\)._

Here, the ring \(R\) is called almost factorial if the class group of \(R\) is torsion, i.e. \(R\) is \(\mathbb{Q}\)-factorial.

**Lemma 2.4**.: _(Dao, see [14, Theorem 2.7(3)]) Let \(R\) be a local 3-dimensional hypersurface ring such that \(\widehat{R}=S/(f)\) where \((S,\mathfrak{n})\) is an equicharacteristic or unramified regular local ring and \(f\in\mathfrak{n}\). If \(R\) is almost factorial with isolated singularity, then every finitely generated \(R\)-module is tor-rigid._

By \((G_{i})\) (resp. \((\mathrm{R}_{i})\)) we mean \(R_{\mathfrak{p}}\) is Gorenstein (resp. regular) for all \(\mathfrak{p}\in\operatorname{Spec}(R)\) of height at most \(i\). Recall that a module \(M\) satisfies \((\mathrm{S}_{i})\) if \(\operatorname{depth}(M_{\mathfrak{p}})\geq\min\{i,\dim(M_{\mathfrak{p}})\}\) for all \(\mathfrak{p}\in\operatorname{Spec}(R)\).

**Theorem 2.5**.: _Let \((R,\mathfrak{m})\) be a local 3-dimensional hypersurface ring such that \(\widehat{R}=S/(f)\) where \((S,\mathfrak{n})\) is an equicharacteristic or unramified regular local ring and \(f\in\mathfrak{n}\).
If \(R\) is almost factorial with isolated singularity, then \(R\) is UFD._

Proof.: Let \(\mathfrak{p}\in\operatorname{Spec}^{1}(R)\), i.e., a prime ideal of height one. Since the ring has an isolated singularity, \(\mathfrak{p}\) is locally principal over the punctured spectrum (see Fact 2.1). By Serre, \((\mathrm{S}_{2})+(\mathrm{R}_{1})\) characterizes normality. So, \(R\) is normal; in fact, it is strongly normal. Applying this along with the determinant trick, we observe that

\[R\subseteq\operatorname{Hom}_{R}(\mathfrak{p},\mathfrak{p})\subseteq\overline{R}=R\quad(*)\]

Here, \((-)^{*}\) means \(\operatorname{Hom}_{R}(-,R)\). As \(\mathfrak{p}=\mathfrak{p}^{**}\) we know \(\operatorname{depth}(\mathfrak{p})=\operatorname{depth}(\operatorname{Hom}_{R}(\mathfrak{p}^{*},R))\geq 2\). In view of Lemma 2.2 there is an injection

\[\operatorname{Ext}_{R}^{1}(\mathfrak{p},\mathfrak{p})\hookrightarrow\mathrm{H}_{\mathfrak{m}}^{2}(\operatorname{Hom}_{R}(\mathfrak{p},\mathfrak{p}))\stackrel{{(*)}}{{=}}\mathrm{H}_{\mathfrak{m}}^{2}(R).\]

Since \(\operatorname{depth}(R)=3\) we have \(\mathrm{H}_{\mathfrak{m}}^{2}(R)=0\). Consequently, we deduce that \(\operatorname{Ext}_{R}^{1}(\mathfrak{p},\mathfrak{p})=0\). According to Lemma 2.4, \(\mathfrak{p}\) is tor-rigid. Let us apply Jothilingam's result to deduce that \(\mathfrak{p}\) is free. As free ideals are principal, we observe that \(\mathfrak{p}\) is principal. By Fact 2.1, \(R\) is UFD.

## 3. A new proof that regular rings are UFD

In this section we apply some local cohomology and reprove a celebrated theorem of Auslander-Buchsbaum [10]. We apply their strategy, but almost everything is different from [10].

**Lemma 3.1**.: _Let \(R\) be of depth at most two, and \(\mathfrak{p}\in\operatorname{Spec}^{1}(R)\). If \(\operatorname{pd}_{R}(R/\mathfrak{p})<\infty\), then \(\mathfrak{p}\) is principal._

Proof.: Suppose first that \(\mathfrak{p}=\mathfrak{m}\). This says \(R\) is a DVR, and the claim is clear. Now, we can assume \(\mathfrak{p}\neq\mathfrak{m}\). Then \(\operatorname{depth}(R/\mathfrak{p})>0\). By the Auslander-Buchsbaum formula,

\[\operatorname{depth}(R)=\operatorname{depth}(R/\mathfrak{p})+\operatorname{pd}_{R}(R/\mathfrak{p}).\]

This gives \(\operatorname{pd}_{R}(R/\mathfrak{p})\leq 1\), and so \(\mathfrak{p}\) is free.

**Proposition 3.2**.: _Let \(\mathfrak{p}\in\operatorname{Spec}^{1}(R)\). If \(\operatorname{pd}_{R}(R/\mathfrak{p})\leq 2\), then \(\mathfrak{p}\) is principal._

Proof.: The proof is by induction on \(d:=\dim(R)\). By Lemma 3.1, we may assume \(\operatorname{depth}(R)\geq 3\). Following the induction hypothesis, \(\mathfrak{p}\) is locally free over the punctured spectrum (see Fact 2.1). For \(r\in R\), let \(\mu_{r}:R\to R\) be defined via the assignment \(x\mapsto rx\). There is a natural map \(\pi:R\to\operatorname{Hom}_{R}(\mathfrak{p},\mathfrak{p})\) sending \(r\) to \(\mu_{r}\). Since \(R\) is a domain, \(\mathfrak{p}\) is torsion-free. Consequently, the map \(\pi\) is injective. Then we have

\[0\longrightarrow R\stackrel{{\pi}}{{\longrightarrow}}\operatorname{Hom}_{R}(\mathfrak{p},\mathfrak{p})\longrightarrow C:=\operatorname{coker}(\pi)\longrightarrow 0\quad(+)\]

Since \(\mathfrak{p}\) is locally principal over the punctured spectrum, \(C\) is of finite length. By Grothendieck's vanishing theorem \(\operatorname{H}_{\mathfrak{m}}^{+}(C)=0\).
We plug this into the long exact sequence of local cohomology modules induced by \((+)\) to deduce the exact sequence

\[0=\operatorname{H}_{\mathfrak{m}}^{1}(C)\longrightarrow\operatorname{H}_{\mathfrak{m}}^{2}(R)\longrightarrow\operatorname{H}_{\mathfrak{m}}^{2}(\operatorname{Hom}_{R}(\mathfrak{p},\mathfrak{p}))\longrightarrow\operatorname{H}_{\mathfrak{m}}^{2}(C)=0\quad(\dagger)\]

Combining this with Lemma 2.2 gives

\[\operatorname{Ext}_{R}^{1}(\mathfrak{p},\mathfrak{p})\hookrightarrow\operatorname{H}_{\mathfrak{m}}^{2}(\operatorname{Hom}_{R}(\mathfrak{p},\mathfrak{p}))\stackrel{{(\dagger)}}{{=}}\operatorname{H}_{\mathfrak{m}}^{2}(R)=0.\]

Since \(\operatorname{pd}_{R}(\mathfrak{p})\leq 1\) it is rigid. Let us apply Jothilingam's result to deduce that \(\mathfrak{p}\) is free*.

Footnote *: or even without any use of Jothilingam.

**Corollary 3.3**.: _Any 3-dimensional regular ring is UFD._

**Corollary 3.4**.: _(Auslander-Buchsbaum) Any regular ring is UFD._

Proof.: Recall that the desired property reduces to the 3-dimensional case. Now, apply the previous corollary.

_Fact 3.5_.: (See [11, Theorem A]) Let \(A\) be a commutative noetherian ring and \(M\) a finitely generated \(A\)-module. Suppose that

* \(\operatorname{pd}(M)<\infty\),
* \(\operatorname{End}_{A}(M)\) is a projective \(A\)-module,
* \(M\) is reflexive.

Then \(M\) is a (locally) Gorenstein \(A\)-module.

Over normal rings we have:

_Observation 3.6_.: (Kaplansky's trick) Let \(R\) be normal, and \(\mathfrak{p}\in\operatorname{Spec}^{1}(R)\). If \(\operatorname{pd}_{R}(R/\mathfrak{p})<\infty\), then \(\mathfrak{p}\) is principal.

Proof.: It is easy to see that \(\operatorname{Hom}_{R}(\mathfrak{p},\mathfrak{p})\) is projective and \(\mathfrak{p}\) is reflexive. It remains to apply Fact 3.5.

Strongly normal means \((\operatorname{S}_{3})+(\operatorname{R}_{2})\). Let us extend Kaplansky's trick:

**Proposition 3.7**.: _Let \(R\) be strongly normal and \(\mathfrak{p}\in\operatorname{Spec}^{1}(R)\). If \(\mathfrak{p}\) is tor-rigid, then \(\mathfrak{p}\) is principal._

Proof.: The proof is by induction on \(d:=\dim(R)\). Due to the \((\operatorname{R}_{2})\) condition we may assume that \(d>2\). In the light of the \((\operatorname{S}_{3})\) condition we may and do assume that \(\operatorname{depth}(R)\geq 3\). By repeating the previous argument, \(\operatorname{Ext}_{R}^{1}(\mathfrak{p},\mathfrak{p})\hookrightarrow\operatorname{H}_{\mathfrak{m}}^{2}(\operatorname{Hom}_{R}(\mathfrak{p},\mathfrak{p}))=\operatorname{H}_{\mathfrak{m}}^{2}(R)=0\). It remains to use tor-rigidity, and conclude that \(\mathfrak{p}\) is principal.

_Observation 3.8_.: Let \(\mathfrak{p}\in\operatorname{Spec}^{1}(R)\) with \(\operatorname{pd}_{R}(R/\mathfrak{p})<\infty\). Then \(\mathfrak{p}\) is principal iff \(\mathfrak{p}\) is tor-rigid.

Proof.: It is easy to see.

**Corollary 3.9**.: _Let \((R,\mathfrak{m})\) be a local hypersurface ring such that \(\widehat{R}=S/(f)\) where \((S,\mathfrak{n})\) is a complete unramified regular local ring and \(f\) is a regular element of \(S\) contained in \(\mathfrak{n}^{2}\). Let \(\mathfrak{p}\in\operatorname{Spec}^{1}(R)\) with \(\operatorname{pd}_{R}(R/\mathfrak{p})<\infty\). Then \(\mathfrak{p}\) is principal._

Proof.: Recall from [23, Theorem 3] that every finitely generated module of finite projective dimension is tor-rigid. Now apply Observation 3.8.

The notation \(\operatorname{id}_{R}(-)\) stands for the injective dimension of \((-)\).
_Observation 3.10_.: Let \(R\) be \(3\)-dimensional, and \(\mathfrak{p}\in\operatorname{Spec}^{1}(R)\). If \(\operatorname{id}_{R}(R/\mathfrak{p})<\infty\), then \(\mathfrak{p}\) is principal.

Proof.: By a result of Peskine-Szpiro [26], \(R\) is Gorenstein. This implies that \(\operatorname{pd}_{R}(R/\mathfrak{p})<\infty\). As \(\dim(R)=3\) and \(\operatorname{depth}(R/\mathfrak{p})>0\), the Auslander-Buchsbaum formula says that \(\operatorname{pd}_{R}(R/\mathfrak{p})<3\). In view of Proposition 3.2, \(\mathfrak{p}\) is principal.

**Corollary 3.11**.: _Let \(R\) be \((\operatorname{R}_{1})\) and \(\mathfrak{p}\in\operatorname{Spec}^{1}(R)\). If \(\operatorname{id}_{R}(R/\mathfrak{p})<\infty\), then \(\mathfrak{p}\) is principal._

Proof.: Recall that \(R\) is Gorenstein. In particular, \(R\) is \((\operatorname{S}_{2})\) and so normal. Also, \(\operatorname{pd}_{R}(R/\mathfrak{p})<\infty\). Now, apply Observation 3.6.

_Remark 3.12_.: There are strongly normal rings \(R\) equipped with \(\mathfrak{p}\in\operatorname{Spec}^{1}(R)\) such that \(\operatorname{id}_{R}(\mathfrak{p})<\infty\) but \(\mathfrak{p}\) is not principal. In particular, the tor-rigidity assumption is needed in Proposition 3.7.

Proof.: It is enough to consider the case for which the canonical ideal is a prime ideal. To be more explicit, let \(S:=\mathbb{C}[[x,y,z,u,v]]\) and put \(R=S/(yv-zu,yu-xv,xz-y^{2})\). It is easy and well-known that \(R\) is \(3\)-dimensional Cohen-Macaulay, and the Jacobian criterion implies that it has an isolated singularity. In particular, \(R\) is strongly normal. It remains to note that the canonical module is \((u,v)\).

_Conjecture 3.13_.: Let \(\mathfrak{p}\in\operatorname{Spec}^{1}(R)\). If \(\operatorname{pd}_{R}(R/\mathfrak{p})<\infty\), then \(\mathfrak{p}\) is principal.

## 4. A problem by Samuel

The rings in this section are of characteristic zero. We are interested in the following problem, even if we are forced to impose some additional assumptions.

**Problem 4.1**.: _(See [27, Last line]) Let \(R\) be a \(d\)-dimensional Cohen-Macaulay ring with isolated singularity and \(d>3\). When is \(R\) UFD?_

_Remark 4.2_.: Let us collect a couple of remarks and examples:

1. Let \(k\) be a field and \(X,Y,Z,W\) be indeterminates. Let \(R\) be the algebra generated by all the monomials of degree \(3\) in \(\{X,Y,Z,W\}\). By a Gröbner basis argument, the ring \(R\) is Cohen-Macaulay and of dimension four. Since \(\operatorname{Proj}(R)\cong\mathbb{P}_{k}^{3}\), \(R\) has a graded isolated singularity. But, \(R\) is not UFD. Indeed, suppose it is UFD. On the one hand, due to a result of Murthy, see e.g. [19, Theorem 12.3], \(R\) should be Gorenstein. On the other hand, \(R\) is not Gorenstein, since otherwise \(4\equiv_{3}0\), which is impossible. In sum, \(R\) is not UFD.
2. The divisor class group of a subring of polynomials is torsion. In particular, the ring \(R\) from item 1 is almost factorial.
3. The example in item 1 is rather special, see [24, Theorem 1.1].
4. Cohen-Macaulay rings with isolated singularity are a more general version of Cohen-Macaulay rings of finite Cohen-Macaulay type. Let us ask the problem in this situation. Indeed, it is true for invariant rings: Let \(S=\mathbb{C}[[x_{1},\cdots,x_{d}]]\) and \(d>3\). Let \(G\) be a finite group acting faithfully on \(S\). Suppose \(R:=S^{G}\) is of finite Cohen-Macaulay type. Then a result of Auslander-Reiten [7] implies that the action is trivial. In particular, \(R=\mathbb{C}[[x_{1},\cdots,x_{d}]]\), which is UFD by 3.4.
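The ring from Remark 3.12, which reappears in Example 4.4 below, is determinantal. The following minimal SymPy sketch (our own illustration; the monomial parametrization is the standard one for this cone and is an assumption of the sketch, easily verified by hand) checks that its three defining relations are the \(2\times 2\) minors of a \(2\times 3\) matrix and vanish under that parametrization:

```python
import sympy as sp

x, y, z, u, v, a, b, c = sp.symbols('x y z u v a b c')

# The relations yv - zu, yu - xv, xz - y^2 defining R in Remark 3.12 are,
# up to sign, the 2x2 minors of the matrix below:
M = sp.Matrix([[x, y, u],
               [y, z, v]])
minors = [M[:, [i, j]].det() for (i, j) in [(0, 1), (0, 2), (1, 2)]]
print(minors)  # x*z - y**2, x*v - u*y, y*v - u*z (up to term order)

# Monomial parametrization of the cone (assumed here, easily checked):
# x = a^2, y = ab, z = b^2, u = ac, v = bc.
sub = {x: a**2, y: a*b, z: b**2, u: a*c, v: b*c}
print([sp.expand(m.subs(sub)) for m in minors])  # [0, 0, 0]
```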
By \(\mu(-)\) we mean the minimal number of elements needed to generate \((-)\).

**Proposition 4.3**.: _Let \((R,\mathfrak{m},k)\) be a \(d\)-dimensional, Cohen-Macaulay complete ring containing \(\mathbb{Q}\) and satisfying \((\mathrm{R}_{d-1})\). If \(d>4\) and \(\mu(\mathfrak{m})\leq d+2\), then \(R\) is UFD._

Proof.: Suppose \(\mathfrak{m}=(x_{1},\ldots,x_{d+2})\), let \(A:=k[[X_{1},\ldots,X_{d+2}]]\), and let \(\pi:A\to R\) denote the natural surjection. Let \(\mathfrak{p}:=\ker(\pi)\). Since \(R\) is \((\mathrm{S}_{2})\) and \((\mathrm{R}_{1})\) it is a normal domain; in particular \(\mathfrak{p}\) is prime. As \(A\) is catenary, \(\mathfrak{p}\) is of height two. Now, we use [16, Theorem 4.9] to deduce that \(\mathfrak{p}=(a,b)\) for some regular sequence \(a,b\). We proved that \(R\) is complete-intersection. So, by a result of Grothendieck (see Theorem 1.1), \(R\) is UFD.

_Example 4.4_.: There is a \(3\)-dimensional Cohen-Macaulay complete ring \((R,\mathfrak{m})\) containing \(\mathbb{Q}\) and satisfying \((\mathrm{R}_{2})\) with \(\mu(\mathfrak{m})\leq\dim(R)+2\), but \(R\) is not UFD.

Proof.: Let \(S:=\mathbb{C}[[x,y,z,u,v]]\) and put \(R=S/(yv-zu,yu-xv,xz-y^{2})\). Recall that \(R\) is \(3\)-dimensional Cohen-Macaulay with isolated singularity. It is clear that \(\mu(\mathfrak{m})=\dim(R)+2\). Here we claim that \(R\) is not UFD. Indeed, suppose it is UFD. By the mentioned result of Murthy, it should be Gorenstein. But \(R\) is not Gorenstein.

_Fact 4.5_.: (Huneke, see [22, main theorem]) Let \(A\) be a complete local domain containing an infinite field. Suppose \(A\) is \((\mathrm{S}_{n})\). If \(e(A)<n\), then \(A\) is Cohen-Macaulay.

**Corollary 4.6**.: _Let \((R,\mathfrak{m},k)\) be a \(d\)-dimensional complete normal ring containing \(\mathbb{Q}\) and satisfying \((\mathrm{R}_{d-1})\) with \(d>4\). If \(e(R)\leq 2\) then \(R\) is UFD._

Proof.: The ring \(R\) satisfies \((\mathrm{S}_{2})\). According to Fact 4.5, \(R\) is Cohen-Macaulay. Following Abhyankar's inequality [1] we know \(\mu(\mathfrak{m})-\dim R+1\leq e(R)=2\). Let us combine this with [25, Exercise 21.2] and observe that \(R\) is complete-intersection*. So, by a result of Grothendieck (see Theorem 1.1), \(R\) is UFD.

Footnote *: Huneke [16, Corollary 4.12] showed that \(R\) is hypersurface, under a weaker assumption, provided it contains a field.

Let us apply our elementary approach.

**Corollary 4.7**.: _Adopt the notation of Problem 4.1 with \(d>4\). If \(e(R)\leq 3\) then \(R\) is UFD._

Proof.: Following Abhyankar, \(\mu(\mathfrak{m})-\dim R+1\leq e(R)=3\). In the light of Proposition 4.8 we get the desired claim.

**Proposition 4.8**.: _Let \((R,\mathfrak{m},k)\) be a \(d\)-dimensional, Gorenstein complete ring containing \(\mathbb{Q}\) and satisfying \((\mathrm{R}_{d-1})\). If \(d>7\) and \(\mu(\mathfrak{m})\leq d+3\), then \(R\) is UFD._

Proof.: See item iv) of Discussion 4.9 below.

_Discussion 4.9_.: One may reformulate Samuel's problem:

i) Let \(R\) be a \(d\)-dimensional, Gorenstein ring with isolated singularity. If \(d>3\), then is \(R\) UFD?
ii) This is not true: by using Veronese rings, and in a vein similar to Remark 4.2, one can find a counter-example, as the divisor class group of a Veronese ring is not trivial, see [19, 16.5].
iii) However, Problem 4.9(i) is true if the multiplicity is minimal.
iv) In addition to Problem 4.9(i), assume \(d>7\) and \(\mu(\mathfrak{m})\leq d+3\). Then \(R\) is UFD.
Indeed, suppose \(\mathfrak{m}=(x_{1},\ldots,x_{d+3})\), let \(A:=k[[X_{1},\ldots,X_{d+3}]]\), and let \(\pi:A\to R\) denote the natural surjection. Look at \(\mathfrak{p}:=\ker(\pi)\), which is prime. As \(A\) is catenary, \(\mathfrak{p}\) is of height three. Now, we use [21] to deduce that \(R\) is complete-intersection. So, by a result of Grothendieck (see Theorem 1.1), \(R\) is UFD.

If \(G\) is an abelian group, following Fossum's book there is a Dedekind domain \(A\) with \(\operatorname{Cl}(A)=G\). Let us ask:

_Question 4.10_.: Let \(R\) be a Gorenstein ring with isolated singularity. If \(\dim R>3\), when is \(\operatorname{Cl}(R)\) cyclic?

**Corollary 4.11**.: _Let \((R,\mathfrak{m},k)\) be a complete ring of depth at least \(3\) containing \(\mathbb{Q}\) and satisfying \((\mathrm{R}_{d-1})\) where \(d:=\dim R>4\). If \(e(R)\leq 3\) then \(R\) is hypersurface._

Proof.: The ring satisfies \((\mathrm{S}_{3})\). According to Fact 4.5, \(R\) is Cohen-Macaulay. If \(e(R)\leq 2\) the claim is well-known by Huneke. So, we may assume that \(e(R)=3\). Following Abhyankar, \(\mu(\mathfrak{m})\leq d+2\). Thanks to Proposition 4.8, \(R\) is UFD, and following Murthy (see [19, Theorem 12.3]), \(R\) is Gorenstein. Suppose on the way to a contradiction that \(R\) is not hypersurface; then \(d+1<\mu(\mathfrak{m})\leq d+2\). This says that the ring is of minimal multiplicity. Since the residue field is infinite, there is a system of parameters \(\underline{x}:=x_{1},\ldots,x_{d}\) so that \(\mathfrak{m}^{2}=(\underline{x})\mathfrak{m}\). Let us look at the Gorenstein ring \(\overline{R}:=R/(\underline{x})\). It is easy to see \(\overline{\mathfrak{m}}^{2}=0\). In other words, \(\operatorname{Soc}(\overline{R})=\overline{\mathfrak{m}}\), which is \(1\)-dimensional. Let \(x_{d+1}\in\mathfrak{m}\) be such that \(\overline{x}_{d+1}\) is a generator of \(\operatorname{Soc}(\overline{R})\). This shows that \(\mathfrak{m}=(\underline{x},x_{d+1})\). Let \(A:=k[[X_{1},\ldots,X_{d+1}]]\) and let \(\pi:A\to R\) denote the natural surjection sending \(X_{i}\) to \(x_{i}\). Let \(\mathfrak{p}:=\ker(\pi)\). Since \(R\) is \((\mathrm{S}_{2})\) and \((\mathrm{R}_{1})\) it is a normal domain; in particular \(\mathfrak{p}\) is prime. As \(A\) is catenary, \(\mathfrak{p}\) is of height one. By the UFD property of regular rings there is some \(f\) such that \(\mathfrak{p}=(f)\). We proved that \(R=A/(f)\), i.e., it is hypersurface. This contradiction completes the proof.

## 5. Relative situations

We start with the following recent result conjectured by Gabber:

_Fact 5.1_.: (Cesnavicius-Scholze, see [13, Theorem 1.1.3]) Let \((R,\mathfrak{m})\) be a complete intersection. If \(\dim(R)\geq 3\), then \(\operatorname{Pic}(\operatorname{Spec}(R)\setminus\{\mathfrak{m}\})_{\operatorname{tors}}=0\).

A _quasi-deformation_ of \(R\) is a diagram \(R\to A\leftarrow Q\) of local homomorphisms, in which \(R\to A\) is faithfully flat, and \(A\twoheadleftarrow Q\) is surjective with kernel generated by a regular sequence. The _complete intersection dimension_ of \(M\), see [3], is:

\[\operatorname{CI-dim}_{R}(M)=\inf\{\operatorname{pd}_{Q}(M\otimes_{R}A)-\operatorname{pd}_{Q}(A)\mid R\to A\twoheadleftarrow Q\text{ is a quasi-deformation}\}.\]

**Theorem 5.2**.: _Let \((R,\mathfrak{m})\) be a strongly-normal, almost factorial complete-intersection ring and \(\mathfrak{p}\in\operatorname{Spec}^{1}(R)\). If \(\operatorname{depth}(R/\mathfrak{p})\geq\dim R-2\), then \(\mathfrak{p}\) is principal._

Proof.: The proof proceeds by induction on \(d:=\dim(R)\). Suppose first that \(d<4\).
Due to the \((R_{2})\) condition, we may adopt the situation of Fact 5.1. Recall in this case that \(\operatorname{Pic}(\operatorname{Spec}(R)\setminus\{\mathfrak{m}\})\stackrel{{\cong}}{{\longrightarrow}}\operatorname{Cl}(R)\), as the ring has an isolated singularity (see [19, Proposition 18.10(b)]). Thanks to the almost factorial assumption,

\[\operatorname{Cl}(R)=\operatorname{Cl}(R)_{\operatorname{tors}}=\operatorname{Pic}\left(\operatorname{Spec}(R)\setminus\{\mathfrak{m}\}\right)_{\operatorname{tors}}=0.\]

Since the ring is normal we deduce that \(R\) is UFD, and in particular \(\mathfrak{p}\) is principal. Now, assume \(d>3\). One may reformulate \(\operatorname{depth}(R/\mathfrak{p})\geq\dim R-2\) as \(\operatorname{CI-dim}(R/\mathfrak{p})\leq 2\). Since \(\operatorname{CI-dim}(S^{-1}M)\leq\operatorname{CI-dim}(M)\) (see [3, 1.6]), we see that \(\operatorname{depth}(R_{Q}/\mathfrak{p}R_{Q})\geq\dim(R_{Q})-2\) for all prime ideals \(Q\in\operatorname{Var}(\mathfrak{p})\). In the light of [19, Cor 7.2] we know \(\operatorname{Cl}(R)\twoheadrightarrow\operatorname{Cl}(S^{-1}R)\) is surjective. So, \(\operatorname{Cl}(S^{-1}R)\) is torsion, i.e., \(S^{-1}R\) is almost factorial. Due to the inductive hypothesis, we deduce that \(\mathfrak{p}\) is locally principal over \(\operatorname{Spec}(R)\setminus\{\mathfrak{m}\}\). Recall that \(d>3\) and \(\operatorname{depth}(R/\mathfrak{p})\geq\dim R-2\geq 2\). From this and \(0\to\mathfrak{p}\to R\to R/\mathfrak{p}\to 0\) we deduce that \(\operatorname{depth}(\mathfrak{p})\geq 3\). Recall the ring is normal and of depth at least four. Now, we apply Lemma 2.2:

\[\operatorname{Ext}_{R}^{2}(\mathfrak{p},\mathfrak{p})\hookrightarrow\operatorname{H}_{\mathfrak{m}}^{3}(\operatorname{Hom}_{R}(\mathfrak{p},\mathfrak{p}))\cong\operatorname{H}_{\mathfrak{m}}^{3}(R)=0.\]

We combine this with [2] and deduce that \(\operatorname{pd}_{R}(\mathfrak{p})\) is finite. This means that \(\operatorname{pd}_{R}(R/\mathfrak{p})=\operatorname{CI-dim}(R/\mathfrak{p})\leq 2\). It remains to apply Proposition 3.2, and deduce that \(\mathfrak{p}\) is principal*.

Footnote *: or even without any use of Proposition 3.2.

_Conjecture 5.3_.: Suppose \(\dim(R)\geq 4\) with isolated singularity, and let \(\mathfrak{p}\in\operatorname{Spec}^{1}(R)\) be such that \(\operatorname{CI-dim}_{R}(\mathfrak{p})<\infty\). Then \(\mathfrak{p}\) is principal.

_Discussion 5.4_.: An \(R\)-module \(M\) is called _totally reflexive_ provided that:

* the natural map \(M\to M^{**}\) is an isomorphism,
* \(\operatorname{Ext}_{R}^{i}(M,R)=\operatorname{Ext}_{R}^{i}(M^{*},R)=0\) for all \(i\geq 1\).

The _Gorenstein dimension_ of \(M\), denoted \(\operatorname{Gdim}_{R}(M)\), is defined to be the infimum of all nonnegative integers \(n\) such that there exists an exact sequence \(0\to G_{n}\to\cdots\to G_{0}\to M\to 0\) in which each \(G_{i}\) is a totally reflexive \(R\)-module. Every finitely generated module over a Gorenstein ring has finite Gorenstein dimension. Moreover, if \(R\) is local and \(\operatorname{Gdim}_{R}(M)<\infty\), then \(\operatorname{Gdim}_{R}(M)=\operatorname{depth}R-\operatorname{depth}_{R}(M)\); we call this the Auslander-Bridger formula. If \(\operatorname{Gdim}_{R}(k)<\infty\) then \(R\) is Gorenstein. For more details, see [10].

_Example 5.5_.: Let \(R\) be a \(3\)-dimensional Gorenstein ring which is not UFD, e.g. \(R=k[[x,y,u,v]]/(u^{2})\). Let \(\mathfrak{p}\in\operatorname{Spec}^{1}(R)\) be non-principal, e.g., \(\mathfrak{p}:=(u,v)\).
Then \(\operatorname{Gdim}(R/\mathfrak{p})\leq 2\) but \(\mathfrak{p}\) is not free.

_Question 5.6_.: Let \(\mathfrak{p}\in\operatorname{Spec}^{1}(R)\) be such that \(\operatorname{Gdim}(R/\mathfrak{p})\leq 2\). When is \(\mathfrak{p}\) totally reflexive?

**Lemma 5.7**.: _Let \(R\) be of depth at most two, and \(\mathfrak{p}\in\operatorname{Spec}^{1}(R)\). If \(\operatorname{Gdim}(R/\mathfrak{p})<\infty\), then \(\mathfrak{p}\) is totally reflexive._

Proof.: Suppose first that \(\mathfrak{p}=\mathfrak{m}\). Then \(\operatorname{Gdim}_{R}(k)<\infty\). This shows \(R\) is Gorenstein, and \(1\)-dimensional as \(\mathfrak{m}\) is of height one. In this case any ideal is totally reflexive. Now, assume \(\mathfrak{p}\neq\mathfrak{m}\). Then \(\operatorname{depth}(R/\mathfrak{p})>0\) as \(\mathfrak{p}\) is prime. By the Auslander-Bridger formula, \(\operatorname{Gdim}(R/\mathfrak{p})\leq 1\), because \(R\) is of depth at most two. Thanks to \(0\to\mathfrak{p}\to R\to R/\mathfrak{p}\to 0\) we observe \(\mathfrak{p}\) is totally reflexive.

**Proposition 5.8**.: _Let \(R\) be \((\mathrm{S}_{3})\) and \(\mathfrak{p}\in\operatorname{Spec}^{1}(R)\) be such that \(\mathfrak{p}^{*}\) is \((\mathrm{S}_{3})\). If \(\operatorname{Gdim}(R/\mathfrak{p})\leq 2\), then \(\mathfrak{p}\) is totally reflexive._

Proof.: The proof is by induction on \(d:=\dim R\). The case \(d<3\) is in Lemma 5.7. So, we may assume \(d>2\). Consequently, \(\operatorname{depth}(R)\geq 3\). Recall that \(\operatorname{Gdim}(R_{Q}/\mathfrak{p}R_{Q})\leq 2\) for all \(Q\in\operatorname{Var}(\mathfrak{p})\setminus\{\mathfrak{m}\}\). Thanks to the inductive hypothesis, \(\mathfrak{p}R_{Q}\) is totally reflexive for all \(Q\in\operatorname{Spec}(R)\setminus\{\mathfrak{m}\}\). From this, \(E:=\operatorname{Ext}_{R}^{1}(\mathfrak{p},R)\) is of finite length. If a module \((-)\) has finite \(\operatorname{Gdim}\), then \(\operatorname{Gdim}(-)=\sup\{i:\operatorname{Ext}_{R}^{i}(-,R)\neq 0\}\). So, it is enough to show \(E=0\). There is a free module \(F\) and a totally reflexive module \(T\) such that the sequence \(0\to T\to F\to\mathfrak{p}\to 0\) is exact. This gives \(0\to\mathfrak{p}^{*}\to F^{*}\to T^{*}\to E\to 0\). Let us break it down into two short exact sequences:

i) \(0\to\mathfrak{p}^{*}\to F^{*}\to S\to 0\),
ii) \(0\to S\to T^{*}\to E\to 0\).

Sequence ii) yields that \(0=\operatorname{H}_{\mathfrak{m}}^{0}(T^{*})\to\operatorname{H}_{\mathfrak{m}}^{0}(E)\to\operatorname{H}_{\mathfrak{m}}^{1}(S)\to\operatorname{H}_{\mathfrak{m}}^{1}(T^{*})=0\), and so \(E=\operatorname{H}_{\mathfrak{m}}^{0}(E)\cong\operatorname{H}_{\mathfrak{m}}^{1}(S)\). Since \(\operatorname{depth}(R)>2\), from i) we get that \(\operatorname{H}_{\mathfrak{m}}^{1}(S)\cong\operatorname{H}_{\mathfrak{m}}^{2}(\mathfrak{p}^{*})\). Combining these implies that \(E\cong\operatorname{H}_{\mathfrak{m}}^{2}(\mathfrak{p}^{*})=0\).

The above shows:

**Corollary 5.9**.: _Let \(R\) be \(3\)-dimensional Gorenstein and \(\mathfrak{p}\in\operatorname{Spec}^{1}(R)\). Then \(\mathfrak{p}^{*}\) is generalized Cohen-Macaulay._

**Corollary 5.10**.: _Let \(R\) be \(3\)-dimensional normal Gorenstein and \(\mathfrak{p}\in\operatorname{Spec}^{1}(R)\). Then \(\mathfrak{p}\) is generalized Cohen-Macaulay._

Proof.: The normality condition implies \(\mathfrak{p}\) is reflexive. Thanks to the previous corollary, \(\mathfrak{p}^{*}\) is generalized Cohen-Macaulay. In the light of [6, Corollary 4.4] we observe that \(\mathfrak{p}=(\mathfrak{p}^{*})^{*}\) is generalized Cohen-Macaulay.
In the book [16], Kaplansky's trick (see Observation 3.6) derives from the syzygy theorem of Evans and Griffith plus the direct summand conjecture. This suggests finding the Gorenstein analogue of the syzygy theorem. The natural candidate is:

**Question 5.11**.: Let \(R\) be a normal local ring which satisfies \((\operatorname{S}_{k})\). Let \(M\) be a \(k\)-th syzygy of finite \(G\)-dimension. Suppose \(M\) is not totally reflexive. When is \(\operatorname{rank}(M)\geq k\)?

Over normal rings, a positive answer to this gives an affirmative solution to Question 5.6.

**Question 5.12**.: Let \(\mathfrak{p}\in\operatorname{Spec}(R)\) be such that \(\operatorname{Gdim}(R/\mathfrak{p})<\infty\). When is \(R\) generically Gorenstein?

Generically Gorenstein is the \((G_{0})\) condition.

## 6. A question by Braun

The rings in this section are equipped with a canonical module; for example, homomorphic images of Gorenstein rings are of this type.

**Question 6.1**.: (Braun, [11, Question 16]) Let \((R,\mathfrak{m})\) be a normal domain and \(I\lhd R\) a reflexive ideal with \(\operatorname{id}_{R}(I)<\infty\). Is \(I\) isomorphic to a canonical module?

By [11, Page 682], the only positive evidence we have is when \(R\) is also Gorenstein. The \(2\)-dimensional case was answered in [4]:

**Lemma 6.2**.: _Let \((R,\mathfrak{m})\) be a normal domain of dimension \(2\) with a canonical module and let \(I\lhd R\) be reflexive with \(\operatorname{id}_{R}(I)<\infty\). Then \(I\) is isomorphic to a canonical module._

Proof.: See [4].

_Fact 6.3_.: (See [11, Theorem C]) Let \(A\) be a commutative Noetherian ring and \(M\) a finitely generated \(A\)-module. Suppose that

* \(\operatorname{id}(M)<\infty\),
* \(\operatorname{End}_{A}(M)\) is a projective \(A\)-module,
* \(\operatorname{Ext}_{A}^{1}(M,M)=0\).

Then \(M\) is a (locally) Gorenstein \(A\)-module.

Now, we are ready to prove:

**Theorem 6.4**.: _Question 6.1 is true._

Proof.: The proof proceeds by induction on \(d:=\dim(R)\). Thanks to Lemma 6.2 we may assume that \(d=\dim R>2\), and suppose the desired claim is satisfied for normal rings of dimension less than \(d\). Now, let \(I\vartriangleleft R\) be reflexive with \(\operatorname{id}_{R}(I)<\infty\). Following Bass' conjecture, \(R\) is Cohen-Macaulay. In other words, we may assume that

\[d=\dim R=\operatorname{depth}(R)>2\quad(+).\]

Recall that any localization of \(I\) is a divisorial reflexive ideal of finite injective dimension. In particular, by applying the inductive hypothesis, we deduce that \(I\) is locally isomorphic to the canonical module over \(\operatorname{Spec}(R)\setminus\{\mathfrak{m}\}\). Recall that

\[\operatorname{Ext}_{R}^{+}(I,I)_{Q}=\operatorname{Ext}_{R_{Q}}^{+}(I_{Q},I_{Q})=\operatorname{Ext}_{R_{Q}}^{+}(\omega_{R_{Q}},\omega_{R_{Q}})=0\]

for all \(Q\in\operatorname{Var}(I)\setminus\{\mathfrak{m}\}\). Also, \(\operatorname{Ext}_{R}^{+}(I,I)_{Q}=0\) if \(Q\notin\operatorname{Var}(I)\), because \(I_{Q}=R_{Q}\). From this, \(\operatorname{Ext}_{R}^{+}(I,I)\) is of finite length. Since the ideal is reflexive, we know \(\operatorname{depth}(I)\geq 2\). This allows us to apply Lemma 2.2 with \(t:=2\). Recall that \(R\) is normal.
Applying this along with the determinant trick, we observe that

\[R\subseteq\operatorname{Hom}_{R}(I,I)\subseteq\overline{R}=R\quad(*)\]

By Lemma 2.2 we know there is an injection

\[\operatorname{Ext}_{R}^{1}(I,I)\hookrightarrow\operatorname{H}_{\mathfrak{m}}^{2}(\operatorname{Hom}_{R}(I,I))\stackrel{{(*)}}{{=}}\operatorname{H}_{\mathfrak{m}}^{2}(R)\stackrel{{(+)}}{{=}}0.\]

Consequently, \(\operatorname{Ext}_{R}^{1}(I,I)=0\). In the light of Fact 6.3 we observe that \(I\) is a Gorenstein module, and so Cohen-Macaulay. But \(I\) is of full support. Since \(I\) is both maximal Cohen-Macaulay and of finite injective dimension, we know that \(I\simeq\oplus_{n}\omega_{R}\) for some \(n\). Recall that \(\operatorname{Hom}_{R}(\omega_{R},\omega_{R})=R\). This shows

\[R\stackrel{{(*)}}{{\cong}}\operatorname{Hom}_{R}(I,I)\cong\operatorname{Hom}_{R}(\oplus_{n}\omega_{R},\oplus_{n}\omega_{R})\cong\oplus_{n^{2}}\operatorname{Hom}_{R}(\omega_{R},\omega_{R})\cong\oplus_{n^{2}}R,\]

i.e., \(n^{2}=1\). Consequently, \(I\) is isomorphic to the canonical module.

_Acknowledgement_.: I thank Olgur Celikbas for useful comments on the earlier draft.
2307.12849
Improving Students With Rubric-Based Self-Assessment and Oral Feedback
Rubrics and oral feedback are approaches to help students improve performance and meet learning outcomes. However, their effect on the actual improvement achieved is inconclusive. This paper evaluates the effect of rubrics and oral feedback on student learning outcomes. An experiment was conducted in a software engineering course on requirements engineering, using the two approaches in course assignments. Both approaches led to statistically significant improvements, though no material improvement (i.e., a change by more than one grade) was achieved. The rubrics led to a significant decrease in the number of complaints and questions regarding grades.
Sebastian Barney, Mahvish Khurum, Kai Petersen, Michael Unterkalmsteiner, Ronald Jabangwe
2023-07-24T14:48:28Z
http://arxiv.org/abs/2307.12849v1
# Supporting Students to Improve with Rubric-Based Self-Assessment and Oral Feedback

###### Abstract

Rubrics and oral feedback are approaches to help students improve towards learning outcomes. However, their effect on the actual improvement achieved is inconclusive. This paper evaluates the effect of rubrics and oral feedback on students' learning outcomes. An experiment was conducted in a software engineering course on requirements engineering, using the two approaches on assignments in the course. Both approaches led to statistically significant improvements, though no material improvement was achieved (i.e. a change by more than one grade). The rubrics led to a significant decrease in the number of complaints/questions regarding grades.

Software engineering, rubric based evaluation, feedback, higher education pedagogy

## I Introduction

This paper is motivated by the authors' observation that students feel that they deserve a higher grade, and hence ask for explanations. One potential reason is the gap in perception between teachers and students of what is expected in an assignment or an exam. When closing this gap, a reasonable assumption is that students improve with respect to learning outcomes, as they know what is expected. Two well-known methods are used as interventions to close the gap between the perceptions of teachers and students, and it is evaluated whether this improves the learning outcomes.

The first intervention is self-assessment using rubrics. Rubrics are split into several criteria (e.g. writing, selection of alternatives, reflection on solutions, and so forth). For each criterion, levels are defined representing the level of understanding, preferably following the Structure of Observed Learning Outcomes (SOLO) taxonomy [1]. Research has shown that the impact of rubrics and self-assessment on learning outcomes is inconclusive [2], though it is acknowledged that rubrics have the potential of improving students' performance [3]. In addition, there are very few studies that are related to software engineering or computer science, which is the focus of the course in this study. Hence, there is a need to investigate and contribute further evidence by evaluating rubric-based evaluation and self-assessment in software engineering courses.

The second intervention is verbal feedback. Studies report that students desire verbal feedback and perceive it as useful in order to improve [4]. In fact, they desire to receive feedback before assignment submissions [5]. The actual effect on learning outcomes and grades, however, is an open research question (cf. [5]). Hence, in this study the intervention is used as desired by the students in the previously mentioned studies, i.e. before submitting the final version of an assignment. This study makes a contribution to the research gap by evaluating the improvements achieved by the students.

The interventions are applied in a master's level course on large-scale requirements engineering in Sweden, with a total of 67 participants. In Sweden students are allowed multiple submissions of each assessment task, and there are limitations imposed on the amount of time given to teachers to grade each assessment task. The two interventions are: 1) rubrics-based self-evaluation, 2) teacher's feedback.

The remainder of the paper is structured as follows. Background and related work is introduced in Section II. The research design is presented in Section III, with results detailed in Section IV.
A discussion of the results is made in Section V, and conclusions are drawn in Section VI.

## II Background and Related Work

This section presents a literature review regarding the two interventions to be empirically evaluated in this study. The first intervention is the introduction of rubrics for self-evaluation to improve students' learning. The second intervention is oral feedback on students' work. The review focuses on rubrics and feedback in general, not only for higher education. One might argue that the difference between older and younger students is that the older students would like to have more independence in how they learn, i.e. they are likely to be more self-directed. However, as pointed out in [6], there are adults that are highly dependent on structure and guidance, while at the same time there are children that are self-directed learners. Hence, rubrics are potentially useful for all levels of education, and empirical results are (partially) generalizable between them. This is also indicated by reviews on the topic, as they include primary, secondary, and tertiary education in their assessment (see e.g. [7, 2]).

### _Rubrics for Self-Evaluation_

Rubrics represent the cognitive level achieved by a student with respect to a learning outcome (see Figure 1 for the rubric used in this study). Rubrics should be designed so that the levels of achievement in the rubric map to the levels of the Structure of Observed Learning Outcomes (SOLO) taxonomy [1]. The levels of the SOLO taxonomy range from pre-structural (presentation and reporting of incoherent information) to extended abstract (connecting information within a topic area and being able to transfer/generalize the information to other areas). In between those levels, there are different degrees of connection between information.

First, the authors focus the review on self-evaluation in general, without a particular focus on rubrics. Ross [7] reviewed literature regarding the reliability of self-assessment and its usefulness in improving student performance. In his review, Ross [7] considered primary, secondary, and tertiary education (i.e. from middle school to university level). With regard to reliability, the literature review evaluated whether students are consistent in their self-evaluation across different tasks (intra-rater reliability). Overall, the findings indicate that students are consistent. Particularly students who have been trained in evaluation criteria were consistent. The time between tasks and the age of the students being evaluated impact consistency; older students (14-16 years) are more consistent than younger students. When comparing self-assessment results with teachers' assessments or peer assessments (inter-rater reliability), the results vary. The results showing high inter-rater reliability can be criticized regarding their validity (unclear evaluation criteria, few replications, groups of students not comparable). The review of literature also revealed that students have a tendency to overestimate themselves, have interest bias, or are unable to apply assessment criteria. Hence, a factor that can increase inter-rater reliability is when students are trained in self-assessment. With regard to improvements in learning outcomes, two groups of studies are identified. One group reports positive results, arguing that self-assessment positively affects self-efficacy and motivation, and hence leads to stronger achievement.
The other group reports negative results, as students might select unrealistic goals, and hence adopt ineffective learning behavior. Or, if the goals are clearly given and the students feel they cannot achieve them, this can lead to a loss of motivation, and hence grades would not improve.

The review by Jonsson and Svingby [2] is very similar to the review of Ross [7]; both aim at answering the same research questions regarding inter-rater reliability and improvement. However, in [2] the focus is placed on rubric-based evaluation and self- as well as peer assessment. The review also includes literature from middle school to university level. With regard to inter-rater reliability, the review found that students using rubrics are reasonably consistent, with values above 70%. With regard to agreement, the results are more reliable than without rubrics. However, as pointed out by Jonsson and Svingby, this does not necessarily make the evaluation better, as the rubric might only align views. At the same time, the rubric might be flawed with respect to reflecting the desired learning outcomes in a good way. The results related to students improving through rubrics are inconclusive. Some studies report on an overall improvement. One example of such a study is [3], showing improvements in subsequent assignments within a course, as well as in comparison to results from previous years. The study by Green and Bowser [8] included in the review is of particular interest as it focuses on higher education and students conducting literature review and analysis, which is very similar to the task the students in this study have to conduct. Green and Bowser's study showed that students improved on some rubric criteria (three improvements), while not showing any change on two criteria, and even a negative impact on five criteria. The comparison was made by sampling reviews from two groups, one guided by rubrics and one not guided by rubrics.

Andrade [9] provided an experience report on the usage of rubrics. Based on her experience, a number of conclusions with respect to rubrics have been drawn. Rubrics make it easier for the teacher to explain what is expected, and students feel that rubrics are useful. This was evident as students asked for rubrics as soon as they were used to them. Rubrics also help in explaining expectations clearly, i.e. they are not hidden anymore. In fact, teachers often have the assumption that the students should know what is "good" and what is "poor"; however, this is often not the case. Splitting rubrics into categories also supports students in identifying their strengths and weaknesses. Overall, based on conversations with students, the teacher experienced that students learned more content with the introduction of rubrics.

Looking at the computer science and software engineering literature, only a few studies on self-assessment and rubrics can be found (cf. [10]). For computer science, a web application development project used rubrics defined together with students. The results were that rubrics (1) help in defining one's own achievement goals (i.e. desired outcome); (2) are good at assessing against self-defined standards; (3) lead to higher satisfaction with grades. Furthermore, rubrics have been used in grading essays on ethics in computer science [11]. To test the rubrics, two teachers graded one assignment independently and compared the results. There were only slight differences in grading.
In terms of outcome, the rubrics improved the criterion "English Language" while another criterion, "Technical Details", went down. The explanation provided was that students noticed that language is important. A reason for the negative change for the criterion "Technical Details" was the nature of the first assignment: in the first assignment the students could choose their own topic for the essay, which was not the case in the second assignment. In software engineering, rubrics have been suggested as a tool in higher education (cf. [12, 13]), but no detailed evaluations and empirical results were reported.

### _Feedback (Verbal/Oral)_

Blair and McGinty [4] conducted a study using action research at two universities, focusing on political science/history students. The study was motivated by the observation that students have problems understanding feedback, and teachers have problems providing good explanations that help students understand. A large portion of students at the two studied universities are convinced that verbal feedback helps them improve their learning. In fact, students desired to receive feedback after exams, and also to receive feedback prior to the submission of assignments. However, according to Blair and McGinty [4], there exists a gap between the desire of students to get oral feedback and the willingness of staff to provide it. In that situation, the authors encourage the use of verbal peer feedback, e.g. through the discussion of examples of essays.

Gibbs et al. [14] surveyed two universities (A and B), focusing on the programs in physics, chemistry, and biochemistry. Teachers at university A mainly provided written feedback, while university B focused on oral feedback. Their findings are that, from an effort perspective, oral feedback is better for the teacher as it requires less time in comparison to writing detailed feedback. However, a problem observed with oral feedback was that it is not long-lasting, as students cannot easily refer back to the information provided. Also, there is a difference in perception of what feedback is. Teachers felt that they provide feedback all the time (e.g. in lectures, laboratories, workshops, and informally). However, the students do not count that information as feedback. This might also be a reason why the students do not undertake proper effort in documenting and storing the feedback provided.

Jollands et al. [5] investigated social sciences/engineering classes at university level through discussions in focus groups of 10 to 15 students. Their goal was to find out what good feedback is (written and verbal). For verbal feedback they defined criteria for when the feedback is successful. For example, in groups, verbal feedback is only successful if it is focused on the knowledge gap of the student. In the classroom, the feedback is only useful if the answers encourage the students and are constructive. Verbal feedback in class discussions is only useful if the students provide good and constructive comments leading to a valuable discussion. However, verbal feedback might lead to neglecting students that are less likely to approach the teacher, are not confident to ask questions, are not attending the class, and so forth. An open question is whether good feedback in the class will increase grades, which was proposed as a future direction for research.

## III Research Design

### _Context (Course and Students)_

The experiment was conducted in an academic setting, with 42 engineering graduate students at Blekinge Institute of Technology.
It was conducted as a mandatory although non-graded exercise during a 7.5 ECTS credit master's course in large-scale requirements engineering (LSRE). Participation was mandatory and, despite the ethical issues of forcing subjects to participate in a study, it was believed that the interventions introduced had several pedagogical benefits in the course. The students were instead given the option to exclude their individual results from the study, an option not utilized by any student.

### _Research Questions_

The main aim of this paper is to determine if rubric-based self-assessment and teacher's verbal feedback can be used to improve student learning outcomes. The first research question and set of hypotheses address rubric-based self-assessment.

**RQ1:** Can rubric-based self-assessments be used to help students improve with respect to learning outcomes?

The hypotheses tested in answering this question are thus:

\(H_{10}\)**:** The use of rubric-based self-assessment does not change students' ability with respect to learning outcomes.
\(H_{1a}\)**:** The use of rubric-based self-assessment can change students' ability with respect to learning outcomes.

The second research question and set of hypotheses address teacher's verbal feedback.

**RQ2:** Can teacher's verbal feedback be used to help students improve with respect to learning outcomes?

The hypotheses tested in answering this question are thus:

\(H_{20}\)**:** The use of teacher's verbal feedback does not change students' ability with respect to learning outcomes.
\(H_{2a}\)**:** The use of teacher's verbal feedback can change students' ability with respect to learning outcomes.

### _Design and Instrumentation_

There are two assignments in the LSRE course. Each was broken into two parts, with the intervention applied in between. All students were subject to the same treatments for each assignment.

* Assignment 1 (A1) aimed to get students to reproduce the concepts learned. Students are not required to reflect or perform critical analysis in A1. A1 was to be done individually.
* Assignment 2 (A2) required students to reflect on and critically analyze solutions they propose. A2 is done in pairs.

A1-Part 1: A description of A1 was given to the students. However, the students were not made aware of the intervention. The evaluation rubric for the assignment was made available on the course homepage and is shown in Figure 1. The students were given a deadline to submit A1-Part 1.

A1-Part 2: Shortly after the deadline for A1-Part 1, the students were informed about A1-Part 2 by the teacher. For this part students needed to assess their own assignment against the rubric and update their assignment based on this experience. Students were given two weeks to submit the rubric-based assessment and the updated assignment. Students were also made aware that their course grade would only be based on the assignment submitted as A1-Part 2.

A2-Part 1: A description of A2 was given to the students; however, the students were not made aware of the intervention. The evaluation rubric was also made available on the course homepage. The students were told to present their work in a 30-minute session, during which the teacher would provide live feedback on their assignments with an emphasis on the reflection and critical analysis parts of the assignment. Students were given a deadline for A2-Part 1, and assigned to presentation times one day after submission of this part.
A2-Part 2: After the presentation and the teacher's feedback, each group was told to use this feedback to update their initial submission. The students were given a deadline for this task, and informed that their final grade for this task would be based entirely on A2-Part 2.

### _Data Collection and Analysis_

Data was collected at a number of points during this course as part of this study:

1. After the introductory lecture, a questionnaire was used to collect students' desired and expected grades for each assessment task in the course.
2. A1-Part 1 was graded by a teacher; however, at no stage was this grade made known to the students.
3. For A1, the students' grades from their rubric-based self-assessments were collected.
4. A1-Part 2 was graded by the same teacher responsible for grading A1-Part 1.
5. A2-Part 1 was graded by a teacher; however, at no stage was this grade made known to the students.
6. Identifiers for students who presented their work and received verbal feedback from the teacher were recorded.
7. A2-Part 2 was graded by the same teacher responsible for grading A2-Part 1.
8. A post-course questionnaire asked students to rate the helpfulness of the two interventions in improving their grade.

To test if a statistically significant improvement was made between Part 1 and Part 2 of each assignment, two-tailed paired _t_-tests were applied to the grades in each group; a minimal sketch of such a test follows.
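As a concrete illustration of this analysis step, the following minimal Python sketch (our own; the letter-to-number grade coding and the sample data are illustrative assumptions, not the study's data) runs such a two-tailed paired _t_-test:

```python
# Hypothetical sketch of the paired t-test described above; the grade
# coding and the sample grades are illustrative assumptions only.
from scipy import stats

GRADE_POINTS = {"A": 5, "B": 4, "C": 3, "D": 2, "E": 1, "F": 0}

# One (part1, part2) grade pair per student, before/after an intervention:
part1 = ["C", "B", "D", "C", "B", "E", "C", "B"]
part2 = ["B", "B", "C", "C", "A", "E", "B", "B"]

before = [GRADE_POINTS[g] for g in part1]
after = [GRADE_POINTS[g] for g in part2]

# scipy's ttest_rel performs a two-tailed paired t-test by default:
t_stat, p_value = stats.ttest_rel(before, after)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```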
In a post-course questionnaire students were asked to what degree they agreed with the statement that the rubric-based intervention was helpful in improving the assignment. Responses were taken in the form of a five-point Likert scale (1=strongly disagree, 5=strongly agree), with an average response of 4.7. The questionnaire was completed by 9 of the 40 students who completed Part 1, the Rubric-Based Self-Assessment, and Part 2 of Assignment 1. ### _Intervention 2: Teacher's verbal feedback_ The results of this section are limited to the 19 students who completed Part 1, the Presentation, and Part 2 of Assignment 2. Students were asked about the grades they desired and expected to achieve for each assessment task at the start of the course. The results for Assignment 2 are shown in Figure 3, with all students desiring and expecting an \(A\), \(B\) or \(C\). The distribution of students' desired and expected grades is higher than for the first assignment, despite this being a larger and more complex assignment. Further, the expected results are higher than the desired results, indicating the students expected to exceed their desires in terms of the grade on this assignment. The distributions of grades for Part 1 and Part 2 of Assignment 2 are shown in Figure 3. The distribution of grades for Part 1 is far below the students' desired and expected grades, with 10 out of 19 students failing. The highest grades awarded were two _B_s, followed by two _C_s and five _E_s. At no stage were students made aware of their grade for Part 1. After completing Part 1, students made an oral presentation to the lecturer to receive feedback on their assignment. After receiving this feedback students were given two weeks to update their assignments. Part 2 saw a statistically significant change from Part 1 using a two-tailed paired _t_-test (\(p=0.0044\)), allowing the null hypothesis (\(H_{20}\)) to be rejected. As summarized in Table I, two students raised their grades two levels (i.e., \(E\to C\)), and six students raised their grade one level (i.e., \(2\times B\to A\) and \(4\times F\to E\)). The remaining 11 students saw no material change in their grade. With this assignment it was also possible to determine the amount of changed material between Part 1 and Part 2 of the assignment. The students who improved their \(B\) to an \(A\) did so by changing only 3% of their assignment, the smallest change seen between the two parts. The students who saw a material improvement in their grade changed on average 63% of their assignments for Part 2, while the students who did not see any material improvement in their grade changed on average 54% of their assignments for Part 2. Of the 19 students who completed Assignment 2 as planned, 10 responded to the post-course questionnaire. The students strongly agreed with the statement that "the presentation exercise and feedback from the teacher as part of Assignment 2 was very helpful in improving the assignment." Students answered on a five-point Likert scale (1=strongly disagree, 5=strongly agree), with an average result of 4.5. ### _Grading Complaints_ A significant change was also seen in the number of complaints about grades. In this implementation of the course there was only one complaint about grades. Previous years, which did not include these interventions to help students improve their grades, saw approximately 10 complaints about grades. Fig. 3: Grade distributions as part of Assignment 2
## V Discussion As shown in the survey, from the beginning of the course, students have high desires and expectations in terms of their grades. However, the results of the assignments show that the students are meeting neither their desires nor their expectations. The fact that students expect to receive grades higher than they desire for the more complex A2 suggests they perceive that the course will be easy. For students whose grades improved, it is possible that some of the improvement can be attributed to the additional time spent by students on the assignments as part of the interventions. However, for students who did not achieve any improvement in their grades, other factors might have reduced students' ability to spend the amount of time on their assignment that they perceived necessary, with students citing other courses, their master's theses and personal reasons as factors limiting their ability to take full advantage of the interventions. Thus, as future work, the authors are planning to conduct an exploratory case study to interview students from both samples (the improved-grades sample and the no-improvement sample) to investigate which factors, related to the two interventions or besides them, affected (or did not affect) their grades in resubmissions. Further, it is important to emphasize that such interventions need to be planned, coordinated and applied across the master's program to reap the greatest benefits. Moreover, it is possible that the students only improved in terms of performance against the rubric, and not in terms of the intended learning outcomes [2]. Some aspects assessed by the rubric are far more important than others at measuring students' learning outcomes. For example, it is possible for a student to improve their grade through _English_ while keeping a poor _analysis and discussion_, while another student who makes significant improvements to the _analysis and discussion_ may not see any material improvement in grade due to poor _English_. Thus, it needs to be further explored which aspects students improved in their assignments that led to an improvement in their grades. Further, the success of each approach is dependent on the quality of the rubric and the teacher's oral feedback. Changes to either could increase or decrease students' ability to meet learning outcomes. From the teachers' perspective, the current system for assigning and controlling teaching hours does not support teachers in spending additional time helping students improve (for example, by spending more time to give oral feedback). A teacher can do so, but he/she will not receive additional teaching hours for this activity. Given such a constraint, the improvements in grades through the interventions become significant. This fact points to the need for the system to allow and accommodate efforts to better support students' learning. A significant result is the reduction in the number of complaints about grades. This result suggests that the interventions provided students with a greater understanding of what was expected from them in the assessment tasks. However, the teachers hoped this understanding would lead to a greater increase in learning outcomes as measured through grades. ## VI Conclusion This paper evaluates two approaches to help students improve against course learning outcomes: rubric-based self-assessment and teacher's oral feedback.
Both techniques are shown to lead to improved learning outcomes when applied to a completed assignment, provided students are given time to address the feedback. The increase in learning outcomes, however, was much more limited than the teachers expected. The major change seen from the implementation of these interventions was an increase in student understanding of teachers' expectations. Compared to previous years, there were far fewer complaints about grades. The rubric-based self-assessment requires an investment of students' time, while the teacher's oral feedback requires an investment of both the students' and the teacher's time. Given that the improvement against learning outcomes was more limited than expected, it remains an open question as to whether the cost-benefit of these approaches is sufficient to justify their use. The authors recommend further empirical studies, preferably designed to allow comparisons between different approaches.
2305.15787
$Z_{cs}$, $Z_c$ and $Z_b$ states under the complex scaling method
We investigate the $Z_b$, $Z_c$ and $Z_{cs}$ states within the chiral effective field theory framework and the $S$-wave single channel molecule picture. With the complex scaling method, we accurately solve the Schr\"odinger equation in momentum space. Our analysis reveals that the $Z_b(10610)$, $Z_b(10650)$, $Z_c(3900)$ and $Z_c(4020)$ states are the resonances composed of the $S-$wave $(B\bar{B}^{*}+B^{*}\bar{B})/\sqrt{2}$, $B^{*}\bar{B}^*$, $(D\bar{D}^{*}+D^{*}\bar{D})/\sqrt{2}$ and $D^{*}\bar{D}^*$, respectively. Furthermore, although the $Z_{cs}(3985)$ and $Z_{cs}(4000)$ states exhibit a significant difference in width, these two resonances may originate from the same channel, the $S-$wave $(D_{s}\bar{D}^{*}+D_{s}^{*}\bar{D})/\sqrt{2}$. Additionally, we find two resonances in the $S-$wave $D_s^*\bar{D}^*$ channel, corresponding to the $Z_{cs}(4123)$ and $Z_{cs}(4220)$ states that await experimental confirmation.
Jian-Bo Cheng, Bo-Lin Huang, Zi-Yang Lin, Shi-Lin Zhu
2023-05-25T07:01:52Z
http://arxiv.org/abs/2305.15787v2
# \(Z_{cs}\), \(Z_{c}\) and \(Z_{b}\) states under the complex scaling method ###### Abstract We investigate the \(Z_{b}\), \(Z_{c}\) and \(Z_{cs}\) states within the chiral effective field theory framework and the \(S\)-wave single channel molecule picture. With the complex scaling method, we accurately solve the Schrodinger equation in momentum space. Our analysis reveals that the \(Z_{b}(10610)\), \(Z_{b}(10650)\), \(Z_{c}(3900)\) and \(Z_{c}(4020)\) states are the resonances composed of the \(S-\)wave (\(B\bar{B}^{*}+B^{*}\bar{B}\))/\(\sqrt{2}\), \(B^{*}\bar{B}^{*}\), \((D\bar{D}^{*}+D^{*}\bar{D})/\sqrt{2}\) and \(D^{*}\bar{D}^{*}\), respectively. Furthermore, although the \(Z_{cs}(3985)\) and \(Z_{cs}(4000)\) states exhibit a significant difference in width, these two resonances may originate from the same channel, the \(S-\)wave (\(D_{s}\bar{D}^{*}+D_{s}^{*}\bar{D})/\sqrt{2}\). Additionally, we find two resonances in the \(S-\)wave (\(D_{s}^{*}\bar{D}^{*}\)) channel, corresponding to the \(Z_{cs}(4123)\) and \(Z_{cs}(4220)\) states that await experimental confirmation. ## I Introduction In the past decade, ongoing experimental efforts have led to the discovery of a series of heavy quarkonium-like states known as the \(XYZ\) states. The charged \(Z\) states like \(Z_{c}(3900)\) and \(Z_{c}(4020)\) provide strong evidence of the exotic states, as they involve the light quarks to explain their non-zero electric charge. Experimental advancements in the \(Z_{b}\) sector can be traced back to 2011 when the Belle collaboration reported two charged exotic candidates, \(Z_{b}(10610)\) and \(Z_{b}(10650)\)[1], which were later confirmed in subsequent studies [2; 3]. Multiple hidden-charm tetraquark candidates of the \(Z_{c}\) states have been observed by the BESIII, Belle and CLEO collaborations in electron-positron annihilation, including the charged and neutral \(Z_{c}(3900)\) and \(Z_{c}(4020)\) states [4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14]. These states, with their masses near the thresholds of \(B^{(*)}\bar{B}^{*}\) and \(D^{(*)}\bar{D}^{*}\), have been widely interpreted as the molecule states in the papers [15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30]. Additionally, the existence of the strange partners with the \(Q\bar{Q}s\bar{q}^{\prime}\)\((q,q^{\prime}=u,d)\) configurations is predicted by the SU(3)-flavor symmetry, and indeed they have been discovered in recent years. In 2021, the BESIII collaboration observed an exotic hadron near the mass thresholds of \(D_{s}^{-}D^{*0}\) and \(D_{s}^{*}\bar{D}^{0}\) in the processes \(e^{+}e^{-}\to K^{+}D_{s}^{-}D^{*0}\) and \(K_{S}^{0}D_{s}^{*-}D^{0}\)[31]. The corresponding mass and width fitted with a Breit-Wigner line shape are \[M[Z_{cs}(3985)]=3982.2^{+1.8}_{-2.6}\pm 2.1\ \text{MeV}\] \[\Gamma[Z_{cs}(3985)]=12.8^{+5.3}_{-4.4}\pm 3.0\ \text{MeV}. \tag{1}\] Last year, they observed a neutral \(Z_{cs}(3985)^{0}\) in the processes \(e^{+}e^{-}\to K_{S}^{0}D_{s}^{+}D^{*-}\) and \(K_{S}^{0}D_{s}^{*-}D^{-}\)[32]. The mass and width of the neutral \(Z_{cs}(3985)^{0}\) have been determined to be (\(3992.2\pm 1.7\pm 1.6\)) MeV and (\(7.7^{+4.1}_{-3.8}\pm 4.3\)) MeV, respectively. Its mass, width and cross section are similar to those of the charged \(Z_{cs}(3985)^{+}\), which suggests that the neutral \(Z_{cs}(3985)^{0}\) is the isospin partner of the \(Z_{cs}(3985)^{+}\). Furthermore, in 2021, the LHCb collaboration reported a series of distinct \(Z_{cs}\) states. 
In the hidden charm decay process \(B^{+}\to J/\psi\phi K^{+}\), they observed two \(Z_{cs}\) states with \(J^{P}=1^{+}\)[33]. One of these \(Z_{cs}\) states is the \(Z_{cs}(4000)^{+}\), which is discovered with high significance. Its mass and width are measured to be \[M[Z_{cs}(4000)]=4003\pm 6^{+4}_{-14}\ \text{MeV}\] \[\Gamma[Z_{cs}(4000)]=131\pm 15\pm 26\ \text{MeV}, \tag{2}\] respectively. Additionally, the other \(Z_{cs}\) state, \(Z_{cs}(4220)^{+}\), has a mass of \(4216\pm 24^{+43}_{-30}\) MeV and a width of \(233\pm 52^{+97}_{-73}\) MeV. The LHCb collaboration considers the \(Z_{cs}(4000)^{+}\) and \(Z_{cs}(3985)^{+}\) to be distinct states due to their apparently different widths, despite their close mass. This discovery of the exotic \(Z_{cs}\) hadrons inspired various theoretical interpretations, including the compact tetraquark picture [34; 35; 36], the molecule picture [37; 38; 39; 40; 41; 42; 43; 44], the mixing scheme [45; 46; 47; 48; 49] and the cusp effect [50]. When examining the BESIII and LHCb observations of the \(Z_{cs}\) states, some authors of the Refs. [41; 51; 52] proposed that the \(Z_{cs}(3985)\) and \(Z_{cs}(4000)\) are the same entity, whereas the Refs. [34; 35; 39; 44; 47; 48] considered them to be distinct hadrons. Moreover, one can gain further insights from the comprehensive reviews published in recent years [53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66]. In Refs. [35; 39], the authors considered the \(Z_{cs}(3985)\) and \(Z_{cs}(4000)\) as the SU(3)-flavor partners of \(Z_{c}(3900)\), whose neutral nonstrange members have opposite \(C\) parity. The authors suggested the \(Z_{cs}(4000)/Z_{cs}(3985)\) is the pure molecular state composed of \((|\bar{D}_{s}^{*}D\rangle+/-|\bar{D}_{s}D^{*}\rangle)/\sqrt{2}\). In addition, they also predicted the existence of a molecule composed of \(\bar{D}_{s}^{*}D^{*}\) which may be confirmed by the BESIII in the subsequent experiment [67]. However, the huge difference of their widths seems still hard to interpret. In this study, we employ the chiral effective field theory (ChEFT) to investigate the properties of the \(Z_{b}\), and \(Z_{cs}\) states in the molecular picture. To explore the existence and relationships of the possible resonances, we utilize the complex scaling method (CSM) [68; 69], which is a powerful tool that provides a consistent treatment of the bound states and resonances. We focus solely on the \(S-\)wave open-charm interaction, while neglecting the possible contributions from the hidden charm. As illustrated in our previous works [70; 71], we consider the cross diagram \(D\bar{D}^{*}\leftrightarrow D^{*}\bar{D}\) of the one-pion-exchange (OPE) contribution. This contribution introduces a complex potential arising from the three-body decay effect, which we take into account when investigating the widths of the resonances. This paper is organized as follows. In Sec. II, we introduce our framework explicitly. In Sec. III, we present the effective Lagrangians and potentials. In Sec. IV, we solve the complex scaled Schrodinger equation and give the results of the \(Z_{b}\), \(Z_{c}\) and \(Z_{cs}\). The last section V is a brief summary. ## II Framework In this study, we consider the \(Z_{b}\), \(Z_{c}\) and \(Z_{cs}\) states as the molecular systems with the quantum numbers \(I^{G}(J^{PC})=1^{+}(1^{+-})\), \(I^{G}(J^{PC})=1^{+}(1^{+-})\) and \(I(J^{P})=1/2(1^{+})\), respectively. 
The specific molecule systems under investigation are \((B\bar{B}^{*}+B^{*}\bar{B})/\sqrt{2}\), \(B^{*}\bar{B}^{*}\), \((D\bar{D}^{*}+D^{*}\bar{D})/\sqrt{2}\), \(D^{*}\bar{D}^{*}\), \((D_{s}\bar{D}^{*}+D_{s}^{*}\bar{D})/\sqrt{2}\) and \(D_{s}^{*}\bar{D}^{*}\). In the earlier work [17], the \(Z_{b}\) states were proposed as the bound states of \(\left[B\bar{B}^{*}+B^{*}\bar{B}\right]/\sqrt{2}\) and \(B^{*}\bar{B}^{*}\). The authors considered the \(D\)-wave channel and found that the \(S-D\) wave mixing effect could contribute significantly. Recent experiments [2; 3] have uncovered additional evidence supporting the interpretation of the \(Z_{b}\) states as the resonances. These findings show that the masses of the \(Z_{b}\) states are higher than the threshold of the \(B^{(*)}\bar{B}^{*}\) pairs, and they can decay into the \(B^{(*)}\bar{B}^{*}\) channel with the partial widths in the range of tens of MeV. These findings strongly favor the resonance interpretation over the bound state scenario. The present CSM work confirms that the \(D\)-wave channel has a minimal impact on the mass and width of the states. Therefore, we ignore the \(D\)-wave channel in this work. On the other hand, we find that the coupled channel effect between \((B\bar{B}^{*}+B^{*}\bar{B})/\sqrt{2}\) and \(B^{*}\bar{B}^{*}\) is negligible for the near threshold states. In addition, there are inelastic channels in the final decay process, like \(\Upsilon(nS)\pi\), that could be the constituents of the \(Z_{b}\) states as well. However, the couplings strength between the \(Z_{b}\) and the hidden-bottom channels is apparently smaller than that between \(Z_{b}\) and the open-bottom channels. Therefore, the influence of the correction from the hidden-bottom channels should not be significant. Furthermore, for the \(Z_{c}\) and \(Z_{cs}\) systems, we adopt the same assumption that the inelastic hidden-heavy channels are not the primary constituents. As a result, we focus on the simplest case, considering only the \(S-\)wave open-heavy single channel. The masses of the charmed meson and exchanged light mesons are collected in Table 1. We take the isospin average masses to deal with the isospin conserving process. ### A brief discussion on the CSM We first provide a brief overview of the CSM proposed by Aguilar, Balslev and Combes in the 1970s [68; 69], commonly known as the ABC theorem. The CSM is a powerful approach that allows for the treatment of resonances in a manner similar to the bound states. The transformation of the radial coordinate \(r\) and its conjugate momentum \(k\) in the CSM are defined by: \[U(\theta)r=re^{i\theta},\qquad U(\theta)k=ke^{-i\theta}. \tag{3}\] After the complex scaling operation, the Schrodinger equation \[\frac{p^{2}}{2m}\phi_{l}(p)+\int\frac{p^{\prime 2}dp^{\prime}}{(2\pi)^{3}}V_{l, l^{\prime}}(p,p^{\prime})\phi_{l^{\prime}}(p^{\prime})=E\phi_{l}(p) \tag{4}\] in the momentum space becomes \[\frac{p^{2}e^{-2i\theta}}{2m}\tilde{\phi}_{l}(p)+\int\frac{p^{ \prime 2}e^{-3i\theta}dp^{\prime}}{(2\pi)^{3}}V_{l,l^{\prime}}(pe^{-i\theta},p^{ \prime}e^{-i\theta})\tilde{\phi}_{l^{\prime}}(p^{\prime})\] \[=E\tilde{\phi}_{l}(p), \tag{5}\] with the normalization relation \[\frac{e^{-3i\theta}}{(2\pi)^{3}}\int_{0}^{\infty}\tilde{\phi}_{l}(p)^{2}p^{2}dp =1, \tag{6}\] where \(l,l^{\prime}\) are the orbital angular momenta, and \(p\) represents the momentum in the center-of-mass frame. 
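The complex-scaled equation (5) translates directly into a small matrix eigenvalue problem once the momentum integral is discretized; the Gaussian-quadrature scheme used for this is spelled out in Sec. IV. As a minimal illustration, the sketch below solves a single \(S\)-wave channel with a toy Gaussian-regulated contact potential standing in for the interactions of Sec. III. All numerical values (reduced mass, coupling strength, cutoff, rotation angle) are illustrative assumptions, not the fitted parameters of this work.

```python
import numpy as np

# Toy parameters (illustrative only, not the fitted LECs of this work).
m_red = 968.0              # reduced mass in MeV (roughly the D Dbar* value)
C0, Lam = -1.0e-4, 300.0   # contact strength (MeV^-2) and Gaussian cutoff (MeV)
theta = 0.35               # complex-scaling angle in radians

def V(p, pp):
    # S-wave contact potential with a Gaussian regulator, cf. Eq. (15) below.
    return C0 * np.exp(-(p**2 + pp**2) / Lam**2)

# Gauss-Legendre points mapped to (0, pmax); these play the role of p_j.
x, w = np.polynomial.legendre.leggauss(80)
pmax = 2000.0
p = 0.5 * pmax * (x + 1.0)
wp = 0.5 * pmax * w

# Complex-scaled momenta p e^{-i theta}, cf. Eq. (3).
pc = p * np.exp(-1j * theta)

# Discretized complex-scaled Hamiltonian, cf. Eq. (5):
# H_ij = p_i^2 e^{-2i theta}/(2m) delta_ij
#        + w_j p_j^2 e^{-3i theta} V(p_i e^{-i theta}, p_j e^{-i theta})/(2 pi)^3
H = np.diag(pc**2 / (2.0 * m_red)) \
    + (wp * pc**2)[None, :] * np.exp(-1j * theta) \
      * V(pc[:, None], pc[None, :]) / (2.0 * np.pi) ** 3

# The discretized continuum rotates to arg(E) = -2*theta; a bound state stays
# on the negative real axis, and a resonance appears as a theta-stable complex
# eigenvalue E = m - i*Gamma/2 relative to threshold.
E = np.linalg.eigvals(H)
print(sorted(E, key=lambda z: abs(z.imag))[:5])
```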
The potential \(V_{l,l^{\prime}}\) after partial wave decomposition can be expressed as \[V_{l,l^{\prime}} = \int d\mathbf{\Omega}^{\prime}\int d\mathbf{\Omega}\sum_{m_{l^{\prime}}=-l^{\prime}}^{l^{\prime}}\langle l^{\prime},m_{l^{\prime}};s,m_{j}-m_{l^{\prime}}|j,m_{j}\rangle \tag{7}\] \[\times \sum_{m_{l}=-l}^{l}\langle l,m_{l};s,m_{j}-m_{l}|j,m_{j}\rangle\mathcal{Y}^{*}_{l^{\prime},m_{l^{\prime}}}(\theta^{\prime},\phi^{\prime})\] \[\times \mathcal{Y}_{l,m_{l}}(\theta,\phi)\langle s,m_{j}-m_{l^{\prime}}|\mathcal{V}|s,m_{j}-m_{l}\rangle,\] where \(s\) and \(j\) represent the total spin and total angular momentum of the system, and \(m_{l}\) is the corresponding magnetic quantum number. The \(\mathcal{Y}_{l,m_{l}}(\theta,\phi)\) represents the spherical harmonics associated with the angular coordinates \(\theta\), \(\phi\). The potential operator \(\mathcal{V}\) acts on the states \(|s,m_{j}-m_{l^{\prime}}\rangle\) and \(|s,m_{j}-m_{l}\rangle\). \begin{table} \begin{tabular}{c c c c} \hline \hline Mesons & Mass(MeV) & Mesons & Mass(MeV) \\ \hline \(D^{+}\) & 1869.66 & \(B^{*}\) & 5324.70 \\ \(D^{0}\) & 1864.84 & \(D_{s}^{+}\) & 1968.34 \\ \(D^{*+}\) & 2010.26 & \(D_{s}^{*+}\) & 2112.2 \\ \(D^{*0}\) & 2006.85 & \(\pi^{\pm}\) & 139.57 \\ \(B^{+}\) & 5279.34 & \(\pi^{0}\) & 134.98 \\ \(B^{0}\) & 5279.65 & & \\ \hline \hline \end{tabular} \end{table} Table 1: The masses of the charmed, bottomed and pion mesons, which are taken from Ref. [72]. After performing the complex scaling operation, the resonance pole crosses the branch cut into the first Riemann sheet when the rotation angle \(\theta\) reaches a sufficiently large value, as depicted in Fig. 1. Consequently, the wave functions of the resonances become square-integrable, similar to those of the normalizable bound states. Further information on this technique can be found in Refs. [73; 74]. ### Analyticity of the OPE potentials for the \(D\bar{D}^{*}\) system In our previous works [70; 71], we investigated the double-charm tetraquark system using the CSM. Notably, we found that the \(D\bar{D}^{*}\) system exhibits a unique characteristic where the zeroth component of the transferred momentum of the exchanged pion exceeds the pion mass. This leads to an imaginary part in the OPE potential. If a pole is obtained in this system, it would correspond to an energy with an imaginary part, which can be interpreted as its half-width. In the current study, we encounter this situation when examining the OPE potential in the \((D\bar{D}^{*}+D^{*}\bar{D})/\sqrt{2}\) system with \(1^{+}(1^{+-})\). When considering the process \(D\bar{D}^{*}\to D^{*}\bar{D}\), one obtains the OPE potential \[V_{\pi}\propto\frac{g^{2}}{2f_{\pi}^{2}}\frac{(\mathbf{\epsilon}^{*}\cdot\mathbf{q})(\mathbf{\epsilon}\cdot\mathbf{q})}{\mathbf{q}^{2}+m_{\pi}^{2}-q_{0}^{2}}, \tag{8}\] where \(q\) represents the transferred momentum of the pion, and \(q_{0}\) is its zeroth component. Since \(q_{0}\approx m_{D^{*}}-m_{D}>m_{\pi}\), the poles of the OPE potential are located on the real transferred-momentum axis. However, when performing the integral along the real \(p^{\prime}\) axis in Eq. (4), we encounter a numerical divergence. Fortunately, the CSM can resolve this divergence issue without altering the analyticity of the OPE potential. Through a complex scaling operation, the pole of the OPE potential is rotated away from the real momentum axis in the momentum plane.
As a result, the integral along the real momentum axis bypasses the pole, effectively avoiding divergence. As shown in Fig. 2, we denote the total energy of the \(D\bar{D}^{*}/D^{*}\bar{D}\) system as \(E\) and assume the \(D\) meson to be on-shell. In this case, the expression for \(q_{0}\) is given by \(q_{0}=E-\sqrt{m_{D}^{2}+\mathbf{p}^{2}}-\sqrt{m_{\bar{D}}^{2}+\mathbf{p}^{\prime 2}}\). With the heavy quark approximation, we neglect the kinetic energy contribution to \(q_{0}\) and introduce an energy shift \(E\to E+m_{D}+m_{D^{*}}\). As a result, we obtain \(q_{0}=E+m_{D^{*}}-m_{D}\). In other processes, the three-body effect vanishes, and we should consider the different values of \(q_{0}\) in the OPE potential. The specific values of \(q_{0}\) for each case are summarized in Table 2. \begin{table} \begin{tabular}{c c c c} \hline \hline Process & \(D\bar{D}^{*}\to D^{*}\bar{D}\) & \(D^{*}\bar{D}^{*}\to D^{*}\bar{D}^{*}\) & \(B\bar{B}^{*}\to B^{*}\bar{B}\) \\ \(q_{0}\) & \(E+m_{D^{*}}-m_{D}\) & \(0\) & \(m_{B^{*}}-m_{B}\) \\ \hline \hline \end{tabular} \end{table} Table 2: The \(q_{0}\) is the zeroth component of the transferred momentum. \(E\) is the total energy relative to the corresponding threshold. The other cases not listed all give \(q_{0}=0\). Figure 1: The eigenvalue distribution of the complex scaled Schrödinger equation for the two-body systems. ## III Lagrangians and potentials For the interaction of two heavy mesons, the chiral effective Lagrangians are constructed based on the heavy quark symmetry and SU(3)-flavor symmetry. The explicit expressions are given by \[\mathcal{L} = -i\langle H^{(Q)}_{b}v\cdot(\delta_{ba}\partial+i\Gamma_{ba})\bar{H}^{(Q)}_{a}\rangle+g\langle H^{(Q)}_{b}\mathbb{A}^{\mu}_{ba}\gamma_{\mu}\gamma_{5}\bar{H}^{(Q)}_{a}\rangle \tag{9}\] \[-i\langle\bar{H}^{(\bar{Q})}_{b}v\cdot(\delta_{ba}\partial+i\Gamma_{ba})\bar{\bar{H}}^{(\bar{Q})}_{a}\rangle+g\langle\bar{H}^{(\bar{Q})}_{b}\mathbb{A}^{\mu}_{ba}\gamma_{\mu}\gamma_{5}\bar{\bar{H}}^{(\bar{Q})}_{a}\rangle\] where \(H^{(Q)}\) is defined as \[H^{(Q)}_{a} = \frac{1+\not{v}}{2}\left[P^{*\mu}_{a}\gamma_{\mu}-P_{a}\gamma_{5}\right]. \tag{10}\] And \(\bar{H}^{(Q)}_{a}\), \(\bar{H}^{(\bar{Q})}\) and \(\bar{\bar{H}}^{(\bar{Q})}_{a}\) are \[\bar{H}^{(Q)}_{a} = \gamma_{0}H^{(Q)\dagger}_{a}\gamma_{0}=\left[P^{*\dagger\mu}_{a}\gamma_{\mu}+P^{\dagger}_{a}\gamma_{5}\right]\frac{1+\not{v}}{2},\] \[\bar{H}^{(\bar{Q})} = \left[\bar{\bar{P}}^{*\mu}_{a}\gamma_{\mu}+\bar{P}_{a}\gamma_{5}\right]\frac{1-\not{v}}{2}\quad\text{and}\] \[\bar{\bar{H}}^{(\bar{Q})}_{a} = \gamma_{0}\bar{H}^{(\bar{Q})\dagger}\gamma_{0}=\frac{1-\not{v}}{2}\left[\bar{\bar{P}}^{*\dagger\mu}_{a}\gamma_{\mu}-\bar{P}^{\dagger}_{a}\gamma_{5}\right], \tag{11}\] respectively, with \(P^{(*)}_{a}=\left(D^{(*)0},D^{(*)+},D^{(*)+}_{s}\right)\) and \(\tilde{P}^{(*)}_{a}=\left(D^{(*)-},\bar{D}^{(*)0},\bar{D}^{(*)-}_{s}\right)\). The light-meson parts are given by \[\mathbb{A}_{\mu} = \frac{i}{2}[\xi^{\dagger}(\partial_{\mu}\xi)+(\partial_{\mu}\xi)\xi^{\dagger}],\quad\Gamma_{\mu}=\frac{i}{2}[\xi^{\dagger}(\partial_{\mu}\xi)-(\partial_{\mu}\xi)\xi^{\dagger}],\] \[\xi = \exp[\frac{i\mathcal{M}}{f_{\pi}}]\quad\text{and} \tag{12}\] \[\mathcal{M} = \left(\begin{array}{ccc}\frac{\pi^{0}}{\sqrt{2}}+\frac{\eta}{\sqrt{6}}&\pi^{+}&K^{+}\\ \pi^{-}&-\frac{\pi^{0}}{\sqrt{2}}+\frac{\eta}{\sqrt{6}}&K^{0}\\ K^{-}&\bar{K}^{0}&-\frac{2}{\sqrt{6}}\eta\end{array}\right), \tag{13}\] where the pion decay constant \(f_{\pi}\) is equal to 132 MeV.
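As a brief numerical aside before evaluating the OPE potentials, the pole-avoidance mechanism described in the analyticity discussion above is easy to verify: for \(q_{0}>m_{\pi}\) the denominator \(\mathbf{q}^{2}+m_{\pi}^{2}-q_{0}^{2}\) of Eq. (8) vanishes at a real momentum, while after the complex rotation it stays bounded away from zero. The sketch below uses illustrative numbers only:

```python
import numpy as np

m_pi = 138.0                        # average pion mass in MeV (illustrative)
q0 = 142.0                          # zeroth component with q0 > m_pi, cf. Table 2
q = np.linspace(1.0, 300.0, 600)    # transferred-momentum magnitudes in MeV

def denom(q, theta):
    # Denominator of the OPE potential, Eq. (8), with q -> q e^{-i theta}.
    return (q * np.exp(-1j * theta)) ** 2 + m_pi**2 - q0**2

# theta = 0: the denominator crosses zero near q = sqrt(q0^2 - m_pi^2) ~ 33.5 MeV,
# so the integral over real momenta hits a pole.
print(np.min(np.abs(denom(q, 0.0))))

# theta > 0: the zero moves off the real axis and |denominator| stays finite,
# so the rotated integral is well defined.
print(np.min(np.abs(denom(q, 0.35))))
```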
The coupling constant associated with the \(\pi\) exchange is \(g=0.59\)[75]. The corresponding OPE potential in momentum space can be expressed as follows \[V^{D\bar{D}^{*}/B\bar{B}^{*}} = -\frac{g^{2}}{2f_{\pi}^{2}}\frac{(\mathbf{\epsilon}^{*}\cdot\mathbf{q})(\mathbf{\epsilon}\cdot\mathbf{q})}{\mathbf{q}^{2}+m_{\pi}^{2}-q_{0}^{2}},\] \[V^{D^{*}\bar{D}^{*}/B^{*}\bar{B}^{*}} = -\frac{g^{2}}{2f_{\pi}^{2}}\frac{(\mathbf{\mathcal{T}}_{1}\cdot\mathbf{q})(\mathbf{\mathcal{T}}_{2}\cdot\mathbf{q})}{\mathbf{q}^{2}+m_{\pi}^{2}-q_{0}^{2}},\] where \(\mathbf{\mathcal{T}}_{1}\) and \(\mathbf{\mathcal{T}}_{2}\) represent the spin-1 operators with the forms \(\mathbf{\mathcal{T}}_{1}=-i\mathbf{\epsilon}^{\dagger}_{3}\times\mathbf{\epsilon}_{1}\) and \(\mathbf{\mathcal{T}}_{2}=-i\mathbf{\epsilon}^{\dagger}_{4}\times\mathbf{\epsilon}_{2}\). Since we focus solely on the \(S\)-wave interactions, we can replace the above spin-dependent operators with \((\mathbf{\epsilon}^{*}\cdot\mathbf{q})(\mathbf{\epsilon}\cdot\mathbf{q})\to\frac{1}{3}\mathbf{q}^{2}\) and \((\mathbf{\mathcal{T}}_{1}\cdot\mathbf{q})(\mathbf{\mathcal{T}}_{2}\cdot\mathbf{q})\to\frac{1}{3}\mathbf{q}^{2}\mathbf{\mathcal{T}}_{1}\cdot\mathbf{\mathcal{T}}_{2}\). Regarding the contact term interaction, we adopt the form derived in Ref. [28]. Upon performing the partial wave decomposition, one can obtain the \(S\)-wave contact potential as \[\left[V_{ct}\right]_{l,l^{\prime}} = \tilde{C}_{s}+C_{s}(p^{2}+p^{\prime 2}),\] where \(\tilde{C}_{s}\) and \(C_{s}\) represent the partial wave low energy constants (LECs). We restrict our analysis to the lowest-order interaction and do not consider higher-order effects, such as the one-loop contribution. To obtain the effective potentials, we introduce a Gaussian regulator to the potentials as follows \[V_{l,l^{\prime}}=V_{l,l^{\prime}}\exp\left(-\frac{p^{\prime 2}}{\Lambda^{2}}-\frac{p^{2}}{\Lambda^{2}}\right), \tag{15}\] where \(\Lambda\) is the cutoff parameter. The parameters \(\Lambda\), \(\tilde{C}_{s}\) and \(C_{s}\) can be adjusted while keeping the coupling constants in the OPE potential fixed. ## IV Numerical results During the numerical calculation process, we discretize the Schrödinger Eq. (4) in momentum space using the Gaussian quadrature approach. We approximate the integral over the potential as a weighted sum over \(N\) integration points \(p=p_{j}\) (\(j=1,\ldots,N\)): \[\int_{0}^{\infty}dp^{\prime}p^{\prime 2}V(p,p^{\prime})\phi(p^{\prime})\simeq\sum_{j=1}^{N}\omega_{j}p_{j}^{2}V(p,p_{j})\phi(p_{j}),\] \[\frac{p^{2}}{2m}\phi(p)+\frac{1}{(2\pi)^{3}}\sum_{j=1}^{N}\omega_{j}p_{j}^{2}V(p,p_{j})\phi(p_{j})=E\phi(p),\] where \(p_{j}\) and \(\omega_{j}\) represent the Gaussian quadrature points and weights, respectively. Furthermore, for clarity, we will omit the orbital angular momentum subscript from this point onward. In the equations above, we have \(N\) unknowns \(\phi(p_{j})\) and an unknown function \(\phi(p)\). To avoid the need to determine the entire function \(\phi(p)\), we restrict the solution to the same values of \(p_{i}\) used to approximate the integral. This leads to \(N\) coupled linear equations: \[\frac{p_{i}^{2}}{2m}\phi(p_{i})+\frac{1}{(2\pi)^{3}}\sum_{j=1}^{N}\omega_{j}p_{j}^{2}V(p_{i},p_{j})\phi(p_{j})=E\phi(p_{i}). \tag{17}\] Therefore, the Schrödinger equation can be expressed in matrix form as \[[H][\phi]=E[\phi], \tag{18}\] with matrix elements \([H]_{ij}=\frac{p_{i}^{2}}{2m}\delta_{ij}+\frac{1}{(2\pi)^{3}}\omega_{j}p_{j}^{2}V(p_{i},p_{j})\). ### The \(Z_{b}\) and \(Z_{c}\) systems In this subsection, we investigate the exotic hadrons \(Z_{b}\) and \(Z_{c}\) using ChEFT. A similar study has been performed in Ref.
[28], where the \(Z_{c}(3900)\) and \(Z_{c}(4020)\) (\(Z_{b}(10610)\) and \(Z_{b}(10650)\)) are interpreted as the \(\left[D\bar{D}^{*}+D^{*}\bar{D}\right]/\sqrt{2}\) and \(D^{*}\bar{D}^{*}\) (\(\left[B\bar{B}^{*}+B^{*}\bar{B}\right]/\sqrt{2}\) and \(B^{*}\bar{B}^{*}\)) molecules with \(I^{G}(J^{PC})=1^{+}(1^{+-})\), respectively. However, in our present work using the CSM, we find that the contributions from the \(D\)-wave constituents are negligible. Therefore, we neglect the \(S-D\) mixing effect and solely focus on the \(S\)-wave channel in this section. In our analysis, as shown in Table 3, we perform a fit of the LECs for the two \(Z_{b}\) and \(Z_{c}\) states. Comparing our results with those in Ref. [28], we find a similar cutoff value \(\Lambda\) within a reasonable range. However, the LECs \(\tilde{C}_{s}\) and \(C_{s}\) exhibit some variations, which could be attributed to our omission of the \(D\)-wave channel and the higher-order contributions. Additionally, we calculate the root-mean-square (RMS) radii, as shown in Table 3, and find that the sizes of the \(Z_{b}\) states are smaller than those of the \(Z_{c}\) states. Interestingly, the sizes of the two \(Z_{b}\) (\(Z_{c}\)) states are nearly identical. Moreover, the corresponding wave functions, as depicted in Fig. 5, exhibit a striking resemblance. This phenomenon is reasonable since our analysis in this work does not account for the higher-order spin-dependent correction terms. The satisfaction of the heavy quark spin symmetry justifies the similarities in the energy, decay width, size and wave function observed in the \(Z_{b}\) and \(Z_{c}\) states. As discussed in Ref. [70], the \(DD^{*}/D\bar{D}^{*}\) system considered as the \(T_{cc}^{+}/X(3872)\) state can decay into the three-body open-charm channels \(DD\pi/D\bar{D}\pi\). In the case of the isovector \(D\bar{D}^{*}\) system, it is also necessary to consider the influence of the three-body decay. The numerical results in the scheme we adopt, shown in Table 3 (row "Adopt"), are very close to the results under the instantaneous approximation \(q_{0}=0\). This implies that the mass, width and size have minimal changes. The reason why the choice of \(q_{0}\) matters for the \(T_{cc}^{+}\) system but not for the \(Z_{c}\) system can be understood as follows. The mass of the \(T_{cc}^{+}\) state is below the threshold of the \(DD^{*}\) system, making the two-body decay process kinematically forbidden. Therefore, the three-body decay becomes the dominant decay mode, and the value of \(q_{0}\), which partly reflects the three-body decay width, becomes important. On the other hand, the \(Z_{c}(3900)^{+}\) state is clearly above the threshold of the \(D\bar{D}^{*}\) system, allowing for the two-body decay process. Since the contribution from the three-body decay is significantly smaller in this case, the choice of \(q_{0}\) does not significantly alter the results. ### The \(Z_{cs}\) system In Refs. [35; 39], the \(Z_{cs}(3985)\) and \(Z_{cs}(4000)\) states are discussed as the SU(3)-flavor partners of \(Z_{c}(3900)\), with their neutral nonstrange members having opposite \(C\) parity. The authors suggest that the \(Z_{cs}(4000)/Z_{cs}(3985)\) state can be described as a pure molecular state composed of \((|D_{s}\bar{D}^{*}\rangle\pm|D_{s}^{*}\bar{D}\rangle)/\sqrt{2}\). Furthermore, they also predicted the existence of a \(D_{s}^{*}\bar{D}^{*}\) molecular state, which is potentially supported by the recent work of the BESIII collaboration [67].
These studies provide interesting insights into the nature and composition of the \(Z_{cs}\) states under the molecule picture. However, an issue that remains unresolved is the significant difference in the widths between the \(Z_{cs}(3985)\) and the \(Z_{cs}(4000)\). To address this difference, we propose an alternative explanation where these two states are considered as two resonances associated with the same system, namely \((D_{s}\bar{D}^{*}+D_{s}^{*}\bar{D})/\sqrt{2}\). According to our proposal, the \(Z_{cs}(3985)\) corresponds to the resonance with a narrower width, while the \(Z_{cs}(4000)\) corresponds to the resonance with a broader width, as illustrated in Fig. 4. This interpretation differs from the prevailing viewpoints in the literature. In the previous subsection IV.1, we found that the \(\left[D\bar{D}^{*}+D^{*}\bar{D}\right]/\sqrt{2}\) and \(D^{*}\bar{D}^{*}\) systems, associated with the two \(Z_{c}\) states, exhibit similar outcomes due to the heavy quark spin symmetry. Thus, it is feasible to employ the same parameters for them. Following this scheme, we use the same parameters for both the \((D_{s}\bar{D}^{*}+D_{s}^{*}\bar{D})/\sqrt{2}\) and \(D_{s}^{*}\bar{D}^{*}\) systems. By adopting the available experimental data of \(Z_{cs}(3985)\) and \(Z_{cs}(4000)\), we determine the central values and errors of \(\Lambda\), \(\tilde{C}_{s}\) and \(C_{s}\), and perform calculations for the \(D_{s}^{*}\bar{D}^{*}\) system. The corresponding parameter values, masses, widths and sizes can be found in Table 4. In the framework of ChEFT, it is generally expected that the cutoff region should exceed the pion mass \(m_{\pi}\) while not significantly exceeding 0.5 GeV, as the higher-mass mesons (\(\sigma\), \(\rho\), \(\omega\), etc.) are integrated out. \begin{table} \begin{tabular}{c c c c c} \hline \hline System & Threshold & \([m,\Gamma]_{\rm pole}({\rm MeV})\) & \([m,\Gamma]_{\rm exp}({\rm MeV})\) & RMS(fm) \\ \hline \(\left[D_{s}\bar{D}^{*}+D_{s}^{*}\bar{D}\right]/\sqrt{2}\)(1*) & 3976.1 & \([3982.4^{+2.2}_{-2.1},14.1^{+3.7}_{-3.6}]\) & \([3982.5^{+2.8}_{-3.3},12.8^{+6.1}_{-5.3}]\) & \(1.89^{+0.13}_{-0.11}+0.43^{+0.09}_{-0.14}i\) \\ \(\left[D_{s}\bar{D}^{*}+D_{s}^{*}\bar{D}\right]/\sqrt{2}\)(2*) & & \(\left[4010.7^{+6.3}_{-6.2},119.6^{+14.5}_{-14.7}\right]\) & \([4003^{+17.5}_{-15.2},131^{+30.0}_{-30.0}]\) & \(1.78^{+0.18}_{-0.14}+1.31^{+0.08}_{-0.07}i\) \\ \(D_{s}^{*}\bar{D}^{*}(1)\) & 4119.1 & \([4125.2^{+2.2}_{-1.1},13.2^{+3.5}_{-3.4}]\) & \([4123.5^{+1.3}_{-1.3},-]\) & \(1.89^{+0.13}_{-0.11}+0.43^{+0.09}_{-0.14}i\) \\ \(D_{s}^{*}\bar{D}^{*}(2)\) & & \(\left[4152.7^{+6.1}_{-6.0},115.0^{+14.0}_{-14.2}\right]\) & \(\left[4216^{+49}_{-38},233^{+110}_{-90}\right]\) & \(1.78^{+0.18}_{-0.14}+1.31^{+0.08}_{-0.07}i\) \\ \hline \hline \end{tabular} \end{table} Table 4: The poles are all listed with the quantum numbers \(I(J^{P})=1/2(1^{+})\). The fitted parameters for the \(D_{s}^{*}\bar{D}^{(*)}\) system are \(\Lambda=0.192^{+0.012}_{-0.013}\) GeV, \(\tilde{C}_{s}=6.8^{+2.8}_{-2.7}\times 10^{2}\) GeV\({}^{-2}\) and \(C_{s}=-186.9^{+50.4}_{-64.4}\times 10^{2}\) GeV\({}^{-4}\). The RMS is the root-mean-square radius in the CSM, which has been discussed in Ref. [76]. Its real part is interpreted as an expectation value, and the imaginary part corresponds to a measure of the uncertainty in observation. The states labeled as “1*” and “2*” correspond to the input states. The symbol “-” indicates that the width of the \(Z_{cs}(4123)\) state has not been confirmed by experiment yet.
\begin{table} \begin{tabular}{c c c c c} \hline \hline System & Threshold & \([m,\Gamma]_{\rm pole}({\rm MeV})\) & \([m,\Gamma]_{\rm exp}({\rm MeV})\) & RMS(fm) \\ \hline \(\left[B\bar{B}^{*}+B^{*}\bar{B}\right]/\sqrt{2}\) & 10604.2 & \(\left[10606.9^{+1.8}_{-1.5},15.0^{+3.4}_{-3.2}\right]\) & \(\left[10607.2^{+2.0}_{-1.5},10.8^{+4.2}_{-2.4}\right]\) & \(0.70^{+0.07}_{-0.01}-0.15^{+0.09}_{-0.10}i\) \\ \(B^{*}\bar{B}^{*}\) & 10649.4 & \(\left[10652.2^{+1.8}_{-1.6},14.8^{+3.4}_{-3.2}\right]\) & \(\left[10652.2^{+1.5}_{-1.5},11.5^{+2.2}_{-2.2}\right]\) & \(0.70^{+0.07}_{-0.02}-0.15^{+0.09}_{-0.11}i\) \\ \(\left[D\bar{D}^{*}+D^{*}\bar{D}\right]/\sqrt{2}({\rm Adopt})\) & 3875.8 & \(\left[3884.3^{+0.6}_{-0.6},26.0^{+1.4}_{-1.4}\right]\) & \(\left[3881.7^{+2.3}_{-2.3},26.6^{+3.0}_{-3.4}\right]\) & \(1.21^{+0.06}_{-0.05}+0.12^{+0.03}_{-0.03}i\) \\ \(\left[D\bar{D}^{*}+D^{*}\bar{D}\right]/\sqrt{2}({\rm Inst})\) & 3875.8 & \(\left[3884.8^{+0.6}_{-0.6},25.8^{+1.4}_{-1.4}\right]\) & \(\left[3881.7^{+2.3}_{-2.3},26.6^{+3.0}_{-3.4}\right]\) & \(1.20^{+0.06}_{-0.05}+0.13^{+0.03}_{-0.03}i\) \\ \(D^{*}\bar{D}^{*}\) & 4017.1 & \(\left[4025.8^{+0.6}_{-0.6},24.0^{+1.3}_{-1.4}\right]\) & \(\left[4025.5^{+3.6}_{-5.6},26.0^{+6.0}_{-6.0}\right]\) & \(1.20^{+0.06}_{-0.05}+0.13^{+0.03}_{-0.03}i\) \\ \hline \hline \end{tabular} \end{table} Table 3: The extracted poles for all states are listed with the quantum numbers \(I^{G}(J^{PC})=1^{+}(1^{+-})\). The fitted parameters for the \(B^{*}\bar{B}^{(*)}\) system are \(\Lambda=0.510^{+0.027}_{-0.041}\) GeV, \(\tilde{C}_{s}=0.48^{+0.15}_{-0.13}\times 10^{2}\) GeV\({}^{-2}\) and \(C_{s}=-5.4^{+0.63}_{-0.65}\times 10^{2}\) GeV\({}^{-4}\). The fitted parameters for the \(D^{*}\bar{D}^{(*)}\) system are \(\Lambda=0.300^{+0.012}_{-0.013}\) GeV, \(\tilde{C}_{s}=2.86^{+0.21}_{-0.22}\times 10^{2}\) GeV\({}^{-2}\) and \(C_{s}=-59.9^{+2.8}_{-3.1}\times 10^{2}\) GeV\({}^{-4}\). The RMS is the root-mean-square radius in the CSM, which has been discussed in Ref. [76]. Consequently, the \(\Lambda\) adopted in this study, \(0.3\sim 0.5\) GeV, is reasonable for the \(Z_{c}\) and \(Z_{b}\) cases. However, in the case of \(Z_{cs}\), the OZI suppression prohibits the contributions from either OPE or one-kaon-exchange. As a result, the contact term becomes the only interaction that needs to be considered. This can be viewed as effectively integrating out the pion and kaon fields. Therefore, we adopt a smaller value of \(\Lambda\approx 0.2\) GeV for the \(Z_{cs}\) cases. According to the results in Table 4, the newly reported \(Z_{cs}(4123)\) by the BESIII collaboration [67] could correspond to the narrower \(D_{s}^{*}\bar{D}^{*}\) state, although the experimental width is yet to be confirmed. Its estimated mass is around 4125.2 MeV and its width is approximately 13.2 MeV. Furthermore, the \(Z_{cs}(4220)\) is anticipated to correspond to a broader resonance, with central values of the mass and width at 4152.7 MeV and 115.0 MeV. Indeed, the mass and width of \(Z_{cs}(4220)\) both fall within the two-standard-deviation region of the experimental data. Furthermore, as shown in Fig. 5 and Table 4, the narrower (broader) resonances exhibit remarkably similar wave functions and sizes. ## V Summary In this study, we employ the ChEFT to investigate the hidden-heavy tetraquark states with \(I^{G}(J^{PC})=1^{+}(1^{+-})\) and the hidden-charm states with a strange quark with \(I(J^{P})=1/2(1^{+})\) in the molecule picture.
The couplings between the \(S\)-wave open-heavy channel and other channels, such as the \(D\)-wave channel, the \(S\)-wave channel with different constituents, and the hidden-heavy channels, are expected to be small. Therefore, we focus on the \(S-\)wave open-heavy single channels: \(\left[D\bar{D}^{*}+D^{*}\bar{D}\right]/\sqrt{2},~{}D^{*}\bar{D}^{*},~{}\left[B\bar{B}^{*}+B^{*}\bar{B}\right]/\sqrt{2},~{}B^{*}\bar{B}^{*}\), \((D_{s}\bar{D}^{*}+D_{s}^{*}\bar{D})/\sqrt{2}\) and \(D_{s}^{*}\bar{D}^{*}\). We employ the effective Lagrangians based on heavy quark symmetry and chiral symmetry, considering both contact and OPE diagrams. To investigate the possible resonances, we adopt the CSM to consistently analyze the bound states and resonances. In contrast to our previous works [70; 77], we solve the Schrödinger equation in momentum space and discretize it using the Gaussian quadrature approach. In our investigation of the \(Z_{b}\) system, we fit experimental data to extract resonance parameters within the molecule picture. With \(\Lambda=0.510\) GeV, \(\tilde{C}_{s}=0.48\times 10^{2}\) GeV\({}^{-2}\) and \(C_{s}=-5.4\times 10^{2}\) GeV\({}^{-4}\), we obtain the mass and width values of \(10606.9\) MeV and \(15.0\) MeV for the \([B\bar{B}^{*}+B^{*}\bar{B}]\,/\sqrt{2}\) resonance, and \(10652.2\) MeV and \(14.8\) MeV for the \(B^{*}\bar{B}^{*}\) resonance. The RMS radii for these two resonances are both approximately \(0.70-0.15i\) fm. Similarly, we perform calculations for the \(Z_{c}\) system in the \(S-\)wave \(1^{+}(1^{+-})\) channels: \(\left[D\bar{D}^{*}+D^{*}\bar{D}\right]/\sqrt{2}\) and \(D^{*}\bar{D}^{*}\). Taking \(\Lambda=0.300\) GeV, \(\tilde{C}_{s}=2.86\times 10^{2}\) GeV\({}^{-2}\) and \(C_{s}=-59.9\times 10^{2}\) GeV\({}^{-4}\), we obtain the mass and width values of \(3884.3\) MeV and \(26.0\) MeV for the former resonance, and \(4025.8\) MeV and \(24.0\) MeV for the latter resonance. The RMS radii for both resonances are around \(1.20+0.13i\) fm. For the isovector \(\left[D\bar{D}^{*}+D^{*}\bar{D}\right]/\sqrt{2}\) system, we also consider the influence of the three-body decay. However, the numerical results under the instantaneous approximation with \(q_{0}=0\) (as shown in the "Inst" row of Table 3) are very close to the results of the "Adopt" row, indicating minimal changes in the mass, width and size. Thus, we conclude that the two-body decay process dominates the width of this resonance. We consider the hidden-charm tetraquark states with a strange quark and propose that the \(Z_{cs}(3985)\) and \(Z_{cs}(4000)\) resonances correspond to the same channel \((D_{s}\bar{D}^{*}+D_{s}^{*}\bar{D})/\sqrt{2}\). Taking the data of \(Z_{cs}(3985)\) and \(Z_{cs}(4000)\) as input, we extract the central values and errors of the parameters \(\Lambda\), \(\tilde{C}_{s}\) and \(C_{s}\). With \(\Lambda=0.192\) GeV, \(\tilde{C}_{s}=6.8\times 10^{2}\) GeV\({}^{-2}\) and \(C_{s}=-186.9\times 10^{2}\) GeV\({}^{-4}\), we obtain the mass and width values of \(3982.4\) MeV and \(14.1\) MeV for the \(Z_{cs}(3985)\), and \(4010.7\) MeV and \(119.6\) MeV for the \(Z_{cs}(4000)\). The corresponding RMS radii are \(1.89+0.43i\) fm and \(1.78+1.31i\) fm, respectively. For the \(D_{s}^{*}\bar{D}^{*}\) system, we adopt the same parameters based on the heavy quark spin symmetry and also find two resonances. The narrower resonance has a mass of \(4125.2\) MeV and a width of \(13.2\) MeV, which nicely matches the observed \(Z_{cs}(4123)\) reported by the BESIII collaboration [67].
Hence, we interpret it as the \(Z_{cs}(4123)\), although the experimental width is yet to be confirmed. On the other hand, the broader resonance has a mass of \(4152.7\) MeV and a width of \(115.0\) MeV. We interpret it as the \(Z_{cs}(4220)\) observed by the LHCb collaboration [33], as its mass and width fall within the two-standard-deviation region of the experimental data. In summary, we apply the ChEFT to investigate the \(Z_{b},~{}Z_{c}\) and \(Z_{cs}\) states. Our analysis suggests that the \(Z_{b}(10610)\), \(Z_{b}(10650)\), \(Z_{c}(3900)\) and \(Z_{c}(4020)\) can be interpreted as the molecular states formed by the \(S-\)wave \(B\bar{B}^{*}\), \(B^{*}\bar{B}^{*}\), \(D\bar{D}^{*}\) and \(D^{*}\bar{D}^{*}\) constituents, respectively. Although the \(Z_{cs}(3985)\) and \(Z_{cs}(4000)\) states exhibit a significant width difference, these two resonances may originate from the same \(S-\)wave channel \((D_{s}\bar{D}^{*}+D_{s}^{*}\bar{D})/\sqrt{2}\). We also find two resonances in the \(D_{s}^{*}\bar{D}^{*}\) channel, which can be identified as the \(Z_{cs}(4123)\) and \(Z_{cs}(4220)\). Our results provide a prediction for the width of the \(Z_{cs}(4123)\) that awaits experimental confirmation. Additionally, we offer a precise mass and width range for the \(Z_{cs}(4220)\), which can guide future experimental searches for the hidden-charm tetraquarks with a strange quark. ###### Acknowledgements. This research is supported by the National Science Foundation of China under Grants No. 11975033, No. 12070131001 and No. 12147168. The authors thank K. Chen, Y. Ma and B. Wang for helpful discussions.
2310.00882
Empirical Contrast Model for High-Contrast Imaging -- A VLT/SPHERE Case Study
The ability to accurately predict the contrast achieved from high contrast imagers is important for efficient scheduling and quality control measures in modern observatories. We aim to consistently predict and measure the raw contrast achieved by SPHERE/IRDIS on a frame by frame basis to improve the efficiency and scientific yield with SPHERE at the Very Large Telescope (VLT).Contrast curves were calculated for over 5 years of archival data using the most common SPHERE/IRDIS coronagraphic mode in the H2/H3 dual band filter, consisting of approximately 80,000 individual frames. These were merged and interpolated with atmospheric data to create a large data-base of contrast curves with associated features. An empirical power law model for contrast, motivated by physical considerations, was then trained and finally tested on an out-of-sample test data set. At an angular separation of 300 mas, the contrast model achieved a mean (out-of-sample) test error of 0.13 magnitudes with the residual 5-95% percentiles between -0.23 and 0.64 magnitude respectively. The models test set root mean square error (RMSE) between 250-600 mas was between 0.31 - 0.40 magnitudes which is equivalent with other state-of-the-art contrast models presented in the literature. In general, the model performed best for targets between 5-9 G-band magnitude, with degraded performance for targets outside this range. This model is currently being incorporated into the Paranal SCUBA software for first level quality control and real time scheduling support.
Benjamin Courtney-Barre, Robert De Rosa, Rosita Kokotanekova, Cristian Romero, Matias Jones, Julien Milli, Zahed Wahhaj
2023-10-02T03:52:17Z
http://arxiv.org/abs/2310.00882v1
# Empirical Contrast Model for High-Contrast Imaging ###### Abstract Context: The ability to accurately predict the contrast achieved from high contrast imagers is important for efficient scheduling and quality control measures in modern observatories. Aims: We aim to consistently predict and measure the raw contrast achieved by SPHERE/IRDIS on a frame-by-frame basis to improve the efficiency and scientific yield with SPHERE at the Very Large Telescope (VLT). Methods: Contrast curves were calculated for over 5 years of archival data using the most common SPHERE/IRDIS coronagraphic mode in the H2/H3 dual-band filter, consisting of approximately 80,000 individual frames. These were merged and interpolated with atmospheric data to create a large database of contrast curves with associated features. An empirical power-law model for contrast, motivated by physical considerations, was then trained and finally tested on an out-of-sample test data set. Results: At an angular separation of 300 mas, the contrast model achieved a mean (out-of-sample) test error of 0.13 magnitudes with the residual 5-95% percentiles between -0.23 and 0.64 magnitudes, respectively. The model's test-set root mean square error (RMSE) between 250-600 mas was between 0.31-0.40 magnitudes, which is comparable with other state-of-the-art contrast models presented in the literature. In general, the model performed best for targets between 5-9 G-band magnitude, with degraded performance for targets outside this range. This model is currently being incorporated into the Paranal SCUBA software for first-level quality control and real-time scheduling support. Conclusions: ## 1 Introduction High contrast imagers have become central tools for the discovery and understanding of exoplanets and protoplanetary disk formation, demographics and dynamics around young stars. Instruments such as VLT's SPHERE (Beuzit et al., 2019; Fusco et al., 2015), Gemini's GPI (Macintosh et al., 2014), and Subaru's SCExAO (Sahoo et al., 2018) can achieve typical raw contrasts in the range of 10\({}^{-4}\) to 10\({}^{-6}\) between 0.1"-0.5" from the central star at near-infrared wavelengths. The foreseen instruments coming in the epoch of extremely large telescopes will push these boundaries even further (e.g. Brandl et al., 2021). The outstanding capabilities of these current and future instruments attached to world-class telescopes come with high demand, making the optimisation of telescope time an important task. This optimization generally requires accurate models to predict observational performance indicators (e.g. contrast for high contrast imagers) both prior to and/or early during the observation, in order to select observations that optimally exploit the atmospheric conditions. Traditionally, short-term scheduling and quality control in queue observations for high contrast imagers such as SPHERE are done primarily based on atmospheric, sidereal or airmass constraints. While strong correlations exist between turbulence and AO performance, there are often outliers where the measured contrast is considerably worse than what would be expected from the observed atmospheric conditions. This is typically due to local effects within the telescope or instrument. Without the ability to properly predict and measure the contrast, such observations may be scheduled and pass basic quality control checks despite not meeting the user's scientific requirements.
Therefore, to optimise telescope time and the ultimate data quality provided to users, high contrast imagers may benefit greatly from precise models that predict scientifically meaningful metrics, such as contrast or SNR, which can then be measured in quasi-real time to evaluate the quality of the data. The ultimate goal is to perform short-term scheduling and quality control based on predicted and measured metrics that hold scientific significance. This, combined with the significant efforts of improving quality control software and standards (Thomas et al., 2020), and forecasting models at Paranal (Milli et al., 2020, 2019; Masciadri et al., 2020; Osborn & Sarazin, 2018), will greatly advance real-time decision making and quality control measures. The ability to accurately predict contrast for high contrast imagers on large telescopes is not a trivial task. While fundamental atmospheric limits are well characterized (Conan et al., 1995; Fusco & Conan, 2004; Aime & Soummer, 2004; Males et al., 2021), typically non-trivial local effects can dominate achievable contrast, for example quasi-static speckles caused by opto-mechanical imperfections and thermal drifts (Bloemhof et al., 2001; Soummer et al., 2007; Martinez et al., 2013; Vigan et al., 2022) or dome seeing (Tallis et al., 2020) and low wind effects (Milli et al., 2018; Sauvage et al., 2015). Given the maturity of instruments such as SPHERE, data-driven analysis and empirical models are a practical way to understand these limitations and the uncertainty that random telescope/instrument processes introduce into observations. Some good examples of this are the work of Martinez et al. (2013) using SPHERE data to characterise speckle temporal stability in high Strehl regimes, the work of Milli et al. (2018) characterising the low wind effect on SPHERE, and the data-driven analysis of SPHERE performance for faint targets by Jones et al. (2022). In the case of predicting contrast, various models have been explored in the literature to empirically predict the on-sky observed contrast given atmospheric and instrumental conditions (Bailey et al., 2016; Courtney-Barrer et al., 2019; Xuan et al., 2018). Notable in particular is GPI's initial work using linear regression of AO telemetry and astronomical site monitoring data to predict contrast (Bailey et al., 2016), which was further advanced with neural networks that were able to predict the measured contrast using 6 input parameters available pre-observation, with a contrast (magnitude) RMSE of 0.45 at 0.25" (Savransky et al., 2018). Correlations between measured contrast and AO error terms have also been shown in other work (e.g. Poyneer and Macintosh (2006); Poyneer et al. (2016)), and in general have been shown to provide good predictive capacity for the contrast in atmosphere-limited regimes (Fusco et al., 2016; Sauvage et al., 2016; Macintosh et al., 2014). For this work we present a simple empirical model to predict the raw contrast measured by SPHERE, with the goal of assisting on-site quality control measures and short- to mid-term scheduling decisions. This paper begins with a brief overview of the SPHERE instrument, followed by the motivation of an empirical model for the contrast data. Section 4 will outline the data preparation and pre-processing that was done before fitting the contrast model, along with the algorithms used for fitting. Section 5 will present the results along with some discussion. Section 6 will conclude with our findings and future outlook.
## 2 SPHERE/IRDIS The Spectro-Polarimetric High-contrast Exoplanet REsearcher (SPHERE) (Beuzit et al., 2008) is an extreme adaptive optics (AO) instrument installed on the Unit Telescope 3 (Melipal) at the Paranal Observatory. Its primary science goal is imaging, low-resolution spectroscopic, and polarimetric characterization of extra-solar planetary systems at optical and near-infrared wavelengths. SPHERE consists of three science channels, the Integral Field Spectrograph (IFS) and the Infra-Red Dual-band Imager and Spectrograph (IRDIS), which both observe in the near-infrared, and the Zurich Imaging Polarimeter (ZIMPOL) for visible polarimetric observations. Each sub-instrument has a series of coronagraphs and filters available, in addition to an extreme AO system called SAXO (Fusco et al., 2016; Sauvage et al., 2016) placed in the common path of all sub-instrument channels. SAXO operates up to a frequency of 1.38 kHz on bright targets with a 40x40 spatially filtered Shack-Hartmann (SH) wavefront sensor (WFS) measuring in the optical, and a 41x41 piezoelectric high-order deformable mirror for AO actuation. SAXO also uses a dedicated differential tip/tilt sensor (Baudoz et al., 2010) in the near-infrared to correct for wavelength-dependent tip-tilt between the near-infrared and optical science channels. For this work we tested our model on the most common SPHERE/IRDIS mode, which uses an apodised Lyot coronagraph with the H2/H3 dual-band filters centered at wavelengths 1.593\(\mu\)m and 1.667\(\mu\)m respectively. ## 3 Contrast Model We begin to motivate an empirical model for contrast with some statistical considerations of the measured intensity in the focal plane. It can easily be shown by Fourier optics that a phase aberration at some spatial frequency \(k\) in a pupil plane of a telescope gets mapped to a so-called speckle in the focal plane at an angular coordinate of \(k\lambda\), where \(\lambda\) is the light's wavelength (Roddier, 2004). Such speckles are typically classified based on their temporal behaviour, which ultimately determines if (or how well) they can be suppressed by post-processing reduction methods. This sets the fundamental contrast limits in ground based high contrast imagers (Males et al., 2021). The detection of real signals (such as a planet) within the circumstellar environment of a star requires statistical knowledge of the probability of some intensity measurement in the focal plane. Various authors (e.g. Canales and Cagigal (1999) and references therein) have shown under the assumption of long exposures that a point-wise intensity measurement (I) generally follows a modified Rician probability density function: \[P(I)=\frac{1}{2\sigma^{2}}\exp\left(-\frac{I+s^{2}}{2\sigma^{2}}\right)I_{0}\left(\frac{s\sqrt{I}}{\sigma^{2}}\right) \tag{1}\] where \(I_{0}\) is the zero order modified Bessel function of the first kind, while \(s^{2}\) and \(2\sigma^{2}\) are related to the (long exposure) intensities of the deterministic and random speckle components of the wavefront, respectively. Soummer et al. (2007) developed this statistical framework to derive a general expression for the expected point-wise variance in a coronagraphic image as: \[\sigma_{I}^{2}=N(I_{s1}^{2}+NI_{s2}^{2}+2I_{c}I_{s1}+2NI_{c}I_{s2}+2I_{s1}I_{s2})+\sigma_{p}^{2} \tag{2}\] where we have kept the notation used in Soummer et al. (2007).
Here \(I\) generally denotes the intensity, \(\sigma_{p}^{2}\) is the variance of the photon noise, and N is the ratio of fast-speckle and slow-speckle lifetimes. \(I_{c}\) is the intensity produced by the deterministic part of the wavefront, including static aberrations, while the \(I_{s}\) terms correspond to the halo produced by random intensity variations, i.e. atmospheric (\(I_{s1}\)) and quasi-static (\(I_{s2}\)) contributions. In this generalized expression of the variance, several contributions can be identified by order of appearance: (1) the atmospheric halo; (2) the quasi-static halo; (3) the atmospheric pinning term, i.e. the speckle pinning of the static aberrations by the fast-evolving atmospheric speckles; (4) the speckle pinning of the static by quasi-static speckles; and finally (5) the speckle pinning of the atmospheric speckles by quasi-static speckles.

Converting this to the expected contrast as a general function of radius (e.g. a typical contrast curve) requires calculating the sum of the pixel-wise modified Rician density functions within a given annulus. No closed-form analytic solution to this exists, although there are closed-form approximations (Lopez-Salcedo, 2009). Under a strong assumption of independence between pixels and, for a given spatial frequency, equal probability of the aberration's direction (i.e. of the angular position of a speckle at a fixed radius), we can estimate the expected intensity variance within a thin annulus at radius r by simply scaling by the number of pixels in the annulus, in which case we can make the proportional approximation of the \(1\sigma\) contrast:

\[\langle C(r)\rangle\propto\frac{\langle\sigma_{I}(r)\rangle}{I_{*}} \tag{3}\]

where \(I_{*}\) is the stellar intensity in the science channel and \(\langle\cdot\rangle\) denotes the expected value. As mentioned, this is a strong assumption that does not generally hold. For example, experience shows that there is typically anisotropy in the aberrations at a given spatial frequency, especially from biased wind directions of dominant turbulent layers. Nevertheless this basic assumption is useful for deriving first-order contrast estimates. Analytically predicting each term in equation 2 prior to observation would require full knowledge of the internal aberrations, wind velocity profiles and the ability to reconstruct the modal distribution of the incoming phase front in order to predict AO residuals - which is a difficult task. Nevertheless, noting that any first-order expansion of the coronagraph PSF term (\(I_{c}\)) and atmospheric speckle term (\(I_{s1}\)) with regard to typical AO error budget terms would lead to various cross products of AO error budget terms in the pinned speckles, we make the assumption that typically one of these terms will dominate the halo at a given radius, and therefore propose to model the contrast as a product of AO cross terms, each with a power law to give an appropriate weighting at a given radius, i.e.

\[C(r)\approx x(r)\prod_{i=1}^{4}\Delta_{i}^{\alpha_{i}(r)} \tag{4}\]

where x(r) and \(\alpha_{i}(r)\) are the fitted parameters for a given radius.
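Equation (4) is straightforward to evaluate once the \(\Delta\) terms (defined next) and the exponents are available. The minimal sketch below uses placeholder values for both, purely to show the model's structure and its log-linear equivalent; none of the numbers are fitted values from this work.

```python
import numpy as np

def contrast_model(deltas, alphas, x_r):
    """Equation (4): C(r) = x(r) * prod_i Delta_i ** alpha_i(r)."""
    return x_r * np.prod(np.asarray(deltas) ** np.asarray(alphas))

# Placeholder Delta and alpha values at one radius. Taking the contrast
# magnitude C_m = -2.5*log10(C) makes the model linear in log10(Delta_i),
# which is exploited for the fitting below.
C = contrast_model(deltas=[60.0, 0.5, 200.0, 0.003],
                   alphas=[0.3, 0.5, 0.1, 0.4], x_r=1e-4)
C_m = -2.5 * np.log10(C)
```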
From basic leave-one-out analysis, the \(\Delta\) terms considered for the following model are a combination of typical (unitless) AO error budget-like terms:

\[\Delta_{1}\equiv\Delta_{fa}=\frac{D}{r_{0}} \tag{5}\]
\[\Delta_{2}\equiv\Delta_{servo}=\frac{\tau}{\tau_{0}} \tag{6}\]
\[\Delta_{3}\equiv\Delta_{SNR-WFS}=\frac{n_{p,wfs}}{\sqrt{n_{p,wfs}+N_{D}\left(n_{B}+\left(\frac{e_{n}}{G}\right)^{2}\right)}} \tag{7}\]
\[\Delta_{4}\equiv\Delta_{SNR-SCI}=\frac{1}{\sqrt{n_{p,sci}}} \tag{8}\]

where D is the telescope diameter, \(r_{0}\) is the atmospheric coherence length (Fried parameter), \(\tau\) and \(\tau_{0}\) are the AO latency and atmospheric coherence time respectively, \(n_{p}\) is the number of detected photoelectrons per defined subaperture (sum of all pixels), \(N_{D}\) is the number of pixels in a subaperture, \(n_{B}\) is the number of detected background photoelectrons per subaperture, \(e_{n}\) is the read noise in electrons per pixel, and G is the gain (Tyson, 2012); an illustrative sketch of assembling these terms is given at the end of this section. The values \(n_{p,\cdot}\) are generally inferred through fitted zero points and extinction coefficients to convert stellar magnitude to flux (see section 4.1). The residuals of such a model would therefore be due to the variance in non-AO related terms in equation 2. We also note that, by construction (and also through cross validation on training data), in a shot-noise-limited regime the product

\[\Delta_{red}\equiv\Delta_{3}^{\alpha_{3}}\Delta_{4}^{\alpha_{4}}=n_{p,wfs}^{\alpha_{3}/2}/n_{p,sci}^{\alpha_{4}/2}\]

can be seen as a reddening parameter. In the case of equal flux \(n_{p,wfs}=n_{p,sci}=n_{p}\) we get \(\Delta_{red}=n_{p}^{(\alpha_{3}-\alpha_{4})/2}\), and therefore one would expect \(\alpha_{4}>\alpha_{3}\) to maintain that brighter targets generally achieve better contrast. However in general \(n_{p,wfs}\neq n_{p,sci}\), and chromatic effects of red stars may play an important role (especially with the performance of the differential tip-tilt controller).

This general model has the considerable advantage that it can capture non-linearities in the contrast performance, and furthermore it can be fitted linearly by simply considering the contrast magnitude \(C_{m}=-5/2\log_{10}(C)\), such that:

\[C_{m}(r)=X(r)-\frac{5}{2}\sum_{i=1}^{4}\alpha_{i}(r)\log_{10}(\Delta_{i}) \tag{9}\]

where X(r) = \(-5/2\log_{10}(x(r))\) is the fitted intercept. On the training dataset we also allowed for a linear calibration of the fitted intercept \(X_{f}(r)\) with the model residuals given the partitioned instrumental state, such that \(X(r)=X_{f}(r)-\Delta_{c}(r|state)\), where \(\Delta_{c}(r|state)\) is the training model residual when filtering for a given observational state. The state filters considered were the sky transparency classification (e.g. thick, thin, clear, photometric), the AO gain/frequency setting, and the wavefront sensor spatial filter size (e.g. small, medium, large). This significantly improved the cross-validation performance of the model on the training data set, while maintaining a significant sample size for the general fitting of the \(\alpha\) parameters. These calibrated offsets were then used for the out-of-sample model test (without re-calibration).
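To make the structure of equations (5)-(8) concrete, the sketch below assembles the \(\Delta\) terms from atmospheric and photometric inputs. All names and numerical values are illustrative assumptions (the aperture D, latency \(\tau\) and detector constants are placeholders, not SPHERE calibrations):

```python
import numpy as np

def delta_terms(r0, tau0, n_p_wfs, n_p_sci, D=8.0, tau=2.2e-3,
                N_D=4, n_B=0.0, e_n=0.1, G=1000.0):
    """AO error-budget-like terms of equations (5)-(8).

    r0: Fried parameter (m); tau0: coherence time (s); n_p_*: detected
    photoelectrons in the WFS subapertures / science channel. The default
    D, tau and detector constants are illustrative placeholders.
    """
    d1 = D / r0                                   # fitting term, eq. (5)
    d2 = tau / tau0                               # servo-lag term, eq. (6)
    d3 = n_p_wfs / np.sqrt(n_p_wfs + N_D * (n_B + (e_n / G) ** 2))  # eq. (7)
    d4 = 1.0 / np.sqrt(n_p_sci)                   # science SNR term, eq. (8)
    return np.array([d1, d2, d3, d4])

# One row per observation; the regressors of eq. (9) are log10(Delta_i).
deltas = np.vstack([delta_terms(0.12, 4e-3, 5e4, 8e4),
                    delta_terms(0.20, 8e-3, 2e5, 3e5)])
X_design = np.log10(deltas)
```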
## 4 Data Preparation

We developed a database of all public observations taken between 2015 and 2019, downloaded from the ESO SPHERE archive. This particular study focused on fitting the above-described model for the most commonly observed SPHERE/IRDIS mode, which uses an apodised Lyot coronagraph with the H2 (\(\lambda_{c}=1.593\mu\)m) filter, corresponding to the left detector in the H2/H3 dual-band mode. The general FITS headers used to filter this data are displayed in table 1. After filtering and outlier rejection (described below), train (75%) and test (25%) data sets were split to have non-overlapping observation nights, meaning that for any given sample in the train set there did not exist a sample in the test set that was observed on the same night (and vice versa). This corresponded to 149 and 47 unique stars in the train and test sets respectively, with only 4 stars shared between the two sets, totalling nearly 80k raw coronagraphic frames to analyse. A 75/25% split provided a sufficient parameter space density to perform 10-fold cross validation on the training set, while allowing sufficient samples to avoid biases in the out-of-sample test.

1\(\sigma\) noise levels were estimated as a function of radius in coronagraphic data cubes (DPR TYPE = OBJECT) after some basic reduction (e.g. background subtraction, flat fielding, bad pixel masking and high-pass filtering). The standard deviation was calculated in annuli of 4 pixel (\(\sim\lambda/D\)) width from 82-1800mas, where radii that had pixels in the non-linear regime of the IRDIS detector (ADU \(>\) 20k) were masked. Additionally, each coronagraphic (OBJECT) frame was cross-correlated with the median coronagraphic image across the filtered data set to provide an additional criterion for outlier removal. From visual inspection of individual frames, anything with a cross-correlation below 0.5 seemed to correspond to frames that had obvious issues, such as bright companions in the field or AO loops temporarily opening during an exposure. Therefore frames that had a cross-correlation with the median image of less than 0.5 were dropped from the analysis. In short exposures there were noticeable pinned speckles that were initially difficult to predict from atmospheric conditions. This was significantly improved by co-adding coronagraphic frames that had exposure times \(\leq\) 64s to roughly 64s. An example of this is shown in figure 1.

A 2D Gaussian function was then fitted to the corresponding non-coronagraphic flux frames (DPR TYPE = OBJECT,FLUX) of the observation to estimate the peak flux. The contrast curve was then calculated by dividing the 1\(\sigma\) noise levels at a given radius by the estimated peak flux, adjusting for the different integration times and neutral density filters. To correct for changing transparency between the non-coronagraphic (flux) and coronagraphic (object) frames, the measured contrast was multiplied by the ratio of aggregated wavefront sensor (SPARTA telemetry) flux data between the two periods. While no specific sky classifications (e.g. thin or thick clouds) were excluded from the data, we neglected data where there was significant (\(>\) 50%) variability in the wavefront sensor flux during an exposure, since this caused significant variability in the measured contrast that was unpredictable from a pre-observation perspective. The initial contrast curve database consisted of 27135 co-added contrast frames.
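A minimal sketch of the radial 1\(\sigma\) noise estimation described above is given below, assuming a reduced coronagraphic frame centred on the star; the function and variable names are illustrative, not the reduction pipeline used in this work.

```python
import numpy as np

def radial_noise(img, cx, cy, r_min=8, r_max=180, width=4, sat_adu=20000):
    """Estimate the 1-sigma noise in annuli of fixed pixel width.

    img: reduced coronagraphic frame; (cx, cy): star centre in pixels.
    Annuli containing pixels above sat_adu are skipped, mimicking the
    masking of the detector's non-linear regime described in the text.
    """
    yy, xx = np.indices(img.shape)
    r = np.hypot(xx - cx, yy - cy)
    radii, sigmas = [], []
    for r0 in range(r_min, r_max, width):
        ring = (r >= r0) & (r < r0 + width)
        vals = img[ring]
        if vals.size == 0 or np.nanmax(vals) > sat_adu:
            continue  # skip annuli with pixels in the non-linear regime
        radii.append(r0 + 0.5 * width)
        sigmas.append(np.nanstd(vals))
    return np.array(radii), np.array(sigmas)
```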
Each contrast curve was associated with various features, including the FITS headers from the initial files and the full range of atmospheric parameters available from the ESO MASS-DIMM and meteorological archives, which were interpolated to the mean coronagraphic frame timestamp. This included important parameters such as the atmospheric seeing and coherence time. Atmospheric data prior to the last MASS-DIMM upgrade (April 2016) were neglected due to instrumental biases between the old and new systems. Data was then further filtered to exclude observations that were outside of standard operational conditions and/or where feature outliers were detected. Basic filters required that:

* All SPARTA AO loops were closed
* The SPARTA differential tip/tilt loop was closed
* The telescope was guiding on a guide star
* Atmospheric seeing and coherence time measurements were in the ranges 0-5" and 0-30ms respectively
* The raw coronagraphic image cross-correlation with the median image was \(>\) 0.5
* No low-wind-affected data was included (typically V\(<\)3m/s in data before Nov 2017)
* The wavefront sensor flux variability did not exceed 50% between frames.

Data taken prior to the M2 spider re-coating done in November 2017 shows significantly different contrast statistics for wind speeds below 3m/s due to the low wind effect, which was largely fixed by the re-coating intervention (Milli et al., 2018; Sauvage et al., 2015). Figure 2 shows the measured raw contrast at 0.3" before and after the M2 spider re-coat for low (left plot) and nominal (right plot) wind speeds. It is clear that there was a large statistical improvement in the measured contrast from the intervention in the low wind case, while there was no statistically significant difference when observing in wind speeds \(>\)3m/s after the re-coating. Note that data taken with fast AO modes (1200Hz, 1380Hz) were neglected in these histograms to avoid biases, since the 1380Hz AO mode was an upgrade that was not used in earlier data, particularly before the M2 spider re-coat. This prompted us to neglect only data taken before November 2017 with wind speed \(<\)3m/s. After this filtering process, 8494 co-added frames remained for training and testing the model.

### Flux Model

Calibrated instrumental zero points and extinction coefficients were required to estimate the photocurrent (ADU/s) received in both the wavefront sensor and the science detector for a given SIMBAD magnitude and airmass prior to observation. Data was first filtered to consider SPHERE flux sequences and WFS data taken during periods classified as photometric, using either the LP780 or OPEN spectral filter without restrictions on the WFS spatial filter size. From this filtered data the following model was fitted:

\[M=\beta_{0}\left(2.5\log_{10}\left(\frac{F}{TG}\right)\right)+\beta_{1}X+\beta_{2} \tag{10}\]

where M is the SIMBAD magnitude of the target star; F is the flux (ADU/s) measured either in the WFS (G band), summed over all subapertures, or from the flux template of the science detector (H band); T is a scalar representing the relative transmission of the spectral filter; G is the detector gain; X is the target's airmass; and \(\beta_{0},\beta_{1},\beta_{2}\) are the fitted parameters corresponding to the telescope/instrument transfer function, extinction coefficient and zero point respectively.

\begin{table} \begin{tabular}{l c} \hline \hline Keyword & Value \\ \hline DPR TYPE & OBJECT / OBJECT,FLUX \\ INS COMB ICOR & N\_ALC\_YJH\_S \\ DPR TECH & IMAGE \\ INS1 FILT NAME & B\_H \\ INS COMB IFLT & DB\_H23 \\ INS4 FILT3 NAME & OPEN \\ \hline \end{tabular} \end{table} Table 1: Header keywords used to filter the data for the coronagraphic observations.
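To illustrate how the coefficients of equation (10) can be recovered from calibration data, here is a minimal least-squares sketch; the arrays contain placeholder values, not the photometric data used in this work.

```python
import numpy as np

# Placeholder calibration data: SIMBAD magnitudes, measured fluxes (ADU/s)
# and airmasses; T (filter transmission) and G (gain) are assumed scalars.
M = np.array([5.2, 6.8, 7.4, 8.1, 9.0])
F = np.array([2.1e5, 4.8e4, 2.9e4, 1.5e4, 6.0e3])
X_air = np.array([1.05, 1.20, 1.60, 1.10, 1.35])
T, G = 1.0, 1.0

# Equation (10) is linear in (beta0, beta1, beta2):
# M = beta0 * 2.5*log10(F/(T*G)) + beta1 * X + beta2
A = np.column_stack([2.5 * np.log10(F / (T * G)), X_air, np.ones_like(M)])
beta0, beta1, beta2 = np.linalg.lstsq(A, M, rcond=None)[0]
print(f"transfer fn {beta0:.3f}, extinction {beta1:.3f}, zero point {beta2:.3f}")
```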
Figure 1: An example of co-adding short exposure frames to 64s. Figure 2: Normalized histogram of raw contrast at 0.3" in [left] low wind conditions (\(<3\)m/s) and [right] nominal (\(>3\)m/s) wind conditions, before (red) and after (green) the M2 spider re-coat that was completed in November 2017. Fast AO modes (1200Hz, 1380Hz) were neglected in these histograms to avoid biases, since the 1380Hz AO mode was an upgrade that was not available in earlier observations before the M2 spider re-coat.

Fitted parameters from data taken in photometric conditions are outlined in table 2 and are consistent with previously measured extinction coefficients at Paranal (Patat et al., 2011). To account for sky transparency in the contrast model, sky category offsets were calibrated on the train data set via the contrast residuals of the respective sky-category-partitioned data, as described in section 3. Additionally, figure 3 displays the results when the data was partitioned into sky transparency categories as classified by the weather officer, with the above fitted photometric model applied to each respective sky category. Figure 4 shows that the mean absolute error between the measured and predicted WFS flux given the target magnitude scales monotonically with the weather officer's classification of the sky transparency. These results suggest that the models could be used for automatic sky transparency classification. This would be advantageous over the standard method of the weather officer going outside every 30 minutes to visually classify the whole sky, since the WFS measurements would be in real time directly within the SPHERE field of view.

### Model Fitting

The empirical contrast model presented in section 3 was fitted with the Python scikit-learn package (Pedregosa et al., 2011) using Bayesian regression. This model was tuned and ultimately fitted on the training data set (75% of the data) using 10-fold cross validation. The best-fit parameters are reported as the mean of the 10-fold fit on the training set for the given radius, and the respective uncertainties are \(\pm\) two standard deviations. After tuning via cross validation on the training data set, the model was evaluated on the test data set (25% of the data). The results are presented in the following sections.

## 5 Results and Discussion

Figure 5 shows the residuals of the contrast model as a function of separation from the central star for the train and test data sets, which shows good generalization to the out-of-sample test. It can also be seen that the mean model residuals were well centered near zero, with a 0.13 magnitude mean error and residual 5-95% percentiles between -0.23 and 0.64 magnitudes respectively on the test set at 300mas. Figure 6 shows the measured vs predicted raw 5\(\sigma\) contrast on the out-of-sample test set across all radii, along with the respective model RMSE at given radial separations from the central star.
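A sketch of the per-radius fitting and evaluation described in section 4.2 is given below. scikit-learn's BayesianRidge is one plausible realisation of the Bayesian regression named in the text, and the design matrices are assumed to hold the log10(\(\Delta_i\)) regressors of equation (9); neither choice is confirmed by the source beyond "scikit-learn ... Bayesian regression".

```python
import numpy as np
from sklearn.linear_model import BayesianRidge
from sklearn.model_selection import KFold

def fit_at_radius(X_train, Cm_train, X_test, Cm_test):
    """Fit eq. (9) at a single radius: the coefficient on log10(Delta_i)
    corresponds to -(5/2)*alpha_i(r). Returns mean coefficients, +/- 2 std
    uncertainties over the 10 folds, and the out-of-sample RMSE."""
    coefs = []
    for tr, _ in KFold(n_splits=10, shuffle=True, random_state=0).split(X_train):
        coefs.append(BayesianRidge().fit(X_train[tr], Cm_train[tr]).coef_)
    coefs = np.array(coefs)
    final = BayesianRidge().fit(X_train, Cm_train)
    rmse = np.sqrt(np.mean((final.predict(X_test) - Cm_test) ** 2))
    return coefs.mean(axis=0), 2.0 * coefs.std(axis=0), rmse
```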
Between 0.25"-1.00" the model achieved an RMSE between 0.17-0.40 magnitudes on the out-of-sample test set. We also analyze the model performance over an aggregated grid of atmospheric and star brightness categories in figure 7. The atmospheric categories considered are those currently used (ESO period P106) as user constraints for grading observations. The categories are defined from cuts in the joint probability distribution of atmospheric seeing and coherence time, with T.Cat10 corresponding to the best (top 10%) atmospheric conditions, and T.Cat85 to the worst. We also simply consider 3 star magnitude categories of bright (Gmag \(<\) 5), mid (5\(<\)Gmag\(<\)9) and faint (Gmag\(>\)9) targets. There is excellent agreement (\(<\)0.15mag residual at 0.5") in the mid category across all atmospheric conditions, but relatively worse performance for the bright and faint categories, particularly in better atmospheric conditions. This would indicate that the underlying assumption that pinned atmospheric residuals dominate the contrast is most valid in the mid category, while the worst performance is seen for bright and faint targets where, for example, static/quasi-static pinning may become dominant.

\begin{table} \begin{tabular}{|c||c|c|} \hline Parameter & WFS (G-Band) & H2 FILTER \\ \hline \(\beta_{0}\) & -1.057 & -0.899 \\ \(\beta_{1}\) & 0.127 & 0.147 \\ \(\beta_{2}\) & 25.260 & 17.705 \\ \hline \end{tabular} \end{table} Table 2: Fitted parameters for the WFS (G band) and science flux frame (H band) magnitude-to-flux model in photometric conditions.

Figure 3: G and H band photometric models applied to different sky transparency categories, defined as Photometric (PH), Clear (CL), Thin (TN), Thick (TH).

The fitted parameters are shown in figure 8. We find that the fitted power indices deviate considerably from the 5/3 power laws typically encountered in AO error budgets for phase residuals arising from limited spatial or temporal bandwidths. Between 250-500mas these typically range between 10-30% of the 5/3 value for the fitting and servo-lag related terms. The fitted parameters typically approach zero (minimum sensitivity) near a radius of 800mas. This corresponds to the radius of the AO correction region, which is determined by the inter-actuator spacing of the deformable mirror. Interestingly, the fitting term has two zero-crossing points, going negative (albeit very near zero) between 750-1000mas, which is around the scattering halo. This implies that contrast within these radii slightly degrades with lower seeing. Outside of this region we get the expected behaviour that contrast improves with lower seeing. However, it is clear that SPHERE contrast is much more sensitive to the atmospheric coherence time than to the coherence length (seeing). For example, at 300mas doubling the atmospheric coherence time (keeping all other variables equal) leads to an expected \(\sim 30\%\) reduction (improvement) in contrast, while doubling the Fried parameter (halving the seeing) only leads to a \(\sim 6\%\) reduction in contrast. Similar results were also found for the Gemini Planet Imager (Bailey et al., 2016). As expected, \(\alpha_{SNR-WFS}<\alpha_{SNR-SCI}\) for all radii which, as discussed in section 3, implies that contrast generally improves with brightness, assuming equal partitioning of flux between the WFS and science channels in a shot-noise-limited regime. Analysing the reddening parameter, we see that contrast generally degrades as targets become redder.
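As an arithmetic check on the sensitivities quoted above: under the power-law model, scaling an input that enters a \(\Delta\) term inversely (such as \(\tau_{0}\) or \(r_{0}\)) by a factor of two changes the contrast by \(2^{-\alpha}\). The short sketch below back-derives the implied exponents from the quoted \(\sim\)30% and \(\sim\)6% reductions, rather than reading them from figure 8:

```python
import numpy as np

# C scales as (tau/tau0)**alpha_servo, so doubling tau0 multiplies C by
# 2**(-alpha_servo); similarly for the Fried parameter via D/r0.
alpha_servo = -np.log2(1 - 0.30)  # ~0.51, implied by the ~30% improvement
alpha_fit = -np.log2(1 - 0.06)    # ~0.09, implied by the ~6% improvement
print(2 ** -alpha_servo, 2 ** -alpha_fit)  # -> 0.70, 0.94
```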
For example, around 300mas, considering the train set median of Gmag = 7, contrast degrades by roughly 20% per magnitude difference between the WFS and science channels.

To compare results to other contrast models found in the literature: we achieved a test contrast RMSE between 0.31-0.40 magnitudes between 250-600mas, or equivalently a log10 contrast RMSE of 0.13-0.16 respectively. This is comparable to the results of Savransky et al. (2018) which, when using a feedforward neural network with pre-observation data as input, achieved a test log10 contrast RMSE of 0.18 at 250mas. Such comparable results are encouraging given the relative simplicity and physical interpretability of the model presented in this work compared to more complex neural networks.

Comparing the predictions of this model at 400mas to the current SPHERE exposure time calculator (ETC) offered by ESO (as of May 2023), when considering raw image predictions (EXPTIME 64s) without differential imaging (neglecting field rotation), we see in figure 9 that the ETC appears to provide very optimistic predictions for bright targets, and pessimistic predictions for faint targets, relative to the predictions of this work's model across all turbulence categories. This model also predicts that the contrast is less sensitive to changes in H-band flux than the ETC does, which seems to also be reflected in the data at hand. The inclusion of this data in the ETC, in the short exposure time limit, could ultimately improve the predictive accuracy of the ETC for SPHERE users.

The contrast model presented here is currently being incorporated into Paranal's SCUBA software (Thomas et al., 2020) to be used as first-level quality control and to help improve the real-time decision process for SPHERE at Paranal. Figure 10 shows a real example of how this model could be used in operations to provide quick checks that the measured raw contrast (\(\sim\)60s frame) is within the expected 95% test residual range of the model, given the target and current atmospheric conditions. Statistics on the frame-by-frame contrast could then be used to grade the OB based on potential user constraints. Abnormal aberrations caused by instrumental effects that impact the contrast can also easily be detected and evaluated by users and/or operators in the context of expected performance. Based on the out-of-sample test results, at 0.3" the operator should be able to predict the contrast to within 0.5 magnitudes at a 2 sigma level.

## 6 Conclusions

A simple product-of-power-laws model was trained and tested on SPHERE/IRDIS coronagraphic data to predict contrast as a function of radius in the most commonly used H-band filter. When testing on out-of-sample test data, the model had a mean error of 0.13 magnitudes, with residual 5-95% percentiles between -0.23 and 0.64 magnitudes respectively, at 300mas. The model's test set RMSE between 250-600mas was between 0.31-0.40 magnitudes, which was on par with other state-of-the-art contrast models presented in the literature. This model is currently being incorporated into the Paranal SCUBA software for first-level quality control and real-time scheduling support. Future work will consider fitting this model to other SPHERE instrumental modes.

###### Acknowledgements.
This work has made use of the SPHERE Data Centre, jointly operated by OSUG/IPAG (Grenoble), PYTHEAS/LAM/CESAM (Marseille), OCA/Lagrange (Nice), Observatoire de Paris/LESIA (Paris), and Observatoire de Lyon.

Figure 5: Train and test contrast curve residual heatmaps (2D histograms), with sample 1D histograms shown at a 300mas radius. Figure 6: Test set results. [Left] Measured vs predicted raw 5\(\sigma\) contrasts plotted in magnitudes. [Right] RMSE for the raw 5\(\sigma\) contrast magnitude vs radius.
2305.04852
Isotonic subgroup selection
Given a sample of covariate-response pairs, we consider the subgroup selection problem of identifying a subset of the covariate domain where the regression function exceeds a pre-determined threshold. We introduce a computationally-feasible approach for subgroup selection in the context of multivariate isotonic regression based on martingale tests and multiple testing procedures for logically-structured hypotheses. Our proposed procedure satisfies a non-asymptotic, uniform Type I error rate guarantee with power that attains the minimax optimal rate up to poly-logarithmic factors. Extensions cover classification, isotonic quantile regression and heterogeneous treatment effect settings. Numerical studies on both simulated and real data confirm the practical effectiveness of our proposal, which is implemented in the R package ISS.
Manuel M. Müller, Henry W. J. Reeve, Timothy I. Cannings, Richard J. Samworth
2023-05-08T16:50:07Z
http://arxiv.org/abs/2305.04852v2
# Isotonic subgroup selection ###### Abstract Given a sample of covariate-response pairs, we consider the subgroup selection problem of identifying a subset of the covariate domain where the regression function exceeds a pre-determined threshold. We introduce a computationally-feasible approach for subgroup selection in the context of multivariate isotonic regression based on martingale tests and multiple testing procedures for logically-structured hypotheses. Our proposed procedure satisfies a non-asymptotic, uniform Type I error rate guarantee with power that attains the minimax optimal rate up to poly-logarithmic factors. Extensions cover classification, isotonic quantile regression and heterogeneous treatment effect settings. Numerical studies on both simulated and real data confirm the practical effectiveness of our proposal, which is implemented in the R package ISS. ## 1 Introduction In regression settings, _subgroup selection_ refers to the challenge of identifying a subset of the covariate domain on which the regression function satisfies a particular property of interest. This is a post-selection inference problem, since the region is to be selected after seeing the data, and yet we still wish to claim that with high probability, the regression function satisfies this property on the selected set. Important applications can be found in precision medicine, for instance, where the chances of a desirable health outcome may be highly heterogeneous across a population, and hence the risk for a particular individual may be masked in a study representing the entire population. A natural strategy for identifying such group-specific effects is to divide a study into two stages, where the first stage is used to identify a potentially interesting subset of the covariate domain, and the second attempts to verify that it does indeed have the desired property (Stallard et al., 2014). However, such a two-stage process may often be both time-consuming and potentially expensive due to the inefficient use of the data, and moreover the binary second-stage verification may fail. In such circumstances, we are unable to identify a further subset of the original selected set on which the property does hold. In many applications, heterogeneity across populations may be characterised by monotonicity of a regression function in individual covariates. For instance, age, smoking, hypertension and obesity are among known risk factors for coronary heart disease (Torpy et al., 2009), while for individuals with hypertrophic cardiomyopathy, risk factors for sudden cardiac death (SCD) include family history of SCD, maximal heart wall thickness and left atrial diameter (O'Mahony et al., 2014). It is frequently of interest to identify a subset of the population deemed to be at low or high risk, for instance to determine an appropriate course of treatment. This amounts to identifying an appropriate superlevel set of the regression function. In this paper, we introduce a framework that allows the identification of the \(\tau\)-superlevel set of an isotonic regression function, for some pre-determined level \(\tau\). A key component of our formulation of the problem is to recognise that often there is an asymmetry to the two errors of including points that do not belong to the superlevel set, and failing to include points that do. 
For instance, in the case of hypertrophic cardiomyopathy, a false conclusion that an individual is at low risk of sudden cardiac death within five years, and hence does not require an implantable cardioverter defibrillator (O'Mahony et al., 2014), is more serious than the opposite form of error, which obliges a patient to undergo surgery and deal with the inconveniences of the implanted device. To introduce our isotonic subgroup selection setting, suppose that we are given \(n\) independent copies of a covariate-response pair \((X,Y)\) having a distribution on \(\mathbb{R}^{d}\times\mathbb{R}\) with coordinate-wise increasing regression function \(\eta\) given by \(\eta(x):=\mathbb{E}(Y|X=x)\) for \(x\in\mathbb{R}^{d}\). Thus \(Y=\eta(X)+\varepsilon\), where we additionally assume that \(\varepsilon\) is sub-Gaussian conditional on \(X\). Given a threshold \(\tau\in\mathbb{R}\), and writing \(\mathcal{X}_{\tau}(\eta):=\{x\in\mathbb{R}^{d}:\eta(x)\geq\tau\}\) for the \(\tau\)-superlevel set of \(\eta\), we seek to output an estimate \(\hat{A}\) of \(\mathcal{X}_{\tau}(\eta)\) with the first priority that it guards against the more serious of the two errors mentioned above. Without loss of generality, we take this more serious error to be including points in \(\hat{A}\) that do not belong to \(\mathcal{X}_{\tau}(\eta)\), and we therefore require Type I error control in the sense that \(\hat{A}\subseteq\mathcal{X}_{\tau}(\eta)\) with probability at least \(1-\alpha\), for some pre-specified \(\alpha\in(0,1)\). Subject to this constraint, we would like \(\mu(\hat{A})\) to be as large as possible, where \(\mu\) denotes the marginal distribution of \(X\). Figure 1: A visualisation with \(d=2\) and \(n=1000\). The unknown regression function is rescaled to the interval \([0,1]\), depicted by the multi-coloured surface. The unknown grey surface gives the \(0.5\)-superlevel set, of which the red area is selected by our proposed procedure \(\hat{A}^{\rm ISS}\). One plausible strategy to achieve this goal is to construct a one-sided, uniform confidence band for \(\eta\), and output the set on which the lower confidence limit is at least \(\tau\). Unfortunately, however, such an approach tends to have sub-optimal empirical performance (see Section 5), because the lower confidence bound is required to protect against exceeding \(\eta(x)\) at all points \(x\) in the covariate domain, whereas it is only points close to the boundary of the \(\tau\)-superlevel set for which there is significant doubt about their inclusion. We therefore adopt a different approach, and seek to compute at each observation a \(p\)-value for the null hypothesis that the regression function is below \(\tau\) based on an anytime-valid martingale procedure (Duan et al., 2020; Howard et al., 2021). The monotonicity of the regression function implies logical relationships between these hypotheses, but it is far from obvious how to combine the \(p\)-values effectively, particularly in the multivariate case, where we do not have a natural total ordering on \(\mathbb{R}^{d}\). Our strategy is to introduce a tailored multiple testing procedure with familywise error rate control, building on ideas of Goeman and Solari (2010) and Meijer and Goeman (2015). 
This allows us to construct our final output set \(\hat{A}^{\text{ISS}}\) as the upper hull of the observations corresponding to the rejected hypotheses; see Section 2 for a more formal description of our proposed procedure, which is both computationally feasible and does not require the choice of any smoothing parameters. Our methodology is implemented in the R package ISS(Muller et al., 2023); an illustration in a bivariate example is given in Figure 1. Our first theoretical result, in Section 3.1, verifies that \(\hat{A}^{\text{ISS}}\) does indeed control Type I error in the sense outlined above. We then turn our attention to power in Section 3.2, and provide both high-probability and expectation bounds on \(\mu\big{(}\mathcal{X}_{\tau}(\eta)\setminus\hat{A}^{\text{ISS}}\big{)}\). Our bound decomposes as a sum of two terms, where the first reflects the error incurred in determining whether each data point belongs to \(\mathcal{X}_{\tau}(\eta)\), and depends on the growth rate of the regression function as we move further into the \(\tau\)-superlevel set from its boundary. The second term represents the error arising from the uncertainty of whether or not regions between the data points belong to this superlevel set. Our final theoretical contribution, in Section 3.3, reveals that \(\hat{A}^{\text{ISS}}\) attains the optimal power in the sense of minimising \(\mathbb{E}\big{\{}\mu\big{(}\mathcal{X}_{\tau}(\eta)\setminus\hat{A}^{\text{ ISS}}\big{)}\big{\}}\) up to poly-logarithmic factors, among all procedures that control the Type I error. In Section 4, we present various extensions that broaden the scope of our methodology. First, in Section 4.1, we describe alternative \(p\)-values that can be used, and that may yield more power for small and moderate sample sizes. Section 4.2 introduces three variants of \(\hat{A}^{\text{ISS}}\) that are tailored to specific settings including Gaussian errors, classification and heavy-tailed errors. Finally, in Section 4.3, we show how our proposal can be extended to cover heterogeneous treatment effect settings. Section 5 is devoted to a study of the empirical performance of \(\hat{A}^{\text{ISS}}\) in a wide range of settings, with 14 regression functions chosen to illustrate different characteristics of interest, as well as different sample sizes and dimensions. The broad conclusion across these many scenarios is that, compared with various alternative approaches, \(\hat{A}^{\text{ISS}}\) has the most power for isotonic subgroup selection. In Section 6, we illustrate the performance of \(\hat{A}^{\text{ISS}}\) on two real datasets, the first of which is taken from the AIDS Clinical Trials Group Study 175 (ACTG 175) (Juraska et al., 2022). Here, we consider two problems: first, we seek to identify a low-risk subgroup and, second, in the context of heterogeneous treatment effects, we aim to identify a subgroup of patients for whom a new therapy is at least as effective as the baseline medication. The second dataset concerns fuel consumption (Quinlan, 1993), where we seek to identify fuel-efficient cars based on their weight and engine displacement. The appendix consists of proofs of all of our main results, as well as statements and proofs of auxiliary results, further simulations and a discussion of an alternative and general approach to combining the \(p\)-values due to Meijer and Goeman (2015). 
Although this strategy is often highly effective in multiple testing problems, we show that, surprisingly, it has sub-optimal worst-case performance in our isotonic subgroup selection setting. Isotonic regression has a long history dating back to Ayer et al. (1955), Brunk (1955) and van Eeden (1956). Much recent interest has focused on risk bounds and oracle inequalities, which have been derived by Meyer and Woodroofe (2000), Zhang (2002), Chatterjee (2014), Chatterjee et al. (2015), Bellec (2018), Han et al. (2019), Deng and Zhang (2020), Fokianos et al. (2020) and Pananjady and Samworth (2022). Pointwise asymptotic confidence intervals in multivariate isotonic regression have been proposed by Deng et al. (2021), while confidence bands in the univariate case have been studied by Yang and Barber (2019). In the clinical trials community, the dangers of the naive approach to subgroup selection that ignores the key post-selection inference issue have been well understood for many years (Senn and Harrell, 1997; Feinstein, 1998; Rothwell, 2005; Wang et al., 2007; Kaufman and MacLehose, 2013; Altman, 2015; Zhang et al., 2015; Gabler et al., 2016; Lipkovich et al., 2017; Watson and Holmes, 2020). Valid approaches that control Type I error in the sense above have been proposed by Ballarini et al. (2018) and Wan et al. (2022) in the context of linear regression, and Reeve et al. (2021) for a smoothly-varying regression function. The asymmetry of the two losses in our framework has some similarities with that of Neyman-Pearson classification (Cannon et al., 2002; Scott and Nowak, 2005; Tong et al., 2016; Xia et al., 2021). There, covariate-response pairs \((X,Y)\) take values in \(\mathbb{R}^{d}\times\{0,1\}\), and we seek a classifier \(C:\mathbb{R}^{d}\to\{0,1\}\) that minimises \(\mathbb{P}\big{(}C(X)=0|Y=1\big{)}\) subject to an upper bound on \(\mathbb{P}\big{(}C(X)=1|Y=0\big{)}\). In addition to allowing continuous responses, another key difference of our paradigm is that we incur a Type I error whenever our selected set \(\hat{A}\) contains a single point that does not belong to the \(\tau\)-superlevel set of the regression function. In other words, instead of controlling averages over sub-populations, our framework provides guarantees at an individual level, which is ethically advantageous, e.g. in medical contexts. To conclude the introduction, we collect some notation used throughout the paper. Notation.For \(n\in\mathbb{N}\), let \([n]:=\{1,\ldots,n\}\) and let \([0]:=\emptyset\). Write \(x\wedge y:=\min(x,y)\) and \(x\lor y:=\max(x,y)\) for \(x,y\in\mathbb{R}\). Further, let \(x_{+}:=x\lor 0\) and \(\log_{+}x:=\log(x\lor e)\) for \(x\in\mathbb{R}\). Denote by \(\|\cdot\|_{\infty}\) the supremum norm on \(\mathbb{R}^{d}\), and given \(x\in\mathbb{R}^{d}\) and \(r>0\), define the closed supremum norm ball by \(B_{\infty}(x,r):=\{z\in\mathbb{R}^{d}:\|z-x\|_{\infty}\leq r\}\). For \(x_{1}=(x_{1}^{(1)},\ldots,x_{1}^{(d)})^{\top},x_{2}=(x_{2}^{(1)},\ldots,x_{2}^ {(d)})^{\top}\in\mathbb{R}^{d}\), we write \(x_{1}\preccurlyeq x_{2}\) (or, equivalently, \(x_{2}\succcurlyeq x_{1}\)) if \(x_{1}^{(j)}\leq x_{2}^{(j)}\) for all \(j\in[d]\). A function \(f:\mathbb{R}^{d}\to\mathbb{R}\) is said to be (coordinate-wise) _increasing_ if \(f(x_{0})\leq f(x_{1})\) whenever \(x_{0}\preccurlyeq x_{1}\). A set \(U\subseteq\mathbb{R}^{d}\) is called an _upper set_ if, whenever \(x\in U\) and \(x\preccurlyeq x^{\prime}\), we have \(x^{\prime}\in U\). 
Given \(A\subseteq\mathbb{R}^{d}\), the _upper hull_ of \(A\) is the intersection of all upper sets that contain \(A\). For a Borel probability measure \(\mu\) on \(\mathbb{R}^{d}\), we let \(\operatorname{supp}(\mu)\) denote the _support_ of \(\mu\), i.e., the intersection of all closed sets \(C\subseteq\mathbb{R}^{d}\) with \(\mu(C)=1\). For \(\tau\in\mathbb{R}\) and \(f:\mathbb{R}^{d}\to\mathbb{R}\), we let \(\mathcal{X}_{\tau}(f):=\{x\in\mathbb{R}^{d}:f(x)\geq\tau\}\). A _graph_\(G=(I,E)\) consists of a non-empty, finite set \(I\) of _vertices_ and a set \(E\subseteq I\times I\) of _edges_. We say \(G\) is _directed_ if \((i,j)\in E\) does not imply \((j,i)\in E\). A _directed path_ from \(i\in I\) to \(j\in I\) is a collection of distinct vertices \(i_{0},i_{1},\ldots,i_{m}\in I\) for some \(m\in\mathbb{N}\) with \(i_{0}=i\) and \(i_{m}=j\) such that \((i_{k-1},i_{k})\in E\) for all \(k\in[m]\). A _cycle_ is a directed path from \(i\in I\) to itself. A _directed acyclic graph (DAG)_ is a directed graph that does not contain any cycles. Given a DAG \(G=(I,E)\), we write \(L(G):=\{i\in I:(i,i^{\prime})\notin E\text{ for all }i^{\prime}\in I\}\) for the set of its _leaf_ nodes and \(\{i\in I:(i^{\prime},i)\notin E\text{ for all }i^{\prime}\in I\}\) for its _root_ nodes. For \(i\in I\), let \(\operatorname{pa}_{G}(i):=\{i^{\prime}\in I:(i^{\prime},i)\in E\}\) denote the set of _parents_ of node \(i\) and, similarly, write \(\operatorname{ch}_{G}(i):=\{i^{\prime}\in I:i\in\operatorname{pa}_{G}(i^{ \prime})\}\) for the set of _children_ of node \(i\). Further, defining \(\operatorname{an}_{G}^{1}(i):=\operatorname{pa}_{G}(i)\) and \(\operatorname{an}_{G}^{k+1}(i):=\bigcup_{j\in\operatorname{an}_{G}^{k}(i)} \operatorname{pa}_{G}(j)\) for \(k\in\mathbb{N}\), we can define \(\operatorname{an}_{G}(i):=\bigcup_{k\in\mathbb{N}}\operatorname{an}_{G}^{k}(i)\) to be the set of _ancestors_ of node \(i\). Similarly, let \(\operatorname{de}_{G}(i):=\{i^{\prime}\in I:i\in\operatorname{an}_{G}(i^{ \prime})\}\) denote the set of _descendants_ of node \(i\). A _reverse topological ordering_ of a DAG \(G=(I,E)\) with \(I=[m]\) is a permutation \(\pi_{G}:I\to I\) such that if \(i\in I\) and \(i^{\prime}\in\operatorname{an}_{G}(i)\), then \(\pi_{G}(i)<\pi_{G}(i^{\prime})\). Any directed graph is acyclic if and only if it has a reverse topological ordering. We remark that all of these definitions remain unchanged when applied to a _weighted DAG_\(G=(I,E,\mathbf{w})\), i.e. a DAG \((I,E)\) equipped with edge weights \(\mathbf{w}=(w_{e}\geq 0:e\in E)\), and that any unweighted DAG may implicitly be assumed to be a weighted DAG with unit weights. A DAG \(G=(I,E)\) is a _polyforest_ if \(|\operatorname{pa}_{G}(i)|\leq 1\) for all \(i\in I\), and a weighted DAG \(G=(I,E,\mathbf{w})\) is a _polyforest-weighted DAG_ if \(F:=(I,\{e\in E:w_{e}>0\})\) is a polyforest. ## 2 Methodology Let \(P\) denote a distribution on \(\mathbb{R}^{d}\times\mathbb{R}\), and let \((X,Y)\sim P\). Suppose that the regression function \(\eta:\mathbb{R}^{d}\to\mathbb{R}\), defined by \(\eta(x):=\mathbb{E}(Y|X=x)\), is increasing, and that the conditional distribution of \(Y-\eta(X)\) given \(X\) is sub-Gaussian1 with variance parameter \(\sigma^{2}\). 
Given an independent and identically distributed sample \((X_{1},Y_{1}),\ldots,(X_{n},Y_{n})\sim P\), a threshold \(\tau\in\mathbb{R}\) and a nominal Type I error rate \(\alpha\in(0,1)\), we would like to identify a Borel measurable set \(\hat{A}\subseteq\mathbb{R}^{d}\) such that with probability at least \(1-\alpha\), we have \(\eta(x)\geq\tau\) for all \(x\in\hat{A}\). Subject to this constraint, we would like \(\mu(\hat{A})\) to be as large as possible, where \(\mu\) denotes the marginal distribution of \(X\). An important observation is that, since \(\mathcal{X}_{\tau}(\eta)\) is an upper set, replacing \(\hat{A}\) with its upper hull does not increase the Type I error probability, and may increase its \(\mu\)-measure. Footnote 1: Recall that a random variable \(Z\) is _sub-Gaussian with variance parameter \(\sigma^{2}\)_ if \(\mathbb{E}(e^{tZ})\leq e^{\sigma^{2}t^{2}/2}\) for every \(t\in\mathbb{R}\). Our general strategy is initially to focus on a subset of observations, and seek to compute a \(p\)-value at each of these observations for the null hypothesis that the regression function is below \(\tau\). We can then carefully combine these \(p\)-values using a multiple testing procedure having familywise error rate control over our structured hypotheses, and finally output the upper hull of the covariate observations corresponding to the rejected hypotheses. More precisely, for \(m\in[n]\), and writing \(\mathcal{D}_{X,m}:=(X_{1},\ldots,X_{m})\) for our reduced sample with the shorthand \(\mathcal{D}_{X}:=\mathcal{D}_{X,n}\), we will construct \(p\)-values \(\mathbf{p}:=(p_{i})_{i\in[m]}\) having the property that \(\mathbb{P}(p_{i}\leq t|\mathcal{D}_{X})\leq t\) for every \(t\in[0,1]\) whenever \(\eta(X_{i})<\tau\). Next, we show how to exploit these \(p\)-values to obtain a set \(\mathcal{R}_{\alpha}(\mathcal{D}_{X,m},\mathbf{p})\subseteq[m]\) of rejected hypotheses having the familywise error control property that with probability at least \(1-\alpha\), we have \(\eta(X_{i})\geq\tau\) for every \(i\in\mathcal{R}_{\alpha}(\mathcal{D}_{X,m},\mathbf{p})\). Our final output set, \(\hat{A}\), is the upper hull of \(\{X_{i}:i\in\mathcal{R}_{\alpha}(\mathcal{D}_{X,m},\mathbf{p})\}\). It remains to describe how we propose to construct the \(p\)-values, and to control the familywise error, and to this end, we first focus on the case \(d=1\) for simplicity of exposition. One difference between the univariate and multivariate cases is that we take \(m=n\) in the former. Consider the null hypothesis that \(\eta(x)<\tau\) for some \(x\in\mathbb{R}\), so that \(\eta(X_{i})<\tau\) whenever \(X_{i}\leq x\). Write \(\mathcal{I}(x):=\{i\in[n]:X_{i}\leq x\}\) and \(n(x):=|\mathcal{I}(x)|\), and, for \(j\in[n(x)]\), let \(X_{(j)}(x)\) denote the \(j\)th nearest neighbour of \(x\) among \(\{X_{i}:i\in\mathcal{I}(x)\}\), where for definiteness ties are broken by retaining the original ordering. Writing \(Y_{(1)}(x),\ldots,Y_{(n(x))}(x)\) for the concomitant responses, under the null and conditional on \(\mathcal{D}_{X}\), the process \[S_{k}\equiv S_{k}(x,\sigma,\tau,\mathcal{D}):=\sum_{j=1}^{k}\frac{Y_{(j)}(x)- \tau}{\sigma} \tag{1}\] for \(k\in[n(x)]\) is a supermartingale with negative mean. Thus, large values of \(S_{k}\) for some \(k\in[n(x)]\) provide evidence against the null.
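To make the univariate construction concrete, a minimal sketch of the partial sums in (1) is given below; the time-uniform boundary \(v_{k}\) is deliberately left abstract, since it depends on the chosen bound from Howard et al. (2021) (one explicit inversion appears in Definition 1 below).

```python
import numpy as np

def martingale_stats(X, Y, x, tau, sigma):
    """Partial sums S_k of (1) for the null hypothesis eta(x) < tau (d = 1).

    Only observations with X_i <= x enter, ordered by distance from x
    (i.e. in decreasing order of X_i); a stable sort retains the original
    index ordering on ties, as in the text.
    """
    idx = np.where(X <= x)[0]
    order = idx[np.argsort(np.abs(X[idx] - x), kind="stable")]
    increments = (Y[order] - tau) / sigma
    return np.cumsum(increments)  # S_1, ..., S_{n(x)}
```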
One could also consider the alternative supermartingale \(S_{k}^{\prime}:=\sum_{j=1}^{k}\bigl{(}Y_{(n(x)+1-j)}(x)-\tau\bigr{)}/\sigma\) for \(k\in[n(x)]\), but our approach has the advantage that \(S_{k}\) stochastically dominates \(S_{k}^{\prime}\), so will have at least the same power. Tests based on supermartingales such as \((S_{k})\) are known as _martingale tests_ (Duan et al., 2020) and time-uniform upper boundaries \((v_{k})\) are known for a variety of families of increment distributions (Howard et al., 2021); thus, \(v_{k}\equiv v_{k}(\alpha)\) has the property that \(\mathbb{P}\bigl{(}\max_{k\in[n(x)]}(S_{k}-v_{k})\geq 0\bigr{)}\leq\alpha\) under the null hypothesis \(\eta(x)<\tau\). These inequalities can be inverted to yield a \(p\)-value; see Figure 2. Definition 1 below extends these ideas to the general multivariate case.

Turning now to familywise error rate (FWER) control, and working conditional on \(\mathcal{D}_{X}\) with \(d=1\), consider \(p\)-values \(\mathbf{p}:=(p_{i})_{i\in[n]}\) constructed as above for testing the null hypotheses \(H_{i}:\eta(X_{i})<\tau\) for \(i\in[n]\). One approach to controlling the FWER at level \(\alpha\) is to reject only hypotheses \(H_{i}\) with \(i\in\mathcal{R}^{\text{FS}}_{\alpha}(\mathcal{D}_{X},\mathbf{p})\), where

\[\mathcal{R}^{\text{FS}}_{\alpha}(\mathcal{D}_{X},\mathbf{p}):=\{j\in[n]\colon p_{ k}\leq\alpha\text{ for all }k\text{ with either }X_{k}>X_{j}\text{ or both }X_{k}=X_{j}\text{ and }k\leq j\}.\]

Controlling the FWER by employing an a priori ordering is known as a _fixed sequence procedure_ (Westfall and Krishen, 2001; Hsu and Berger, 1999), and explains the superscript FS in the notation \(\mathcal{R}^{\text{FS}}_{\alpha}(\mathcal{D}_{X},\mathbf{p})\). Writing \(H_{(i)}:\eta\bigl{(}X_{(i)}(\max_{j\in[n]}X_{j})\bigr{)}<\tau\) and \(p_{(i)}\) for the corresponding \(p\)-value, this approach can be seen as a sequential procedure that, starting with \(i=1\), stops and does not reject \(H_{(i)}\) if \(p_{(i)}>\alpha\) and otherwise rejects \(H_{(i)}\) before proceeding to \(i+1\), where the step is repeated. Here, the order in which we decide whether hypotheses should be rejected is motivated by the fact that \(H_{(i)}\subseteq H_{(i+1)}\) for \(i\in[n-1]\). A computationally-efficient implementation of this procedure only needs to calculate \(p_{(i+1)}\) if \(H_{(i)}\) has been rejected.

Figure 2: A schematic illustration of the proposed martingale test with \(d=1\) and \(\sigma=1\). Left: Raw data, where we denote \(Z_{j}:=\bigl{(}X_{(j)}(x),Y_{(j)}(x)\bigr{)}\) and \(\Delta_{j}:=Y_{(j)}(x)-\tau\) for \(j\in[n(x)]\); note that the illustrated point \(Z_{0}\) does not enter the martingale because its first component exceeds \(x\). Right: The martingale \((S_{k})\), where \(\bigl{(}v_{k}(\alpha)\bigr{)}_{k\in[n(x)]}\) denotes a suitable time-uniform upper boundary; here, \(S_{4}\geq v_{4}(\alpha)\), so we would reject the null hypothesis \(\eta(x)<\tau\) with a \(p\)-value satisfying \(v_{4}(p)=S_{4}\).

We now extend the presented ideas to the general case \(d\in\mathbb{N}\). The construction of the \(p\)-values follows a similar approach as above, but the order in which the responses enter the supermartingale sequence is now determined by the supremum norm distance of the corresponding covariates from \(x\).
**Definition 1**.: _Given \(x\in\mathbb{R}^{d}\), \(\tau\in\mathbb{R}\), \(\sigma>0\) and \(\mathcal{D}=\big{(}(X_{1},Y_{1}),\ldots,(X_{n},Y_{n})\big{)}\in(\mathbb{R}^{d} \times\mathbb{R})^{n}\) write \(\mathcal{I}(x)\equiv\mathcal{I}(x,\mathcal{D}_{X}):=\{i\in[n]:X_{i}\preccurlyeq x\}\) and again \(n(x)\equiv n(x,\mathcal{D}_{X}):=|\mathcal{I}(x,\mathcal{D}_{X})|\). Further, for \(j\in[n(x)]\), let \(X_{(j)}(x)\) denote the \(j\)th nearest neighbour in \(\{X_{i}:i\in\mathcal{I}(x)\}\) of \(x\) in supremum norm, with ties broken by retaining the original ordering of the indices, and let \(Y_{(1)}(x),\ldots,Y_{(n(x))}(x)\) denote the concomitant responses. Defining \(S_{k}\equiv S_{k}(x,\sigma,\tau,\mathcal{D})\) for \(k\in[n(x)]\) as in (1), we then set_ \[\hat{p}_{\sigma,\tau}(x)\equiv\hat{p}_{\sigma,\tau}(x,\mathcal{D}):=1\wedge \min_{k\in[n(x)]}5.2\exp\biggl{\{}-\frac{(S_{k}\lor 0)^{2}}{2.0808k}+\frac{ \log\log(2k)}{0.72}\biggr{\}},\] _whenever \(n(x)>0\), and \(\hat{p}_{\sigma,\tau}(x,\mathcal{D}):=1\) otherwise._ Lemma 5 below shows that \(\hat{p}_{\sigma,\tau}(x,\mathcal{D})\) is indeed a \(p\)-value for the null hypothesis \(\eta(x)<\tau\). We now proceed to the issue of FWER control in the multivariate setting, where we again condition on \(\mathcal{D}_{X}\). Recall that we are interested in testing the hypotheses \(H_{i}:\eta(X_{i})<\tau\) for \(i\in[m]\) and a pre-specified \(m\in[n]\). The fact that \(\preccurlyeq\) induces only a partial order on \(\mathbb{R}^{d}\) when \(d>1\) means that there is no natural generalisation of the univariate fixed sequence testing procedure. Instead, we structure the hypotheses in a directed acyclic graph (DAG), with the edges in the graph representing logical relationships between hypotheses; such an approach has been studied in the literature to control both the FWER (Meijer and Goeman, 2015) and the false discovery rate (Ramdas et al., 2019)2. The following definitions will be useful in the construction of an efficient multiple testing procedure. Footnote 2: In a different but related approach, a graph structure can be used to encode a ranking of hypotheses beyond a strict logical ordering (Bretz et al., 2009). **Definition 2** (Induced DAGs and polyforests).: _Let \(\boldsymbol{z}=(z_{1},\ldots,z_{m})\in(\mathbb{R}^{d})^{m}\)._ 1. _The_ induced DAG__\(\mathcal{G}(\boldsymbol{z})=\big{(}[m],\mathcal{E}(\boldsymbol{z})\big{)}\) _is the graph with nodes_ \([m]\) _and edges_ \[\mathcal{E}(\boldsymbol{z}):=\big{\{}(i_{0},i_{1})\in[m]^{2}:i_{0 }\neq i_{1}\text{ and }z_{i_{1}}\preccurlyeq z_{i_{0}}\text{, and if }z_{i_{1}}\preccurlyeq z_{i_{2}}\preccurlyeq z_{i_{0}}\text{ then either}\] \[z_{i_{2}}=z_{i_{0}}\text{ and }i_{0}\leq i_{2}\text{, or }z_{i_{2}}=z_{i_{1}}\text{ and }i_{2}\leq i_{1}\big{\}}.\] 2. _The_ induced polyforest__\(\mathcal{G}_{\mathrm{F}}(\boldsymbol{z})=\big{(}[m],\mathcal{E}_{\mathrm{F}}( \boldsymbol{z})\big{)}\) _is the subgraph of_ \(\mathcal{G}(\boldsymbol{z})\) _with nodes_ \([m]\) _and edges_3__ Footnote 3: Here, sargmin refers to the smallest element of the argmin set. \[\mathcal{E}_{\mathrm{F}}(\boldsymbol{z}):=\Big{\{}(i_{0},i_{1})\in\mathcal{E}( \boldsymbol{z}):i_{0}=\operatorname*{\text{sargmin}}_{i:(i,i_{1})\in\mathcal{E}( \boldsymbol{z})}\|X_{i}-X_{i_{1}}\|_{\infty}\Big{\}}.\] 3. 
_The_ induced polyforest-weighted DAG _is_ \(\mathcal{G}_{\mathrm{W}}(\boldsymbol{z}):=\big{(}[m],\mathcal{E}(\boldsymbol{z} ),\boldsymbol{w}(\boldsymbol{z})\big{)}\)_, where_ \(\boldsymbol{w}(\boldsymbol{z})=(w_{e})_{e\in\mathcal{E}(\boldsymbol{z})}\) _is given by_ \(w_{e}:=1_{\{e\in\mathcal{E}_{\mathrm{F}}(\boldsymbol{z})\}}\)_._ From the definition, we see that the induced polyforest-weighted DAG encodes the complete information of both \(\mathcal{G}(\mathbf{z})\) and \(\mathcal{G}_{\mathrm{F}}(\mathbf{z})\), as illustrated by Figure 3(a), where \(d=2\) and each node represents the hypothesis corresponding to the observation at its location. **Definition 3** (DAG testing procedure).: _A DAG testing procedure \(\mathcal{R}\) is a function that takes as input a significance level \(\alpha\in(0,1]\), a weighted DAG \(G=(I,E,\mathbf{w})\) and \(\mathbf{p}=(p_{i})_{i\in I}\in(0,1]^{I}\), and outputs a subset \(\mathcal{R}_{\alpha}(G,\mathbf{p})\subseteq I\)._ The fixed sequence procedure presented for \(d=1\) is a DAG testing procedure since it only exploits the natural ordering information in \(\mathcal{D}_{X}\), though in that case we wrote the first argument of \(\mathcal{R}_{\alpha}^{\mathrm{FS}}\) as the set of nodes in the DAG rather than the full DAG for simplicity. In arbitrary dimensions, the methods proposed by Bretz et al. (2009), Meijer and Goeman (2015) and Ramdas et al. (2019) are DAG testing procedures. While the Meijer and Goeman (2015) procedure both controls the FWER and accounts for logical relationships between the hypotheses, theoretical and empirical power considerations lead us to propose a new approach that can be regarded as a sparsified version of the Meijer and Goeman (2015) procedure or as an extension of the sequential rejection procedures of Bretz et al. (2009). In order to describe our proposed iterative DAG testing procedure \(\mathcal{R}^{\mathrm{ISS}}\), write \(G:=\mathcal{G}(\mathcal{D}_{X,m})\) for the induced DAG and \(F:=\mathcal{G}_{\mathrm{F}}(\mathcal{D}_{X,m})\) for the induced polyforest. We begin by splitting our \(\alpha\)-budget across the root nodes, with each such node receiving budget proportional to its number of leaf node descendants in \(F\) (including the node itself if it is a leaf node). We reject each root node hypothesis whose \(p\)-value is at most its \(\alpha\)-budget, and whenever we do so, we also reject its ancestors in the original \(G\) (which does not inflate the Type I error, due to the logical ordering of the hypotheses). The rejected root nodes are then removed from \(F\), and we repeat the process iteratively, stopping when either we have rejected all hypotheses, or if we fail to reject any additional hypotheses at a given iteration. Formal pseudocode to compute \(\mathcal{R}^{\mathrm{ISS}}\) is given in Algorithm 1; see also Figure 3 for an illustration. The DAG testing procedure \(\mathcal{R}^{\mathrm{ISS}}\) allows us to define the corresponding isotonic subgroup selection set \[\hat{A}^{\mathrm{ISS}}\equiv\hat{A}^{\mathrm{ISS}}_{\sigma,\tau,\alpha,m}( \mathcal{D}):=\big{\{}x\in\mathbb{R}^{d}:\!X_{i_{0}}\preccurlyeq x\text{ for some }i_{0}\in\mathcal{R}^{\mathrm{ISS}}_{\alpha}\big{(} \mathcal{G}_{\mathrm{W}}(\mathcal{D}_{X,m}),\big{(}\hat{p}_{\sigma,\tau}(X_{i },\mathcal{D})\big{)}_{i\in[m]}\big{)}\big{\}}.\] Pseudocode for computing \(\hat{A}^{\mathrm{ISS}}\) is given in Algorithm 2. 
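As a concrete illustration of how Definition 1 and the selection set \(\hat{A}^{\mathrm{ISS}}\) fit together, here is a minimal sketch; the multiple-testing step of Algorithm 1 is elided and represented by a placeholder set of rejected indices, and ties in the nearest-neighbour ordering are handled by a stable sort, as in Definition 1.

```python
import numpy as np

def p_value(X, Y, x, tau, sigma):
    """Sketch of hat{p}_{sigma,tau}(x, D) from Definition 1.

    X: (n, d) covariates; Y: (n,) responses; x: (d,) query point.
    """
    below = np.where((X <= x).all(axis=1))[0]        # indices with X_i <<= x
    if below.size == 0:
        return 1.0
    dists = np.max(np.abs(X[below] - x), axis=1)     # supremum-norm distances
    order = below[np.argsort(dists, kind="stable")]  # nearest first, stable ties
    S = np.cumsum((Y[order] - tau) / sigma)
    k = np.arange(1, below.size + 1)
    p = 5.2 * np.exp(-np.maximum(S, 0.0) ** 2 / (2.0808 * k)
                     + np.log(np.log(2.0 * k)) / 0.72)
    return float(min(1.0, p.min()))

def in_selected_set(x, X, rejected):
    """x lies in the upper hull A^ISS iff X_{i0} <<= x for some rejected i0."""
    return any((X[i0] <= x).all() for i0 in rejected)
```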
In Section 3 below, we establish that \(\hat{A}^{\mathrm{ISS}}\) controls the Type I error uniformly over appropriate distributional classes, and moreover has optimal worst-case power up to poly-logarithmic factors. Figure 3: Illustration of Algorithm 1 with \(\alpha=0.05\). Nodes are numbered according to one potential reverse topological ordering of the induced weighted DAG. The \(p\)-value for each hypothesis is given in round brackets. Solid arrows represent edges with weight 1 in the induced polyforest-weighted DAG, whereas dashed arrows represent those with weight 0. Each iteration of the procedure corresponds to one panel. A filled circle indicates that the hypothesis has been rejected in a previous step. If a node is assigned positive \(\alpha\)-budget at the current iteration, then its (rounded) budget is given in purple to the top left. ## 3 Theory ### Type I error control We first introduce the class of distributions over which we prove the Type I error control of \(\hat{A}^{\text{ISS}}\). **Definition 4**.: _Given \(\sigma>0\), let \(\mathcal{P}_{\text{Mon},d}(\sigma)\) denote the class of all distributions \(P\) on \(\mathbb{R}^{d}\times\mathbb{R}\) with increasing regression function \(\eta:\mathbb{R}^{d}\to\mathbb{R}\), and for which, when \((X,Y)\sim P\), the conditional distribution of \(Y-\eta(X)\) given \(X\) is sub-Gaussian with variance parameter \(\sigma^{2}\)._ Our Type I error control relies on showing first that \(\hat{p}_{\sigma,\tau}(x,\mathcal{D})\) is indeed a \(p\)-value for testing the null hypothesis \(\eta(x)<\tau\) when \(P\in\mathcal{P}_{\text{Mon},d}(\sigma)\) and then that the DAG testing procedure \(\mathcal{R}^{\text{ISS}}\) controls the FWER in the sense defined in Definition 7 below. The next lemma accomplishes the first of these tasks (in fact, it shows that \(\hat{p}_{\sigma,\tau}(x,\mathcal{D})\) is a \(p\)-value even conditional on \(\mathcal{D}_{X}\)). We write \(P^{n}\) for the \(n\)-fold product measure corresponding to \(P\). **Lemma 5**.: _Given any \(x\in\mathbb{R}^{d}\), \(\tau\in\mathbb{R}\), \(\sigma>0\), \(P\in\mathcal{P}_{\mathrm{Mon},d}(\sigma)\) such that \(\eta(x)<\tau\) and \(\mathcal{D}=\big{(}(X_{1},Y_{1}),\ldots,(X_{n},Y_{n})\big{)}\sim P^{n}\), we have \(\mathbb{P}_{P}\big{\{}\hat{p}_{\sigma,\tau}(x,\mathcal{D})\leq\alpha|\mathcal{ D}_{X}\big{\}}\leq\alpha\) for all \(\alpha\in(0,1)\)._ We now direct our attention towards the DAG testing procedure \(\mathcal{R}^{\mathrm{ISS}}\). Here, it is convenient to introduce the following terminology. **Definition 6**.: _Given a weighted, directed graph \(G=(I,E,\boldsymbol{w})\), we say that a subset \(I_{0}\subseteq I\) is \(G\)-lower if whenever \(i_{0}\in I_{0}\) we have \(\mathrm{de}_{G}(i_{0})\subseteq I_{0}\). Conversely, \(I_{0}\subseteq I\) is called \(G\)-upper if \(I\setminus I_{0}\) is \(G\)-lower._ Given a finite set \(I\), a family of distributions \(\mathcal{Q}\) on \((0,1]^{I}\) and a finite collection of null hypotheses \(H_{i}\subseteq\mathcal{Q}\) for \(i\in I\), let \(G_{0}:=(I,E_{0})\) with \(E_{0}:=\{(i_{0},i_{1})\in I^{2}:H_{i_{0}}\subseteq H_{i_{1}}\}\) be the directed graph that encodes all logical relationships between hypotheses. Then for any \(Q\in\mathcal{Q}\), the true null index set \(I_{0}(Q):=\{i\in I:Q\in H_{i}\}\) is necessarily a \(G_{0}\)-lower set. Conversely, the index set of false null hypotheses must be a \(G_{0}\)-upper set. We say that a polyforest-weighted DAG \((I,E,\boldsymbol{w})\) is \(G_{0}\)_-consistent_ if \(E\subseteq E_{0}\). 
Multiple testing procedures that reject hypotheses corresponding to a \(G_{0}\)-upper set are called _coherent_(Gabriel, 1969, p. 229), and by construction, \(\mathcal{R}^{\mathrm{ISS}}\) is indeed coherent when applied to a \(G_{0}\)-consistent polyforest-weighted DAG. We are now in a position to formalise the concept of FWER control for DAG testing procedures. **Definition 7**.: _A DAG testing procedure \(\mathcal{R}\) controls the FWER if given any finite set \(I\), a family of distributions \(\mathcal{Q}\) on \((0,1]^{I}\), a collection of random variables \(\boldsymbol{p}=(p_{i})_{i\in I}\) taking values in \((0,1]^{I}\), as well as hypotheses \(H_{i}\subseteq\{Q\in\mathcal{Q}:\mathbb{P}_{Q}(p_{i}\leq t)\leq t,\forall t\in (0,1]\}\) for \(i\in I\) and any \(G_{0}\)-consistent polyforest-weighted DAG \(G^{\prime}=(I,E,\boldsymbol{w})\), we have \(\mathbb{P}_{Q}\big{(}\mathcal{R}_{\alpha}(G^{\prime},\boldsymbol{p})\cap I_{ 0}(Q)\neq\emptyset\big{)}\leq\alpha\) for all \(\alpha\in(0,1)\) and \(Q\in\mathcal{Q}\)._ **Lemma 8**.: _The DAG testing procedure \(\mathcal{R}^{\mathrm{ISS}}\) defined by Algorithm 1 controls the FWER._ The strategy of the proof of Lemma 8 is based on ideas in the proof of Goeman and Solari (2010, Theorem 1). Combining Lemmas 5 and 8 yields our Type I error guarantee: **Theorem 9**.: _For any \(d\in\mathbb{N}\), \(n\in\mathbb{N}\), \(m\in[n]\), \(\alpha\in(0,1)\), \(\tau\in\mathbb{R}\), \(\sigma>0\), and \(P\in\mathcal{P}_{\mathrm{Mon},d}(\sigma)\), along with \(\mathcal{D}=\big{(}(X_{1},Y_{1}),\ldots,(X_{n},Y_{n})\big{)}\sim P^{n}\), we have_ \[\mathbb{P}_{P}\big{(}\hat{A}_{\sigma,\tau,\alpha,m}^{\mathrm{ISS}}(\mathcal{D} )\subseteq\mathcal{X}_{\tau}(\eta)\bigm{|}\mathcal{D}_{X}\big{)}\geq 1-\alpha.\] Let \(\hat{\mathcal{A}}_{n}(\tau,\alpha,\mathcal{P})\) denote the family of _data-dependent selection sets_\(\hat{A}\) (i.e. Borel measurable functions from \((\mathbb{R}^{d}\times\mathbb{R})^{n}\) to the set of Borel subsets of \(\mathbb{R}^{d}\)) that control the Type I error rate at level \(\alpha\in(0,1)\) over the family \(\mathcal{P}\) of distributions on \(\mathbb{R}^{d}\times\mathbb{R}\). In other words, we write \(\hat{A}\in\hat{\mathcal{A}}_{n}(\tau,\alpha,\mathcal{P})\) if \[\mathbb{P}_{P}\big{(}\hat{A}(\mathcal{D})\subseteq\mathcal{X}_{\tau}(\eta) \big{)}\geq 1-\alpha \tag{2}\] for all \(P\in\mathcal{P}\) with \(\mathcal{D}\sim P^{n}\). An immediate consequence of Theorem 9 is that \(\hat{A}_{\sigma,\tau,\alpha,m}^{\mathrm{ISS}}\in\hat{\mathcal{A}}_{n}\big{(} \tau,\alpha,\mathcal{P}_{\mathrm{Mon},d}(\sigma)\big{)}\). In fact, an inspection of the proof of Theorem 9 (see also Lemma 25) reveals that \(\hat{A}_{\sigma,\tau,\alpha,m}^{\mathrm{ISS}}\) controls the Type I error over a larger class. Indeed, writing \(\mathcal{P}_{\mathrm{Upp},d}(\tau,\sigma)\) for the class of distributions of pairs \((X,Y)\) such that the \(\tau\)-superlevel set of the regression function \(\eta:\mathbb{R}^{d}\to\mathbb{R}\) is an upper set and, again, the conditional distribution of \(Y-\eta(X)\) given \(X\) is sub-Gaussian with variance parameter \(\sigma^{2}\), it follows from the proof that \(\hat{A}^{\mathrm{ISS}}_{\sigma,\tau,\alpha,m}\in\hat{\mathcal{A}}_{n}\big{(}\tau,\alpha,\mathcal{P}_{\mathrm{Upp},d}(\tau,\sigma)\big{)}\). 
We have \(\mathcal{P}_{\mathrm{Mon},d}(\sigma)=\bigcap_{\tau^{\prime}\in\mathbb{R}} \mathcal{P}_{\mathrm{Upp},d}(\tau^{\prime},\sigma)\), but the regression functions of distributions in \(\mathcal{P}_{\mathrm{Upp},d}(\tau,\sigma)\) for a fixed \(\tau\in\mathbb{R}\) may deviate from monotonicity as long as the \(\tau\)-superlevel set remains an upper set. In this sense, our procedure \(\hat{A}^{\mathrm{ISS}}_{\sigma,\tau,\alpha,m}\) is robust to misspecification of the monotonicity of the regression function. ### Power Classical results on Gaussian testing reveal that merely asking for \(P\in\mathcal{P}_{\mathrm{Mon},d}(\sigma)\) is insufficient to be able to provide non-trivial uniform power guarantees for data-dependent selection sets with Type I error control (see Proposition 26 in Appendix A.2 for details). The main issue here is that the marginal distribution \(\mu\) may place a lot of mass in regions where \(\eta\) is only slightly above \(\tau\), and these regions will be hard for a data-dependent selection set to include if it has Type I error control. In this section, therefore, we introduce a margin condition that controls the \(\mu\)-measure of these difficult regions. **Definition 10**.: _For \(d\in\mathbb{N}\), \(\tau\in\mathbb{R}\), \(\beta>0\) and \(\nu>0\), let \(\mathcal{P}_{\mathrm{Mar},d}(\tau,\beta,\nu)\) denote the class of distributions \(P\) on \(\mathbb{R}^{d}\times\mathbb{R}\) for which the marginal \(\mu\) on \(\mathbb{R}^{d}\) and the regression function \(\eta:\mathbb{R}^{d}\to\mathbb{R}\) satisfy \(\mu\big{(}\eta^{-1}([\tau,\tau+\nu\xi^{\beta}])\big{)}\leq\xi\) for all \(\xi\in(0,1]\)._ **Example 1**.: _Let \(d=1\) and let \(P\in\mathcal{P}_{\mathrm{Mon},1}(\sigma)\) have uniform marginal distribution \(\mu\) on \([0,1]\) and regression function \(\eta\). We then have \(P\in\mathcal{P}_{\mathrm{Mon},1}(\sigma)\cap\mathcal{P}_{\mathrm{Mar},1}(\tau,\beta,\nu)\) if \(\eta(x+\xi)\geq\tau+\nu\xi^{\beta}\) for all \(\xi\in(0,1]\) and \(x\in\mathcal{X}_{\tau}(\eta)\)._ We now divide our power analysis for \(\hat{A}^{\mathrm{ISS}}\) into univariate and multivariate cases, since the natural total order on \(\mathbb{R}\) means that our results simplify a little when \(d=1\). Theorem 11 below provides high-probability and expectation upper bounds on the _regret_\(\mu\big{(}\mathcal{X}_{\tau}(\eta)\setminus\hat{A}^{\mathrm{ISS}}\big{)}\) for \(P\in\mathcal{P}_{\mathrm{Mon},1}(\sigma)\cap\mathcal{P}_{\mathrm{Mar},1}(\tau,\beta,\nu)\). **Theorem 11**.: _Let \(\sigma,\beta,\nu>0\) and \(\alpha\in(0,1)\). 
There exists a universal constant \(C>0\) such that for any distribution \(P\in\mathcal{P}_{\mathrm{Mon},1}(\sigma)\cap\mathcal{P}_{\mathrm{Mar},1}(\tau,\beta,\nu)\) and \(\delta\in(0,1)\), we have_ \[\mathbb{P}_{P}\bigg{[}\mu\big{(}\mathcal{X}_{\tau}(\eta)\setminus\hat{A}^{ \mathrm{ISS}}_{\sigma,\tau,\alpha,n}(\mathcal{D})\big{)}>1\wedge C\bigg{\{} \bigg{(}\frac{\sigma^{2}}{n\nu^{2}}\log_{+}\Big{(}\frac{\log_{+}n}{\alpha \wedge\delta}\Big{)}\bigg{)}^{1/(2\beta+1)}+\frac{\log_{+}(1/\delta)}{n} \bigg{\}}\bigg{]}\leq\delta,\] _and_ \[\mathbb{E}_{P}\big{\{}\mu\big{(}\mathcal{X}_{\tau}(\eta)\setminus\hat{A}^{ \mathrm{ISS}}_{\sigma,\tau,\alpha,n}(\mathcal{D})\big{)}\big{\}}\leq 1\wedge C\bigg{\{} \bigg{(}\frac{\sigma^{2}}{n\nu^{2}}\log_{+}\Big{(}\frac{\log_{+}n}{\alpha} \Big{)}\bigg{)}^{1/(2\beta+1)}+\frac{1}{n}\bigg{\}}.\] From Theorem 11, we see that the regret of \(\hat{A}^{\mathrm{ISS}}\) decomposes as a sum of two terms: the first reflects the error incurred in determining whether each data point belongs to \(\mathcal{X}_{\tau}(\eta)\), while the second represents the error arising from the uncertainty of whether or not regions between the data points belong to this superlevel set. The combination of Theorem 11 with Proposition 14 below and Theorem 17 in Section 3.3 reveals that the dependence of our bound on the parameters \(n\), \(\alpha\), \(\sigma\), \(\beta\) and \(\nu\) is optimal up to an iterated logarithmic factor in \(n\). In the proof of Theorem 11, we exploit the fact that \(\mathcal{D}_{X}\) has a total order in the univariate case, so the corresponding induced polyforest-weighted DAG forms a directed path in which each edge has weight 1. Since in our algorithm, the \(p\)-values \(\hat{p}_{\sigma,\tau}\) for coinciding hypotheses are equal, \(\mathcal{R}^{\text{ISS}}\) is equivalent to the fixed sequence procedure \(\mathcal{R}^{\text{FS}}\). Turning now to the multivariate case, we begin with a negative result, which reveals that we can find distributions in our class for which no data-dependent selection set with Type I error control performs better than the trivial procedure that ignores the data, and selects the entire domain with probability \(\alpha\) and the empty set otherwise. **Proposition 12**.: _Let \(d\geq 2\), \(\tau\in\mathbb{R}\), \(\sigma,\beta,\nu>0\) and \(\alpha\in(0,1)\). Then, writing \(\mathcal{P}^{\prime}:=\mathcal{P}_{\operatorname{Mon},d}(\sigma)\cap\mathcal{ P}_{\operatorname{Mar},d}(\tau,\beta,\nu)\), we have for any \(n\in\mathbb{N}\) that_ \[\sup_{P\in\mathcal{P}^{\prime}}\inf_{\hat{A}\in\mathcal{A}_{n}(\tau,\alpha, \mathcal{P}^{\prime})}\mathbb{E}_{P}\big{\{}\mu\big{(}\mathcal{X}_{\tau}(\eta )\setminus\hat{A}(\mathcal{D})\big{)}\big{\}}\geq 1-\alpha.\] An interesting feature of Proposition 12 is the ordering of the supremum over distributions in our class and the infimum over data-dependent selection sets. Usually, with minimax lower bounds, these would appear in the opposite order, but here we are able to establish the stronger conclusion, because it is the same subfamily of \(\mathcal{P}_{\operatorname{Mon},d}(\sigma)\cap\mathcal{P}_{\operatorname{Mar},d}(\tau,\beta,\nu)\) that causes the poor performance of any data-dependent selection set with Type I error control. In fact, by examining the proof, we see that the issue is caused by constructing a marginal distribution \(\mu\) that concentrates its mass around a large antichain4 in \([0,1]^{d}\), which constitutes the boundary of \(\mathcal{X}_{\tau}(\eta)\). 
This motivates us to regulate the extent to which this is allowed to happen. Footnote 4: Recall that an _antichain_ in \(\mathbb{R}^{d}\) is a set \(\mathbb{W}\) such that we do not have \(x\preccurlyeq x^{\prime}\) for any \(x,x^{\prime}\in\mathbb{W}\). It is the fact that antichains of arbitrary size exist in \([0,1]^{d}\) when \(d\geq 2\) that is essential to this construction; when \(d=1\), any antichain must be a singleton.

**Definition 13**.: _Given \(d\in\mathbb{N}\), \(\tau\in\mathbb{R}\), \(\theta>1\), \(\gamma,\lambda>0\), we let \(\mathcal{P}_{\operatorname{Reg},d}(\tau,\theta,\gamma,\lambda)\) denote the class of all distributions \(P\) on \(\mathbb{R}^{d}\times\mathbb{R}\) with marginal \(\mu\) on \(\mathbb{R}^{d}\) and associated regression function \(\eta\) such that_

_(i)_ \(\theta^{-1}\cdot r^{d}\leq\mu\big{(}B_{\infty}(x,r)\big{)}\leq\theta\cdot(2r)^{d}\) _for all_ \(x\in\mathcal{X}_{\tau}(\eta)\cap\operatorname{supp}(\mu)\) _and_ \(r\in(0,1]\)_;_

_(ii)_ \(B_{\infty}(x,r)\cap\mathcal{X}_{\tau+\lambda r^{\gamma}}(\eta)\neq\emptyset\) _for all_ \(x\in\mathcal{X}_{\tau}(\eta)\cap\operatorname{supp}(\mu)\) _and_ \(r\in(0,1]\)_._

For distributions in the \(\mathcal{P}_{\operatorname{Mon},d}(\sigma)\) class, Definition 13 represents a slight strengthening of the margin condition used in our univariate analysis, as made precise by Proposition 14 below.

**Proposition 14**.: _Let \(d\in\mathbb{N}\), \(\tau\in\mathbb{R}\), \(\sigma,\gamma,\lambda>0\) and \(\theta>1\). There exists \(C\geq 1\), depending only on \((d,\theta)\), such that_

\[\mathcal{P}_{\operatorname{Mon},d}(\sigma)\cap\mathcal{P}_{\operatorname{Reg},d}(\tau,\theta,\gamma,\lambda)\subseteq\mathcal{P}_{\operatorname{Mon},d}(\sigma)\cap\mathcal{P}_{\operatorname{Mar},d}(\tau,\gamma,\lambda/C^{\gamma}).\]

Thus, \((\gamma,\lambda)\) in the class \(\mathcal{P}_{\operatorname{Reg},d}(\tau,\theta,\gamma,\lambda)\) play a similar but not identical role to \((\beta,\nu)\) in \(\mathcal{P}_{\operatorname{Mar},d}(\tau,\beta,\nu)\), in controlling the way in which the regression function is required to grow as we move away from the boundary of the \(\tau\)-superlevel set. We are now in a position to state our main result concerning the power of our proposed procedure; the result holds in all dimensions but our primary interest here is in the multivariate case.

**Theorem 15**.: _Let \(d\in\mathbb{N}\), \(\tau\in\mathbb{R}\), \(\sigma,\gamma,\lambda>0\) and \(\theta>1\).
There exists \(C\geq 1\), depending only on \((d,\theta)\), such that for any \(P\in\mathcal{P}_{\mathrm{Mon},d}(\sigma)\cap\mathcal{P}_{\mathrm{Reg},d}(\tau, \theta,\gamma,\lambda)\), \(n\in\mathbb{N}\), \(\alpha\in(0,1)\) and \(\delta\in(0,1)\), along with \(\mathcal{D}=\big{(}(X_{1},Y_{1}),\ldots,(X_{n},Y_{n})\big{)}\sim P^{n}\), we have for \(m\in[n]\) that_ \[\mathbb{P}_{P}\bigg{[}\mu\big{(}\mathcal{X}_{\tau}(\eta)\setminus\hat{A}^{ \mathrm{ISS}}_{\sigma,\tau,\alpha,m}(\mathcal{D})\big{)}>1\wedge C\bigg{\{} \bigg{(}\frac{\sigma^{2}}{n\lambda^{2}}\log_{+}\Big{(}\frac{m\log_{+}n}{\alpha \wedge\delta}\Big{)}\bigg{)}^{\frac{1}{2\gamma+d}}+\bigg{(}\frac{\log_{+}(m/ \delta)}{m}\bigg{)}^{\frac{1}{d}}\bigg{\}}\bigg{]}\leq\delta,\] _and_ \[\mathbb{E}_{P}\big{\{}\mu\big{(}\mathcal{X}_{\tau}(\eta)\setminus\hat{A}^{ \mathrm{ISS}}_{\sigma,\tau,\alpha,m}(\mathcal{D})\big{)}\big{\}}\leq 1\wedge C \bigg{\{}\bigg{(}\frac{\sigma^{2}}{n\lambda^{2}}\log_{+}\Big{(}\frac{m\log_{+} n}{\alpha}\Big{)}\bigg{)}^{1/(2\gamma+d)}+\bigg{(}\frac{\log_{+}m}{m}\bigg{)}^{1/d} \bigg{\}}.\] The terms in the bound in Theorem 15 are similar to those in Theorem 11, and exhibit the trade-off in the choice of \(m\): if we choose it to be small, then there are fewer data points in our subsample that belong to \(\mathcal{X}_{\tau}(\eta)\) and moreover these are less likely to be excluded from \(\hat{A}^{\mathrm{ISS}}\) because they are typically assigned greater budget in our DAG testing procedure. On the other hand, we incur a greater loss in excluding regions between data points in our subsample that belong to this superlevel set. By specialising Theorem 15 via a particular choice of \(m\), we obtain the following almost immediate corollary. It is this upper bound to which we will compare our minimax lower bound in Theorem 17. **Corollary 16**.: _Under the conditions of Theorem 15, if we take \(m_{0}:=n\wedge\lceil n\lambda^{2}/\sigma^{2}\rceil\), then_ \[\mathbb{E}_{P}\big{\{}\mu\big{(}\mathcal{X}_{\tau}(\eta)\setminus\hat{A}^{ \mathrm{ISS}}_{\sigma,\tau,\alpha,m_{0}}(\mathcal{D})\big{)}\big{\}}\leq 1 \wedge 4C\bigg{\{}\bigg{(}\frac{\sigma^{2}}{n\lambda^{2}}\log_{+}\Big{(}\frac{n \lambda^{2}\log_{+}n}{\sigma^{2}\alpha}\Big{)}\bigg{)}^{1/(2\gamma+d)}+\bigg{(} \frac{\log_{+}n}{n}\bigg{)}^{1/d}\bigg{\}}.\] As is apparent from the proof of Corollary 16, a high-probability bound analogous to that in Theorem 15 also holds, but this is omitted for brevity. In practice, one can take \(m=n\) (as we do in our simulations in Section 5), with a corresponding power bound obtained as a special case of Theorem 15. Corollary 16 suggests a curse of dimensionality effect in isotonic subgroup selection; this is confirmed as an essential price to pay by Theorem 17 below. ### Main lower bound In order to discuss the optimality of our data-dependent selection set \(\hat{A}^{\mathrm{ISS}}\), we present a minimax lower bound that provides a benchmark on the regret that is achievable by any data-dependent selection set with Type I error control. **Theorem 17**.: _Let \(d\in\mathbb{N}\), \(\tau\in\mathbb{R}\), \(\sigma,\gamma,\lambda>0\) and \(\theta>1\). 
Then, writing \(\mathcal{P}^{\prime}:=\mathcal{P}_{\mathrm{Mon},d}(\sigma)\cap\mathcal{P}_{ \mathrm{Reg},d}(\tau,\theta,\gamma,\lambda)\), there exists \(c\in(0,1)\), depending only on \((d,\gamma)\), such that for any \(n\in\mathbb{N}\) and \(\alpha\in(0,1/4]\), we have_ \[\inf_{\hat{A}\in\mathcal{A}_{n}(\tau,\alpha,\mathcal{P}^{\prime})}\sup_{P\in \mathcal{P}^{\prime}}\mathbb{E}_{P}\big{\{}\mu\big{(}\mathcal{X}_{\tau}(\eta) \setminus\hat{A}(\mathcal{D})\big{)}\big{\}}\geq c\bigg{[}1\wedge\bigg{\{} \bigg{(}\frac{\sigma^{2}}{n\lambda^{2}}\log_{+}\Big{(}\frac{1}{5\alpha}\Big{)} \bigg{)}^{1/(2\gamma+d)}+\frac{1}{n^{1/d}}\bigg{\}}\bigg{]}. \tag{3}\] By comparing the rate in Theorem 17 with those in Theorem 11 and Corollary 16, we see that \(\hat{A}^{\mathrm{ISS}}\) attains the optimal regret among procedures with Type I error control, up to poly-logarithmic factors. In particular, up to such factors, these results reveal the optimal dependence of the regret not only on \(n\), but also on \(\sigma\), \(\lambda\) and \(\alpha\). It is interesting to note that Theorem 17 incorporates procedures that are only required to control Type I error over \(\mathcal{P}_{\mathrm{Mon},d}(\sigma)\cap\mathcal{P}_{\mathrm{Reg},d}(\tau,\theta,\gamma,\lambda)\), whereas \(\hat{A}^{\mathrm{ISS}}\) has Type I error control over the larger class \(\mathcal{P}_{\mathrm{Mon},d}(\sigma)\), by Theorem 9. Thus, \(\hat{A}^{\mathrm{ISS}}\) suffers no deterioration in performance for this stronger validity guarantee, at least up to poly-logarithmic factors. The proof of Theorem 17 combines two minimax lower bounds, given in Propositions 31 and 33, which provide the different terms in the sum in (3). The main idea in both cases is to divide \([0,1]^{d}\) into a hypercube lattice, and to construct pairs of distributions where either the regression function (Proposition 31) or the marginal distribution (Proposition 33) only differ in a single hypercube among a collection whose centres lie on a large antichain in \(\mathbb{R}^{d}\). Observations outside these critical hypercubes therefore do not help to distinguish between the distributions in a pair, so by choosing the number of hypercubes and the difference in the regression function levels appropriately, we obtain a non-trivial probability of failing to include them in a data-dependent selection set. Our formal constructions, together with illustrations, are given in Section A.3. ## 4 Extensions ### Choice of \(p\)-value construction Recall that in our isotonic subgroup selection procedure \(\hat{A}^{\mathrm{ISS}}\), we propose a \(p\)-value based on a martingale test in combination with a _finite law of the iterated logarithm (LIL) bound_ (Lemma 45_(a)_). The following definition gives an alternative \(p\)-value construction that uses a different bound and includes a hyperparameter \(\rho>0\). **Definition 18**.: _In the setting of Definition 1, for \(\rho>0\), we define_ \[\tilde{p}^{\rho}_{\sigma,\tau}(x)\equiv\tilde{p}^{\rho}_{\sigma,\tau}(x, \mathcal{D}):=1\wedge\min_{k\in[n(x)]}\sqrt{\frac{k+\rho}{4\rho}}\bigg{\{} \mathrm{exp}\bigg{(}\frac{(S_{k}\lor 0)^{2}}{2(k+\rho)}\bigg{)}-1\bigg{\}}^{-1}\] _whenever \(n(x)>0\), and \(\tilde{p}^{\rho}_{\sigma,\tau}(x,\mathcal{D}):=1\) otherwise._ By Lemma 45_(b)_, which is due to Howard et al. (2021), in combination with the proof technique of Lemma 5, \(\tilde{p}^{\rho}_{\sigma,\tau}(x)\) is indeed a \(p\)-value for the null hypothesis \(\eta(x)<\tau\) (even conditional on \(\mathcal{D}_{X}\)). 
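For concreteness, the minimisation in Definition 18 can be computed in a single pass over \(k\); below is a hedged Python sketch of our own (the function name and the convention of passing the ordered responses \(Y_{(1)}(x),\ldots,Y_{(n(x))}(x)\) directly are assumptions).

```python
import math

def p_tilde(ys, tau, sigma, rho=0.5):
    """Sketch of the p-value in Definition 18; rho = 1/2 is a robust default."""
    if not ys:
        return 1.0                                   # n(x) = 0
    p, s = 1.0, 0.0
    for k, y in enumerate(ys, start=1):
        s += (y - tau) / sigma                       # running statistic S_k
        expo = max(s, 0.0) ** 2 / (2 * (k + rho))
        if expo > 0.0:                               # S_k <= 0 gives an infinite value
            den = math.expm1(min(expo, 700.0))       # exp(x) - 1, capped for overflow
            p = min(p, math.sqrt((k + rho) / (4 * rho)) / den)
    return min(p, 1.0)
```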
It follows that if we modify our procedure to use these \(p\)-values instead of those in Definition 1, then the Type I error guarantee in Theorem 9 is unaffected. The objective function being minimised over \(k\in[n(x)]\) is a little smaller for the original \(p\)-value definition when \(k\) is sufficiently large, and this therefore leads to a stronger power bound in Theorem 15. Nevertheless, for appropriate values of \(\rho>0\), Definition 18 may yield a slightly smaller objective for small and moderate values of \(k\), and hence may be preferable in practice. Based on some preliminary simulations, we found that the power of our approach varied very little over choices of \(\rho\in(0,1]\), and that \(\rho=1/2\) was a robust choice that we used throughout our experiments in Section 5 and recommend for practical use. ### Alternative distributional assumptions In this subsection, we introduce three variants of \(\hat{A}^{\mathrm{ISS}}\), each of which is able to control Type I error over appropriate classes without knowledge of any nuisance parameter and without the need for sample splitting. In each case, we retain the same multiple testing component \(\mathcal{R}^{\text{ISS}}\) to our procedure, but construct the \(p\)-values in different ways. Power results analogous to Theorem 15 also hold for these versions of \(\hat{A}^{\text{ISS}}\), but are omitted for brevity. #### 4.2.1 Gaussian noise with unknown variance Since Algorithm 2 takes the sub-Gaussian variance parameter \(\sigma^{2}\) as an input, we present here an adaptive approach for a Gaussian error setting. More precisely, for \(\sigma>0\), let \(\mathcal{P}_{\text{N},d}(\sigma)\) denote the subset of \(\mathcal{P}_{\text{Mon},d}(\sigma)\) with \(Y-\eta(X)|X\sim\mathcal{N}(0,\sigma^{2})\). **Definition 19**.: _In the setting of Definition 1, let \(\hat{\sigma}_{0,k}^{2}:=k^{-1}\sum_{j=1}^{k}\bigl{(}Y_{(j)}(x)-\tau\bigr{)}_{+} ^{2}\) and \(\bar{Y}_{1,k}:=k^{-1}\sum_{j=1}^{k}Y_{(j)}(x)\) for \(k\in[n(x)]\) and \(\hat{\sigma}_{1,k}^{2}:=k^{-1}\sum_{j=1}^{k}\bigl{(}Y_{(j)}(x)-\bar{Y}_{1,k} \bigr{)}^{2}\) for \(k\in\{2,\ldots,n(x)\}\). Moreover, we denote \(\bar{Y}_{1,0}:=0\), and \(\hat{\sigma}_{1,k}^{2}:=1\) for \(k\in\{0,1\}\). For \(k\in[n(x)]\), define_ \[\bar{p}_{\tau}^{k}(x)\equiv\bar{p}_{\tau}^{k}(x,\mathcal{D}):=\frac{1}{\hat{ \sigma}_{0,k}^{k}e^{k/2}}\cdot\prod_{j=1}^{k}\hat{\sigma}_{1,j-1}\exp\biggl{\{} \frac{\bigl{(}Y_{(j)}(x)-\bar{Y}_{1,j-1}\bigr{)}^{2}}{2\hat{\sigma}_{1,j-1}^{2 }}\biggr{\}},\] _where for definiteness \(\bar{p}_{\tau}^{k}(x):=1\) if \(\hat{\sigma}_{0,k}=0\), and \(\bar{p}_{\tau}(x)\equiv\bar{p}_{\tau}(x,\mathcal{D}):=1\wedge\min_{k\in[n(x)]} \bar{p}_{\tau}^{k}(x)\)._ The idea here is that \(\bar{p}_{\tau}^{k}(x)\) exploits the sequential likelihood ratio test principle developed by Wasserman et al. (2020), applied to a notion of a \(t\)-test for a stream of independent normal random variables with varying means. In particular, \(\hat{\sigma}_{0,k}^{2}\) and \(\hat{\sigma}_{1,k}^{2}\) are maximum likelihood estimators of \(\sigma^{2}\) under the null hypothesis that \(\max_{j\in[k]}\eta\bigl{(}X_{(j)}(x)\bigr{)}<\tau\) and without this constraint respectively. The next lemma is analogous to Lemma 5 and guarantees that \(\bar{p}_{\tau}(x)\) is a \(p\)-value. **Lemma 20**.: _Let \(x\in\mathbb{R}^{d}\), \(\tau\in[0,1)\), \(P\in\cup_{\sigma\in(0,\infty)}\mathcal{P}_{\text{N},d}(\sigma)\) with \(\eta(x)<\tau\) and \(\mathcal{D}=\bigl{(}(X_{1},Y_{1}),\ldots,(X_{n},Y_{n})\bigr{)}\sim P^{n}\). 
Then \(\mathbb{P}_{P}\bigl{\{}\bar{p}_{\tau}(x,\mathcal{D})\leq\alpha|\mathcal{D}_{ X}\bigr{\}}\leq\alpha\) for all \(\alpha\in(0,1)\)._ It is worth highlighting that the proof of Lemma 20 relies on the fact that \(\{x\in\mathbb{R}^{d}:\eta(x)<\tau\}\) is a lower set, but otherwise does not use the monotonicity of \(\eta\). From this, we can deduce that an analogous robustness to misspecification holds to that presented at the end of Section 3.1. #### 4.2.2 Classification Our second variant is tailored to the case of bounded responses, which in particular includes classification settings. Suppose that \((X,Y)\sim P\) for some distribution \(P\) on \(\mathbb{R}^{d}\times[0,1]\) with increasing regression function \(\eta\) on \(\mathbb{R}^{d}\). By Hoeffding's lemma, the sub-Gaussianity condition of Definition 4 is satisfied with \(\sigma=1/2\), so that \(\hat{A}^{\text{ISS}}\) may be used to control the Type I error over \(\mathcal{P}_{\text{Mon},d}(1/2)\). However, in this context, it suffices to control the Type I error over the subclass \(\mathcal{P}_{\text{Bdd},d}\) of \(\mathcal{P}_{\text{Mon},d}(1/2)\) consisting of distributions \(P\) on \(\mathbb{R}^{d}\times[0,1]\) with increasing regression function. In such a setting, we may combine our procedure with the following modified \(p\)-value construction. Recall that for \(z\in(0,1)\) and \(a,b>0\), the _incomplete beta function_ is defined by \(\text{B}(z;a,b):=\int_{0}^{z}t^{a-1}(1-t)^{b-1}\,dt\). **Definition 21**.: _In the setting of Definition 1, let \(\check{S}_{k}\equiv\check{S}_{k}(x,\mathcal{D}):=\sum_{j=1}^{k}Y_{(j)}(x)\) and define_ \[\check{p}_{\tau}(x)\equiv\check{p}_{\tau}(x,\mathcal{D}):=1\wedge\min_{k\in[n (x)]}\frac{\tau^{\check{S}_{k}}(1-\tau)^{n-\check{S}_{k}+1}}{\mathrm{B}(1- \tau;n-\check{S}_{k}+1,\check{S}_{k}+1)}.\] The following lemma confirms that this indeed defines a \(p\)-value (even conditional on \(\mathcal{D}_{X}\)). Its proof proceeds via a one-sided version of the time-uniform confidence sequence construction of Robbins (1970), which is itself based on earlier work by Ville (1939) and Wald (1947). **Lemma 22**.: _Let \(x\in\mathbb{R}^{d}\), \(\tau\in[0,1)\), \(P\in\mathcal{P}_{\mathrm{Bdd},d}\) with \(\eta(x)<\tau\) and \(\mathcal{D}=\big{(}(X_{1},Y_{1}),\ldots,(X_{n},Y_{n})\big{)}\sim P^{n}\). Then \(\mathbb{P}_{P}\big{\{}\check{p}_{\tau}(x,\mathcal{D})\leq\alpha|\mathcal{D}_{ X}\big{\}}\leq\alpha\) for all \(\alpha\in(0,1)\)._ As a consequence of Lemma 22, combining \(\hat{A}^{\mathrm{ISS}}\) with the \(p\)-values \(\check{p}_{\tau}\) still controls the Type I error over \(\mathcal{P}_{\mathrm{Bdd},d}\). In a similar spirit to the discussion at the end of Section 3.1, both this conclusion and Lemma 22 hold over the even larger class \(\mathcal{P}_{\mathrm{Bdd}\mathrm{Upp},d}(\tau)\supseteq\mathcal{P}_{\mathrm{ Bdd},d}\) of distributions \(P\) on \(\mathbb{R}^{d}\times[0,1]\) with regression function \(\eta\) such that \(\mathcal{X}_{\tau}(\eta)\) is an upper set (see Lemma 36). #### 4.2.3 Increasing conditional quantiles Finally in this subsection, we present an alternative assumption on the conditional response distribution that motivates a version of \(\hat{A}^{\mathrm{ISS}}\) that is robust to heavy tails. 
**Definition 23**.: _Given \(\theta\in(0,1)\), a distribution \(P\) on \(\mathbb{R}^{d}\times\mathbb{R}\) and \((X,Y)\sim P\), let \(\zeta_{\theta}:\mathbb{R}^{d}\to\mathbb{R}\) denote the conditional \(\theta\)-quantile given by \(\zeta_{\theta}(x):=\inf\bigl{\{}y\in\mathbb{R}:\mathbb{P}_{P}\bigl{(}Y\leq y|X=x\bigr{)}\geq\theta\bigr{\}}\) for \(x\in\mathbb{R}^{d}\). Now let \(\mathcal{P}_{\mathrm{Q},d}(\theta)\) denote the class of all such distributions \(P\) for which \(\zeta_{\theta}\) is increasing._

**Lemma 24**.: _Given \(\theta\in(0,1)\), \(\tau\in\mathbb{R}\), \(P\in\mathcal{P}_{\mathrm{Q},d}(\theta)\), \(\mathcal{D}=\big{(}(X_{1},Y_{1}),\ldots,(X_{n},Y_{n})\big{)}\sim P^{n}\) and writing \(\mathcal{D}^{\tau}=\big{(}(X_{1},1_{\{Y_{1}>\tau\}}),\ldots,(X_{n},1_{\{Y_{n}>\tau\}})\big{)}\), we have whenever \(x\notin\mathcal{X}_{\tau}(\zeta_{\theta})\) that \(\mathbb{P}_{P}\bigl{\{}\hat{p}_{1/2,1-\theta}(x,\mathcal{D}^{\tau})\leq\alpha|\mathcal{D}_{X}\bigr{\}}\leq\alpha\) and \(\mathbb{P}_{P}\bigl{\{}\check{p}_{1-\theta}(x,\mathcal{D}^{\tau})\leq\alpha|\mathcal{D}_{X}\bigr{\}}\leq\alpha\) for all \(\alpha\in(0,1)\)._

In particular, if \(\eta\) is increasing and the conditional distribution of \(Y-\eta(X)\) given \(X\) is symmetric about zero, then the distribution of \((X,Y)\) belongs to \(\mathcal{P}_{\mathrm{Q},d}(1/2)\). As such, the modification of \(\hat{A}^{\mathrm{ISS}}\) with the \(p\)-values \(\check{p}_{1/2}(x,\mathcal{D}^{\tau})\) in place of \(\hat{p}_{\sigma,\tau}(x,\mathcal{D})\) controls the Type I error at the nominal level.

### Application to heterogeneous treatment effects

We now describe how our proposed procedure can be used to identify subsets of the covariate domain with high treatment effects in randomised controlled trials. As a model for such a setting, we assume that we observe independent copies \((X_{1},T_{1},\tilde{Y}_{1}),\ldots,(X_{n},T_{n},\tilde{Y}_{n})\) of the triple \((X,T,\tilde{Y})\), where \(X\) is the covariate vector, \(T\) takes values in \(\{0,1\}\) and encodes the assignment to one of two treatment arms, and \(\tilde{Y}\) gives the corresponding response. For \(\ell\in\{0,1\}\), denote by \(\tilde{P}^{\ell}\) the conditional distribution of \((X,\tilde{Y})\) given that \(T=\ell\) and define corresponding regression functions \(\tilde{\eta}^{\ell}\) by \(\tilde{\eta}^{\ell}(x):=\mathbb{E}(\tilde{Y}|X=x,T=\ell)\) for \(x\in\mathbb{R}^{d}\). We are interested in identifying the \(\tau\)-superlevel set of the _heterogeneous treatment effect_ \(\eta\), where \(\eta(x):=\tilde{\eta}^{1}(x)-\tilde{\eta}^{0}(x)\) for \(x\in\mathbb{R}^{d}\). To this end, observe that, writing \(\pi(x):=\mathbb{P}(T=1|X=x)\) for the _propensity score_, and considering the _inverse propensity weighted response_

\[Y:=\frac{T-\pi(X)}{\pi(X)\big{(}1-\pi(X)\big{)}}\cdot\tilde{Y}, \tag{4}\]

we have \(\mathbb{E}(Y|X=x)=\eta(x)\) for all \(x\in\mathbb{R}^{d}\). Hence, writing \(P\) for the distribution of \((X,Y)\) and with \(\mathcal{D}=\big{(}(X_{1},Y_{1}),\ldots,(X_{n},Y_{n})\big{)}\sim P^{n}\), where \(Y_{1},\ldots,Y_{n}\) are the inverse propensity weighted responses obtained from \((X_{1},T_{1},\tilde{Y}_{1}),\ldots,(X_{n},T_{n},\tilde{Y}_{n})\), we have for any \(\alpha\in(0,1)\) and any \(m\in[n]\) that \(\mathbb{P}_{P}\big{(}\hat{A}^{\text{ISS}}_{\sigma,\tau,\alpha,m}(\mathcal{D})\subseteq\mathcal{X}_{\tau}(\eta)\big{)}\geq 1-\alpha\) whenever \(P\in\mathcal{P}_{\text{Mon},d}(\sigma)\), by Theorem 9.
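The inverse propensity weighting in (4) is a one-line transformation; the following Python sketch (our own illustration) also checks the simplification \(Y_{i}=(4T_{i}-2)\cdot\tilde{Y}_{i}\) that arises when \(\pi(x)=1/2\), as used in Section 6.1.2.

```python
def ipw_response(t, y_tilde, pi):
    """Inverse propensity weighted response of (4): E(Y | X = x) = eta(x)."""
    return (t - pi) / (pi * (1.0 - pi)) * y_tilde

# For pi = 1/2 this reduces to (4T - 2) * Y~:
assert ipw_response(1, 3.0, 0.5) == 6.0    # treated arm:  +2 * Y~
assert ipw_response(0, 3.0, 0.5) == -6.0   # control arm:  -2 * Y~
```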
#### 4.3.1 Conditional treatment ranking

Under assumptions that are in spirit similar to those in Section 4.2.3, we can use \(\hat{A}^{\text{ISS}}\) to establish non-inferiority of a treatment on a subgroup. Suppose that for \(x\in\mathbb{R}^{d}\), \(y\in\mathbb{R}\) and \(\ell\in\{0,1\}\), we have \(\mathbb{P}\big{(}\tilde{Y}-\tilde{\eta}^{\ell}(x)\leq y|X=x,T=\ell\big{)}=F(y)\) for some continuous distribution function \(F\) with the symmetry property \(F(t)=1-F(-t)\) for all \(t\in\mathbb{R}\). In particular, this includes the case where \(\eta\) is increasing and we have homoscedastic Gaussian errors with unknown variance. Let \(Y\) be as in (4) and observe that whenever \(\pi(x)=1/2\),

\[\mathbb{P}(Y\geq 0\mid X=x) =\frac{1}{2}\mathbb{P}\big{(}Y\geq 0\mid T=1,X=x\big{)}+\frac{1}{2}\mathbb{P}\big{(}Y\geq 0\mid T=0,X=x\big{)}\]
\[=\frac{1}{2}\mathbb{P}\big{(}\tilde{Y}\geq 0\mid T=1,X=x\big{)}+\frac{1}{2}\mathbb{P}\big{(}\tilde{Y}\leq 0\mid T=0,X=x\big{)}\]
\[=\frac{1}{2}\big{\{}1-F\big{(}-\tilde{\eta}^{1}(x)\big{)}\big{\}}+\frac{1}{2}F\big{(}-\tilde{\eta}^{0}(x)\big{)}\]
\[=\frac{1}{2}F\big{(}\tilde{\eta}^{1}(x)\big{)}+\frac{1}{2}F\big{(}-\tilde{\eta}^{0}(x)\big{)}.\]

Writing \(Y^{*}:=\mathbb{1}_{\{Y\geq 0\}}\) and \(\eta^{*}(x):=\mathbb{E}(Y^{*}|X=x)=\mathbb{P}(Y\geq 0|X=x)\) for \(x\in\mathbb{R}^{d}\), we have \(\eta^{*}(x)\geq 1/2\) if and only if \(\eta(x)=\tilde{\eta}^{1}(x)-\tilde{\eta}^{0}(x)\geq 0\), so \(\mathcal{X}_{1/2}(\eta^{*})=\mathcal{X}_{0}(\eta)\). Moreover, when \(\eta\) is increasing on \(\mathbb{R}^{d}\), the distribution of \((X,Y^{*})\) belongs to \(\mathcal{P}_{\text{BddUpp},d}(1/2)\). Hence, in order to estimate \(\mathcal{X}_{0}(\eta)\), we may use \(\hat{A}^{\text{ISS}}\) with either \(\hat{p}_{1/2,1/2}\) or \(\check{p}_{1/2}\) and retain Type I error control (see the discussions at the end of Sections 3.1 and 4.2.2 respectively). An application of this procedure is presented in Section 6.1.2.

## 5 Simulations

The aim of this section is to explore the empirical performance of \(\hat{A}^{\text{ISS}}\) in a wide range of settings. Throughout, we took independent pairs \((X_{1},Y_{1}),\ldots,(X_{n},Y_{n})\), where \(X_{1},\ldots,X_{n}\sim\text{Unif}\big{(}[0,1]^{d}\big{)}=:\mu\) with \(d\in\{2,3,4\}\) and \(n\in\{500,1000,2000,5000\}\). Rescaled versions of the six functions \(f\) in Table 1 serve as our main regression functions; see Appendix B for eight further examples. More specifically, we let \(\eta(x):=\{f(x)-f(0)\}/\{f(\mathbf{1}_{d})-f(0)\}\) for each choice of \(f\), as illustrated in Figure 4. Further, we let \(Y_{i}|X_{i}\sim N\big{(}\eta(X_{i}),\sigma^{2}\big{)}\) for \(i\in[n]\) with \(\sigma=1/4\) when \(d=2\), \(\sigma=1/16\) when \(d=3\) and \(\sigma=1/64\) when \(d=4\). We set \(\alpha=0.05\) and the thresholds \(\tau\equiv\tau(\eta)\) were chosen such that \(\mu\big{(}\mathcal{X}_{\tau}(\eta)\big{)}=1/2\); see Table 1. Finally, in this table we also provide

\[\gamma(P):=\inf\biggl{\{}\gamma>0:P\in\bigcup_{\lambda>0}\mathcal{P}_{\mathrm{Reg},d}(\tau,\theta,\gamma,\lambda)\biggr{\}}\]

for the distribution \(P\) associated with each choice of \(\mu\) and \(\eta\).
Since for \(\gamma,\gamma^{\prime}>0\) and \(\lambda,\lambda^{\prime}>0\) such that \(\gamma\leq\gamma^{\prime}\) and \(\lambda\geq\lambda^{\prime}\), we have \(\mathcal{P}_{\mathrm{Reg},d}(\tau,\theta,\gamma,\lambda)\subseteq\mathcal{P}_{\mathrm{Reg},d}(\tau,\theta,\gamma^{\prime},\lambda^{\prime})\), this is a natural choice to illustrate the effect of the exponent in Definition 13_(ii)_ on the rate of convergence.

\begin{table}
\begin{tabular}{c|c|c|c} Label & Function \(f\) & \(\tau\) & \(\gamma(P)\) \\ \hline (a) & \(\sum_{j=1}^{d}x^{(j)}\) & \(1/2\) & \(1\) \\ (b) & \(\max_{1\leq j\leq d}x^{(j)}\) & \(1/2^{1/d}\) & \(1\) \\ (c) & \(\min_{1\leq j\leq d}x^{(j)}\) & \(1-1/2^{1/d}\) & \(1\) \\ (d) & \(1_{(0.5,1]}\big{(}x^{(1)}\big{)}\) & \(1/2\) & \(0\) \\ (e) & \(\sum_{j=1}^{d}\bigl{(}x^{(j)}-0.5\bigr{)}^{3}\) & \(1/2\) & \(3\) \\ (f) & \(x^{(1)}\) & \(1/2\) & \(1\) \\ \end{tabular}
\end{table} Table 1: Definition of the functions used in the simulations. Here, \(x=(x^{(1)},\ldots,x^{(d)})^{\top}\in[0,1]^{d}\).

Figure 4: For \(d=2\), the contour lines (red) of the regression functions corresponding to the functions \(f\) in Table 1 at the levels \(k/6\) for \(k\in[5]\) are shown. The fill colour indicates the function value at the respective position from \(0\) (purple) to \(1\) (yellow).

Although we are not aware of other proposed methods for isotonic subgroup selection, there are alternative ways in which we could combine our \(p\)-values with different DAG testing procedures to form a data-dependent selection set. For instance, one could apply Holm's procedure (Holm, 1979) to ensure FWER control on the data points, and then take our data-dependent selection set to be the upper hull of the set of points in \(\mathcal{D}_{X,m}\) corresponding to rejected hypotheses. This simple procedure is already a uniform improvement (in terms of the size of the selected set) on the Bonferroni approach to constructing a one-sided confidence band for \(\eta\) that was mentioned in the introduction. Alternatively, one could combine our \(p\)-values with either the all-parent or any-parent version of the DAG testing procedure due to Meijer and Goeman (2015), which is described in detail in Section C.1, and which we applied with uniform weights on the leaf nodes. Although we are able to prove in Section C.2 that these latter procedures have sub-optimal worst-case performance, they remain a natural approach to controlling the FWER for DAG-structured hypotheses. We refer to these three alternative versions of our procedure as \(\hat{A}^{\text{ISS,H}}\), \(\hat{A}^{\text{ISS,All}}\) and \(\hat{A}^{\text{ISS,Any}}\) respectively. In all cases, we took \(m=n\), used \(\tilde{p}^{1/2}\) given in Definition 18 as \(p\)-values and for each data-dependent selection set \(\hat{A}\) we estimated \(\mathbb{E}\big{\{}\mu\big{(}\mathcal{X}_{\tau}(\eta)\setminus\hat{A}\big{)}\big{\}}\) using a Monte Carlo approximation based on \(10^{5}\) independent draws from \(\mu\) for each data realisation, averaged over 100 repetitions of each experiment. A comparison of the running times given in Appendix B.2 shows that \(\hat{A}^{\text{ISS}}\) can be as much as 10 times faster to compute than \(\hat{A}^{\text{ISS,All}}\) and \(\hat{A}^{\text{ISS,Any}}\), though it is not as fast as the more naive \(\hat{A}^{\text{ISS,H}}\). The results for regression functions (a)-(f) are presented in Figures 5, 6 and 7 respectively.
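For concreteness, a hedged sketch of this data-generating mechanism (our own code, shown for function (a) of Table 1 with \(d=2\); the variable names are assumptions) is as follows.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, sigma, tau = 2, 1000, 1 / 4, 1 / 2        # sigma = 1/4 when d = 2

def f(x):                                       # function (a): sum of coordinates
    return x.sum(axis=-1)

def eta(x):                                     # rescaled so that eta maps into [0, 1]
    f0, f1 = f(np.zeros(d)), f(np.ones(d))
    return (f(x) - f0) / (f1 - f0)

X = rng.uniform(size=(n, d))                    # X_i ~ Unif([0, 1]^d)
Y = eta(X) + sigma * rng.standard_normal(n)     # Y_i | X_i ~ N(eta(X_i), sigma^2)
# tau = 1/2 gives mu(X_tau(eta)) = 1/2 for this eta, matching Table 1.
```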
Corresponding results for the other eight regression functions defined in Appendix B, which are qualitatively similar, are given in Figures 13, 14 and 15. Moreover, in Figures 16 and 17, we compare \(\hat{A}^{\text{ISS}}\) with two different possible approaches based on sample splitting that are of a similar flavour to the two-stage approaches mentioned in the introduction. These were omitted from our earlier comparisons for visual clarity, and because their performance turns out not to be competitive. From all of these figures, we see that \(\hat{A}^{\text{ISS}}\) is the most effective of these approaches for combining our \(p\)-values with a DAG testing procedure. The differences between \(\hat{A}^{\text{ISS}}\) and the other approaches are more marked when \(d=2\) than in higher dimensions. It is also notable that regression functions with smaller values of \(\gamma(P)\) such as (d) yield much smaller estimates of \(\mathbb{E}\big{\{}\mu\big{(}\mathcal{X}_{\tau}(\eta)\setminus\hat{A}\big{)}\big{\}}\) that decay more rapidly with the sample size. Conversely, for settings with larger values of \(\gamma(P)\), such as (e), the decay of our estimates of \(\mathbb{E}\big{\{}\mu\big{(}\mathcal{X}_{\tau}(\eta)\setminus\hat{A}\big{)}\big{\}}\) is much slower. These observations are in agreement with our theory in Section 3.2. Finally, we remark that our procedures appear to adapt well to settings where the regression function depends only on a subset of the \(d\) variables, as can be seen for instance by comparing the results in (a) and (f).

Figure 5: Estimates of \(\mathbb{E}\big{\{}\mu\big{(}\mathcal{X}_{\tau}(\eta)\setminus\hat{A}\big{)}\big{\}}\) for \(d=2\) and \(\sigma=1/4\).

## 6 Real data examples

### ACTG 175 trial

The AIDS Clinical Trials Group Study 175 (ACTG 175) was a randomised controlled trial designed to compare monotherapy with the standard medication zidovudine against the effects of a combination therapy of zidovudine together with zalcitabine. At the time, patient heterogeneity with respect to the response to these treatments was not well understood (Burger et al., 1994). Moreover, prior studies suggested that the beneficial effects of zidovudine fade with time and that this could be remedied with multitherapy (Hammer et al., 1996). ACTG 175 aimed to investigate treatment effect heterogeneity, in particular with respect to prior drug exposure, among patients with less advanced HIV disease. Besides relevant parts of their medical records, covariates including age, weight and ethnicity were recorded. The primary end point of the study was defined as a reduction of the CD4 cell count by at least 50%, development of AIDS, or death, with a median follow-up duration of 143 weeks (Hammer et al., 1996). The data for 2139 patients are freely available in the R package speff2trial (Juraska et al., 2022).

#### 6.1.1 Risk group estimation

We first consider the task of identifying the patient subgroup whose probability of not reaching the primary endpoint when receiving zidovudine alone (532 patients in total) is at least \(\tau=0.5\) based on their age, which the study's eligibility criteria required to be at least 12 years. To that end, for \(i\in[n]\), let \(Y_{i}=1\) if the \(i\)th patient did not reach the primary endpoint and \(Y_{i}=0\) otherwise. Furthermore, let \(X_{i}\) denote the \(i\)th patient's age (multiplied by \(-1\)), since a decrease in age is expected to correspond to an increased probability of avoiding the primary end point across the eligible age range.
Under the assumption that \(\mathcal{D}=\big{(}(X_{1},Y_{1}),\ldots,(X_{n},Y_{n})\big{)}\sim P^{n}\), we then have that \(P\in\mathcal{P}_{\text{Mon},1}(1/2)\), so Type I error control for our procedure \(\hat{A}^{\text{ISS}}\) is guaranteed by Theorem 9. The left panel of Figure 8 illustrates the data-dependent selection set that we output with \(\alpha=0.05\), indicating that not reaching the primary endpoint is the more likely outcome for patients aged 39 and under. For a bivariate illustration, we use age multiplied by \(-1\) and CD4 cell count at the trial onset as covariates. A high initial CD4 cell count is expected to be associated with a lower risk of reaching the primary endpoint. Thus, we assume that \(\mathcal{D}\sim P^{n}\) for some \(P\in\mathcal{P}_{\text{Mon},2}(1/2)\), and the right panel of Figure 8 illustrates the output \(\hat{A}^{\text{ISS}}\) for \(\tau=0.5\) and \(\alpha=0.05\). The fact that the left-hand extreme of this selected set is slightly below \(39\) years is a reflection of the stronger form of Type I error control sought in the larger dimension.

#### 6.1.2 Heterogeneous treatment effects

To illustrate an application of the methodology of Section 4.3.1, we take the change in CD4 cell count from trial onset to week \(20\) (\(\pm 5\) weeks) as the measured response \(\tilde{Y}_{i}\) for the \(i\)th patient. We further set \(T_{i}=0\) if the \(i\)th patient was in the control group receiving monotherapy with zidovudine (\(532\) patients) and \(T_{i}=1\) if they were assigned to receive multitherapy with zidovudine and zalcitabine (\(524\) patients). We are interested in identifying the subgroup for which multitherapy is at least as good as monotherapy, in the sense that the CD4 cell count is decreased by less, based on the patient's age (again multiplied by \(-1\)), denoted by \(X_{i}\) for the \(i\)th patient. This means that, conditional on treatment \(T_{i}\) and age \(X_{i}\), \(\tilde{\eta}^{T_{i}}(X_{i})\) is the expected change in CD4 cell count in the first \(20\) weeks, and we assume that the observed response \(\tilde{Y}_{i}\) is conditionally symmetrically distributed around \(\tilde{\eta}^{T_{i}}(X_{i})\). Thus \(\eta(x):=\tilde{\eta}^{1}(x)-\tilde{\eta}^{0}(x)\) gives the heterogeneous treatment effect on the change in CD4 cell count for patients of \(x\) years of age, and we are interested in identifying \(\mathcal{X}_{0}(\eta)\) under the assumption that this is an upper set. Here, \(\pi(x)=1/2\) for all \(x\), so that from (4), \(Y_{i}=(4T_{i}-2)\cdot\tilde{Y}_{i}\) for all \(i\). Defining now \(Y_{i}^{*}:=\mathbbm{1}_{\{Y_{i}\geq 0\}}\) and assuming \(\mathcal{D}^{0}=\big{(}(X_{1},Y_{1}^{*}),\ldots,(X_{n},Y_{n}^{*})\big{)}\sim P^{n}\), we have \(P\in\mathcal{P}_{\text{Upp},1}(1/2,1/2)\), so that \(\hat{A}^{\text{ISS}}\) when applied with \(\check{p}_{1/2}\) controls the Type I error by the discussion in Section 4.3.1. See Figure 9 for a visualisation of the result. We conclude that among patients aged \(25\) or younger, replacing monotherapy by multitherapy is uniformly associated with a neutral or beneficial effect on the stability of CD4 cell count.

Figure 8: Left: Each black dot represents one patient in \(\mathcal{D}\) (with the response jittered vertically for clarity) and the red line gives the lower bound of \(\hat{A}^{\text{ISS}}_{\sigma,\tau,\alpha,m}(\mathcal{D})\) when applied with the \(p\)-values \(\tilde{p}^{1/2}\), for \(\sigma=1/2\), \(\tau=1/2\), \(\alpha=0.05\), \(m=n=532\).
The black line shows the isotonic least squares regression estimator and the blue dashed line indicates the level \(\tau\). Right: corresponding bivariate illustration, where the red region indicates \(\hat{A}^{\text{ISS}}\).

Figure 9: Each black dot represents one patient in \(\mathcal{D}\) (with \(Y_{i}^{*}\) jittered vertically for clarity) and the red line gives the lower bound of \(\hat{A}^{\text{ISS}}_{\sigma,\tau,\alpha,m}(\mathcal{D})\) when applied with \(\check{p}_{\tau}\), \(\tau=1/2\), \(\alpha=0.05\), \(m=n=1056\). Further, the black line shows the isotonic least squares regression estimator. The blue dashed line indicates the level \(\tau\).

### Fuel consumption dataset

Here we consider the Auto MPG dataset5 that was popularised by Quinlan (1993) and that is available through the UCI Machine Learning Repository (Dua and Graff, 2019). This dataset contains information on \(n=398\) cars, including their urban fuel consumption, weight and engine displacement. We would like to identify the combinations of car weight and engine displacement for which the probability of fuel efficiency being at least 15mpg is at least \(\tau=0.5\). To this end, we set \(Y_{i}=1\) if the \(i\)th car's fuel efficiency is at least 15mpg and \(Y_{i}=0\) otherwise. Since increases in weight and engine displacement can be assumed to decrease the conditional probability of high fuel efficiency, we let the two components of \(X_{i}\in(-\infty,0)^{2}\) give the \(i\)th car's weight and engine displacement (multiplied by \(-1\)). We then have that \(\mathcal{D}=\big{(}(X_{1},Y_{1}),\ldots,(X_{n},Y_{n})\big{)}\sim P^{n}\) with \(P\in\mathcal{P}_{\text{Mon},2}(1/2)\), so Type I error control for our procedure \(\hat{A}^{\text{ISS}}\) is guaranteed by Theorem 9. The output set from the \(\hat{A}^{\text{ISS}}\) algorithm is shown in Figure 10. The strong sample correlation of \(0.93\) between weight and engine displacement contributes to the data-dependent selection set being almost rectangular, with a weight of under 3400lbs and an engine displacement of under 250 cubic inches being sufficient to be fairly confident that the fuel consumption is more likely than not to be at least 15mpg.

Footnote 5: See [https://archive.ics.uci.edu/ml/datasets/auto+mpg](https://archive.ics.uci.edu/ml/datasets/auto+mpg).

## Appendix

Section A of this appendix consists of proofs of all of our main results, as well as statements and proofs of intermediate results. Section B presents further simulations, while Section C contains a discussion of an alternative and general approach to combining the \(p\)-values due to Meijer and Goeman (2015). Finally, in Section D, we give a few auxiliary results.

## Appendix A Proofs

We begin with some additional notation used in the appendix. For a set \(A\subseteq\mathbb{R}^{d}\), let \(\operatorname{Pow}(A)\) denote the power set of \(A\). Denote by \(\|\cdot\|_{2}\) the Euclidean norm on \(\mathbb{R}^{d}\) and for \(x\in\mathbb{R}^{d}\) and \(r>0\), define the closed Euclidean norm ball by \(B_{2}(x,r):=\{z\in\mathbb{R}^{d}:\|z-x\|_{2}\leq r\}\). Given \(A_{0}\), \(A_{1}\subseteq\mathbb{R}^{d}\), we write \(A_{0}\preccurlyeq A_{1}\) if \(x_{0}\preccurlyeq x_{1}\) for every pair \((x_{0},x_{1})\in A_{0}\times A_{1}\). We write \(\mathcal{L}_{d}\) for Lebesgue measure on \(\mathbb{R}^{d}\). Finally, for \(r\geq 0\) and \(x\in\mathbb{R}^{d}\), we denote \(\mathcal{I}_{r}(x)\equiv\mathcal{I}_{r}(x,\mathcal{D}_{X}):=\{i\in[n]:X_{i}\preccurlyeq x,\|X_{i}-x\|_{\infty}\leq r\}\).
### Proofs from Section 3.1

In order to verify Lemma 5, it will be convenient to prove the following small generalisation.

**Lemma 25**.: _Let \(\sigma>0\), \(\tau\in\mathbb{R}\) and let \(P\) be a distribution on \(\mathbb{R}^{d}\times\mathbb{R}\) with regression function \(\eta\) such that if \((X,Y)\sim P\), then \(Y-\eta(X)\) is conditionally sub-Gaussian with variance parameter \(\sigma^{2}\) given \(X\). Fix \(x\in\mathbb{R}^{d}\) and suppose that \(\eta(x^{\prime})\leq\tau\) for all \(x^{\prime}\preccurlyeq x\). Given \(\mathcal{D}=\big{(}(X_{1},Y_{1}),\ldots,(X_{n},Y_{n})\big{)}\sim P^{n}\), we have \(\mathbb{P}_{P}\big{\{}\hat{p}_{\sigma,\tau}(x,\mathcal{D})\leq\alpha|\mathcal{D}_{X}\big{\}}\leq\alpha\) for all \(\alpha\in(0,1)\)._

Proof of Lemma 25 (and hence Lemma 5).: Throughout the proof, we operate conditional on \(\mathcal{D}_{X}\) and consider the setting of Definition 1. If \(\mathcal{I}(x)=\emptyset\), then \(\hat{p}_{\sigma,\tau}(x,\mathcal{D})=1\) and the result follows, so suppose henceforth that \(n(x)\geq 1\). Define the \(\sigma\)-algebra \(\mathcal{F}_{0}\) generated by \((X_{i})_{i\in\mathcal{I}(x)}\), and for \(k\in[n(x)]\), let \(\mathcal{F}_{k}\) denote the \(\sigma\)-algebra generated by \(\mathcal{F}_{0}\) together with \(\big{(}Y_{(j)}(x)\big{)}_{j\in[k]}\). Similarly to the proof of Duan et al. (2020, Theorem 3), we first show that \((S_{k})_{k\in\{0\}\cup[n(x)]}\), where \(S_{0}:=0\), is a supermartingale with respect to the filtration \((\mathcal{F}_{k})_{k\in\{0\}\cup[n(x)]}\). Since, for \(k\in[n(x)]\), \(S_{k-1}\) is measurable with respect to \(\mathcal{F}_{k-1}\) and the ordering \(Y_{(1)},\ldots,Y_{(n(x))}\) is fixed conditional on \(\mathcal{D}_{X}\), we have for any \(k\in[n(x)]\) that

\[\mathbb{E}(S_{k}\mid\mathcal{F}_{k-1})=S_{k-1}+\frac{1}{\sigma}\big{\{}\mathbb{E}\big{(}Y_{(k)}(x)\mid\mathcal{F}_{k-1}\big{)}-\tau\big{\}}=S_{k-1}+\frac{1}{\sigma}\big{\{}\eta\big{(}X_{(k)}(x)\big{)}-\tau\big{\}}\leq S_{k-1},\]

where in the last step we used the fact that \(\eta(X_{i})\leq\eta(x)\leq\tau\) for \(i\in\mathcal{I}(x)\). Since the integrability of \(S_{k}\) follows from the sub-Gaussianity of the increments, the sequence \((S_{k})_{k\in\{0\}\cup[n(x)]}\) is a supermartingale. Moreover, its increments satisfy \(Z^{\prime}_{k}:=S_{k}-S_{k-1}=\big{(}Y_{(k)}(x)-\tau\big{)}/\sigma\leq\big{\{}Y_{(k)}(x)-\eta\big{(}X_{(k)}(x)\big{)}\big{\}}/\sigma=:Z_{k}\) for \(k\in[n(x)]\) and the random variables \((Z^{\prime}_{k})_{k\in[n(x)]}\) are independent conditional on \(\mathcal{D}_{X}\), as are \((Z_{k})_{k\in[n(x)]}\). Thus, we have by Lemma 45_(a)_ and with \(u_{\alpha}(\cdot)\) as defined there that

\[\mathbb{P}\bigg{(}\bigcup_{k=1}^{n(x)}\big{\{}S_{k}\geq u_{\alpha}(k)\big{\}}\ \bigg{|}\ \mathcal{D}_{X}\bigg{)}=\mathbb{P}\bigg{(}\bigcup_{k=1}^{n(x)}\bigg{\{}\sum_{j=1}^{k}Z^{\prime}_{j}\geq u_{\alpha}(k)\bigg{\}}\ \bigg{|}\ \mathcal{D}_{X}\bigg{)}\leq\mathbb{P}\bigg{(}\bigcup_{k=1}^{n(x)}\bigg{\{}\sum_{j=1}^{k}Z_{j}\geq u_{\alpha}(k)\bigg{\}}\ \bigg{|}\ \mathcal{D}_{X}\bigg{)}\leq\alpha.\]

Hence, for \(\alpha\in(0,1)\),

\[\mathbb{P}\big{(}\hat{p}_{\sigma,\tau}(x,\mathcal{D})\leq\alpha\mid\mathcal{D}_{X}\big{)}=\mathbb{P}\big{(}\hat{p}_{\sigma,\tau}(x,\mathcal{D})\leq\alpha,n(x)>0\mid\mathcal{D}_{X}\big{)}=\mathbb{P}\bigg{(}\max_{k\in[n(x)]}\frac{S_{k}}{u_{\alpha}(k)}\geq 1,n(x)>0\ \Big{|}\ \mathcal{D}_{X}\bigg{)}\leq\alpha,\]

as required.
Proof of Lemma 8.: Fix a finite set \(I\), a family of distributions \(\mathcal{Q}\) on \((0,1]^{I}\), a collection of random variables \(\boldsymbol{p}=(p_{i})_{i\in I}\) taking values in \((0,1]^{I}\), as well as hypotheses \(H_{i}\subseteq\{Q\in\mathcal{Q}:\mathbb{P}_{Q}(p_{i}\leq t)\leq t,\forall t\in(0,1]\}\) for \(i\in I\) and any \(G_{0}\)-consistent polyforest-weighted DAG \(G^{\prime}=(I,E,\boldsymbol{w})\). Throughout this proof, define \(G:=(I,E)\) and \(F:=(I,\{e\in E:w_{e}=1\})\) as in Algorithm 1, and write \(J^{c}:=I\setminus J\) for any \(J\subseteq I\). Fix \(Q_{0}\in\mathcal{Q}\), so that \(I_{0}\equiv I_{0}(Q_{0})\subseteq I\) is a \(G\)-lower set giving the indices of true null hypotheses. If \(I_{0}=\emptyset\), then no Type I error can be made and the proof is complete. We therefore suppose henceforth that \(|I_{0}|>0\). For a proper subset \(J\) of \(I\), it is convenient to define

\[\alpha(i,J)\equiv\alpha(i,J,F):=\left\{\begin{array}{ll}\frac{|(\{i\}\cup\mathrm{de}_{F}(i))\cap L(F)\cap J^{c}|}{|L(F)\cap J^{c}|}\cdot\alpha&\text{if }i\notin J,\,\mathrm{pa}_{F}(i)\subseteq J\\ 0&\text{otherwise}.\end{array}\right. \tag{5}\]

Thus, given a set of rejected hypotheses \(J\) at a particular iteration of Algorithm 1 and writing \(\mathcal{N}_{0}(J):=\{i\in J^{c}:p_{i}\leq\alpha(i,J)\}\), the hypotheses in \(\mathcal{N}(J):=\mathcal{N}_{0}(J)\cup\bigcup_{j\in\mathcal{N}_{0}(J)}\mathrm{an}_{G}(j)\) will be rejected at the next iteration. Hence, the set of rejected hypotheses at the \(\ell\)-th iteration in Algorithm 1 can be written as \(R_{\ell}=R_{\ell-1}\cup\mathcal{N}(R_{\ell-1})\) with \(R_{0}=\emptyset\). We first claim that

\[\mathcal{N}(I_{1})\subseteq\mathcal{N}(I_{2})\cup I_{2} \tag{6}\]

for all \(G\)-upper proper subsets \(I_{1}\subseteq I_{2}\) of \(I\). To see this, fix such \(I_{1},I_{2}\) and any \(i\in\mathcal{N}(I_{1})\). The result is immediate if \(i\in I_{2}\), so suppose that \(i\in I_{2}^{c}\). If \(i\in\mathcal{N}_{0}(I_{1})\), then \(\alpha(i,I_{1})>0\), so \(\mathrm{pa}_{F}(i)\subseteq I_{1}\). Since \(I_{1}\) and \(I_{2}\) are \(G\)-upper, they are also \(F\)-upper. Hence \(\{i\}\cup\mathrm{de}_{F}(i)\subseteq I_{2}^{c}\subseteq I_{1}^{c}\), so that

\[\alpha(i,I_{1})=\frac{\left|\left(\{i\}\cup\mathrm{de}_{F}(i)\right)\cap L(F)\right|}{|L(F)\cap I_{1}^{c}|}\cdot\alpha\leq\frac{\left|\left(\{i\}\cup\mathrm{de}_{F}(i)\right)\cap L(F)\right|}{|L(F)\cap I_{2}^{c}|}\cdot\alpha=\alpha(i,I_{2}), \tag{7}\]

and we deduce that \(i\in\mathcal{N}_{0}(I_{2})\subseteq\mathcal{N}(I_{2})\). If instead \(i\in\mathrm{an}_{G}(j_{0})\) for some \(j_{0}\in\mathcal{N}_{0}(I_{1})\), then since \(I_{2}\) is \(G\)-upper and \(i\in I_{2}^{c}\), we have \(j_{0}\in I_{2}^{c}\). Following the same line of reasoning as in (7), we see that \(0<\alpha(j_{0},I_{1})\leq\alpha(j_{0},I_{2})\), so that \(j_{0}\in\mathcal{N}_{0}(I_{2})\) and consequently \(i\in\mathrm{an}_{G}(j_{0})\subseteq\mathcal{N}(I_{2})\). This establishes the claim in (6). Our second claim is that

\[\mathbb{P}_{Q_{0}}\big{(}\mathcal{N}(I_{0}^{c})=\emptyset\big{)}\geq 1-\alpha. \tag{8}\]

To see this, first note that since \(I_{0}^{c}\) is \(G\)-upper, it is \(F\)-upper.
Moreover, for any \(i\in I_{0}\), we have \(\alpha(i,I_{0}^{c})>0\) only if \(\mathrm{pa}_{F}(i)\subseteq I_{0}^{c}\), so the ancestors of any element of \(I_{*}:=\{i\in I_{0}:\alpha(i,I_{0}^{c})>0\}\) belong to \(I_{0}^{c}\), and we deduce that \(I_{*}\) is an antichain in \(F\). Combining this with the fact that \(F\) is a polyforest in which each node has at most one parent, we see that if \(i_{1},i_{2}\in I_{*}\) are distinct, then \(\{i_{1}\}\cup\mathrm{de}_{F}(i_{1})\) and \(\{i_{2}\}\cup\mathrm{de}_{F}(i_{2})\) are disjoint. Hence,

\[\mathbb{P}_{Q_{0}}\big{(}\mathcal{N}(I_{0}^{c})\neq\emptyset\big{)}=\mathbb{P}_{Q_{0}}\bigg{(}\bigcup_{i\in I_{0}}\big{\{}p_{i}\leq\alpha(i,I_{0}^{c})\big{\}}\bigg{)}\leq\sum_{i\in I_{0}}\mathbb{P}_{Q_{0}}\big{(}p_{i}\leq\alpha(i,I_{0}^{c})\big{)}\leq\sum_{i\in I_{0}}\alpha(i,I_{0}^{c})=\sum_{i\in I_{*}}\frac{\left|\left(\{i\}\cup\mathrm{de}_{F}(i)\right)\cap L(F)\right|}{|L(F)\cap I_{0}|}\cdot\alpha\leq\alpha,\]

as required. Writing \(\Omega_{0}:=\big{\{}\mathcal{N}(I_{0}^{c})\cap I_{0}=\emptyset\big{\}}\) and using (6), we see that \(\mathcal{N}(R_{\ell-1})\subseteq\mathcal{N}(I_{0}^{c})\cup I_{0}^{c}\), so on \(\Omega_{0}\), we have \(\mathcal{N}(R_{\ell-1})\subseteq I_{0}^{c}\). We deduce that

\[\Omega_{0}\cap\{R_{\ell-1}\cap I_{0}=\emptyset\}=\Omega_{0}\cap\{\mathcal{N}(R_{\ell-1})\cap I_{0}=\emptyset\}\cap\{R_{\ell-1}\cap I_{0}=\emptyset\}=\Omega_{0}\cap\{R_{\ell}\cap I_{0}=\emptyset\}.\]

Since \(R_{0}=\emptyset\), we have \(\Omega_{0}=\Omega_{0}\cap\{R_{0}\cap I_{0}=\emptyset\}\), which yields by induction that \(\Omega_{0}=\Omega_{0}\cap\{R_{|I|}\cap I_{0}=\emptyset\}\subseteq\{R_{|I|}\cap I_{0}=\emptyset\}\). Combining this with (8) we conclude that

\[\mathbb{P}_{Q_{0}}\big{(}R_{|I|}\cap I_{0}=\emptyset\big{)}\geq\mathbb{P}_{Q_{0}}(\Omega_{0})=\mathbb{P}_{Q_{0}}\big{(}\mathcal{N}(I_{0}^{c})=\emptyset\big{)}\geq 1-\alpha,\]

as required.

Proof of Theorem 9.: If \((\mathcal{X},\mathcal{A})\) and \((\mathcal{Y},\mathcal{B})\) are measurable spaces, \(f:\mathcal{X}\to\mathcal{Y}\) is measurable and \(\pi\) is a distribution on \(\mathcal{X}\), let \(f\sharp\pi\) denote the pushforward measure on \(\mathcal{Y}\) of \(\pi\) under \(f\); i.e., if \(Z\sim\pi\) then \(f(Z)\sim f\sharp\pi\). We condition on \(\mathcal{D}_{X}\) throughout this proof and denote \(\hat{\boldsymbol{p}}^{*}\equiv\hat{\boldsymbol{p}}^{*}(\cdot)=\big{(}\hat{p}_{i}^{*}(\cdot)\big{)}_{i\in[m]}:=\big{(}\hat{p}_{\sigma,\tau}(X_{i},\cdot)\big{)}_{i\in[m]}\). Write \(\mathcal{Q}:=\{\hat{\boldsymbol{p}}^{*}\sharp\tilde{P}^{n}:\tilde{P}\in\mathcal{P}_{\mathrm{Mon},d}(\sigma)\}\) for a family of distributions over \((0,1]^{m}\) induced by \(\mathcal{P}_{\mathrm{Mon},d}(\sigma)\). Further, for \(i\in[m]\), let \(H_{i}^{*}:=\{\tilde{P}\in\mathcal{P}_{\mathrm{Mon},d}(\sigma):\mathbb{E}_{\tilde{P}}(Y_{i}|X_{i})<\tau\}\) and \(H_{i}:=\{\hat{\boldsymbol{p}}^{*}\sharp\tilde{P}^{n}:\tilde{P}\in H_{i}^{*}\}\subseteq\mathcal{Q}\), so that for \(Q:=\hat{\boldsymbol{p}}^{*}\sharp P^{n}\) we have \(I_{0}(P):=\{i\in[m]:P\in H_{i}^{*}\}\subseteq\{i\in[m]:Q\in H_{i}\}=:I_{0}(Q)\). Lemma 5 then shows that \(H_{i}\subseteq\{\tilde{Q}\in\mathcal{Q}:\mathbb{P}_{\tilde{Q}}(\hat{p}_{i}^{*}\leq t|\mathcal{D}_{X})\leq t\ \forall t\in(0,1]\}\).
Now define \(G_{0}^{*}:=([m],E_{0}^{*})\), where \(E_{0}^{*}:=\{(i_{0},i_{1})\in[m]^{2}:H_{i_{0}}^{*}\subseteq H_{i_{1}}^{*}\}\) and \(G_{0}:=([m],E_{0})\), where \(E_{0}:=\{(i_{0},i_{1})\in[m]^{2}:H_{i_{0}}\subseteq H_{i_{1}}\}\). We claim that \(E_{0}^{*}\subseteq E_{0}\). To see this, fix \((i_{0},i_{1})\in E_{0}^{*}\), so that \(H_{i_{0}}^{*}\subseteq H_{i_{1}}^{*}\), and suppose that \(Q_{0}\in H_{i_{0}}\). Then we can find \(P_{0}\in H_{i_{0}}^{*}\) such that \(Q_{0}=\hat{\boldsymbol{p}}^{*}\sharp P_{0}^{n}\). But since \(P_{0}\in H_{i_{1}}^{*}\), we must have that \(Q_{0}\in H_{i_{1}}\), and this establishes our claim. By construction, \(\mathcal{G}_{\mathrm{W}}(\mathcal{D}_{X,m})\) is a \(G_{0}^{*}\)-consistent polyforest-weighted DAG, and we can therefore deduce from our claim that it is also a \(G_{0}\)-consistent polyforest-weighted DAG. Hence, by Lemma 8,

\[\mathbb{P}_{P}\big{\{}\mathcal{R}_{\alpha}^{\mathrm{ISS}}\big{(}\mathcal{G}_{\mathrm{W}}(\mathcal{D}_{X,m}),\hat{\boldsymbol{p}}^{*}(\mathcal{D})\big{)}\cap I_{0}(P)=\emptyset\bigm{|}\mathcal{D}_{X}\big{\}}\geq\mathbb{P}_{Q}\big{\{}\mathcal{R}_{\alpha}^{\mathrm{ISS}}\big{(}\mathcal{G}_{\mathrm{W}}(\mathcal{D}_{X,m}),\hat{\boldsymbol{p}}^{*}\big{)}\cap I_{0}(Q)=\emptyset\bigm{|}\mathcal{D}_{X}\big{\}}\geq 1-\alpha.\]

Moreover, \(\mathcal{X}_{\tau}(\eta)\) is an upper set because \(P\in\mathcal{P}_{\mathrm{Mon},d}(\sigma)\), and we conclude that

\[\mathbb{P}_{P}\big{(}\hat{A}_{\sigma,\tau,\alpha,m}^{\mathrm{ISS}}(\mathcal{D})\subseteq\mathcal{X}_{\tau}(\eta)\bigm{|}\mathcal{D}_{X}\big{)}\geq 1-\alpha,\]

as required.

### Proofs from Section 3.2

The following proposition shows that if we only know that \(P\in\mathcal{P}_{\mathrm{Mon},d}(\sigma)\), then it is impossible to provide non-trivial uniform power guarantees for data-dependent selection sets with Type I error control.

**Proposition 26**.: _Let \(d\in\mathbb{N}\), \(\tau\in\mathbb{R}\), \(\sigma>0\) and \(\alpha\in(0,1)\). Then, for any \(n\in\mathbb{N}\),_

\[\sup_{P\in\mathcal{P}_{\mathrm{Mon},d}(\sigma)}\inf_{\hat{A}\in\hat{\mathcal{A}}_{n}(\tau,\alpha,\mathcal{P}_{\mathrm{Mon},d}(\sigma))}\mathbb{E}_{P}\big{\{}\mu\big{(}\mathcal{X}_{\tau}(\eta)\setminus\hat{A}(\mathcal{D})\big{)}\big{\}}\geq 1-\alpha.\]

Proof of Proposition 26.: Fix a Borel probability measure \(\mu\) on \(\mathbb{R}^{d}\). For \(\Delta\in\mathbb{R}\), let \(\eta_{\Delta}:\mathbb{R}^{d}\to\mathbb{R}\) denote the constant function satisfying \(\eta_{\Delta}(x):=\tau+\Delta\) for all \(x\in\mathbb{R}^{d}\), and let \(P_{\Delta}\) denote the distribution on \(\mathbb{R}^{d}\times\mathbb{R}\) of \((X,Y)\), where \(X\sim\mu\) and \(Y|X\sim\mathcal{N}\big{(}\eta_{\Delta}(X),\sigma^{2}\big{)}\). Thus \(\{P_{\Delta}:\Delta\in\mathbb{R}\}\subseteq\mathcal{P}_{\mathrm{Mon},d}(\sigma)\). Moreover, for any \(\Delta>0\), we have by Pinsker's inequality that

\[\mathrm{TV}(P_{-\Delta}^{n},P_{0}^{n})\leq\sqrt{\frac{n}{2}\cdot\mathrm{KL}(P_{-\Delta},P_{0})}=\sqrt{\frac{n}{2}\cdot\mathrm{KL}\big{(}\mathcal{N}(\tau-\Delta,\sigma^{2}),\mathcal{N}(\tau,\sigma^{2})\big{)}}\leq\frac{\sqrt{n}\Delta}{2\sigma}.\]

Now fix \(\Delta>0\), and suppose that \(\hat{A}\in\hat{\mathcal{A}}_{n}\big{(}\tau,\alpha,\mathcal{P}_{\mathrm{Mon},d}(\sigma)\big{)}\).
Since \(\mathcal{X}_{\tau}(\eta_{-\Delta})=\emptyset\), we have for every \(x\in\mathbb{R}^{d}\) that

\[\mathbb{P}_{P_{0}}\big(x\in\hat{A}(\mathcal{D})\big)\leq\mathbb{P}_{P_{-\Delta}}\big(x\in\hat{A}(\mathcal{D})\big)+\mathrm{TV}\big(P_{-\Delta}^{n},P_{0}^{n}\big)\leq\mathbb{P}_{P_{-\Delta}}\big(\hat{A}(\mathcal{D})\nsubseteq\mathcal{X}_{\tau}(\eta_{-\Delta})\big)+\frac{\sqrt{n}\Delta}{2\sigma}\leq\alpha+\frac{\sqrt{n}\Delta}{2\sigma}.\]

Hence, by Fubini's theorem,

\[\mathbb{E}_{P_{0}}\big\{\mu\big(\hat{A}(\mathcal{D})\big)\big\}=\mathbb{E}_{P_{0}}\bigg(\int_{\mathbb{R}^{d}}\mathbb{1}_{\{x\in\hat{A}(\mathcal{D})\}}\,d\mu(x)\bigg)=\int_{\mathbb{R}^{d}}\mathbb{P}_{P_{0}}\big(x\in\hat{A}(\mathcal{D})\big)\,d\mu(x)\leq\alpha+\frac{\sqrt{n}\Delta}{2\sigma}.\]

Moreover, by our choice of \(\eta_{0}\), we have \(\mathcal{X}_{\tau}(\eta_{0})=\mathbb{R}^{d}\), and hence

\[\mathbb{E}_{P_{0}}\big\{\mu\big(\mathcal{X}_{\tau}(\eta_{0})\setminus\hat{A}(\mathcal{D})\big)\big\}=1-\mathbb{E}_{P_{0}}\big\{\mu\big(\hat{A}(\mathcal{D})\big)\big\}\geq 1-\alpha-\frac{\sqrt{n}\Delta}{2\sigma}.\]

The result follows by taking an infimum over \(\hat{A}\in\hat{\mathcal{A}}_{n}\big(\tau,\alpha,\mathcal{P}_{\mathrm{Mon},d}(\sigma)\big)\), and then letting \(\Delta\to 0\).

Proof of Theorem 11.: Let us define \(C_{0}:=962\geq 2\{16\lor 481^{1/(2\beta+1)}\}\) and \(C:=3C_{0}\). Further, let

\[\xi:=\frac{C_{0}}{2}\Bigg\{\bigg(\frac{\sigma^{2}}{n\nu^{2}}\log_{+}\Big(\frac{\log_{+}n}{\alpha\wedge\delta}\Big)\bigg)^{1/(2\beta+1)}+\frac{\log_{+}(1/\delta)}{n}\Bigg\}.\]

By the choice of \(\xi\), the result holds if \(\mu\big(\mathcal{X}_{\tau}(\eta)\big)\leq 2\xi\). We may therefore assume henceforth that \(\xi<1/2\) and \(\mu\big(\mathcal{X}_{\tau}(\eta)\big)>2\xi\). Then, since \(P\in\mathcal{P}_{\mathrm{Mar},d}(\tau,\beta,\nu)\), we have that \(\mu\big(\mathcal{X}_{\tau+\nu\xi^{\beta}}(\eta)\big)>\xi\), so \(x_{0}:=\inf\{x\in\mathcal{X}_{\tau+\nu\xi^{\beta}}(\eta):\mu\big(\mathcal{X}_{\tau+\nu\xi^{\beta}}(\eta)\cap(-\infty,x]\big)\geq\xi\}\) is finite. For \(I_{\xi}:=\mathcal{X}_{\tau+\nu\xi^{\beta}}(\eta)\cap(-\infty,x_{0}]\), it then holds that \(\mu(I_{\xi})\geq\xi\). Further, by Lemma 49_(i)_ and the fact that \(C_{0}\geq 32\),

\[n\xi\geq 16\log_{+}\Big(\frac{1}{\delta}\Big)\geq 8\log\Big(\frac{2}{\delta}\Big).\]

Writing \(\Omega_{0}:=\big\{n^{-1}\sum_{i=1}^{n}\mathbb{1}_{\{X_{i}\in I_{\xi}\}}\geq\xi/2\big\}\), it follows by a multiplicative Chernoff bound (McDiarmid, 1998, Theorem 2.3(c)) that

\[\mathbb{P}_{P}\big(\Omega_{0}^{c}\big)\leq e^{-n\xi/8}\leq\frac{\delta}{2}.\]

By the choice of \(\xi\), it holds on \(\Omega_{0}\) that

\[\sum_{i=1}^{n}\mathbb{1}_{\{X_{i}\in I_{\xi}\}}\geq\frac{n\xi}{2}\geq\frac{1}{2}\cdot\Big(\frac{C_{0}}{2}\Big)^{2\beta+1}\cdot\frac{\sigma^{2}}{\nu^{2}\xi^{2\beta}}\log_{+}\Big(\frac{\log_{+}n}{\alpha\wedge\delta}\Big);\]

in particular, since \(n\xi\geq 8\log(2/\delta)\), it holds on this event that \(\sum_{i=1}^{n}\mathbb{1}_{\{X_{i}\in I_{\xi}\}}\geq 1\). Thus, we can fix \(i_{1}\in[n]\) such that \(X_{i_{1}}=\max(\mathcal{D}_{X}\cap I_{\xi})\). Furthermore, let \(i_{1},\ldots,i_{K}\in[n]\) with \(K:=|\{i\in[n]:X_{i}\geq X_{i_{1}}\}|\) be the maximal set of indices such that \(X_{i_{k}}\geq X_{i_{1}}\) for all \(k\in[K]\).
Writing \(r_{k}:=X_{i_{k}}-\min(\mathcal{D}_{X}\cap I_{\xi})\) and noting that \(\mathcal{I}_{r}(x)=\{i\in[n]:x-r\leq X_{i}\leq x\}\) for \(r\geq 0\) and \(x\in\mathbb{R}\), we have on \(\Omega_{0}\) that for all \(k\in[K]\),

\[n\geq|\mathcal{I}_{r_{k}}(X_{i_{k}})|\geq|\mathcal{I}_{r_{1}}(X_{i_{1}})|\geq\frac{1}{2}\cdot\Big(\frac{C_{0}}{2}\Big)^{2\beta+1}\cdot\frac{\sigma^{2}}{\nu^{2}\xi^{2\beta}}\log_{+}\Big(\frac{\log_{+}n}{\alpha\wedge\delta}\Big).\]

Hence, writing \(u_{\delta^{\prime}}(\ell):=1.7\sqrt{\ell\big\{0.72\log(5.2/\delta^{\prime})+\log\log(2\ell)\big\}}\) for \(\delta^{\prime}\in(0,1)\) and \(\ell\in\mathbb{N}\) as in Lemma 45_(a)_, we have for all \(k\in[K]\) that

\[\bigg(\frac{u_{\alpha}\big(|\mathcal{I}_{r_{k}}(X_{i_{k}})|\big)+u_{\delta/2}\big(|\mathcal{I}_{r_{k}}(X_{i_{k}})|\big)}{|\mathcal{I}_{r_{k}}(X_{i_{k}})|}\bigg)^{2}\leq\frac{2\cdot 1.7^{2}\cdot 0.72\cdot\log\big(\frac{10.4}{\alpha\wedge\delta}\big)+4\cdot 1.7^{2}\cdot\log\big\{2\log_{+}\big(|\mathcal{I}_{r_{k}}(X_{i_{k}})|\big)\big\}}{|\mathcal{I}_{r_{k}}(X_{i_{k}})|}\leq 11.56\cdot\frac{\log\big(\frac{20.8\log_{+}n}{\alpha\wedge\delta}\big)}{|\mathcal{I}_{r_{1}}(X_{i_{1}})|}\leq 11.56\cdot 20.8\cdot\frac{\log_{+}\big(\frac{\log_{+}n}{\alpha\wedge\delta}\big)}{|\mathcal{I}_{r_{1}}(X_{i_{1}})|}\leq 481\cdot\Big(\frac{2}{C_{0}}\Big)^{2\beta+1}\cdot\frac{\nu^{2}\xi^{2\beta}}{\sigma^{2}}\leq\frac{\nu^{2}\xi^{2\beta}}{\sigma^{2}},\]

where we used the inequality \((a+b)^{2}\leq 2(a^{2}+b^{2})\) for \(a,b\geq 0\), Lemma 49_(i)_ and the fact that \(C_{0}\geq 2\cdot 481^{1/(2\beta+1)}\). For

\[\Omega_{1}\big(X_{i_{1}}\big):=\bigcap_{k=1}^{K}\bigg\{\sum_{i\in\mathcal{I}_{r_{k}}(X_{i_{k}})}\frac{Y_{i}-\tau}{\sigma}\geq u_{\alpha}\big(|\mathcal{I}_{r_{k}}(X_{i_{k}})|\big)\bigg\},\]

we therefore have on \(\Omega_{0}\) that

\[\mathbb{P}_{P}\Big(\Omega_{1}\big(X_{i_{1}}\big)^{c}\bigm|\mathcal{D}_{X}\Big)=\mathbb{P}_{P}\bigg(\bigcup_{k=1}^{K}\bigg\{\sum_{i\in\mathcal{I}_{r_{k}}(X_{i_{k}})}\frac{Y_{i}-(\tau+\nu\xi^{\beta})}{\sigma}<u_{\alpha}\big(|\mathcal{I}_{r_{k}}(X_{i_{k}})|\big)-|\mathcal{I}_{r_{k}}(X_{i_{k}})|\frac{\nu\xi^{\beta}}{\sigma}\bigg\}\;\bigg|\;\mathcal{D}_{X}\bigg)\leq\mathbb{P}_{P}\bigg(\bigcup_{k=1}^{K}\bigg\{\sum_{i\in\mathcal{I}_{r_{k}}(X_{i_{k}})}\frac{Y_{i}-(\tau+\nu\xi^{\beta})}{\sigma}<-u_{\delta/2}\big(|\mathcal{I}_{r_{k}}(X_{i_{k}})|\big)\bigg\}\;\bigg|\;\mathcal{D}_{X}\bigg)\leq\frac{\delta}{2},\]

where the last inequality follows from Lemma 45_(a)_. Let \(n(x)\) and \(\big(Y_{(j)}(x)\big)_{j\in[n(x)]}\) be as in Definition 1. For \(k\in[K]\), write \(n_{k}\equiv n_{k}(x,\mathcal{D}_{X}):=|\mathcal{I}_{r_{k}}(x)|\) and

\[\hat{p}_{\sigma,\tau}^{r_{k}}(x,\mathcal{D}):=5.2\exp\bigg\{-\frac{\max\big(\sum_{j=1}^{n_{k}}Y_{(j)}(x)-\tau n_{k},0\big)^{2}}{2.0808\sigma^{2}n_{k}}+\frac{\log\log(2n_{k})}{0.72}\bigg\}.\]

We have on \(\Omega_{0}\cap\Omega_{1}\big(X_{i_{1}}\big)\) that

\[\max_{k\in[K]}\hat{p}_{\sigma,\tau}(X_{i_{k}},\mathcal{D})\leq\max_{k\in[K]}\hat{p}_{\sigma,\tau}^{r_{k}}(X_{i_{k}},\mathcal{D})\leq\alpha,\]

so that \(\{i_{k}:k\in[K]\}\subseteq\mathcal{R}_{\alpha}^{\mathrm{ISS}}\big(\mathcal{G}_{\mathrm{W}}(\mathcal{D}_{X}),\big(\hat{p}_{\sigma,\tau}(X_{i},\mathcal{D})\big)_{i\in[n]}\big)\).
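(As an implementation-oriented aside, the explicit formula for \(\hat{p}^{r_{k}}_{\sigma,\tau}\) displayed above is elementary to evaluate. The following minimal Python sketch is our own illustration, not code from any accompanying package; the function and variable names are ours, and `ys` is assumed to list the responses whose covariates fall in the window \(\mathcal{I}_{r}(x)\). Note that the raw expression may exceed \(1\); in the argument above it is only ever compared with a threshold such as \(\alpha\), so no truncation is needed.)

```python
import math

def p_hat_window(ys, tau, sigma):
    """Sketch of the windowed p-value displayed above:
    5.2 * exp(-max(sum(Y) - tau*n, 0)^2 / (2.0808 * sigma^2 * n)
              + log(log(2n)) / 0.72).
    Hypothetical helper; names are illustrative only."""
    n = len(ys)
    if n == 0:
        return 1.0  # empty window: no evidence against the null
    excess = max(sum(ys) - tau * n, 0.0)
    log_p = -excess ** 2 / (2.0808 * sigma ** 2 * n) \
            + math.log(math.log(2 * n)) / 0.72
    return 5.2 * math.exp(log_p)
```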
Since \(X_{i_{1}}\in I_{\xi}\), we have on \(\Omega_{0}\cap\Omega_{1}\big{(}X_{i_{1}}\big{)}\) that \[[x_{0},\infty)\subseteq\big{[}X_{i_{1}},\infty)\subseteq\hat{A}_{\sigma,\tau, \alpha,n}^{\rm ISS}(\mathcal{D}).\] It follows that \[\mu\big{(}\mathcal{X}_{\tau}(\eta)\setminus\hat{A}_{\sigma,\tau, \alpha,n}^{\rm ISS}(\mathcal{D})\big{)} \leq\mu\big{(}\mathcal{X}_{\tau}(\eta)\setminus\mathcal{X}_{\tau+ \nu\xi^{\beta}}(\eta)\big{)}+\mu\big{(}\mathcal{X}_{\tau+\nu\xi^{\beta}}(\eta )\setminus\hat{A}_{\sigma,\tau,\alpha,n}^{\rm ISS}(\mathcal{D})\big{)}\] \[\leq\xi+\mu\big{(}\mathcal{X}_{\tau+\nu\xi^{\beta}}(\eta)\cap(- \infty,X_{i_{1}})\big{)}\] \[\leq\xi+\mu\big{(}\mathcal{X}_{\tau+\nu\xi^{\beta}}(\eta)\cap(- \infty,x_{0})\big{)}\leq 2\xi,\] since \(P\in\mathcal{P}_{\rm Mar,1}(\tau,\beta,\nu)\). We conclude that \[\mathbb{P}_{P}\biggl{[}\mu\big{(}\mathcal{X}_{\tau}(\eta)\setminus \hat{A}_{\sigma,\tau,\alpha,n}^{\rm ISS}(\mathcal{D})\big{)} >1\wedge C_{0}\biggl{\{}\left(\frac{\sigma^{2}}{n\nu^{2}}\log_{+} \Bigl{(}\frac{\log_{+}n}{\alpha\wedge\delta}\Bigr{)}\right)^{1/(2\beta+1)}+ \frac{\log_{+}(1/\delta)}{n}\biggr{\}}\biggr{]}\] \[=\mathbb{P}_{P}\Bigl{(}\mu\big{(}\mathcal{X}_{\tau}(\eta)\setminus \hat{A}_{\sigma,\tau,\alpha,n}^{\rm ISS}(\mathcal{D})\big{)}>2\xi\Bigr{)}\leq \mathbb{P}_{P}\bigl{(}\Omega_{0}^{c}\cup\Omega_{1}(X_{i_{1}})^{c}\bigr{)}\leq\delta.\] This proves the first statement in the theorem, and we deduce the second result by integrating our tail bound over \(\delta\in(0,1)\). Since this part of the calculation is an identical argument to that in the multivariate case, we refer the reader to (10), (11) and (12) in the proof of Theorem 15 for details. Since \(C=3C_{0}\), the result follows. Proof of Proposition 12.: Take \(q\in\mathbb{N}\) and let \(\mathbb{W}_{q,d}\subseteq[q]^{d}\) and \((\mathcal{H}^{q}_{\mathbf{j}}:\mathbf{j}\in\mathbb{W}_{q,d})\) be as in Section 3.3. Let \(\mu_{q}\) denote the uniform distribution on \(\bigcup_{\mathbf{j}\in\mathbb{W}_{q,d}}\mathcal{H}^{q}_{\mathbf{j}}\). For each \(\mathbf{j}=(j_{1},\ldots,j_{d})\in\mathbb{W}_{q,d}\), we define \(\eta_{q,\mathbf{j}}:\mathbb{R}^{d}\to\mathbb{R}\) by \[\eta_{q,\mathbf{j}}(x)\equiv\eta_{q,\mathbf{j}}(x_{1},\ldots,x_{d}):=\left\{\begin{aligned} &\tau-\nu&\text{ if }x_{\ell}<j_{\ell}/q \text{ for all }\ell\in[d]\\ &\tau+\nu&\text{ otherwise.}\end{aligned}\right.\] We also define \(\eta_{q,*}:\mathbb{R}^{d}\to\mathbb{R}\) to be the constant function \(\eta_{q,*}(x):=\tau+\nu\). For \(\mathbf{j}\in\mathbb{W}_{q,d}\cup\{*\}\), let \(P_{q,\mathbf{j}}\in\mathcal{P}_{\text{Mon},d}(\sigma)\) denote the distribution on \([0,1]^{d}\times\mathbb{R}\) of \((X,Y)\), where \(X\sim\mu_{q}\) and \(Y|X\sim\mathcal{N}\big{(}\eta_{q,\mathbf{j}}(X),\sigma^{2}\big{)}\). Moreover, \(\mu_{q}\big{(}\eta_{q,\mathbf{j}}^{-1}([\tau,\tau+\nu\xi^{\beta}])\big{)}=\mu_{q}( \emptyset)=0\leq\xi\) for all \(\xi<1\). On the other hand, if \(\xi=1\) then \(\mu_{q}\big{(}\eta_{q,\mathbf{j}}^{-1}([\tau,\tau+\nu\xi^{\beta}])\big{)}\leq 1=\xi\). Thus, \(P_{q,\mathbf{j}}\in\mathcal{P}_{\text{Mon},d}(\sigma)\cap\mathcal{P}_{\text{Mar},d }(\tau,\beta,\nu)\) for all \(\mathbf{j}\in\mathbb{W}_{q,d}\cup\{*\}\). In addition, given any \(\mathbf{j}=(j_{1},\ldots,j_{d})\in\mathbb{W}_{q,d}\), and \(\mathbf{j}^{\prime}=(j^{\prime}_{1},\ldots,j^{\prime}_{d})\in\mathbb{W}_{q,d} \backslash\{\mathbf{j}\}\) we must have \(j^{\prime}_{\ell^{\prime}}>j_{\ell^{\prime}}\) for some \(\ell^{\prime}\in[d]\) by the antichain property. 
Hence \(x_{\ell^{\prime}}\geq(j^{\prime}_{\ell^{\prime}}-1)/q\geq j_{\ell^{\prime}}/q\) for all \(x=(x_{1},\ldots,x_{d})^{\top}\in\mathcal{H}^{q}_{\boldsymbol{j}^{\prime}}\), so \(\eta_{q,\boldsymbol{j}}(x)=\tau+\nu=\eta_{q,*}(x)\) for such \(x\). Consequently, for each \(\boldsymbol{j}\in\mathbb{W}_{q,d}\), we have

\[\mathrm{KL}(P_{q,*},P_{q,\boldsymbol{j}})=\int_{\mathbb{R}^{d}}\mathrm{KL}\big\{\mathcal{N}\big(\eta_{q,*}(x),\sigma^{2}\big),\mathcal{N}\big(\eta_{q,\boldsymbol{j}}(x),\sigma^{2}\big)\big\}\,d\mu_{q}(x)=\mu_{q}(\mathcal{H}^{q}_{\boldsymbol{j}})\cdot\frac{2\nu^{2}}{\sigma^{2}}=\frac{1}{|\mathbb{W}_{q,d}|}\cdot\frac{2\nu^{2}}{\sigma^{2}}\leq\frac{2\nu^{2}d}{q^{d-1}\sigma^{2}}.\]

Hence, by Pinsker's inequality,

\[\mathrm{TV}\big(P_{q,*}^{n},P_{q,\boldsymbol{j}}^{n}\big)\leq\sqrt{\frac{1}{2}\cdot\mathrm{KL}\big(P_{q,*}^{n},P_{q,\boldsymbol{j}}^{n}\big)}=\sqrt{\frac{n}{2}\cdot\mathrm{KL}\big(P_{q,*},P_{q,\boldsymbol{j}}\big)}\leq\sqrt{\frac{n\nu^{2}d}{q^{d-1}\sigma^{2}}}.\]

To complete the proof, consider \(\hat{A}\in\hat{\mathcal{A}}_{n}(\tau,\alpha,\mathcal{P}^{\prime})\). Let \(x\in\bigcup_{\boldsymbol{j}\in\mathbb{W}_{q,d}}\mathcal{H}^{q}_{\boldsymbol{j}}\), so we can find \(\boldsymbol{j}_{x}\in\mathbb{W}_{q,d}\) such that \(x\in\mathcal{H}^{q}_{\boldsymbol{j}_{x}}\). Then \(x\notin\mathcal{X}_{\tau}(\eta_{q,\boldsymbol{j}_{x}})\), so

\[\mathbb{P}_{P_{q,*}}\big(x\in\hat{A}(\mathcal{D})\big)\leq\mathbb{P}_{P_{q,\boldsymbol{j}_{x}}}\big(x\in\hat{A}(\mathcal{D})\big)+\mathrm{TV}\big(P_{q,*}^{n},P_{q,\boldsymbol{j}_{x}}^{n}\big)\leq\mathbb{P}_{P_{q,\boldsymbol{j}_{x}}}\big(\hat{A}\nsubseteq\mathcal{X}_{\tau}(\eta_{q,\boldsymbol{j}_{x}})\big)+\sqrt{\frac{n\nu^{2}d}{q^{d-1}\sigma^{2}}}\leq\alpha+\sqrt{\frac{n\nu^{2}d}{q^{d-1}\sigma^{2}}}.\]

Hence, by Fubini's theorem,

\[\mathbb{E}_{P_{q,*}}\big\{\mu_{q}\big(\hat{A}(\mathcal{D})\big)\big\}=\mathbb{E}_{P_{q,*}}\bigg(\int_{\mathbb{R}^{d}}\mathbb{1}_{\{x\in\hat{A}(\mathcal{D})\}}\,d\mu_{q}(x)\bigg)=\int_{\mathbb{R}^{d}}\mathbb{P}_{P_{q,*}}\big(x\in\hat{A}(\mathcal{D})\big)\,d\mu_{q}(x)\leq\alpha+\sqrt{\frac{n\nu^{2}d}{q^{d-1}\sigma^{2}}}.\]

By our choice of \(\eta_{q,*}\), we have \(\mathcal{X}_{\tau}(\eta_{q,*})=\mathbb{R}^{d}\), and hence

\[\mathbb{E}_{P_{q,*}}\big\{\mu_{q}\big(\mathcal{X}_{\tau}(\eta_{q,*})\setminus\hat{A}(\mathcal{D})\big)\big\}=1-\mathbb{E}_{P_{q,*}}\big\{\mu_{q}\big(\hat{A}(\mathcal{D})\big)\big\}\geq 1-\alpha-\sqrt{\frac{n\nu^{2}d}{q^{d-1}\sigma^{2}}}.\]

The result follows by taking an infimum over \(\hat{A}\in\hat{\mathcal{A}}_{n}(\tau,\alpha,\mathcal{P}^{\prime})\), and then letting \(q\to\infty\).

Proof of Proposition 14.: Fix \(P\in\mathcal{P}_{\mathrm{Mon},d}(\sigma)\cap\mathcal{P}_{\mathrm{Reg},d}(\tau,\theta,\gamma,\lambda)\) with marginal distribution \(\mu\) on \(\mathbb{R}^{d}\) and regression function \(\eta\), and let \(C:=2\cdot 3^{3d+1}\cdot(1+d^{1/2})^{d+1}\cdot\theta^{2}\geq 1\). Fix \(\xi\in(0,1]\), and let \(r:=2\xi/C\in(0,1]\). Let \(T\subseteq\mathcal{X}_{\tau+\lambda r^{\gamma}}(\eta)\) be as in Lemma 27, so that, by that same lemma,

\[\mu\big(\eta^{-1}([\tau,\tau+\lambda\xi^{\gamma}/C^{\gamma}])\big)=\mu\big(\eta^{-1}([\tau,\tau+\lambda(r/2)^{\gamma}])\big)\leq\mu\big(\eta^{-1}\big([\tau,\tau+\lambda r^{\gamma})\big)\big)\leq 3^{3d+1}\cdot(1+d^{1/2})^{d+1}\cdot\theta^{2}\cdot r=\xi,\]

as required.
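As a numerical aside on the Pinsker steps used in the proofs of Propositions 12 and 26 above: for the pure location-shift construction of Proposition 26, the total variation distance between the \(n\)-fold Gaussian products is available in closed form, because the likelihood ratio depends on the data only through the sample mean. The following Python sketch (our own illustration, not part of the paper's development) compares the exact value with the Pinsker bound \(\sqrt{n}\Delta/(2\sigma)\).

```python
import math

def phi(z):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def tv_gaussian_products(n, delta, sigma):
    """Exact TV between N(tau - delta, sigma^2)^n and N(tau, sigma^2)^n:
    the likelihood ratio is a function of the sample mean alone, so this
    equals the TV between N(0, sigma^2/n) and N(delta, sigma^2/n)."""
    z = math.sqrt(n) * delta / sigma
    return 2.0 * phi(z / 2.0) - 1.0

def pinsker_bound(n, delta, sigma):
    # sqrt((n/2) * KL) with KL(N(tau-delta,s^2), N(tau,s^2)) = delta^2/(2 s^2)
    return math.sqrt(n) * delta / (2.0 * sigma)

for n in (10, 100, 1000):
    delta, sigma = 0.05, 1.0
    print(n, round(tv_gaussian_products(n, delta, sigma), 4),
          round(pinsker_bound(n, delta, sigma), 4))
```

In each case the exact distance sits below the bound, consistent with Pinsker's inequality; the bound is loose only by a constant factor in this regime.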
Proof of Theorem 15.: Let us define \(C:=3^{3d+1+1/d}\cdot(1+d^{1/2})^{d+1}\cdot\theta^{2}\cdot C_{\circ}\), where \[C_{\circ}:=\big{(}2^{6}\theta\log(4\cdot 9^{d}\cdot\theta)\big{)}^{1/d}\geq \big{(}2^{4}\theta\cdot\log(4\cdot 9^{d}\cdot\theta)\big{)}^{1/d}\vee(175\cdot \theta)^{1/(2\gamma+d)}.\] Further, let \[r:=C_{\circ}\bigg{\{}\bigg{(}\frac{\sigma^{2}}{n\lambda^{2}}\log_{+}\Big{(} \frac{m\log_{+}n}{\alpha\wedge\delta}\Big{)}\bigg{)}^{1/(2\gamma+d)}+\bigg{(} \frac{\log_{+}(m/\delta)}{m}\bigg{)}^{1/d}\bigg{\}}.\] Since \(C\geq C_{\circ}\), the first result is immediate if \(r>1\). We therefore assume henceforth that \(r\leq 1\) (so in particular, \(n\lambda^{2}\geq\sigma^{2}\)). Observe that \[m\cdot r^{d}\geq C_{\circ}^{d}\cdot\log_{+}(m/\delta) \geq 2^{4}\theta\cdot\log(4\cdot 9^{d}\cdot\theta)\cdot\log_{+}(m^{(d-1 )/d}/\delta)\] \[\geq 8\theta\big{\{}\log(4\cdot 9^{d}\cdot\theta)+\log(r^{-(d-1 )}/\delta)\big{\}}=8\theta\log(4\cdot 9^{d}\cdot\theta\cdot r^{-(d-1)}/\delta),\] since \(r\geq m^{-1/d}\) and so \(m^{(d-1)/d}\geq r^{-(d-1)}\). Now, let \(\big{(}(S_{j}^{0},S_{j}^{1})\big{)}_{j\in[q]}\) be the hypercubes in Lemma 27 and let \((\Omega_{0,j,k})_{j\in[q],k\in\{0,1\}}\) be the events in Lemma 28. Then on \(\bigcap_{j\in[q]}\Omega_{0,j,1}\), we have for each \(j\in[q]\) that there exists \(i_{j}\in[m]\) with \(X_{i_{j}}\in S_{j}^{1}\). We extend \(\{X_{i_{1}},\ldots,X_{i_{q}}\}\) to a maximal set \(\{X_{i_{1}},\ldots,X_{i_{\ell}}\}\) with \(\ell\in\{q,q+1,\ldots,m\}\) such that \(X_{i_{q+1}},\ldots,X_{i_{\ell}}\in\bigcup_{j=1}^{q}\bigl{\{}x\in\mathbb{R}^{d} :X_{i_{j}}\preccurlyeq x\bigr{\}}\). Note that since \(S_{j}^{1}\succcurlyeq S_{j}^{0}\) for every \(j\in[q]\), for every \(s\in[\ell]\) there exists \(j_{s}\in[q]\) such that \(\{X_{i_{s}}\}\succcurlyeq S_{j_{s}}^{0}\). For \(s\in[\ell]\) and \(x\in\mathbb{R}\), write \(r_{s}:=\sup_{x^{\prime}\in S_{j_{s}}^{0}}\|x^{\prime}-X_{i_{s}}\|_{\infty}\), \(\mathcal{I}_{r_{s}}(x):=\{i\in[n]:X_{i}\preccurlyeq x,\|X_{i}-x\|_{\infty}\leq r _{s}\}\), \(n_{s}:=|\mathcal{I}_{r_{s}}(x)|\) and \[\hat{p}_{\sigma,\tau}^{r_{s}}(x,\mathcal{D}):=5.2\exp\biggl{\{}-\frac{\max \bigl{(}\sum_{i\in\mathcal{I}_{r_{s}}(x)}Y_{i}-\tau n_{s},0\bigr{)}^{2}}{2.080 8\sigma^{2}n_{s}}+\frac{\log\log(2n_{s})}{0.72}\biggr{\}}.\] By the choice of \(r\), we have on \(\bigcap_{j\in[q],k\in\{0,1\}}\Omega_{0,j,k}\) that \[\min_{s\in[\ell]}|\mathcal{I}_{r_{s}}(X_{i_{s}})|=\min_{j\in[q]}| \mathcal{I}_{r_{j}}(X_{i_{j}})| \geq\min_{j\in[q]}\sum_{i=1}^{n}\mathbbm{1}_{\{X_{i}\in S_{j}^{0} \}}\geq\frac{nr^{d}}{2\theta}\] \[\geq\frac{C_{\circ}^{2\gamma+d}\cdot\sigma^{2}}{2\cdot\theta \cdot\lambda^{2}\cdot r^{2\gamma}}\cdot\log_{+}\Big{(}\frac{m\log_{+}n}{ \alpha\wedge\delta}\Big{)}\] \[\geq\frac{2\cdot\sigma^{2}}{\lambda^{2}\cdot r^{2\gamma}}\cdot 4.2 \cdot 5.2\cdot 2\cdot 2\log_{+}\Big{(}\frac{m\log_{+}n}{\alpha\wedge\delta} \Big{)}\] \[\geq\frac{2\sigma^{2}}{\lambda^{2}\cdot r^{2\gamma}}\biggl{\{}4. 2\log\Bigl{(}\frac{5.2m}{\alpha\wedge\delta}\Bigr{)}+3\log(2\log_{+}n)\biggr{\}}\] \[\geq\frac{2\sigma^{2}}{\lambda^{2}\cdot r^{2\gamma}}\biggl{\{}2\log \Bigl{(}\frac{2m}{\delta}\Bigr{)}+2.0808\log\Bigl{(}\frac{5.2m}{\alpha}\Bigr{)}+ 3\log\log(2n)\biggr{\}},\] where the last two inequalities follow from Lemma 49_(i)_. Let \(\big{(}\Omega_{1,s}(\cdot)\big{)}_{s\in[\ell]}\) be as in Lemma 29. 
Then, on \(\Omega_{*}:=\bigcap_{j\in[q],k\in\{0,1\}}\Omega_{0,j,k}\cap\bigcap_{s\in[\ell]}\Omega_{1,s}(X_{i_{s}})\), we have

\[\max_{s\in[\ell]}\hat{p}_{\sigma,\tau}(X_{i_{s}},\mathcal{D})\leq\max_{s\in[\ell]}\hat{p}_{\sigma,\tau}^{r_{s}}(X_{i_{s}},\mathcal{D})\leq\frac{\alpha}{m}. \tag{9}\]

We claim that on \(\Omega_{*}\), we have \(I_{1}:=\{i_{1},\ldots,i_{\ell}\}\subseteq\mathcal{R}_{\alpha}^{\mathrm{ISS}}\big(\mathcal{G}_{\mathrm{W}}(\mathcal{D}_{X,m}),\big(\hat{p}_{\sigma,\tau}(X_{i},\mathcal{D})\big)_{i\in[m]}\big)\), and prove this by contradiction. First, for \(([m],E,\mathbf{w}):=\mathcal{G}_{\mathrm{W}}(\mathcal{D}_{X,m})\), denote for brevity \(G:=([m],E)=\mathcal{G}(\mathcal{D}_{X,m})\) and \(F:=([m],\{e\in E:w_{e}=1\})=\mathcal{G}_{\mathrm{F}}(\mathcal{D}_{X,m})\) as in Algorithm 1. Moreover, define \(\alpha(\cdot,\cdot)\) as in (5) in the proof of Lemma 8 and write \(R:=\mathcal{R}_{\alpha}^{\mathrm{ISS}}\big(\mathcal{G}_{\mathrm{W}}(\mathcal{D}_{X,m}),\big(\hat{p}_{\sigma,\tau}(X_{i},\mathcal{D})\big)_{i\in[m]}\big)\). Suppose now for a contradiction that there exists \(s\in[\ell]\) such that \(i_{s}\notin R\) and write \(I_{*}:=\{i\in[m]:\alpha(i,R)>0\}\subseteq[m]\setminus R\). Now \(I_{1}\) is \(G\)-upper by construction and hence also \(F\)-upper. Consequently, there exists \(s^{\prime}\in[\ell]\) with \(i_{s^{\prime}}\in I_{*}\cap\big(\{i_{s}\}\cup\operatorname{an}_{F}(i_{s})\big)\subseteq I_{1}\setminus R\), which in turn necessitates by Algorithm 1 that \(\hat{p}_{\sigma,\tau}(X_{i_{s^{\prime}}},\mathcal{D})>\alpha(i_{s^{\prime}},R)\). Moreover, as \(R\) is also \(G\)-upper by construction and therefore \(F\)-upper, we deduce that \(\big(\{i_{s^{\prime}}\}\cup\operatorname{de}_{F}(i_{s^{\prime}})\big)\cap R=\emptyset\) while \(\big(\{i_{s^{\prime}}\}\cup\operatorname{de}_{F}(i_{s^{\prime}})\big)\cap L(F)\neq\emptyset\), so that \(\big|\big(\{i_{s^{\prime}}\}\cup\operatorname{de}_{F}(i_{s^{\prime}})\big)\cap L(F)\setminus R\big|\geq 1\). Thus, by (9),

\[\alpha(i_{s^{\prime}},R)=\frac{\big|\big(\{i_{s^{\prime}}\}\cup\operatorname{de}_{F}(i_{s^{\prime}})\big)\cap L(F)\setminus R\big|}{|L(F)\setminus R|}\cdot\alpha\geq\frac{\alpha}{m}\geq\hat{p}_{\sigma,\tau}(X_{i_{s^{\prime}}},\mathcal{D}),\]

which establishes our contradiction and therefore proves the claim. It follows that on \(\Omega_{*}\), we have \(\{i_{1},\ldots,i_{q}\}\subseteq\mathcal{R}_{\alpha}^{\mathrm{ISS}}\big(\mathcal{G}_{\mathrm{W}}(\mathcal{D}_{X,m}),\big(\hat{p}_{\sigma,\tau}(X_{i},\mathcal{D})\big)_{i\in[m]}\big)\), so taking the Borel measurable \(T\subseteq\mathcal{X}_{\tau}(\eta)\cap\operatorname{supp}(\mu)\) from Lemma 27, we have

\[T\subseteq\bigcup_{j=1}^{q}\{x\in\mathbb{R}^{d}:X_{i_{j}}\preccurlyeq x\}\subseteq\hat{A}_{\sigma,\tau,\alpha,m}^{\mathrm{ISS}}(\mathcal{D}).\]

Hence, on \(\Omega_{*}\),

\[\mu\big(\mathcal{X}_{\tau}(\eta)\setminus\hat{A}_{\sigma,\tau,\alpha,m}^{\mathrm{ISS}}(\mathcal{D})\big)\leq\mu\big(\mathcal{X}_{\tau}(\eta)\setminus T\big)\leq 3^{3d+1}\cdot(1+d^{1/2})^{d+1}\cdot\theta^{2}\cdot r,\]

where the second inequality follows from Lemma 27. Thus, for any \(m\in[n]\), we conclude by Lemmas 28 and 29 that

\[\mathbb{P}_{P}\bigg[\mu\big(\mathcal{X}_{\tau}(\eta)\setminus\hat{A}_{\sigma,\tau,\alpha,m}^{\mathrm{ISS}}(\mathcal{D})\big)>1\wedge\frac{C}{3^{1/d}}\bigg\{\bigg(\frac{\sigma^{2}}{n\lambda^{2}}\log_{+}\Big(\frac{m\log_{+}n}{\alpha\wedge\delta}\Big)\bigg)^{1/(2\gamma+d)}+\bigg(\frac{\log_{+}(m/\delta)}{m}\bigg)^{1/d}\bigg\}\bigg]\leq\delta.\]

This establishes the first (tail bound) statement; for the second, we integrate this bound over \(\delta\in(0,1)\).
Hence, by Jensen's inequality, \[\int_{0}^{1}\biggl{\{}\log_{+}\Bigl{(}\frac{m\log_{+}n}{\alpha\wedge\delta} \Bigr{)}\biggr{\}}^{1/(2\gamma+d)}\,d\delta\leq 3^{1/(2\gamma+d)}\biggl{\{}\log_{+} \Bigl{(}\frac{m\log_{+}n}{\alpha}\Bigr{)}\biggr{\}}^{1/(2\gamma+d)}. \tag{11}\] At the same time, by Jensen's inequality again, \[\int_{0}^{1}\log_{+}^{1/d}\Bigl{(}\frac{m}{\delta}\Bigr{)}\,d\delta \leq\biggl{\{}\int_{0}^{1}\log_{+}\Bigl{(}\frac{m}{\delta}\Bigr{)} \,d\delta\biggr{\}}^{1/d}\leq\biggl{\{}\int_{0}^{1}\log\Bigl{(}\frac{m\lor e}{ \delta}\Bigr{)}\,d\delta\biggr{\}}^{1/d}\] \[=\bigl{(}\log_{+}m+1\bigr{)}^{1/d}\leq 2^{1/d}\log_{+}^{1/d}m, \tag{12}\] whence the result follows as \(3^{1/(2\gamma+d)}\lor 2^{1/d}\leq 3^{1/d}\). Proof of Corollary 16.: If \(\lambda\geq\sigma\), then \(m_{0}=n\), and the result follows from Theorem 15. On the other hand, if \(\lambda<\sigma\), then \(m_{0}=\lceil n\lambda^{2}/\sigma^{2}\rceil\leq n\). As in the proof of Theorem 15, we may assume that \(n\lambda^{2}\geq\sigma^{2}\), so that \[\biggl{(}\frac{\log_{+}(m_{0}/\delta)}{m_{0}}\biggr{)}^{1/d}\leq\biggl{(}\frac {\sigma^{2}}{n\lambda^{2}}\log_{+}\Bigl{(}\frac{2n\lambda^{2}\log_{+}n}{\sigma ^{2}(\alpha\wedge\delta)}\Bigr{)}\biggr{)}^{1/d}\leq 2\biggl{(}\frac{\sigma^{2}}{n \lambda^{2}}\log_{+}\Bigl{(}\frac{n\lambda^{2}\log_{+}n}{\sigma^{2}(\alpha \wedge\delta)}\Bigr{)}\biggr{)}^{1/d}.\] Since the result is clear if \(\frac{\sigma^{2}}{n\lambda^{2}}\log_{+}\bigl{(}\frac{n\lambda^{2}\log_{+}n}{ \sigma^{2}(\alpha\wedge\delta)}\bigr{)}>1\), we may further assume that this quantity is at most \(1\). But then \[\biggl{(}\frac{\log_{+}(m_{0}/\delta)}{m_{0}}\biggr{)}^{1/d}\leq 2\biggl{(} \frac{\sigma^{2}}{n\lambda^{2}}\log_{+}\Bigl{(}\frac{n\lambda^{2}\log_{+}n}{ \sigma^{2}(\alpha\wedge\delta)}\Bigr{)}\biggr{)}^{1/(2\gamma+d)}.\] The \(\log_{+}\bigl{(}m_{0}/(\alpha\wedge\delta)\bigr{)}\) term can be handled similarly (in fact, in a slightly simpler way), so \[\mathbb{P}_{P}\biggl{[}\mu\bigl{(}\mathcal{X}_{\tau}(\eta)\setminus \mathring{A}^{\text{ISS}}_{\sigma,\tau,\alpha,m_{0}}(\mathcal{D})\bigr{)}>1 \wedge\frac{4C}{3^{1/d}}\biggl{\{}\biggl{(}\frac{\sigma^{2}}{n\lambda^{2}}\log _{+}\Bigl{(}\frac{n\lambda^{2}\log_{+}n}{\sigma^{2}(\alpha\wedge\delta)} \Bigr{)}\biggr{)}^{1/(2\gamma+d)}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad+\biggl{(}\frac{\log_{+}(n/\delta)}{n}\biggr{)}^{1/d}\biggr{\}}\biggr{]} \leq\delta.\] We can then deduce the expectation bound using the same techniques as in the proof of Theorem 15, and the result follows. **Lemma 27**.: _Let \(d\in\mathbb{N}\), \(\tau\in\mathbb{R}\), \(\sigma,\gamma,\lambda>0\), \(\theta\in(1,\infty)\) and take \(P\in\mathcal{P}_{\operatorname{Mon},d}(\sigma)\cap\mathcal{P}_{\operatorname{Reg },d}(\tau,\theta,\gamma,\lambda)\). 
_Given \(r\leq 1\), there exist \(q\leq\lfloor 9^{d}\cdot\theta\cdot r^{-(d-1)}\rfloor\in\mathbb{N}\) and pairs of hypercubes \(\big((S^{0}_{j},S^{1}_{j})\big)_{j\in[q]}\in\big(\operatorname{Pow}(\mathbb{R}^{d})\times\operatorname{Pow}(\mathbb{R}^{d})\big)^{q}\) such that \(S^{0}_{j}\preccurlyeq S^{1}_{j}\), \(S^{0}_{j}\subseteq\mathcal{X}_{\tau+\lambda\cdot r^{\gamma}}(\eta)\) and \(\mu(S^{0}_{j})\wedge\mu(S^{1}_{j})\geq\theta^{-1}\cdot r^{d}\), along with a Borel measurable set \(T\subseteq\mathcal{X}_{\tau+\lambda r^{\gamma}}(\eta)\cap\operatorname{supp}(\mu)\subseteq\mathcal{X}_{\tau}(\eta)\cap\operatorname{supp}(\mu)\) such that for every \(x\in T\) there exists \(j_{x}\in[q]\) with \(S^{1}_{j_{x}}\preccurlyeq\{x\}\), and_

\[\mu\big(\mathcal{X}_{\tau}(\eta)\setminus T\big)\leq 3^{3d+1}\cdot(1+d^{1/2})^{d+1}\cdot\theta^{2}\cdot r.\]

Proof.: Without loss of generality, assume that \(\operatorname{supp}(\mu)\cap\mathcal{X}_{\tau}(\eta)\neq\emptyset\). Write \(V_{1}:=\{s\cdot\mathbf{1}_{d}:s\in\mathbb{R}\}\) with orthogonal complement \(V_{1}^{\perp}\), and write \(\Pi_{V_{1}}:\mathbb{R}^{d}\to V_{1}\) and \(\Pi_{V_{1}^{\perp}}:\mathbb{R}^{d}\to V_{1}^{\perp}\) for the orthogonal projections onto \(V_{1}\) and \(V_{1}^{\perp}\) respectively. Fix \(r\leq 1\). We begin by showing that \(\Pi_{V_{1}^{\perp}}\big(\mathrm{supp}(\mu)\cap\mathcal{X}_{\tau}(\eta)\big)\) can be covered by \(q\leq\lfloor 3^{d-1}\cdot 2^{3d/2}\cdot\theta\cdot r^{-(d-1)}\rfloor\) closed Euclidean balls of radius \((d/2)^{1/2}r\). To see this, first let \((z_{\ell})_{\ell\in[p]}\) with \(p\in\mathbb{N}\cup\{\infty\}\) be a maximal sequence in \(\mathrm{supp}(\mu)\cap\mathcal{X}_{\tau}(\eta)\) with \(\|z_{j}-z_{j^{\prime}}\|_{\infty}>2^{-1/2}\) for \(j\neq j^{\prime}\). Then

\[\mathrm{supp}(\mu)\cap\mathcal{X}_{\tau}(\eta)\subseteq\bigcup_{j=1}^{p}B_{\infty}(z_{j},2^{-1/2})\subseteq\bigcup_{j=1}^{p}B_{2}(z_{j},(d/2)^{1/2}).\]

Moreover, since \(\big(B_{\infty}(z_{j},2^{-3/2})\big)_{j\in[p]}\) are disjoint and \(P\in\mathcal{P}_{\mathrm{Reg},d}(\tau,\theta,\gamma,\lambda)\), we have

\[1\geq\sum_{j=1}^{p}\mu\big(B_{\infty}(z_{j},2^{-3/2})\big)\geq\frac{p}{2^{3d/2}\theta},\]

so \(p\leq 2^{3d/2}\theta\). Each projected Euclidean ball \(\Pi_{V_{1}^{\perp}}\big(B_{2}(z_{j},(d/2)^{1/2})\big)\) can in turn be covered by \((3/r)^{d-1}\) closed Euclidean balls of radius \((d/2)^{1/2}r\leq(d/2)^{1/2}\); here we use the fact that given any \(d^{\prime}\in\mathbb{N}\) and \(\epsilon\in(0,1]\), the closed Euclidean unit ball in \(\mathbb{R}^{d^{\prime}}\) may be covered by at most \((3/\epsilon)^{d^{\prime}}\) closed Euclidean balls of radius \(\epsilon\). Indeed, if \(w_{1},\ldots,w_{M}\in B_{2,d^{\prime}}(0,1)\) satisfy \(\|w_{j}-w_{j^{\prime}}\|_{2}>\epsilon\) for \(j\neq j^{\prime}\), then \(\cup_{j\in[M]}B_{2,d^{\prime}}(w_{j},\epsilon/2)\subseteq B_{2,d^{\prime}}(0,1+\epsilon/2)\subseteq B_{2,d^{\prime}}(0,3/2)\), so \(M(\epsilon/2)^{d^{\prime}}\leq(3/2)^{d^{\prime}}\), and the covering claim follows. It therefore follows that we can find a sequence \(x_{1},\ldots,x_{q}\in V_{1}^{\perp}\) with \(q\leq 3^{d-1}\cdot 2^{3d/2}\cdot\theta\cdot r^{-(d-1)}\) and \(\Pi_{V_{1}^{\perp}}\big(\mathrm{supp}(\mu)\cap\mathcal{X}_{\tau}(\eta)\big)\subseteq\bigcup_{j\in[q]}\Pi_{V_{1}^{\perp}}\big(B_{2}(x_{j},(d/2)^{1/2}r)\big)\). We deduce that
\[\mathrm{supp}(\mu)\cap\mathcal{X}_{\tau}(\eta)\subseteq\bigcup_{j\in[q]}\Pi_{V_{1}^{\perp}}\big(B_{2}(x_{j},(d/2)^{1/2}r)\big)\oplus\bigcup_{\ell\in\mathbb{Z}}\Pi_{V_{1}}\big(B_{2}(\ell r\cdot\mathbf{1}_{d},(d/2)^{1/2}r)\big)\subseteq\bigcup_{(j,\ell)\in[q]\times\mathbb{Z}}B_{2}(x_{j}+\ell r\cdot\mathbf{1}_{d},d^{1/2}r)\subseteq\bigcup_{(j,\ell)\in[q]\times\mathbb{Z}}B_{\infty}(x_{j}+\ell r\cdot\mathbf{1}_{d},d^{1/2}r).\]

Now, for each \(j\in[q]\), choose

\[\ell_{0,j}:=\min\big\{\ell\in\mathbb{Z}:B_{\infty}(x_{j}+\ell r\cdot\mathbf{1}_{d},d^{1/2}r)\cap\mathcal{X}_{\tau}(\eta)\cap\mathrm{supp}(\mu)\neq\emptyset\big\},\]

with the convention that \(\min\emptyset:=\infty\), and the minimum of a set with no lower bound is \(-\infty\). Note that since \(\mathrm{supp}(\mu)\cap\mathcal{X}_{\tau}(\eta)\subseteq\bigcup_{j^{\prime}\in[p]}B_{\infty}(z_{j^{\prime}},2^{-1/2})\), we must have \(\ell_{0,j}\in\mathbb{Z}\cup\{\infty\}\). Let \(\mathcal{J}_{0}:=\{j\in[q]:\ell_{0,j}\in\mathbb{Z}\}\). By construction, for each \(j\in\mathcal{J}_{0}\) there exists \(z_{0,j}\in B_{\infty}(x_{j}+\ell_{0,j}r\cdot\mathbf{1}_{d},d^{1/2}r)\cap\mathcal{X}_{\tau}(\eta)\cap\mathrm{supp}(\mu)\). Hence \(z_{0,j}+r\cdot\mathbf{1}_{d}\in B_{\infty}(z_{0,j},r)\cap\mathcal{X}_{\tau+\lambda\cdot r^{\gamma}}(\eta)\subseteq B_{\infty}(x_{j}+\ell_{0,j}r\cdot\mathbf{1}_{d},(1+d^{1/2})r)\cap\mathcal{X}_{\tau+\lambda\cdot r^{\gamma}}(\eta)\) since \(P\in\mathcal{P}_{\mathrm{Reg},d}(\tau,\theta,\gamma,\lambda)\) and \(r\leq 1\). But for all \(x^{\prime}\in B_{\infty}(x_{j}+\ell\cdot r\cdot\mathbf{1}_{d},(1+d^{1/2})\cdot r)\) with \(\ell\in\mathbb{Z}\cap[\ell_{0,j}+2(1+d^{1/2}),\infty)\), we have \(z_{0,j}+r\cdot\mathbf{1}_{d}\preccurlyeq x_{j}+(\ell_{0,j}+1+d^{1/2})\cdot r\cdot\mathbf{1}_{d}\preccurlyeq x_{j}+(\ell-1-d^{1/2})\cdot r\cdot\mathbf{1}_{d}\preccurlyeq x^{\prime}\). Since \(\eta\) is increasing, we deduce that \(\bigcup_{\ell\in\mathbb{Z}\cap[\ell_{0,j}+2(1+d^{1/2}),\infty)}B_{\infty}(x_{j}+\ell r\cdot\mathbf{1}_{d},(1+d^{1/2})\cdot r)\subseteq\mathcal{X}_{\tau+\lambda\cdot r^{\gamma}}(\eta)\). Next, define

\[\ell_{1,j}:=\min\big\{\ell\in\mathbb{Z}\cap[\ell_{0,j}+2(1+d^{1/2}),\infty):B_{\infty}(x_{j}+\ell r\cdot\mathbf{1}_{d},d^{1/2}r)\cap\mathrm{supp}(\mu)\neq\emptyset\big\},\]
\[\ell_{2,j}:=\min\big\{\ell\in\mathbb{Z}\cap[\ell_{1,j}+2(1+d^{1/2}),\infty):B_{\infty}(x_{j}+\ell r\cdot\mathbf{1}_{d},d^{1/2}r)\cap\mathrm{supp}(\mu)\neq\emptyset\big\},\]

and set \(\mathcal{J}_{k}:=\{j\in[q]:\ell_{k,j}\in\mathbb{Z}\}\) for \(k\in\{1,2\}\). Then for \(j\in\mathcal{J}_{1}\), there exists \(z_{1,j}\in B_{\infty}(x_{j}+\ell_{1,j}r\cdot\mathbf{1}_{d},d^{1/2}r)\cap\text{supp}(\mu)\) with \(S^{0}_{j}:=B_{\infty}(z_{1,j},r)\subseteq B_{\infty}(x_{j}+\ell_{1,j}r\cdot\mathbf{1}_{d},(1+d^{1/2})\cdot r)\subseteq\mathcal{X}_{\tau+\lambda\cdot r^{\gamma}}(\eta)\) and \(\mu(S^{0}_{j})\geq\theta^{-1}\cdot r^{d}\), since \(P\in\mathcal{P}_{\text{Reg},d}(\tau,\theta,\gamma,\lambda)\) and \(r\leq 1\). Similarly, for \(j\in\mathcal{J}_{2}\) there exists \(z_{2,j}\in B_{\infty}(x_{j}+\ell_{2,j}r\cdot\mathbf{1}_{d},d^{1/2}r)\cap\text{supp}(\mu)\) with \(S^{1}_{j}:=B_{\infty}(z_{2,j},r)\subseteq B_{\infty}(x_{j}+\ell_{2,j}r\cdot\mathbf{1}_{d},(1+d^{1/2})\cdot r)\subseteq\mathcal{X}_{\tau+\lambda\cdot r^{\gamma}}(\eta)\) and \(\mu(S^{1}_{j})\geq\theta^{-1}\cdot r^{d}\). We claim moreover that \(S^{0}_{j}\preccurlyeq S^{1}_{j}\) for each \(j\in\mathcal{J}_{2}\).
Indeed, given \(a_{0}\in S^{0}_{j}\subseteq B_{\infty}(x_{j}+\ell_{1,j}r\cdot\mathbf{1}_{d},(1+d^{1/2})\cdot r)\) and \(a_{1}\in S^{1}_{j}\subseteq B_{\infty}(x_{j}+\ell_{2,j}r\cdot\mathbf{1}_{d},(1+d^{1/2})\cdot r)\), we have \(a_{0}\preccurlyeq x_{j}+(\ell_{1,j}+1+d^{1/2})\cdot r\cdot\mathbf{1}_{d}\preccurlyeq x_{j}+(\ell_{2,j}-1-d^{1/2})\cdot r\cdot\mathbf{1}_{d}\preccurlyeq a_{1}\), as required. Similarly, \(S^{1}_{j}\preccurlyeq\bigcup_{\ell\in\mathbb{Z}\cap[\ell_{2,j}+2(1+d^{1/2}),\infty)}B_{\infty}(x_{j}+\ell\cdot r\cdot\mathbf{1}_{d},d^{1/2}r)\). Thus, letting

\[T:=\bigg(\bigcup_{j\in[q],\ell\in\mathbb{Z}\cap[\ell_{2,j}+2(1+d^{1/2}),\infty)}B_{\infty}(x_{j}+\ell\cdot r\cdot\mathbf{1}_{d},d^{1/2}r)\bigg)\cap\text{supp}(\mu)\subseteq\mathcal{X}_{\tau}(\eta),\]

there exists \(j_{x}\in[q]\) with \(S^{1}_{j_{x}}\preccurlyeq\{x\}\) for every \(x\in T\). Moreover, since \(P\in\mathcal{P}_{\text{Reg},d}(\tau,\theta,\gamma,\lambda)\) we have

\[\mu\big(\mathcal{X}_{\tau}(\eta)\setminus T\big)\leq q\cdot 6(1+d^{1/2})\cdot\lceil d^{1/2}\rceil^{d}\cdot\theta\cdot(2r)^{d}\leq 3^{3d+1}\cdot(1+d^{1/2})^{d+1}\cdot\theta^{2}\cdot r,\]

as required.

**Lemma 28**.: _Fix \(\delta\in(0,1]\), \(n\in\mathbb{N}\), \(\tau\in\mathbb{R}\), \(\sigma,\gamma,\lambda>0\), \(\theta\in(1,\infty)\), \(m\in[n]\) and take \(P\in\mathcal{P}_{\text{\rm Mon},d}(\sigma)\cap\mathcal{P}_{\text{\rm Reg},d}(\tau,\theta,\gamma,\lambda)\). Fix \(r\leq 1\), and let \(\big((S^{0}_{j},S^{1}_{j})\big)_{j\in[q]}\in\big(\text{\rm Pow}(\mathbb{R}^{d})\times\text{\rm Pow}(\mathbb{R}^{d})\big)^{q}\) denote the hypercubes in Lemma 27. Let \(\mathcal{D}=\big((X_{1},Y_{1}),\ldots,(X_{n},Y_{n})\big)\sim P^{n}\), and for \(j\in[q]\), let_

\[\Omega_{0,j,0}:=\bigg\{\frac{1}{n}\sum_{i=1}^{n}\mathbb{1}_{\{X_{i}\in S^{0}_{j}\}}\geq\frac{r^{d}}{2\theta}\bigg\}\quad\text{and}\quad\Omega_{0,j,1}:=\bigg\{\frac{1}{m}\sum_{i=1}^{m}\mathbb{1}_{\{X_{i}\in S^{1}_{j}\}}\geq\frac{r^{d}}{2\theta}\bigg\}.\]

_Then, provided that \(m\cdot r^{d}\geq 8\theta\cdot\log(4\cdot 9^{d}\cdot\theta\cdot r^{-(d-1)}\cdot\delta^{-1})\), we have_

\[\mathbb{P}_{P}\bigg(\bigcup_{j\in[q],k\in\{0,1\}}\Omega_{0,j,k}^{c}\bigg)\leq\frac{\delta}{2}.\]

Proof.: By Lemma 27, we have \(\mu(S^{k}_{j})\geq r^{d}/\theta\) for every \(j\in[q]\) and \(k\in\{0,1\}\), and moreover, \(q\leq 9^{d}\cdot\theta\cdot r^{-(d-1)}\). Hence, by the multiplicative Chernoff bound (McDiarmid, 1998, Theorem 2.3(c)), we have

\[\mathbb{P}_{P}\bigg(\bigcup_{j\in[q],k\in\{0,1\}}\Omega_{0,j,k}^{c}\bigg)\leq 2q\cdot\exp\bigg(-\frac{m\cdot r^{d}}{8\theta}\bigg)\leq\frac{\delta}{2},\]

as required.

**Lemma 29**.: _Fix \(\delta\in(0,1]\), \(n\in\mathbb{N}\), \(\tau\in\mathbb{R}\), \(\sigma,\gamma,\lambda>0\), \(\theta\in(1,\infty)\), \(\alpha\in(0,1)\), \(m\in[n]\), \(\ell\in[m]\) and take \(P\in\mathcal{P}_{\text{\rm Mon},d}(\sigma)\cap\mathcal{P}_{\text{\rm Reg},d}(\tau,\theta,\gamma,\lambda)\). Fix \(r\leq 1\), and let \((S^{0}_{j})_{j\in[q]}\in\big(\text{\rm Pow}(\mathbb{R}^{d})\big)^{q}\) be as in Lemma 27. Let \(\mathcal{D}=\big((X_{1},Y_{1}),\ldots,(X_{n},Y_{n})\big)\sim P^{n}\), and for each \(s\in[\ell]\), find \((x_{s},j_{s})\in\mathbb{R}^{d}\times[q]\) such that \(\{x_{s}\}\succcurlyeq S^{0}_{j_{s}}\)._
_Now let \(r_{s}:=\sup_{x\in S^{0}_{j_{s}}}\|x-x_{s}\|_{\infty}\), and let_

\[\Omega_{1,s}(x_{s}):=\bigg\{\frac{1}{|\mathcal{I}_{r_{s}}(x_{s})|}\sum_{i\in\mathcal{I}_{r_{s}}(x_{s})}Y_{i}\geq\tau+\sigma\sqrt{\frac{2.0808\log(5.2m/\alpha)+3\log\log(2n)}{|\mathcal{I}_{r_{s}}(x_{s})|}}\bigg\}.\]

_If \(\min_{s\in[\ell]}|\mathcal{I}_{r_{s}}(x_{s})|\geq\frac{2\sigma^{2}}{\lambda^{2}\cdot r^{2\gamma}}\big\{2\log(2m/\delta)+2.0808\log(5.2m/\alpha)+3\log\log(2n)\big\}\), then_

\[\mathbb{P}_{P}\bigg(\bigcup_{s=1}^{\ell}\Omega_{1,s}(x_{s})^{c}\ \bigg|\ \mathcal{D}_{X}\bigg)\leq\frac{\delta}{2}.\]

Proof.: As shorthand, write \(w_{n,m,\alpha}:=2.0808\log(5.2m/\alpha)+3\log\log(2n)\). Then, by Hoeffding's inequality,

\[\mathbb{P}_{P}\bigg(\bigcup_{s=1}^{\ell}\Omega_{1,s}(x_{s})^{c}\ \bigg|\ \mathcal{D}_{X}\bigg)\leq\sum_{s=1}^{\ell}\exp\bigg\{-\frac{|\mathcal{I}_{r_{s}}(x_{s})|}{2\sigma^{2}}\bigg(\lambda\cdot r^{\gamma}-\sigma\sqrt{\frac{w_{n,m,\alpha}}{|\mathcal{I}_{r_{s}}(x_{s})|}}\bigg)^{2}\bigg\}\leq\frac{\delta}{2},\]

where we have used the fact that \((a+b)^{1/2}\geq(a/2)^{1/2}+(b/2)^{1/2}\) for \(a,b\geq 0\).

### Proofs from Section 3.3

The proof of Theorem 17 involves combining three different lower bounds that emphasise different aspects of the challenge in isotonic subgroup selection. However, there are some commonalities to these three lower bound constructions, so we explain the key ideas here. Fix \(q\in\mathbb{N}\), and note that if \(x=(x_{1},\ldots,x_{d}),x^{\prime}=(x^{\prime}_{1},\ldots,x^{\prime}_{d})\in[q]^{d}\) and \(x\succcurlyeq x^{\prime}\), then the length of any chain from \(x\) to \(x^{\prime}\) is at most \(\sum_{j=1}^{d}(x_{j}-x^{\prime}_{j})\leq(q-1)\cdot d\), because successive elements within the chain must decrease at least one coordinate by at least \(1\). Now let \(\mathbb{W}_{q,d}\subseteq[q]^{d}\) be an antichain of maximal cardinality. Dilworth's theorem (Dilworth, 1950) states that we can partition \([q]^{d}\) into \(|\mathbb{W}_{q,d}|\) chains, so \(|\mathbb{W}_{q,d}|\geq q^{d}/\{(q-1)\cdot d\}\geq q^{d-1}/d\). (In fact, this bound is fairly sharp. Indeed, define the _width_ of a partially ordered set \(R\), denoted \(\text{wd}(R)\), to be the maximum cardinality of an antichain in \(R\). Then, for any two finite partially ordered sets \((R_{1},\preccurlyeq_{1}),(R_{2},\preccurlyeq_{2})\), we have \(\text{wd}(R_{1}\times R_{2})\leq\min\{|R_{1}|\cdot\text{wd}(R_{2}),|R_{2}|\cdot\text{wd}(R_{1})\}\), where the Cartesian product \(R_{1}\times R_{2}\) is equipped with the order relation \(\preccurlyeq_{1\times 2}\), under which, for \(r_{1},r^{\prime}_{1}\in R_{1}\) and \(r_{2},r^{\prime}_{2}\in R_{2}\), we have \((r_{1},r_{2})\preccurlyeq_{1\times 2}(r^{\prime}_{1},r^{\prime}_{2})\) if and only if \(r_{1}\preccurlyeq_{1}r^{\prime}_{1}\) and \(r_{2}\preccurlyeq_{2}r^{\prime}_{2}\). It therefore follows by induction that \(|\mathbb{W}_{q,d}|\leq q^{d-1}\).) For each \(\boldsymbol{j}=(j_{1},\ldots,j_{d})\in\mathbb{W}_{q,d}\), define a hypercube
\[\mathcal{H}^{q}_{\boldsymbol{j}}:=\prod_{\ell\in[d]}\bigg[\frac{j_{\ell}-1}{q},\frac{j_{\ell}}{q}\bigg).\]

We also set

\[\mathcal{H}^{q}_{\infty}:=\bigcup_{\boldsymbol{j}\in\mathbb{W}_{q,d}}\big\{x\in\mathbb{R}^{d}\setminus\mathcal{H}^{q}_{\boldsymbol{j}}:x^{\prime}\preccurlyeq x\text{ for some }x^{\prime}\in\mathcal{H}^{q}_{\boldsymbol{j}}\big\},\]

and let \(\mathcal{H}^{q}_{-\infty}:=\mathbb{R}^{d}\setminus\bigcup_{\boldsymbol{j}\in\mathbb{W}_{q,d}\cup\{\infty\}}\mathcal{H}^{q}_{\boldsymbol{j}}\). By Lemma 30, the sets \(\big\{\mathcal{H}^{q}_{\boldsymbol{j}}:\boldsymbol{j}\in\mathbb{W}_{q,d}\cup\{-\infty,\infty\}\big\}\) form a partition of \(\mathbb{R}^{d}\). For each \(S\subseteq\mathbb{W}_{q,d}\) and for \(\tau\in\mathbb{R}\), \(\gamma,\lambda>0\), define \(\eta_{S}:\mathbb{R}^{d}\to\mathbb{R}\) by

\[\eta_{S}(x)\equiv\eta_{S,\mathbb{W}_{q,d},q,\tau,\gamma,\lambda}(x):=\left\{\begin{aligned} &\tau-\frac{\lambda}{q^{\gamma}}&&\text{ if }x\in\bigcup_{\boldsymbol{j}\in(\mathbb{W}_{q,d}\cup\{-\infty\})\setminus S}\mathcal{H}^{q}_{\boldsymbol{j}}\\ &\tau+\frac{\lambda}{q^{\gamma}}&&\text{ if }x\in\bigcup_{\boldsymbol{j}\in S}\mathcal{H}^{q}_{\boldsymbol{j}}\\ &\tau+\lambda&&\text{ if }x\in\mathcal{H}^{q}_{\infty}.\end{aligned}\right. \tag{13}\]

The intuition is that if \(\boldsymbol{j},\boldsymbol{j}^{\prime}\) are distinct elements of \(\mathbb{W}_{q,d}\), then the response at any \(x\in\mathcal{H}^{q}_{\boldsymbol{j}}\) provides no information on whether or not \(x^{\prime}\in\mathcal{H}^{q}_{\boldsymbol{j}^{\prime}}\) belongs to \(\mathcal{X}_{\tau}(\eta)\). Any data-dependent selection set will therefore struggle to identify \(S\) from the data, and since \(\mathbb{W}_{q,d}\) is a large antichain, the \(\mu\)-measure of this difficult set may be quite large.

**Lemma 30**.: _For any \(d,q\in\mathbb{N}\) and antichain \(\mathbb{W}_{q,d}\subseteq[q]^{d}\), the sets \(\big\{\mathcal{H}^{q}_{\boldsymbol{j}}:\boldsymbol{j}\in\mathbb{W}_{q,d}\cup\{-\infty,\infty\}\big\}\) form a partition of \(\mathbb{R}^{d}\)._

Proof.: The fact that these sets cover \(\mathbb{R}^{d}\) follows by definition of \(\mathcal{H}^{q}_{-\infty}\). Since the sets \(\big\{\mathcal{H}^{q}_{\boldsymbol{j}}:\boldsymbol{j}\in\mathbb{W}_{q,d}\cup\{-\infty\}\big\}\) are disjoint, and \(\mathcal{H}^{q}_{-\infty}\cap\mathcal{H}^{q}_{\infty}=\emptyset\), we need only check that \(\mathcal{H}^{q}_{\boldsymbol{j}}\cap\mathcal{H}^{q}_{\infty}=\emptyset\) when \(\boldsymbol{j}\in\mathbb{W}_{q,d}\). To this end, suppose for a contradiction that \(x\in\mathcal{H}^{q}_{\boldsymbol{j}}\cap\mathcal{H}^{q}_{\infty}\) for some \(\boldsymbol{j}\in\mathbb{W}_{q,d}\). By the definition of \(\mathcal{H}^{q}_{\infty}\), there then exist \(\boldsymbol{j}^{\prime}\in\mathbb{W}_{q,d}\setminus\{\boldsymbol{j}\}\) with \(x\in\mathbb{R}^{d}\setminus\mathcal{H}^{q}_{\boldsymbol{j}^{\prime}}\), and \(x^{\prime}\in\mathcal{H}^{q}_{\boldsymbol{j}^{\prime}}\) with \(x^{\prime}\preccurlyeq x\). Then there exists \(\delta\in(0,1)\) such that \(\boldsymbol{j}^{\prime}-\boldsymbol{1}_{d}\preccurlyeq q\cdot x^{\prime}\preccurlyeq q\cdot x\preccurlyeq\boldsymbol{j}-\delta\cdot\boldsymbol{1}_{d}\), so that \(\boldsymbol{j}^{\prime}-(1-\delta)\cdot\boldsymbol{1}_{d}\preccurlyeq\boldsymbol{j}\). But the coordinates of \(\boldsymbol{j}\) and \(\boldsymbol{j}^{\prime}\) are positive integers, so we must have \(\boldsymbol{j}^{\prime}\preccurlyeq\boldsymbol{j}\), which contradicts \(\mathbb{W}_{q,d}\) being an antichain.
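To make the combinatorics concrete, the following short Python sketch (our own illustration; the proofs only require the existence of a large antichain) constructs the constant-coordinate-sum layer of \([q]^{d}\), which is one convenient antichain realising the example \(\mathbb{W}_{5,2}=\{(1,5),(2,4),(3,3),(4,2),(5,1)\}\) shown in Figure 11, and checks the lower bound \(|\mathbb{W}_{q,d}|\geq q^{d-1}/d\) from the discussion above.

```python
import itertools

def constant_sum_antichain(q, d):
    """One convenient large antichain in [q]^d: the layer of constant
    coordinate sum (the middle layer, which is the largest)."""
    target = d * (q + 1) // 2
    return [j for j in itertools.product(range(1, q + 1), repeat=d)
            if sum(j) == target]

def is_antichain(W):
    # no element of W dominates another coordinate-wise
    return not any(j != jp and all(a <= b for a, b in zip(j, jp))
                   for j in W for jp in W)

q, d = 5, 2
W = constant_sum_antichain(q, d)
assert is_antichain(W)
assert len(W) >= q ** (d - 1) / d  # the Dilworth-type bound from the text
print(sorted(W))  # [(1, 5), (2, 4), (3, 3), (4, 2), (5, 1)]
```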
**Proposition 31**.: _Fix \(d\in\mathbb{N}\), \(\alpha\in(0,1/4]\), \(\tau\in\mathbb{R}\), \(\sigma,\gamma,\lambda>0\) and \(\theta>1\). For any \(n\geq(8\cdot 2^{d})^{(2\gamma+d)/(2\gamma)}\big(\frac{13\lambda^{2}}{\sigma^{2}\log_{+}\{1/(5\alpha)\}}\big)^{d/(2\gamma)}\), we have_

\[\sup_{P\in\mathcal{P}^{\prime}}\inf_{\hat{A}\in\hat{\mathcal{A}}_{n}(\tau,\alpha,\mathcal{P}^{\prime})}\mathbb{E}_{P}\big\{\mu\big(\mathcal{X}_{\tau}(\eta)\setminus\hat{A}(\mathcal{D})\big)\big\}\geq\frac{1}{40d}\bigg\{\frac{\sigma^{2}}{13n\lambda^{2}}\log_{+}\Big(\frac{1}{5\alpha}\Big)\wedge 1\bigg\}^{1/(2\gamma+d)},\]

_where \(\mathcal{P}^{\prime}:=\mathcal{P}_{\mathrm{Mon},d}(\sigma)\cap\mathcal{P}_{\mathrm{Reg},d}(\tau,\theta,\gamma,\lambda)\)._

Proof.: Suppose first that

\[n\geq\frac{\sigma^{2}}{13\lambda^{2}}\log_{+}\Big(\frac{1}{5\alpha}\Big),\]

so that

\[q:=\Bigg\lceil\bigg\{\frac{13n\lambda^{2}}{\sigma^{2}\log_{+}\big(1/(5\alpha)\big)}\bigg\}^{1/(2\gamma+d)}\Bigg\rceil\leq 2\bigg\{\frac{13n\lambda^{2}}{\sigma^{2}\log_{+}\big(1/(5\alpha)\big)}\bigg\}^{1/(2\gamma+d)}. \tag{14}\]

[Figure 11: Lower bound constructions in the proofs of Propositions 31 (left) and 33 (right). The grey regions do not belong to \(\mathcal{X}_{\tau}(\eta)\), while light blue and dark blue regions correspond to areas where \(\eta\) is slightly above and comfortably above \(\tau\) respectively. White regions have no marginal mass. In both panels, \(q=5\), \(d=2\), \(\mathbb{W}_{q,d}=\{(1,5),(2,4),(3,3),(4,2),(5,1)\}\) and \(S=\{(1,5),(4,2),(5,1)\}\).]

Let \(\mathbb{W}_{q,d}\subseteq[q]^{d}\) be an antichain with \(|\mathbb{W}_{q,d}|\geq q^{d-1}/d\). For each \(S\subseteq\mathbb{W}_{q,d}\), let \(P_{S}\) denote the joint distribution of \((X,Y)\), where \(X\sim\mathrm{Unif}\big([0,1]^{d}\big)=:\mu\) and \(Y|X\sim\mathcal{N}\big(\eta_{S}(X),\sigma^{2}\big)\), with \(\eta_{S}\) defined by (13). Then, by Lemma 32, \(P_{S}\in\mathcal{P}^{\prime}\) for every \(S\subseteq\mathbb{W}_{q,d}\). For ease of notation, we write \(S^{*}:=\mathbb{W}_{q,d}\) and \(S^{*}_{-\boldsymbol{j}}:=\mathbb{W}_{q,d}\setminus\{\boldsymbol{j}\}\) for \(\boldsymbol{j}\in\mathbb{W}_{q,d}\), so that \(\mu\big(\mathcal{X}_{\tau}(\eta_{S^{*}})\setminus\mathcal{X}_{\tau}(\eta_{S^{*}_{-\boldsymbol{j}}})\big)=\mu(\mathcal{H}^{q}_{\boldsymbol{j}})=1/q^{d}\) for all \(\boldsymbol{j}\in\mathbb{W}_{q,d}\). Hence, for any Borel set \(A\subseteq\mathbb{R}^{d}\),

\[\mu\big(\mathcal{X}_{\tau}(\eta_{S^{*}})\setminus A\big)\geq\sum_{\boldsymbol{j}\in S^{*}}\frac{1}{q^{d}}\mathbb{1}_{\{A\cap\mathcal{H}^{q}_{\boldsymbol{j}}=\emptyset\}}. \tag{15}\]

Note that by the upper bound on \(q\) in (14) and the lower bound on \(n\) in the statement of the proposition we have

\[\frac{n}{q^{d}}\geq\frac{n^{2\gamma/(2\gamma+d)}}{2^{d}}\cdot\Big\{\frac{\sigma^{2}\log_{+}\big(1/(5\alpha)\big)}{13\lambda^{2}}\Big\}^{d/(2\gamma+d)}\geq 8.\]

Moreover, by (14),

\[\Delta:=\frac{2\lambda}{q^{\gamma}}\leq\frac{\sigma}{\sqrt{3.2n/q^{d}}}\log_{+}^{1/2}\Big(\frac{1}{5\alpha}\Big).\]

Fix \(\boldsymbol{j}\in S^{*}\) and a data-dependent selection set \(\hat{A}\in\hat{\mathcal{A}}_{n}(\tau,\alpha,\mathcal{P}^{\prime})\). Now define \(\psi_{\hat{A}}(\mathcal{D}):=\mathbb{1}_{\{\hat{A}(\mathcal{D})\cap\mathcal{H}^{q}_{\boldsymbol{j}}\neq\emptyset\}}\), which satisfies \(\mathbb{P}_{P_{S^{*}_{-\boldsymbol{j}}}}\big(\psi_{\hat{A}}(\mathcal{D})=1\big)\leq\alpha\).
Then, by Corollary 47 with \(t=\tau-\lambda/q^{\gamma}\) and \(p=1/q^{d}\), we have

\[\mathbb{P}_{P_{S^{*}}}\big(\hat{A}(\mathcal{D})\cap\mathcal{H}^{q}_{\boldsymbol{j}}=\emptyset\big)=\mathbb{P}_{P_{S^{*}}}\big(\psi_{\hat{A}}(\mathcal{D})=0\big)\geq\frac{1}{20}.\]

In combination with (15), we deduce that

\[\sup_{P\in\mathcal{P}^{\prime}}\inf_{\hat{A}\in\hat{\mathcal{A}}_{n}(\tau,\alpha,\mathcal{P}^{\prime})}\mathbb{E}_{P}\big\{\mu\big(\mathcal{X}_{\tau}(\eta)\setminus\hat{A}(\mathcal{D})\big)\big\}\geq\inf_{\hat{A}\in\hat{\mathcal{A}}_{n}(\tau,\alpha,\mathcal{P}^{\prime})}\mathbb{E}_{P_{S^{*}}}\big\{\mu\big(\mathcal{X}_{\tau}(\eta_{S^{*}})\setminus\hat{A}(\mathcal{D})\big)\big\}\geq\inf_{\hat{A}\in\hat{\mathcal{A}}_{n}(\tau,\alpha,\mathcal{P}^{\prime})}\frac{1}{q^{d}}\sum_{\boldsymbol{j}\in\mathbb{W}_{q,d}}\mathbb{P}_{P_{S^{*}}}\big\{\hat{A}(\mathcal{D})\cap\mathcal{H}^{q}_{\boldsymbol{j}}=\emptyset\big\}\geq\frac{|\mathbb{W}_{q,d}|}{20q^{d}}\geq\frac{1}{40d}\bigg\{\frac{\sigma^{2}}{13n\lambda^{2}}\log_{+}\Big(\frac{1}{5\alpha}\Big)\bigg\}^{1/(2\gamma+d)}.\]

Finally, if

\[n<\frac{\sigma^{2}}{13\lambda^{2}}\log_{+}\Big(\frac{1}{5\alpha}\Big),\]

then

\[\frac{1}{40d}\bigg\{\frac{\sigma^{2}}{13n\lambda^{2}}\log_{+}\Big(\frac{1}{5\alpha}\Big)\bigg\}^{1/(2\gamma+d)}\geq\frac{1}{40d},\]

as required.

**Lemma 32**.: _For any \(d,q\in\mathbb{N}\), \(\tau\in\mathbb{R}\), \(\sigma,\gamma,\lambda>0\), \(\theta>1\), an antichain \(\mathbb{W}_{q,d}\subseteq[q]^{d}\) and \(S\subseteq\mathbb{W}_{q,d}\), we have that \(P_{S}\equiv P_{S,\mathbb{W}_{q,d},q,\sigma,\tau,\gamma,\lambda}\) defined as in the proof of Proposition 31 satisfies \(P_{S}\in\mathcal{P}_{\mathrm{Mon},d}(\sigma)\cap\mathcal{P}_{\mathrm{Reg},d}(\tau,\theta,\gamma,\lambda)\)._

Proof.: Fix \(S\subseteq\mathbb{W}_{q,d}\). We first prove that \(P_{S}\in\mathcal{P}_{\mathrm{Mon},d}(\sigma)\). Since the sub-Gaussianity condition is satisfied, it suffices to show that \(\eta_{S}\) is coordinate-wise increasing in \(\mathbb{R}^{d}\). To this end, first note that for \(x_{0}\in\bigcup_{\boldsymbol{j}\in(\mathbb{W}_{q,d}\cup\{-\infty\})\setminus S}\mathcal{H}^{q}_{\boldsymbol{j}}\) and \(x_{1}\succcurlyeq x_{0}\), we have \(\eta_{S}(x_{0})\leq\eta_{S}(x_{1})\) since \(\eta_{S}(x_{0})=\inf_{x\in\mathbb{R}^{d}}\eta_{S}(x)\). Next, consider the case \(x_{0}\in\bigcup_{\boldsymbol{j}\in S}\mathcal{H}^{q}_{\boldsymbol{j}}\) and let \(\boldsymbol{j}_{0}\in S\) be such that \(x_{0}\in\mathcal{H}^{q}_{\boldsymbol{j}_{0}}\). If \(x_{1}\succcurlyeq x_{0}\), then either \(x_{1}\in\mathcal{H}^{q}_{\boldsymbol{j}_{0}}\), in which case \(\eta_{S}(x_{0})=\eta_{S}(x_{1})\), or \(x_{1}\in\mathbb{R}^{d}\setminus\mathcal{H}^{q}_{\boldsymbol{j}_{0}}\), in which case \(x_{1}\in\mathcal{H}^{q}_{\infty}\) so that \(\eta_{S}(x_{0})=\tau+\lambda/q^{\gamma}\leq\tau+\lambda=\eta_{S}(x_{1})\). Finally, suppose that \(x_{0}\in\mathcal{H}^{q}_{\infty}\) and find \(\boldsymbol{j}_{0}=(j_{0,1},\ldots,j_{0,d})\in\mathbb{W}_{q,d}\) and \(x^{\prime}\prec x_{0}\) such that \(x_{0}\in\mathbb{R}^{d}\setminus\mathcal{H}^{q}_{\boldsymbol{j}_{0}}\) and \(x^{\prime}\in\mathcal{H}^{q}_{\boldsymbol{j}_{0}}\).
The fact that \(\boldsymbol{j}_{0}-\boldsymbol{1}_{d}\preccurlyeq q\cdot x^{\prime}\preccurlyeq q\cdot x_{0}\) in conjunction with \(x_{0}=(x_{0,1},\ldots,x_{0,d})^{\top}\in\mathbb{R}^{d}\setminus\mathcal{H}^{q}_{\boldsymbol{j}_{0}}\) means that there exists \(\ell_{0}\in[d]\) such that \(j_{0,\ell_{0}}\leq q\cdot x_{0,\ell_{0}}\). Thus, for any \(x_{1}\succcurlyeq x_{0}\), it follows that \(x_{1}\in\mathbb{R}^{d}\setminus\mathcal{H}^{q}_{\boldsymbol{j}_{0}}\). Moreover, \(x^{\prime}\preccurlyeq x_{0}\preccurlyeq x_{1}\), so that \(x_{1}\in\mathcal{H}^{q}_{\infty}\), whence \(\eta_{S}(x_{0})=\eta_{S}(x_{1})\). This establishes that \(P_{S}\in\mathcal{P}_{\mathrm{Mon},d}(\sigma)\). We now show that \(P_{S}\in\mathcal{P}_{\mathrm{Reg},d}(\tau,\theta,\gamma,\lambda)\), which requires us to verify the conditions in Definition 13_(i)_ and _(ii)_. We start by showing _(i)_. Indeed, for any \(x_{0}\in[0,1]^{d}=\mathrm{supp}(\mu)\) and \(r\in(0,1]\), we have \(r^{d}\leq\mu\big(B_{\infty}(x_{0},r)\big)\leq(2r)^{d}\). This establishes the condition in Definition 13_(i)_. It remains to show that Definition 13_(ii)_ is satisfied. Note that \(\mathcal{X}_{\tau}(\eta_{S})=\bigcup_{\boldsymbol{j}\in S}\mathcal{H}^{q}_{\boldsymbol{j}}\cup\mathcal{H}^{q}_{\infty}\). Suppose first that \(x_{0}\in\mathcal{H}^{q}_{\boldsymbol{j}_{0}}\) for some \(\boldsymbol{j}_{0}\in S\). If \(r\leq 1/q\), then \(\tau+\lambda\cdot r^{\gamma}\leq\tau+\lambda/q^{\gamma}=\eta_{S}(x_{0})\). On the other hand, if \(r\in(1/q,1]\), then \(x_{1}:=x_{0}+r\cdot\boldsymbol{1}_{d}\succcurlyeq(\boldsymbol{j}_{0}-\boldsymbol{1}_{d})/q+r\cdot\boldsymbol{1}_{d}\succcurlyeq\boldsymbol{j}_{0}/q\), so that \(x_{1}\in\mathcal{H}^{q}_{\infty}\). Since \(x_{1}\in B_{\infty}(x_{0},r)\) and \(\eta_{S}(x_{1})=\tau+\lambda\geq\tau+\lambda\cdot r^{\gamma}\) for all \(r\in(0,1]\), the claim is shown for \(x_{0}\in\bigcup_{\boldsymbol{j}\in S}\mathcal{H}^{q}_{\boldsymbol{j}}\). Now, suppose \(x_{0}\in\mathcal{H}^{q}_{\infty}\). Then, similarly to before, \(\eta_{S}(x_{0})=\tau+\lambda\geq\tau+\lambda\cdot r^{\gamma}\) for all \(r\in(0,1]\). This establishes that \(P_{S}\in\mathcal{P}_{\mathrm{Reg},d}(\tau,\theta,\gamma,\lambda)\) and hence completes the proof.

Our second construction proceeds similarly, but we now also vary the marginal distribution.

**Proposition 33**.: _Let \(d\in\mathbb{N}\), \(\tau\in\mathbb{R}\), \(\sigma,\gamma,\lambda>0\) and \(\theta>1\). Then, writing \(\mathcal{P}^{\prime}:=\mathcal{P}_{\mathrm{Mon},d}(\sigma)\cap\mathcal{P}_{\mathrm{Reg},d}(\tau,\theta,\gamma,\lambda)\), we have for any \(n\in\mathbb{N}\) and \(\alpha\in(0,1/4]\) that_

\[\inf_{\hat{A}\in\hat{\mathcal{A}}_{n}(\tau,\alpha,\mathcal{P}^{\prime})}\sup_{P\in\mathcal{P}^{\prime}}\mathbb{E}_{P}\big\{\mu\big(\mathcal{X}_{\tau}(\eta)\setminus\hat{A}(\mathcal{D})\big)\big\}\geq\frac{1}{352\cdot d\cdot 4^{1/d}}\cdot\frac{1}{n^{1/d}}.\]

Proof.: Let \(q:=\lceil(4n)^{1/d}\rceil\) and let \(\mathbb{W}_{q,d}\subseteq[q]^{d}\) be an antichain with \(|\mathbb{W}_{q,d}|\geq q^{d-1}/d\). For each \(S\subseteq\mathbb{W}_{q,d}\), we define a Borel subset \(J_{S}\) of \(\mathbb{R}^{d}\) by

\[J_{S}:=\bigg\{[0,1]^{d}\setminus\bigg(\bigcup_{\boldsymbol{j}\in\mathbb{W}_{q,d}\setminus S}\mathcal{H}^{q}_{\boldsymbol{j}}\bigg)\bigg\}\cup\bigg\{\bigcup_{\boldsymbol{j}\in\mathbb{W}_{q,d}\setminus S}(\mathcal{H}^{q}_{\boldsymbol{j}}-\boldsymbol{1}_{d})\bigg\},\]

where \(\boldsymbol{1}_{d}\in\mathbb{R}^{d}\) denotes the all-ones vector; see the right-hand panel in Figure 11 for an illustration. Note that \(\mathcal{L}_{d}(J_{S})=1\), so we can define a Borel probability measure \(\mu_{S}\) on \(\mathbb{R}^{d}\) by \(\mu_{S}(A):=\mathcal{L}_{d}(A\cap J_{S})\) for Borel sets \(A\subseteq\mathbb{R}^{d}\).
Now let \(P_{S}\) denote the joint distribution of \((X,Y)\), where \(X\sim\mu_{S}\) and \(Y|X\sim\mathcal{N}\big(\eta_{S}(X),\sigma^{2}\big)\), with \(\eta_{S}\) defined by (13). Then \(P_{S}\in\mathcal{P}_{\mathrm{Mon},d}(\sigma)\cap\mathcal{P}_{\mathrm{Reg},d}(\tau,\theta,\gamma,\lambda)\) by Lemma 34. Given \(\hat{A}\in\hat{\mathcal{A}}_{n}(\tau,\alpha,\mathcal{P}^{\prime})\), let

\[\hat{S}:=\big\{\boldsymbol{j}\in\mathbb{W}_{q,d}:\hat{A}\cap\mathcal{H}^{q}_{\boldsymbol{j}}\neq\emptyset\big\}.\]

Note that for each \(S_{0},S_{1}\subseteq\mathbb{W}_{q,d}\) with \(|S_{0}\triangle S_{1}|=1\), we have by Lemma 48_(b)_ that

\[\mathrm{TV}\big(P_{S_{0}}^{n},P_{S_{1}}^{n}\big)\leq n\cdot\mathrm{TV}\big(P_{S_{0}},P_{S_{1}}\big)=\frac{n}{q^{d}}.\]

Thus, by Assouad's lemma again, there exists \(S_{*}\subseteq\mathbb{W}_{q,d}\) such that

\[\mathbb{E}_{P_{S_{*}}}\big(|\hat{S}\triangle S_{*}|\big)\geq\frac{|\mathbb{W}_{q,d}|}{2}\cdot\bigg(1-\frac{n}{q^{d}}\bigg)\geq\frac{3|\mathbb{W}_{q,d}|}{8},\]

by the choice of \(q\). Hence, writing \(Z:=|\hat{S}\triangle S_{*}|/|\mathbb{W}_{q,d}|\) and \(E:=\{Z\geq 1/11\}\), we have

\[\mathbb{P}_{P_{S_{*}}}(E)=\frac{11}{10}\bigg\{\mathbb{P}_{P_{S_{*}}}(E)+\frac{1}{11}\mathbb{P}_{P_{S_{*}}}(E^{c})-\frac{1}{11}\bigg\}\geq\frac{11}{10}\bigg\{\mathbb{E}_{P_{S_{*}}}(Z\mathbb{1}_{E})+\mathbb{E}_{P_{S_{*}}}(Z\mathbb{1}_{E^{c}})-\frac{1}{11}\bigg\}\geq\frac{5}{16}.\]

Now \(\mathbb{P}_{P_{S_{*}}}(\hat{S}\subseteq S_{*})\geq 3/4\) because \(\alpha\in(0,1/4]\), so

\[\mathbb{P}_{P_{S_{*}}}\bigg(\frac{|S_{*}\setminus\hat{S}|}{|\mathbb{W}_{q,d}|}\geq\frac{1}{11}\bigg)\geq\mathbb{P}_{P_{S_{*}}}\big(E\cap\{\hat{S}\subseteq S_{*}\}\big)\geq\mathbb{P}_{P_{S_{*}}}(E)-\mathbb{P}_{P_{S_{*}}}\big(\hat{S}\nsubseteq S_{*}\big)\geq\frac{1}{16}.\]

Thus,

\[\mathbb{E}_{P_{S_{*}}}\big\{\mu\big(\mathcal{X}_{\tau}(\eta_{S_{*}})\setminus\hat{A}\big)\big\}\geq\frac{|\mathbb{W}_{q,d}|}{176\cdot q^{d}}\geq\frac{1}{176\cdot d\cdot q}\geq\frac{1}{352\cdot d\cdot 4^{1/d}}\cdot\frac{1}{n^{1/d}},\]

as required.

**Lemma 34**.: _For any \(d,q\in\mathbb{N}\), \(\tau\in\mathbb{R}\), \(\sigma,\gamma,\lambda>0\), \(\theta>1\), an antichain \(\mathbb{W}_{q,d}\subseteq[q]^{d}\) and \(S\subseteq\mathbb{W}_{q,d}\), we have that \(P_{S}\equiv P_{S,\mathbb{W}_{q,d},q,\sigma,\tau,\gamma,\lambda}\) defined as in the proof of Proposition 33 satisfies \(P_{S}\in\mathcal{P}_{\mathrm{Mon},d}(\sigma)\cap\mathcal{P}_{\mathrm{Reg},d}(\tau,\theta,\gamma,\lambda)\)._

Proof.: Since the regression function \(\eta_{S}\) associated to \(P_{S}\) is identical to that in Proposition 31, we follow the same steps as in the proof of Lemma 32 to show that \(P_{S}\in\mathcal{P}_{\mathrm{Mon},d}(\sigma)\) and that the condition in Definition 13_(ii)_ is satisfied. It remains to show that \(P_{S}\) has the property specified in Definition 13_(i)_. Indeed, for \(J_{S}\) as in Proposition 33, any \(x_{0}\in\mathrm{supp}(\mu_{S})\) and any \(r\in(0,1]\), we have \(\mu_{S}\big(B_{\infty}(x_{0},r)\big)=\mathcal{L}_{d}\big(B_{\infty}(x_{0},r)\cap J_{S}\big)\leq\mathcal{L}_{d}\big(B_{\infty}(x_{0},r)\big)=(2r)^{d}\). Moreover, there exists \(x_{1}\in B_{\infty}(x_{0},r)\cap J_{S}\) such that \(B_{\infty}(x_{0},r)\cap J_{S}\supseteq B_{\infty}(x_{1},r/2)\), so \(\mathcal{L}_{d}\big(B_{\infty}(x_{0},r)\cap J_{S}\big)\geq\mathcal{L}_{d}\big(B_{\infty}(x_{1},r/2)\big)\geq r^{d}\), as required.

We are now in a position to prove Theorem 17.
Proof of Theorem 17.: Let \(c:=1/(1408\cdot d\cdot 32^{1/d}\cdot 13^{1/(2\gamma+d)})\). Suppose first that

\[n<(8\cdot 2^{d})^{(2\gamma+d)/(2\gamma)}\bigg(\frac{13\lambda^{2}}{\sigma^{2}\log_{+}\big(1/(5\alpha)\big)}\bigg)^{d/(2\gamma)},\]

so that

\[\frac{1}{704\cdot d\cdot 4^{1/d}}\cdot\frac{1}{n^{1/d}}>c\bigg(\frac{\sigma^{2}}{n\lambda^{2}}\log_{+}\bigg(\frac{1}{5\alpha}\bigg)\bigg)^{1/(2\gamma+d)}.\]

By Proposition 33, we have

\[\inf_{\hat{A}\in\hat{\mathcal{A}}_{n}(\tau,\alpha,\mathcal{P}^{\prime})}\sup_{P\in\mathcal{P}^{\prime}}\mathbb{E}_{P}\big\{\mu\big(\mathcal{X}_{\tau}(\eta)\setminus\hat{A}(\mathcal{D})\big)\big\}\geq\frac{1}{352\cdot d\cdot 4^{1/d}}\cdot\frac{1}{n^{1/d}}\geq\frac{1}{704\cdot d\cdot 4^{1/d}}\cdot\frac{1}{n^{1/d}}+\frac{c}{n^{1/d}}\geq c\bigg[1\wedge\bigg\{\bigg(\frac{\sigma^{2}}{n\lambda^{2}}\log_{+}\Big(\frac{1}{5\alpha}\Big)\bigg)^{1/(2\gamma+d)}+\frac{1}{n^{1/d}}\bigg\}\bigg].\]

We may therefore suppose that

\[n\geq(8\cdot 2^{d})^{(2\gamma+d)/(2\gamma)}\bigg(\frac{13\lambda^{2}}{\sigma^{2}\log_{+}\big(1/(5\alpha)\big)}\bigg)^{d/(2\gamma)}.\]

Then, by Propositions 31 and 33, we have

\[\inf_{\hat{A}\in\hat{\mathcal{A}}_{n}(\tau,\alpha,\mathcal{P}^{\prime})}\sup_{P\in\mathcal{P}^{\prime}}\mathbb{E}_{P}\big\{\mu\big(\mathcal{X}_{\tau}(\eta)\setminus\hat{A}(\mathcal{D})\big)\big\}\geq\frac{1}{40d}\bigg\{\frac{\sigma^{2}}{13n\lambda^{2}}\log_{+}\Big(\frac{1}{5\alpha}\Big)\wedge 1\bigg\}^{1/(2\gamma+d)}\vee\frac{1}{352\cdot d\cdot 4^{1/d}\cdot n^{1/d}}\geq c\bigg[1\wedge\bigg\{\bigg(\frac{\sigma^{2}}{n\lambda^{2}}\log_{+}\Big(\frac{1}{5\alpha}\Big)\bigg)^{1/(2\gamma+d)}+\frac{1}{n^{1/d}}\bigg\}\bigg],\]

as required.

### Proofs from Section 4

Proof of Lemma 20.: We condition on \(\mathcal{D}_{X}\) throughout this proof. Let \(\Theta_{0}:=(-\infty,\tau)^{n(x)}\times(0,\infty)\) and \(P\in\mathcal{P}_{\mathrm{N},d}(\sigma_{*})\). Write \(\varphi(\cdot;a,\sigma^{2})\) for the density function of the \(\mathcal{N}(a,\sigma^{2})\) distribution. We fix \(k\in[n(x)]\) and initially operate on the event \(\big\{\max_{j\in[k]}Y_{(j)}(x)>\tau\big\}=\{\hat{\sigma}_{0,k}^{2}>0\}\). We claim that \(\big((Y_{(j)}(x)\wedge\tau)_{j\in[k]},\hat{\sigma}_{0,k}^{2}\big)\) then maximises the conditional likelihood \(L(t,\sigma^{2}):=\prod_{j=1}^{k}\varphi\big(Y_{(j)}(x);t_{j},\sigma^{2}\big)\) over \(\Theta_{0,k}:=(-\infty,\tau]^{k}\times[0,\infty)\), where \(L(t,0):=\lim_{\sigma\searrow 0}L(t,\sigma^{2})=0\) for \(t\in(-\infty,\tau]^{k}\). To see this, note first that \(L(t,0)=0<\sup_{\sigma>0}L(t,\sigma^{2})\) for \(t\in(-\infty,\tau]^{k}\), so any maximiser must be contained in \(\Theta_{0,k}^{0}:=(-\infty,\tau]^{k}\times(0,\infty)\). Moreover, for any \((t_{1},\ldots,t_{j-1},t_{j+1},\ldots,t_{k},\sigma^{2})\in(-\infty,\tau]^{k-1}\times(0,\infty)\), the unique maximiser \(t_{j}^{0}\) of \(t_{j}\mapsto L(t,\sigma^{2})\) satisfies \(t_{j}^{0}=Y_{(j)}(x)\wedge\tau\). It therefore suffices to maximise \(\sigma^{2}\mapsto L(t^{0},\sigma^{2})\) over \((0,\infty)\), where \(t^{0}:=(t_{1}^{0},\ldots,t_{k}^{0})^{\top}\in(-\infty,\tau]^{k}\), and the unique maximiser is given by \(\sigma_{0}^{2}:=\hat{\sigma}_{0,k}^{2}\).
Hence, writing \(t_{j}^{*}:=\eta\big{(}X_{(j)}(x)\big{)}\) for \(j\in[n(x)]\), when \(\hat{\sigma}_{0,k}^{2}>0\), we have for \(k\in[n(x)]\) that \[\frac{1}{\hat{p}_{\tau}^{k}(x,\mathcal{D})}=\frac{(2\pi)^{-k/2}\prod_{j=1}^{k}\hat{\sigma}_{1,j-1}^{-1}\exp\bigl{\{}-\big{(}Y_{(j)}(x)-\bar{Y}_{1,j-1}\big{)}^{2}\big{/}\big{(}2\hat{\sigma}_{1,j-1}^{2}\big{)}\bigr{\}}}{(2\pi\hat{\sigma}_{0,k}^{2})^{-k/2}\exp\bigl{\{}-\sum_{j=1}^{k}\bigl{(}Y_{(j)}(x)-(Y_{(j)}(x)\wedge\tau)\big{)}^{2}\big{/}\big{(}2\hat{\sigma}_{0,k}^{2}\big{)}\bigr{\}}}=\frac{\prod_{j=1}^{k}\varphi\big{(}Y_{(j)}(x);\bar{Y}_{1,j-1},\hat{\sigma}_{1,j-1}^{2}\big{)}}{\sup_{(t,\sigma^{2})\in\Theta_{0,k}}\prod_{j=1}^{k}\varphi\big{(}Y_{(j)}(x);t_{j},\sigma^{2}\big{)}}\leq\frac{\prod_{j=1}^{k}\varphi\big{(}Y_{(j)}(x);\bar{Y}_{1,j-1},\hat{\sigma}_{1,j-1}^{2}\big{)}}{\prod_{j=1}^{k}\varphi\big{(}Y_{(j)}(x);t_{j}^{*},\sigma_{*}^{2}\big{)}}=:\Lambda_{k}.\] We now claim that the process given by \((\Lambda_{k})_{k\in[n(x)]}\) defines a martingale with respect to the filtration \((\mathcal{F}_{j})_{j\in\{0\}\cup[n(x)]}\), where \(\mathcal{F}_{0}\) is the trivial \(\sigma\)-algebra and where \(\mathcal{F}_{j}\) denotes the \(\sigma\)-algebra generated by \(\big{(}Y_{(\ell)}(x)\big{)}_{\ell\in[j]}\), with \(\mathbb{E}(\Lambda_{1}|\mathcal{D}_{X})=1\). To see this, observe that \[\mathbb{E}(\Lambda_{k+1}\mid\mathcal{F}_{k},\mathcal{D}_{X})=\Lambda_{k}\mathbb{E}\bigg{(}\frac{\varphi\big{(}Y_{(k+1)}(x);\bar{Y}_{1,k},\hat{\sigma}_{1,k}^{2}\big{)}}{\varphi\big{(}Y_{(k+1)}(x);t_{k+1}^{*},\sigma_{*}^{2}\big{)}}\biggm{|}\mathcal{F}_{k},\mathcal{D}_{X}\bigg{)}=\Lambda_{k}\int_{-\infty}^{\infty}\frac{\varphi\big{(}y;\bar{Y}_{1,k},\hat{\sigma}_{1,k}^{2}\big{)}}{\varphi\big{(}y;t_{k+1}^{*},\sigma_{*}^{2}\big{)}}\cdot\varphi\big{(}y;t_{k+1}^{*},\sigma_{*}^{2}\big{)}\,dy=\Lambda_{k}\int_{-\infty}^{\infty}\varphi\big{(}y;\bar{Y}_{1,k},\hat{\sigma}_{1,k}^{2}\big{)}\,dy=\Lambda_{k}\] for \(k\in[n(x)-1]\). Hence, by Ville's inequality (Ville, 1939), for any \(\alpha\in(0,1)\), \[\mathbb{P}\big{(}\hat{p}_{\tau}(x,\mathcal{D})\leq\alpha\bigm{|}\mathcal{D}_{X}\big{)}=\mathbb{P}\bigg{(}\bigcup_{k\in[n(x)]}\Big{\{}\hat{p}_{\tau}^{k}(x,\mathcal{D})\leq\alpha,\hat{\sigma}_{0,k}^{2}>0\Big{\}}\Bigm{|}\mathcal{D}_{X}\bigg{)}=\mathbb{P}\bigg{(}\bigcup_{k\in[n(x)]}\Big{\{}\Lambda_{k}\geq 1/\alpha,\hat{\sigma}_{0,k}^{2}>0\Big{\}}\Bigm{|}\mathcal{D}_{X}\bigg{)}\leq\mathbb{P}\Big{(}\max_{k\in[n(x)]}\Lambda_{k}\geq 1/\alpha\Bigm{|}\mathcal{D}_{X}\Big{)}\leq\alpha,\] as required.

We prove Lemma 22 by establishing the generalisation given by Lemma 36 below. This latter result is stated for more general \(p\)-values that we now define.

**Definition 35**.: _In the setting of Definition 1, let \(\nu\) be a measure supported on \([\tau,1]\), let \(\check{S}_{k}:=\sum_{j=1}^{k}Y_{(j)}(x)\) and define_ \[\check{p}_{\tau,\nu}(x,\mathcal{D}):=1\wedge\min_{k\in[n(x)]}\frac{\tau^{\check{S}_{k}}(1-\tau)^{k-\check{S}_{k}}}{\int_{[\tau,1]}t^{\check{S}_{k}}(1-t)^{k-\check{S}_{k}}\,d\nu(t)}.\] If we take \(\nu\) to be the \(\mathrm{Unif}[\tau,1]\) distribution in Definition 35, then we recover the \(p\)-values from Definition 21 that are employed in Lemma 22.

**Lemma 36**.: _Let \(\tau\in(0,1)\), let \(\nu\) be a measure supported on \([\tau,1]\) and let \(P\) be a distribution on \(\mathbb{R}^{d}\times[0,1]\) with regression function \(\eta\)._
_Fix \(x\in\mathbb{R}^{d}\) and suppose that \(\eta(x^{\prime})\leq\tau\) for all \(x^{\prime}\preccurlyeq x\). Given \(\mathcal{D}=\big{(}(X_{1},Y_{1}),\ldots,(X_{n},Y_{n})\big{)}\sim P^{n}\), we have \(\mathbb{P}_{P}\big{\{}\check{p}_{\tau,\nu}(x,\mathcal{D})\leq\alpha|\mathcal{D}_{X}\big{\}}\leq\alpha\) for all \(\alpha\in(0,1)\)._

Proof of Lemma 36 (and hence Lemma 22).: Fix \(k\in\mathbb{N}\). For \(t\in[0,1]\) and \(z_{1},\ldots,z_{k}\in[0,1]\), write \(L_{k}(t;z_{1},\ldots,z_{k}):=t^{s_{k}}(1-t)^{k-s_{k}}\), where \(s_{k}:=\sum_{j=1}^{k}z_{j}\); when \(z_{1},\ldots,z_{k}\in\{0,1\}\), this is the likelihood function of an independent sample of \(k\) Bernoulli random variables with success probability \(t\). Further, let \(\bar{L}_{k,\nu}(z_{1},\ldots,z_{k}):=\int_{[\tau,1]}L_{k}(t;z_{1},\ldots,z_{k})\,d\nu(t)\). Finally, let \((Z_{j})_{j\in\mathbb{N}}\) be a sequence of independent \([0,1]\)-valued random variables with \(\tilde{\tau}_{j}:=\mathbb{E}(Z_{j})\leq\tau\). We claim that the likelihood ratio sequence \(\big{(}\Lambda_{k}(Z_{1},\ldots,Z_{k})\big{)}_{k\in\mathbb{N}}\) given by \[\Lambda_{k}(Z_{1},\ldots,Z_{k}):=\frac{\bar{L}_{k,\nu}(Z_{1},\ldots,Z_{k})}{L_{k}(\tau;Z_{1},\ldots,Z_{k})}=\int_{[\tau,1]}\frac{t^{s_{k}}(1-t)^{k-s_{k}}}{\tau^{s_{k}}(1-\tau)^{k-s_{k}}}\,d\nu(t)\] defines a non-negative supermartingale with respect to the filtration \((\mathcal{F}_{k})_{k\in\mathbb{N}_{0}}\), where \(\mathcal{F}_{0}\) denotes the trivial \(\sigma\)-algebra and \(\mathcal{F}_{k}:=\sigma(Z_{1},\ldots,Z_{k})\) for \(k\in\mathbb{N}\). Indeed, by Fubini's theorem \[\mathbb{E}\big{(}\Lambda_{k}(Z_{1},\ldots,Z_{k})\mid\mathcal{F}_{k-1}\big{)}=\int_{[\tau,1]}\mathbb{E}\bigg{\{}\frac{t^{\sum_{j=1}^{k}Z_{j}}(1-t)^{k-\sum_{j=1}^{k}Z_{j}}}{\tau^{\sum_{j=1}^{k}Z_{j}}(1-\tau)^{k-\sum_{j=1}^{k}Z_{j}}}\biggm{|}\mathcal{F}_{k-1}\bigg{\}}\,d\nu(t)\] \[\quad=\int_{[\tau,1]}\bigg{(}\frac{t^{\sum_{j=1}^{k-1}Z_{j}}(1-t)^{(k-1)-\sum_{j=1}^{k-1}Z_{j}}}{\tau^{\sum_{j=1}^{k-1}Z_{j}}(1-\tau)^{(k-1)-\sum_{j=1}^{k-1}Z_{j}}}\cdot\frac{1-t}{1-\tau}\cdot\mathbb{E}\bigg{\{}\bigg{(}\frac{t(1-\tau)}{\tau(1-t)}\bigg{)}^{Z_{k}}\biggm{|}\mathcal{F}_{k-1}\bigg{\}}\bigg{)}\,d\nu(t)\] \[\quad\leq\int_{[\tau,1]}\bigg{(}\frac{t^{\sum_{j=1}^{k-1}Z_{j}}(1-t)^{(k-1)-\sum_{j=1}^{k-1}Z_{j}}}{\tau^{\sum_{j=1}^{k-1}Z_{j}}(1-\tau)^{(k-1)-\sum_{j=1}^{k-1}Z_{j}}}\cdot\bigg{\{}\tilde{\tau}_{k}\cdot\frac{t}{\tau}+(1-\tilde{\tau}_{k})\cdot\frac{1-t}{1-\tau}\bigg{\}}\bigg{)}\,d\nu(t)\] \[\quad\leq\int_{[\tau,1]}\bigg{(}\frac{t^{\sum_{j=1}^{k-1}Z_{j}}(1-t)^{(k-1)-\sum_{j=1}^{k-1}Z_{j}}}{\tau^{\sum_{j=1}^{k-1}Z_{j}}(1-\tau)^{(k-1)-\sum_{j=1}^{k-1}Z_{j}}}\bigg{)}\,d\nu(t)=\Lambda_{k-1}(Z_{1},\ldots,Z_{k-1}),\] where we have applied Garivier and Cappé (2011, Lemma 9) in the first inequality. Now let \((Z_{j})_{j\in\mathbb{N}}\) be a sequence of independent \([0,1]\)-valued random variables such that, for \(j\in[n(x)]\), the distribution of \(Z_{j}\) is the conditional distribution of \(Y_{(j)}(x)\) given \(\mathcal{D}_{X}\), and \(Z_{j}=0\) almost surely for \(j>n(x)\). We conclude by Ville's inequality (Ville, 1939) that \[\mathbb{P}_{P}\bigg{(}\check{p}_{\tau,\nu}(x,\mathcal{D})\leq\alpha\biggm{|}\mathcal{D}_{X}\bigg{)}=\mathbb{P}_{P}\bigg{(}\max_{k\in[n(x)]}\Lambda_{k}\big{(}Y_{(1)}(x),\ldots,Y_{(k)}(x)\big{)}\geq\frac{1}{\alpha}\biggm{|}\mathcal{D}_{X}\bigg{)}\leq\mathbb{P}\bigg{(}\sup_{k\in\mathbb{N}}\Lambda_{k}(Z_{1},\ldots,Z_{k})\geq\frac{1}{\alpha}\bigg{)}\leq\alpha,\] for any \(\alpha\in(0,1)\), as required.
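To illustrate Lemma 36, the sketch below, which is our own illustration rather than code accompanying the paper, evaluates \(\check{p}_{\tau,\nu}(x,\mathcal{D})\) for \(\nu=\mathrm{Unif}[\tau,1]\) (the choice recovering Definition 21), approximating the mixture integral \(\Lambda_{k}\) by a Riemann sum, and then checks the conditional validity guarantee by simulation at the boundary case \(\eta\equiv\tau\); the function name `p_check` and the discretisation are our own choices.

```
import numpy as np

def p_check(y, tau, grid_size=2000):
    # Anytime-valid p-value of Definition 35 with nu = Unif[tau, 1];
    # y lists the responses of the points X_i <= x in the order used.
    t = np.linspace(tau, 1.0, grid_size, endpoint=False)  # grid for nu
    p, s = 1.0, 0.0
    for k, yk in enumerate(y, start=1):
        s += yk                                # \check{S}_k
        log_ratio = (s * (np.log(t) - np.log(tau))
                     + (k - s) * (np.log1p(-t) - np.log1p(-tau)))
        lam_k = np.mean(np.exp(log_ratio))     # Lambda_k = int ... dnu(t)
        p = min(p, 1.0 / lam_k)
    return p

rng = np.random.default_rng(1)
tau, alpha, n = 0.4, 0.05, 30
freq = np.mean([p_check(rng.binomial(1, tau, size=n), tau) <= alpha
                for _ in range(1000)])
print(freq)  # by Lemma 36, at most about alpha = 0.05
```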
Proof of Lemma 24.: Let \(\tilde{Y}_{i}:=\mathbbm{1}_{\{Y_{i}>\tau\}}\) for \(i\in[n]\). Suppose that \(x^{\prime}\preccurlyeq x\), so that \(\zeta_{\theta}(x^{\prime})\leq\zeta_{\theta}(x)<\tau\). Then \(\mathbb{P}_{P}\big{(}Y_{i}\leq\tau|X_{i}=x^{\prime}\big{)}\geq\theta\) and so \[\mathbb{E}_{P}(\tilde{Y}_{i}\mid X_{i}=x^{\prime})=\mathbb{P}_{P}(Y_{i}>\tau\mid X_{i}=x^{\prime})\leq 1-\theta.\] By Hoeffding's lemma, the conditional distribution of \(\tilde{Y}_{i}-\mathbb{E}_{P}(\tilde{Y}_{i}|X_{i})\) given \(X_{i}\) is sub-Gaussian with variance parameter \(1/4\). The first result now follows by Lemma 25, and the second follows similarly from Lemma 36.

## Appendix B Further simulation results

### Further performance comparisons

To expand on the simulations in Section 5, we illustrate the performance of our procedure on eight more regression functions, which are presented in Table 2 and illustrated in Figure 12. Other than the choice of \(f\), the simulations were carried out in identical fashion to that described in Section 5, and the results are illustrated in Figures 13, 14 and 15.

\begin{table} \begin{tabular}{c|c|c|c} Label & Function \(f\) & \(\tau\) & \(\gamma(P)\) \\ \hline (g) & \(\exp\bigl{(}\sum_{j=1}^{d}x^{(j)}\bigr{)}\) & \(\frac{e^{d/2}-1}{e^{d}-1}\) & 1 \\ (h) & \(\{1+\exp\bigl{(}-4\cdot\sum_{j=1}^{d}(x^{(j)}-0.5)\bigr{)}\}^{-1}\) & \(1/2\) & 1 \\ (i) & \(\sum_{j=1}^{d}\bigl{(}x^{(j)}\bigr{)}^{3}\) & \(\frac{1}{d}\left(\frac{\Gamma(1+d/3)}{2\Gamma(4/3)^{d}}\right)^{3/d}\) & 1 \\ (j) & \(\sum_{j=1}^{d}\bigl{(}x^{(j)}-1\bigr{)}^{3}\) & \(1-\frac{1}{d}\left(\frac{\Gamma(1+d/3)}{2\Gamma(4/3)^{d}}\right)^{3/d}\) & 1 \\ (k) & \(\sum_{j=1}^{d}\lfloor 6\cdot x^{(j)}\rfloor/6\) & \(7/12\) & 0 \\ (l) & \(\sqrt{x^{(1)}+x^{(2)}}\) & \(0.584\) & 1 \\ (m) & \(\bigl{(}\sum_{j=1}^{d}(x^{(j)}-0.5)\bigr{)}^{1/3}\) & \(1/2\) & \(1/3\) \\ (n) & \(\bigl{(}\sum_{j=1}^{d}(x^{(j)}-0.5)\bigr{)}^{3}\) & \(1/2\) & 3 \\ \end{tabular} \end{table} Table 2: Definition of the functions used in the simulations. Here, \(x=(x^{(1)},\ldots,x^{(d)})^{\top}\in[0,1]^{d}\). For the regression function \(\eta\) given by \(\eta(x):=\bigl{(}f(x)-f(0)\bigr{)}/\bigl{(}f(1_{d})-f(0)\bigr{)}\), we have \(\mu\bigl{(}\mathcal{X}_{\tau}(\eta)\bigr{)}=1/2\), except for cases (i), (j), (k), where \(\mu\bigl{(}\mathcal{X}_{\tau}(\eta)\bigr{)}\approx 1/2\) for the considered values of \(d\).

Figure 12: For \(d=2\), the contour lines (red) of the regression functions corresponding to the functions \(f\) in Table 2 at the levels \(k/6\) for \(k\in[5]\) are shown. The fill colour indicates the function value at the respective position from 0 (purple) to 1 (yellow).

Figure 13: Estimates of \(\mathbb{E}\big{\{}\mu\big{(}\mathcal{X}_{\tau}(\eta)\setminus\hat{A}\big{)}\big{\}}\) for \(d=2\) and \(\sigma=1/4\).

One alternative to our DAG testing procedure is fixed sequence testing, as in the univariate setting in Section 3.2. However, since our sequence must be specified independently of the data used for testing, and there is no canonical total ordering in the multivariate case, sample splitting offers a potential way forward. We consider two such procedures: in the first, denoted \(\hat{A}^{\text{Split}}\), we use the first half of the data to compute \(p\)-values at each of our \(n\) data points. These are then ordered from smallest to largest, and this determines the ordering for our fixed sequence testing based on \(p\)-values computed on the second half of the data.
The second procedure, denoted \(\hat{A}^{\text{Split,OR}}\), discards the first half of the data and instead uses an oracle ordering of the data points based on the underlying knowledge of the regression function; the second stage of the procedure is then identical to \(\hat{A}^{\text{Split}}\). Results comparing \(\hat{A}^{\text{ISS}}\) with \(\hat{A}^{\text{Split}}\) and \(\hat{A}^{\text{Split,OR}}\) are presented in Figures 16 and 17, which indicate that both of these sample-splitting variants have considerably worse empirical performance than \(\hat{A}^{\text{ISS}}\). This is perhaps surprising given the impressive numerical results for sample splitting in conjunction with fixed sequence testing reported by Angelopoulos et al. (2021). However, the performance of sample-splitting approaches is highly dependent on the procedure used to determine the ordering of the hypotheses from the first split of the data. Even exact knowledge of the regression function may be insufficient to determine an ordering with high conditional power, as the distribution of the \(p\)-values on the second half of the data also depends on the marginal distribution of the covariates. This is reflected in the fact that \(\hat{A}^{\text{Split,OR}}\) has worse performance than \(\hat{A}^{\text{Split}}\) in some cases, especially when the regression function depends only on a strict subset of the \(d\) variables, such as case (l) in Figure 17.

Figure 16: Estimates of \(\mathbb{E}\{\mu\big{(}\mathcal{X}_{\tau}(\eta)\setminus\hat{A}\big{)}\}\) for \(d=2\) and \(\sigma=1/4\).

Figure 19: Computation time of the different estimates of \(\mathbb{E}\big{\{}\mu\big{(}\mathcal{X}_{\tau}(\eta)\setminus\hat{A}\big{)}\big{\}}\) for \(d=4\) and \(\sigma=1/64\).

## Appendix C Comparison with procedure based on Meijer & Goeman (2015)

As discussed in Section 2, our proposed procedure consists of two steps: calculating \(p\)-values to test whether the regression function \(\eta\) exceeds the threshold \(\tau\) at \(m\) given points, and then controlling the FWER over \(\mathcal{P}_{\text{Mon},d}(\sigma)\) through a DAG testing procedure. For the second step, an alternative approach would be to use the algorithm introduced by Meijer and Goeman (2015). Indeed, the empirical results in Section 5 suggest that such a procedure can work well in some cases. However, we show in this section that it fails to attain the optimal worst-case regret over \(\mathcal{P}_{\text{Mon},d}(\sigma)\cap\mathcal{P}_{\text{Reg},d}(\tau,\theta,\gamma,\lambda)\).

### Description of procedure

The iterative algorithm of Meijer and Goeman (2015) is an application of the sequential rejection principle (Goeman and Solari, 2010) to hypotheses indexed by elements of \(I=[m]\), for some \(m\in\mathbb{N}\), that are a priori arranged as a DAG \(G=(I,E)\) (see footnote 8). Inputs to the algorithm include a fixed significance level \(\alpha\in(0,1)\), a vector \(\mathbf{v}\in(0,\infty)^{m}\) and \((p_{i})_{i\in I}\in(0,1]^{m}\), with the latter thought of as a collection of \(p\)-values. Any choice of \(\mathbf{v}\) will correspond to a DAG testing procedure, as defined by Definition 3. Each iteration of the procedure comprises three steps: the first assigns to each unrejected hypothesis (or, equivalently, the corresponding node) a proportion \(\alpha_{i}\) of the \(\alpha\)-budget; the second step rejects any hypothesis \(i\in I\) for which \(p_{i}\leq\alpha_{i}\); and the third rejects all ancestors of rejected hypotheses.
The procedure terminates if no new rejections are made in the second step of an iteration or if every hypothesis has been rejected, and hence takes at most \(m\) iterations. In more detail, in the first step, the \(\alpha\)-budget is split among the unrejected leaf nodes in proportion to the corresponding elements of \(\mathbf{v}\). These budgets are then propagated from the leaf nodes towards currently unrejected ancestor nodes. Meijer and Goeman (2015) suggest two variants for this, which we enumerate by \(\omega\in\{0,1\}\) and call the _all-parent variant_ (\(\omega=0\)) and the _any-parent variant_ (\(\omega=1\)). In the all-parent variant of the procedure, a node's entire budget is evenly distributed among its unrejected parents (keeping nothing for itself), whereas in the any-parent variant, the budget that would go to rejected parents if it were evenly distributed among all parents simply stays at the node and only the remaining budget is evenly distributed among the unrejected parents. Importantly, the order in which the nodes pass their budgets to their parents follows a reverse topological ordering of \(G\); all reverse topological orderings \(\pi_{G}\) lead to the same output in Algorithm 3, making the specific choice immaterial. Thus, a node only distributes its budget once all of its descendants have distributed theirs. Once this budget propagation has terminated, we move to the second step and reject all hypotheses whose \(p\)-value does not exceed the assigned budget. Finally, the third step is only relevant in the any-parent variant of the procedure, and rejecting the ancestors of nodes rejected at the second step does not increase the Type I error rate when \(G\) is \(G_{0}\)-consistent for a directed graph \(G_{0}\) encoding all logical relationships between hypotheses (see Section 3.1). A concise formal description of the Meijer and Goeman (2015) procedure, which outputs a set \(\mathcal{R}_{\alpha}^{\text{MG},\omega,\mathbf{v}}(G,\mathbf{p})\) of rejected hypotheses, is given in Algorithm 3.

Footnote 8: Indeed, in order to fit the more general notion of Definition 3, we may assume that this DAG is weighted, although the weights will be irrelevant for the procedure.

Meijer and Goeman (2015) prove that Algorithm 3 satisfies the two sufficient conditions for controlling the FWER described by Goeman and Solari (2010). The DAG testing procedures \(\mathcal{R}^{\mathrm{MG},\omega,\mathbf{v}}\) for \(\omega\in\{0,1\}\) motivate the following selection sets (see footnote 9): \[\hat{A}^{\mathrm{ISS},\omega}\equiv\hat{A}^{\mathrm{ISS},\omega,\mathbf{v}}_{\sigma,\tau,\alpha,m}(\mathcal{D}):=\big{\{}x\in\mathbb{R}^{d}:X_{i_{0}}\preccurlyeq x\text{ for some }i_{0}\in\mathcal{R}^{\mathrm{MG},\omega,\mathbf{v}}_{\alpha}\big{(}\mathcal{G}_{\mathrm{W}}(\mathcal{D}_{X,m}),\big{(}\hat{p}_{\sigma,\tau}(X_{i},\mathcal{D})\big{)}_{i\in[m]}\big{)}\big{\}}.\] Indeed, by a proof analogous to that of Theorem 9, we have \(\mathbb{P}_{P}\big{(}\hat{A}^{\mathrm{ISS},\omega,\mathbf{v}}_{\sigma,\tau,\alpha,m}(\mathcal{D})\subseteq\mathcal{X}_{\tau}(\eta)|\mathcal{D}_{X}\big{)}\geq 1-\alpha\) whenever \(P\in\mathcal{P}_{\mathrm{Mon},d}(\sigma)\).

Footnote 9: We deviate slightly from the notation in Section 5: \(\hat{A}^{\mathrm{ISS},\mathrm{All}}\equiv\hat{A}^{\mathrm{ISS},0}\) and \(\hat{A}^{\mathrm{ISS},\mathrm{Any}}\equiv\hat{A}^{\mathrm{ISS},1}\).
However, the budget propagation mechanism in the first step of each iteration has an important drawback: if \(I_{0}\subseteq I\) is such that there exists \(i_{*}\in I\) with \(\{i_{*}\}=\mathrm{ch}_{G}(i)\) for all \(i\in I_{0}\), then the sum of the budgets assigned to the nodes in \(I_{0}\) can never exceed the budget that passes through node \(i_{*}\). Moreover, the same conclusion holds for ancestors of nodes in \(I_{0}\) that do not have descendants belonging to an antichain with \(i_{*}\). Intuitively, this can make \(i_{*}\) a bottleneck, in the sense that the potentially large number of hypotheses in \(I_{0}\) may each receive only a fraction of the budget propagated through \(i_{*}\).

```
Input: \(\omega\in\{0,1\}\), \(\alpha\in(0,1)\), \(m\in\mathbb{N}\), a weighted DAG \(G=([m],E,\mathbf{w})\),
       \(\mathbf{p}=(p_{i})_{i\in[m]}\in(0,1]^{m}\), \(\mathbf{v}=(v_{i})_{i\in[m]}\in(0,\infty)^{m}\)
\(\pi_{G}\leftarrow\) a reverse topological ordering of \(G\)
\(R_{0}^{\omega}\leftarrow\emptyset\)
for \(\ell\in[m]\) do
    \(S_{\mathrm{L}}\gets L(G)\setminus R_{\ell-1}^{\omega}\)    // \(S_{\mathrm{L}}\) is the set of currently unrejected leaf nodes
    \(v_{i}^{*}\gets v_{i}/\big{(}\sum_{i^{\prime}\in S_{\mathrm{L}}}v_{i^{\prime}}\big{)}\) for all \(i\in S_{\mathrm{L}}\)
    \(\alpha_{\ell,0}^{\omega}(i)\leftarrow\mathbbm{1}_{\{i\in S_{\mathrm{L}}\}}\cdot\alpha\cdot v_{i}^{*}\) for all \(i\in[m]\)
    // iteratively distribute the \(\alpha\)-budget:
    for \(k\in[m]\) do
        \(i\leftarrow\pi_{G}^{-1}(k)\)    // iterate through the nodes in order
        \(R_{\mathrm{P}}\leftarrow\mathrm{pa}_{G}(i)\cap R_{\ell-1}^{\omega}\)    // currently rejected parents of node \(i\)
        \(S_{\mathrm{P}}\leftarrow\mathrm{pa}_{G}(i)\setminus R_{\ell-1}^{\omega}\)    // currently unrejected parents of node \(i\)
        if \(S_{\mathrm{P}}\neq\emptyset\) then
            if \(\omega=0\) then
                // evenly distribute the entire \(\alpha\)-budget among nodes in \(S_{\mathrm{P}}\):
                \(\alpha_{\ell,k}^{\omega}(j)\leftarrow\alpha_{\ell,k-1}^{\omega}(j)+\frac{\alpha_{\ell,k-1}^{\omega}(i)}{|S_{\mathrm{P}}|}\) for all \(j\in S_{\mathrm{P}}\)
                // no \(\alpha\)-budget remains in the node \(i\) itself:
                \(\alpha_{\ell,k}^{\omega}(i)\gets 0\)
            else
                // divide \(\alpha\)-budget among \(R_{\mathrm{P}}\cup S_{\mathrm{P}}\), but only distribute to \(S_{\mathrm{P}}\):
                \(\alpha_{\ell,k}^{\omega}(j)\leftarrow\alpha_{\ell,k-1}^{\omega}(j)+\frac{\alpha_{\ell,k-1}^{\omega}(i)}{|\mathrm{pa}_{G}(i)|}\) for all \(j\in S_{\mathrm{P}}\)
                // keep the \(\alpha\)-budget that would go to nodes in \(R_{\mathrm{P}}\) in node \(i\):
                \(\alpha_{\ell,k}^{\omega}(i)\leftarrow|R_{\mathrm{P}}|\cdot\frac{\alpha_{\ell,k-1}^{\omega}(i)}{|\mathrm{pa}_{G}(i)|}\)
            end if
        else
            \(\alpha_{\ell,k}^{\omega}(i)\leftarrow\alpha_{\ell,k-1}^{\omega}(i)\)
        end if
    end for
    \(\alpha_{\ell}^{\omega}(i)\leftarrow\alpha_{\ell,m}^{\omega}(i)\) for all \(i\in[m]\)
    // reject nodes based on the final distribution of the \(\alpha\)-budget:
    \(N\leftarrow\{i\in[m]:p_{i}\leq\alpha_{\ell}^{\omega}(i)\}\)
    if \(\omega=1\) then
        \(N\gets N\cup\bigcup_{i\in N}\mathrm{an}_{G}(i)\)
    end if
    if \(N=\emptyset\) then
        \(R_{m}^{\omega}\gets R_{\ell-1}^{\omega}\)
        break
    end if
    \(R_{\ell}^{\omega}\gets R_{\ell-1}^{\omega}\cup N\)
end for
Result: The set of rejected hypotheses \(\mathcal{R}_{\alpha}^{\mathrm{MG},\omega,\mathbf{v}}(G,\mathbf{p}):=R_{m}^{\omega}\)
```

**Algorithm 3** The Meijer and Goeman (2015) one-way logical relation DAG testing procedure \(\mathcal{R}^{\mathrm{MG}}\).
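To see the bottleneck described above in action, the following sketch, our own Python illustration rather than the authors' implementation, runs a single round of the all-parent (\(\omega=0\)) budget propagation with one leaf and no rejections on a ladder-shaped DAG resembling Figure 20(b): each chain node has one chain parent and one side parent, so the budget halves at every step along the chain, which is precisely the phenomenon exploited in Lemma 43.

```
from collections import defaultdict

def propagate_budget(parents, leaves, alpha, order):
    # One iteration of the all-parent (omega = 0) variant of Algorithm 3,
    # with equal leaf weights and no rejected hypotheses; `order` must list
    # each node after all of its descendants (a reverse topological order).
    budget = defaultdict(float)
    for i in leaves:
        budget[i] = alpha / len(leaves)
    for i in order:
        if parents[i]:
            share = budget[i] / len(parents[i])  # split evenly over parents
            for j in parents[i]:
                budget[j] += share
            budget[i] = 0.0                      # node keeps nothing
    return budget

q = 10
parents = defaultdict(list)
for j in range(q):
    parents[j] = [j + 1, q + 1 + j]  # chain parent and one side parent
budget = propagate_budget(parents, leaves=[0], alpha=0.05,
                          order=list(range(q)))
print(budget[q])  # alpha / 2**q: exponentially little reaches the chain top
```

With \(q=10\) and \(\alpha=0.05\), the top of the chain receives roughly \(5\times 10^{-5}\) of budget, so the hypotheses beyond it can essentially only be rejected in later iterations, once rejections elsewhere have freed up budget.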
### Sub-optimal worst-case performance

The following proposition illustrates that using the Meijer and Goeman (2015) procedure in our setting leads to a sub-optimal worst-case rate, as seen by comparison with the upper bound for \(\hat{A}^{\text{ISS}}\) established in Theorem 15.

**Proposition 37**.: _Let \(d\geq 2\), \(\tau\in\mathbb{R}\), \(\sigma,\gamma,\lambda>0\), \(\theta\in[2^{d},\infty)\), \(\alpha\in(0,1/4]\) and \(\omega\in\{0,1\}\). There exists \(c>0\), depending only on \(d\), \(\alpha\), \(\sigma\), \(\lambda\) and \(\gamma\), such that for every \(n\in\mathbb{N}\),_ \[\min_{m\in[n]}\sup_{P\in\mathcal{P}^{\prime}}\inf_{\boldsymbol{v}\in(0,\infty)^{m}}\mathbb{E}_{P}\big{\{}\mu\big{(}\mathcal{X}_{\tau}(\eta)\setminus\hat{A}^{\text{ISS},\omega,\boldsymbol{v}}_{\sigma,\tau,\alpha,m}(\mathcal{D})\big{)}\big{\}}\geq\frac{c}{n^{1/(2\gamma+d+1)}(\log_{+}n)^{2/d}},\] _where \(\mathcal{P}^{\prime}:=\mathcal{P}_{\text{Mon},d}(\sigma)\cap\mathcal{P}_{\text{Reg},d}(\tau,\theta,\gamma,\lambda)\)._

The main idea of the proof of Proposition 37 is to construct a distribution in \(\mathcal{P}_{\text{Mon},d}(\sigma)\cap\mathcal{P}_{\text{Reg},d}(\tau,\theta,\gamma,\lambda)\), for which the Meijer and Goeman (2015) algorithm propagates little budget to points in the \(\tau\)-superlevel set of the regression function \(\eta\). This distribution, which belongs to \(\mathcal{P}_{\text{Mon},d}(\sigma)\cap\mathcal{P}_{\text{Reg},d}(\tau,\theta,\gamma,\lambda)\) (Lemma 38), is illustrated in Figure 20. It consists of \(q\) pairs of atoms, where the regression function \(\eta\) is well below \(\tau\), as well as an absolutely continuous component, where \(\eta\) is at least \(\tau\) (see Figure 20(a)). The probability masses at each atom are sufficiently large to ensure that, with high probability, we see at least one observation at each of them (Lemma 39). On this high probability event, the observations therefore induce the DAG illustrated in Figure 20(b). Moreover, the regression function at the atoms indexed by \(i_{2}^{*}\) and \(i_{q}\) is sufficiently far below \(\tau\) that the corresponding \(p\)-values exceed \(\alpha\) with high probability (Lemma 40). At the same time, the marginal distribution and regression function on the set \(A\) in Figure 20(a) are chosen so that all of the \(p\)-values corresponding to points in \(A\setminus A_{\epsilon}\) exceed \(\alpha/2^{q-1}\) with high probability (Lemmas 41 and 42). But, as we argue in Lemma 43, the budget propagation of the Meijer and Goeman (2015) procedure means that a budget of at most \(\alpha/2^{q-1}\) is passed into \(A\). It then follows that with high probability, we can only reject hypotheses corresponding to points in \(A_{\epsilon}\), and in that case the corresponding data-dependent selection set returned will omit \(A\setminus A_{\epsilon}\).

Figure 20: Illustration of the probability distribution defined in Section C.2 and the resulting induced graph. The construction demonstrates a problematic consequence of the bottleneck effect described at the end of Section C.1: in order to identify the superlevel set, we need to reject nodes in \(I_{A}\), but unless rejections have been made in previous iterations of Algorithm 3, the combined budget of the nodes in \(I_{A}\) cannot exceed what is passed through \(i_{q}\), which will be very little, as most is propagated towards \(i_{2}^{*}\).
These ideas establish the result when \(n\) and \(m\) are sufficiently large; when \(n\) is small, we can apply our earlier bound in Theorem 17, and when \(m\) is small we can apply Proposition 44, which provides a lower bound for the worst-case performance of any data-dependent selection set that returns the upper hull of \(m\) observations. To begin our construction, let \(A:=[0,1/2]\times[1/2,1]\times[0,1/2]^{d-2}\) and \(q\in\mathbb{N}\). For \(\mathbf{j}=(j_{1},j_{2})^{\top}\in\{1,2\}\times[q]\), define \[z_{\mathbf{j}}:=\left(j_{1}-1,\frac{j_{2}}{2q}-1,0,\ldots,0\right)^{\top}\in\mathbb{R}^{d}. \tag{16}\] For \(d\geq 2\) and \(q\in\mathbb{N}\), let \(\mu_{q}\) denote the distribution on \(\mathbb{R}^{d}\) satisfying:

* \(\mu_{q}(\{z_{\mathbf{j}}\})=(2^{d}-1)/(2^{d+1}q)\) for all \(\mathbf{j}\in\{1,2\}\times[q]\);
* \(\mu_{q}(A)=1/2^{d}\);
* \(X|X\in A\sim\text{Unif}(A)\) when \(X\sim\mu_{q}\).

Thus \(\sum_{\mathbf{j}\in\{1,2\}\times[q]}\mu_{q}(\{z_{\mathbf{j}}\})=1-1/2^{d}\) and \(\mu_{q}(B\cap A)=\mathcal{L}_{d}(B\cap A)\) for any Borel set \(B\subseteq\mathbb{R}^{d}\). Write \(x_{A}:=(0,1/2,0,\ldots,0)^{\top}\in\mathbb{R}^{d}\), so that \(x_{A}\in A\) and \(x\succcurlyeq x_{A}\) for all \(x\in A\). For \(q\in\mathbb{N}\), \(M>0\), \(\tau\in\mathbb{R}\), \(\gamma,\lambda>0\) define \(\eta_{q,M}\equiv\eta_{q,M,\tau,\lambda,\gamma}:\mathbb{R}^{d}\to\mathbb{R}\) by \[\eta_{q,M}(x^{(1)},\ldots,x^{(d)}):=\left\{\begin{array}{ll}\tau+\lambda\cdot\min_{j\in[d]}\bigl{(}x^{(j)}-x_{A}^{(j)}\bigr{)}^{\gamma}&\text{if }x\succcurlyeq x_{A}\\ \tau-M&\text{otherwise},\end{array}\right.\] where \(x_{A}^{(j)}\) denotes the \(j\)th coordinate of \(x_{A}\). Finally, for \(q\in\mathbb{N}\), \(M>0\), \(\sigma>0\), \(\tau\in\mathbb{R}\), \(\gamma,\lambda>0\), let \(P_{q,M}\equiv P_{q,M,\sigma,\tau,\lambda,\gamma}\) denote any joint distribution of \((X,Y)\) such that \(X\) has marginal distribution \(\mu_{q}\), and \(Y|X\sim\mathcal{N}\bigl{(}\eta_{q,M}(X),\sigma^{2}\bigr{)}\).

**Lemma 38**.: _For \(d\geq 2\), \(\tau\in\mathbb{R}\), \(\sigma,\gamma,\lambda>0\), \(q\in\mathbb{N}\) and \(M>0\), we have \(P_{q,M}\equiv P_{q,M,\sigma,\tau,\lambda,\gamma}\in\mathcal{P}_{\mathrm{Mon},d}(\sigma)\cap\mathcal{P}_{\mathrm{Reg},d}(\tau,\theta,\gamma,\lambda)\) for all \(\theta\geq 2^{d}\)._

Proof.: We first prove that \(P_{q,M}\in\mathcal{P}_{\mathrm{Mon},d}(\sigma)\). Since the sub-Gaussianity condition is satisfied by construction, it suffices to show that \(\eta_{q,M}\) is coordinate-wise increasing on \(\mathbb{R}^{d}\). Whenever \(x_{0}\not\succcurlyeq x_{A}\), we have \(\eta_{q,M}(x_{0})=\inf_{x\in\mathbb{R}^{d}}\eta_{q,M}(x)\). On the other hand, for \(x_{0},x_{1}\in\mathbb{R}^{d}\) with \(x_{A}\preccurlyeq x_{0}\preccurlyeq x_{1}\), we have \[\eta_{q,M}(x_{0})=\tau+\lambda\cdot\min_{j\in[d]}\bigl{(}x_{0}^{(j)}-x_{A}^{(j)}\bigr{)}^{\gamma}\leq\tau+\lambda\cdot\min_{j\in[d]}\bigl{(}x_{1}^{(j)}-x_{A}^{(j)}\bigr{)}^{\gamma}=\eta_{q,M}(x_{1}),\] as required. We now show that \(P_{q,M}\in\mathcal{P}_{\mathrm{Reg},d}(\tau,\theta,\gamma,\lambda)\) and start by establishing that the condition in Definition 13_(i)_ is satisfied. For any \(x\in\mathcal{X}_{\tau}(\eta_{q,M})\cap\mathrm{supp}(\mu_{q})=A\) and \(r\in(0,1]\), we have \(\mu_{q}\big{(}B_{\infty}(x,r)\big{)}\geq\big{(}r\wedge(1/2)\big{)}^{d}\geq(r/2)^{d}\). For \(r\geq 1/4\), we have \(\mu_{q}\big{(}B_{\infty}(x,r)\big{)}\leq 1\leq\theta\cdot(2r)^{d}\), so let \(r\in(0,1/4)\).
We then have for any \(x\in A\) that \(B_{\infty}(x,r)\cap\bigcup_{\mathbf{j}\in\{1,2\}\times[q]}\{z_{\mathbf{j}}\}=\emptyset\) and hence \(\mu_{q}\big{(}B_{\infty}(x,r)\big{)}=\mu_{q}\big{(}B_{\infty}(x,r)\cap A\big{)}\leq(2r)^{d}\). Finally, for Definition 13_(ii)_, observe that for any \(x\in\mathcal{X}_{\tau}(\eta_{q,M})\cap\mathrm{supp}(\mu_{q})=A\) and \(r\in(0,1]\), the point \(x_{0}:=x+r\mathbf{1}_{d}\in B_{\infty}(x,r)\) satisfies \(x_{0}-x_{A}\succcurlyeq r\mathbf{1}_{d}\) and hence \(\eta_{q,M}(x_{0})\geq\tau+\lambda r^{\gamma}\), as required.

**Lemma 39**.: _Fix \(d\geq 2\), \(\delta\in(0,1)\), positive integers \(m\leq n\) and \(q\leq\big{\lfloor}\frac{m}{32\log_{+}(m/\delta)}\big{\rfloor}\). If \(\mathcal{D}_{X,m}=\big{(}X_{1},\ldots,X_{m}\big{)}\sim\mu_{q}^{m}\), and we define \(\Omega_{1}:=\bigcap_{\mathbf{j}\in\{1,2\}\times[q]}\big{\{}\{z_{\mathbf{j}}\}\cap\mathcal{D}_{X,m}\neq\emptyset\big{\}}\), then \(\mathbb{P}_{\mu_{q}}(\Omega_{1}^{c})\leq\delta/4\)._

Proof.: For \(\boldsymbol{j}\in\{1,2\}\times[q]\), let \[\Omega_{1,\boldsymbol{j}}:=\bigg{\{}\frac{1}{m}\sum_{i=1}^{m}\mathbbm{1}_{\{X_{i}=z_{\boldsymbol{j}}\}}\geq\frac{2^{d}-1}{2^{d+2}q}\bigg{\}}.\] Then by the multiplicative Chernoff bound (McDiarmid, 1998, Theorem 2.3(c)), the fact that \((2^{d}-1)/2^{d+4}\geq 1/32\) and the choice of \(q\), we have \[\mathbb{P}_{\mu_{q}}\bigg{(}\bigcup_{\boldsymbol{j}\in\{1,2\}\times[q]}\Omega_{1,\boldsymbol{j}}^{c}\bigg{)}\leq 2q\cdot\exp\biggl{(}-\frac{2^{d}-1}{2^{d+4}q}\cdot m\biggr{)}\leq\frac{m}{4}\cdot\exp\Bigl{\{}-\log_{+}\Bigl{(}\frac{m}{\delta}\Bigr{)}\Big{\}}\leq\frac{\delta}{4}.\] Moreover, \[\frac{(2^{d}-1)m}{2^{d+2}q}\geq\frac{m}{8q}\geq 4\log_{+}(m/\delta)\geq 1,\] whence \[\mathbb{P}_{\mu_{q}}\big{(}\Omega_{1}^{c}\big{)}=\mathbb{P}_{\mu_{q}}\bigg{(}\bigcup_{\boldsymbol{j}\in\{1,2\}\times[q]}\bigg{\{}\frac{1}{m}\sum_{i=1}^{m}\mathbbm{1}_{\{X_{i}=z_{\boldsymbol{j}}\}}=0\bigg{\}}\bigg{)}\leq\mathbb{P}_{\mu_{q}}\bigg{(}\bigcup_{\boldsymbol{j}\in\{1,2\}\times[q]}\Omega_{1,\boldsymbol{j}}^{c}\bigg{)}\leq\frac{\delta}{4},\] as required.

**Lemma 40**.: _Fix \(d\geq 2\), \(\alpha\in(0,1)\), \(\delta\in(0,1]\), \(n\in\mathbb{N}\), \(m\in[n]\), \(\tau\in\mathbb{R}\), \(\sigma,\gamma,\lambda>0\), \(q\in\mathbb{N}\) and \(M\geq 1.7\sigma\sqrt{\log(41.6/\delta)}\). Let \(\mathcal{D}=\big{(}(X_{1},Y_{1}),\ldots,(X_{n},Y_{n})\big{)}\sim P_{q,M}^{n}\), and suppose that \(\big{\{}i\in[m]:X_{i}=z_{(j,q)}\big{\}}\neq\emptyset\) for \(j\in\{1,2\}\). Write \(i_{j}^{*}:=\max\{i\in[m]:X_{i}=z_{(j,q)}\}\) for \(j\in\{1,2\}\), and let_ \[\Omega_{2}:=\bigcap_{j=1}^{2}\{\hat{p}_{\sigma,\tau}(X_{i_{j}^{*}},\mathcal{D})>\alpha\}.\] _Then \(\mathbb{P}_{P_{q,M}}\big{(}\Omega_{2}^{c}|\mathcal{D}_{X}\big{)}\leq\delta/4\)._

Proof.: For \(x\in\mathbb{R}^{d}\) and \(r>0\), define \(\mathcal{I}_{r}(x):=\{i\in[n]:X_{i}\preccurlyeq x,\|X_{i}-x\|_{\infty}\leq r\}\). Fix \(j\in\{1,2\}\), and note that \[\frac{\sigma}{|\mathcal{I}_{r}(X_{i_{j}^{*}})|}\cdot u_{\delta/8}\big{(}|\mathcal{I}_{r}(X_{i_{j}^{*}})|\big{)}\leq 1.7\sigma\sqrt{0.2+0.72\log(41.6/\delta)}\leq 1.7\sigma\sqrt{\log(41.6/\delta)}\leq M\] for all \(r>0\).
It follows by Lemma 45_(a)_ that, with probability at least \(1-\delta/8\) given \(\mathcal{D}_{X}\), we have simultaneously for all \(r>0\) that \[\sum_{i\in\mathcal{I}_{r}(X_{i^{*}_{j}})}\frac{Y_{i}-\tau}{\sigma}=\sum_{i\in\mathcal{I}_{r}(X_{i^{*}_{j}})}\frac{Y_{i}-(\tau-M)}{\sigma}-|\mathcal{I}_{r}(X_{i^{*}_{j}})|\cdot\frac{M}{\sigma}\leq u_{\delta/8}\big{(}|\mathcal{I}_{r}(X_{i^{*}_{j}})|\big{)}-|\mathcal{I}_{r}(X_{i^{*}_{j}})|\cdot\frac{M}{\sigma}\leq 0,\] so that \(\hat{p}_{\sigma,\tau}(X_{i^{*}_{j}},\mathcal{D})=1\), and thus in particular \(\hat{p}_{\sigma,\tau}(X_{i^{*}_{j}},\mathcal{D})>\alpha\). Hence, the result follows by a union bound over \(j\in\{1,2\}\).

**Lemma 41**.: _Fix \(d\geq 2\), \(\alpha\in(0,1)\), \(\delta\in(0,1/4]\), \(n\in\mathbb{N}\), \(\sigma,\gamma,\lambda>0\), \(s\in(0,1/2]\) and \(q\in\mathbb{N}\). Let \(\mathcal{D}_{X}=(X_{1},\ldots,X_{n})\sim\mu_{q}^{n}\) and let \(B_{j}:=x_{A}+[0,1/2]^{j-1}\times[0,s]\times[0,1/2]^{d-j}\) for \(j\in[d]\). Denote \(w_{n,m,\delta}:=173.13\big{(}\log_{+}\log n+\log_{+}(m/\delta)\big{)}\). If_ \[\frac{8}{3n}\log\Bigl{(}\frac{4d}{\delta}\Bigr{)}\leq\frac{s}{2^{d-1}}\leq\frac{\sigma^{2}}{2n\lambda^{2}s^{2\gamma}}\big{(}0.72q-w_{n,m,\delta}\big{)}\] _then, writing_ \[\Omega_{3}:=\bigcap_{j\in[d]}\biggl{\{}|\mathcal{D}_{X}\cap B_{j}|<\frac{\sigma^{2}}{\lambda^{2}s^{2\gamma}}\big{(}0.72q-w_{n,m,\delta}\big{)}\biggr{\}},\] _we have \(\mathbb{P}_{\mu_{q}}(\Omega_{3}^{c})\leq\delta/4\)._

Proof.: Fix any \(j\in[d]\) and note that \(\mu_{q}(B_{j})=s/2^{d-1}\). By the upper bound on \(s\), a multiplicative Chernoff bound (McDiarmid, 1998, Theorem 2.3(b)) and the lower bound on \(s\), we have \[\mathbb{P}_{\mu_{q}}\Bigl{(}|\mathcal{D}_{X}\cap B_{j}|\geq\frac{\sigma^{2}}{\lambda^{2}s^{2\gamma}}\big{(}0.72q-w_{n,m,\delta}\big{)}\Bigr{)}\leq\mathbb{P}_{\mu_{q}}\Bigl{(}|\mathcal{D}_{X}\cap B_{j}|\geq\frac{ns}{2^{d-2}}\Bigr{)}\leq\exp\Bigl{(}-\frac{3ns}{8\cdot 2^{d-1}}\Bigr{)}\leq\frac{\delta}{4d}.\] The result therefore follows by a union bound.

**Lemma 42**.: _Fix \(d\geq 2\), \(\alpha\in(0,1)\), \(\delta\in(0,1]\), \(n\in\mathbb{N}\), \(m\in[n]\), \(\tau\in\mathbb{R}\), \(\sigma,\gamma,\lambda>0\), \(q\in\mathbb{N}\), \(M>0\) and \(s>0\). Let \(\mathcal{D}=\big{(}(X_{1},Y_{1}),\ldots,(X_{n},Y_{n})\big{)}\sim P_{q,M}^{n}\), and let \(1\leq i_{1}<\ldots<i_{K}\leq m\) be such that \(\{i_{1},\ldots,i_{K}\}:=\bigl{\{}i\in[m]:X_{i}\in\mathcal{X}_{\tau}(\eta_{q,M})\setminus\mathcal{X}_{\tau+\lambda s^{\gamma}}(\eta_{q,M})\bigr{\}}\). Denote further \(w_{n,m,\delta}:=173.13\big{(}\log_{+}\log n+\log_{+}(m/\delta)\big{)}\) as in Lemma 41. If \(q\geq w_{n,m,\delta}/0.72\geq 3\log(5.2\cdot 8\cdot m/\delta)+2\log\log(2n)\) and_ \[\max_{k\in[K]}\bigl{|}\{i\in[n]:X_{i}\in A,X_{i}\preccurlyeq X_{i_{k}}\}\bigr{|}\leq\frac{\sigma^{2}}{\lambda^{2}s^{2\gamma}}\big{(}0.72q-w_{n,m,\delta}\big{)}\] _then, writing_ \[\Omega_{4}:=\bigcap_{k\in[K]}\Bigl{\{}\hat{p}_{\sigma,\tau}(X_{i_{k}},\mathcal{D})>\frac{\alpha}{2^{q-1}}\Bigr{\}},\] _we have \(\mathbb{P}_{P_{q,M}}\big{(}\Omega_{4}^{c}|\mathcal{D}_{X}\big{)}\leq\delta/4\)._

Proof.: When \(K=0\), i.e. \(\big{\{}i\in[m]:X_{i}\in\mathcal{X}_{\tau}(\eta_{q,M})\setminus\mathcal{X}_{\tau+\lambda s^{\gamma}}(\eta_{q,M})\big{\}}=\emptyset\), then \(\Omega_{4}^{c}=\emptyset\) and there is nothing to prove, so assume that \(K\in[m]\).
For \(x\in\mathbb{R}^{d}\) and \(r>0\), define \(\mathcal{I}_{r}(x):=\{i\in[n]:X_{i}\preccurlyeq x,\|X_{i}-x\|_{\infty}\leq r\}\), \(\mathcal{I}_{r}^{A}(x):=\{i\in\mathcal{I}_{r}(x):X_{i}\in A\}\) and accordingly \(\mathcal{I}_{r}^{A^{c}}(x):=\{i\in\mathcal{I}_{r}(x):X_{i}\notin A\}\). Fix any \(k\in[K]\) and note first that by assumption, \[|\mathcal{I}_{r}^{A}(X_{i_{k}})|\leq\big{|}\{i\in[n]:X_{i}\in A,X_{i}\preccurlyeq X_{i_{k}}\}\big{|}\leq\frac{\sigma^{2}}{\lambda^{2}s^{2\gamma}}\big{(}0.72q-w_{n,m,\delta}\big{)}\] for all \(r>0\). Hence \[\frac{\lambda^{2}s^{2\gamma}}{\sigma^{2}}\big{|}\mathcal{I}_{r}^{A}(X_{i_{k}})\big{|}^{2}\leq\big{|}\mathcal{I}_{r}^{A}(X_{i_{k}})\big{|}\cdot\big{(}0.72q-w_{n,m,\delta}\big{)}\] \[\quad\leq\frac{|\mathcal{I}_{r}(X_{i_{k}})|}{2}\bigg{\{}2.0808\cdot q\cdot\log 2-6\cdot 1.7^{2}\cdot\log_{+}\log n-4\cdot 1.7^{2}\cdot 0.72\cdot 5.2\cdot 8\cdot\log_{+}\Big{(}\frac{m}{\delta}\Big{)}\bigg{\}}\] \[\quad\leq\frac{|\mathcal{I}_{r}(X_{i_{k}})|}{2}\bigg{\{}2.0808\log\Bigl{(}\frac{5.2}{\alpha}\cdot 2^{q-1}\Bigr{)}+\Big{(}1.7^{2}-4\cdot 1.7^{2}\Big{)}\log\log\bigl{(}2|\mathcal{I}_{r}(X_{i_{k}})|\bigr{)}-4\cdot 1.7^{2}\cdot 0.72\cdot\log\Bigl{(}\frac{5.2\cdot 8\cdot K}{\delta}\Bigr{)}\bigg{\}}\] \[\quad\leq|\mathcal{I}_{r}(X_{i_{k}})|\bigg{\{}\sqrt{1.7^{2}\cdot 0.72\log\Bigl{(}\frac{5.2}{\alpha}\cdot 2^{q-1}\Bigr{)}+1.7^{2}\log\log\bigl{(}2|\mathcal{I}_{r}(X_{i_{k}})|\bigr{)}}-\sqrt{2\cdot 1.7^{2}\log\log\bigl{(}2|\mathcal{I}_{r}(X_{i_{k}})|\bigr{)}+2\cdot 1.7^{2}\cdot 0.72\cdot\log\Bigl{(}\frac{5.2\cdot 8\cdot K}{\delta}\Bigr{)}}\bigg{\}}^{2}\] \[\quad=\Big{(}u_{\alpha/2^{q-1}}\big{(}|\mathcal{I}_{r}(X_{i_{k}})|\big{)}-\sqrt{2}\cdot u_{\delta/(8K)}\big{(}|\mathcal{I}_{r}(X_{i_{k}})|\big{)}\Big{)}^{2}, \tag{17}\] where in the final inequality, we used the fact that \((a-2b)/2\leq\big{(}\sqrt{a}-\sqrt{b}\big{)}^{2}\) for \(a,b\geq 0\). By Lemma 45_(a)_, with probability at least \(1-\delta/(4K)\) conditional on \(\mathcal{D}_{X}\), we have simultaneously for all \(r>0\) that \[\sum_{i\in\mathcal{I}_{r}(X_{i_{k}})}\frac{Y_{i}-\tau}{\sigma}\leq\sum_{i\in\mathcal{I}_{r}^{A}(X_{i_{k}})}\frac{Y_{i}-(\tau+\lambda s^{\gamma})}{\sigma}+\sum_{i\in\mathcal{I}_{r}^{A^{c}}(X_{i_{k}})}\frac{Y_{i}-(\tau-M)}{\sigma}+\frac{\lambda s^{\gamma}}{\sigma}|\mathcal{I}_{r}^{A}(X_{i_{k}})|<u_{\delta/(8K)}\big{(}|\mathcal{I}_{r}^{A}(X_{i_{k}})|\big{)}+u_{\delta/(8K)}\big{(}|\mathcal{I}_{r}^{A^{c}}(X_{i_{k}})|\big{)}+\frac{\lambda s^{\gamma}}{\sigma}|\mathcal{I}_{r}^{A}(X_{i_{k}})|\leq\sqrt{2}\cdot u_{\delta/(8K)}\big{(}|\mathcal{I}_{r}(X_{i_{k}})|\big{)}+\frac{\lambda s^{\gamma}}{\sigma}|\mathcal{I}_{r}^{A}(X_{i_{k}})|\leq u_{\alpha/2^{q-1}}\big{(}|\mathcal{I}_{r}(X_{i_{k}})|\big{)},\] where the third inequality follows from the fact that \(\sqrt{a}+\sqrt{b}\leq\sqrt{2}\cdot\sqrt{a+b}\) for all \(a,b\geq 0\), and the fourth follows from (17) and the fact that \(u_{\alpha/2^{q-1}}\big{(}|\mathcal{I}_{r}(X_{i_{k}})|\big{)}\geq\sqrt{2}\cdot u_{\delta/(8K)}\big{(}|\mathcal{I}_{r}(X_{i_{k}})|\big{)}\) since \(q\geq 3\log(5.2\cdot 8\cdot m/\delta)+2\log\log(2n)\).
But \[\bigg{\{}\sum_{i\in\mathcal{I}_{r}(X_{i_{k}})}\frac{Y_{i}-\tau}{\sigma}<u_{\alpha/2^{q-1}}\big{(}|\mathcal{I}_{r}(X_{i_{k}})|\big{)}\bigg{\}}=\big{\{}\hat{p}_{\sigma,\tau}(X_{i_{k}},\mathcal{D})>\alpha/2^{q-1}\big{\}},\] so the result follows by a union bound over \(k\in[K]\).

**Lemma 43**.: _Fix \(m,q\in\mathbb{N}\) and suppose that \(\mathcal{D}_{X,m}=\{X_{i}:i\in[m]\}\subseteq\big{\{}z_{\boldsymbol{j}}:\boldsymbol{j}\in\{1,2\}\times[q]\big{\}}\cup A\) with \(\{z_{\boldsymbol{j}}\}\cap\mathcal{D}_{X,m}\neq\emptyset\) for all \(\boldsymbol{j}\in\{1,2\}\times[q]\). Fix \(\alpha\in(0,1)\) and let \((p_{i})_{i\in[m]}\in(0,1]^{m}\) be such that \(\min_{j\in\{1,2\}}p_{i_{j}^{*}}>\alpha\), where \(i_{j}^{*}:=\max\{i\in[m]:X_{i}=z_{(j,q)}\}\). Then for \(\omega\in\{0,1\}\) and \(\mathbf{v}\in(0,\infty)^{m}\), we have_ \[\mathcal{R}_{\alpha}^{\mathrm{MG},\omega,\mathbf{v}}\big{(}\mathcal{G}_{\mathrm{W}}(\mathcal{D}_{X,m}),(p_{i})_{i\in[m]}\big{)}\cap\{i\in[m]:p_{i}>\alpha/2^{q-1}\text{ and }X_{i}\in A\}=\emptyset.\]

Proof.: Let \(R_{0}^{\omega}:=\emptyset\), and for \(\ell\in[m]\) and \(k\in[m]\cup\{0\}\), let \(R_{\ell}^{\omega}\), \(\alpha_{\ell}^{\omega}\) and \(\alpha_{\ell,k}^{\omega}\) be as in Algorithm 3. Furthermore, for ease of notation, let \(I_{A}:=\{i\in[m]:X_{i}\in A\}\), \(I_{\boldsymbol{j}}:=\{i\in[m]:X_{i}=z_{\boldsymbol{j}}\}\) for \(\boldsymbol{j}\in\{1,2\}\times[q]\) and \(I_{A^{c}}:=\{i\in[m]:X_{i}\notin A\}=\bigcup_{\boldsymbol{j}\in\{1,2\}\times[q]}I_{\boldsymbol{j}}\); see Figure 20(b). For each \(\ell\in[m]\cup\{0\}\) let \(P(\ell)\) denote the proposition that \[R_{\ell}^{\omega}\cap\Big{(}\big{\{}i\in I_{A}:p_{i}>\alpha(1/2)^{q-1}\big{\}}\cup I_{A^{c}}\Big{)}=\emptyset.\] Since \(\mathcal{R}_{\alpha}^{\mathrm{MG},\omega,\mathbf{v}}\big{(}\mathcal{G}_{\mathrm{W}}(\mathcal{D}_{X,m}),(p_{i})_{i\in[m]}\big{)}=R_{m}^{\omega}\), the result will follow if \(P(\ell)\) is true for all \(\ell\in[m]\cup\{0\}\), and we prove this by induction on \(\ell\). First, note that \(P(0)\) is true since \(R_{0}^{\omega}=\emptyset\). Now, fix any \(\ell_{0}\in[m-1]\cup\{0\}\) such that \(P(\ell_{0})\) holds true. In particular, this means that no hypothesis corresponding to a node in \(I_{A^{c}}\) has been rejected in the first \(\ell_{0}\) steps of the algorithm. For \(j\in[q]\), define \(i_{j}:=\max I_{(1,j)}\), so that \(i_{q}=i_{1}^{*}\), and let \(G:=\mathcal{G}_{\mathrm{W}}(\mathcal{D}_{X,m})\). Since \(\mathrm{pa}_{G}(i)\subseteq I_{A^{c}}\) whenever \(i\in I_{A^{c}}\setminus\{i_{1}^{*}\}\), we have that \(P(\ell_{0}+1)\) is true if \(i_{1}^{*},i_{2}^{*}\notin R_{\ell_{0}+1}^{\omega}\) and \(\alpha_{\ell_{0}+1}^{\omega}(i)\leq\alpha/2^{q-1}\) for all \(i\in I_{A}\). Regarding the first of these conditions, we have \(\sum_{i\in[m]}\alpha_{\ell}^{\omega}(i)=\alpha\) for all \(\ell\in[m]\cup\{0\}\), so in particular \(\alpha_{\ell_{0}+1}^{\omega}(i_{j}^{*})\leq\alpha<p_{i_{j}^{*}}\) for \(j\in\{1,2\}\) and hence \(i_{1}^{*},i_{2}^{*}\notin R_{\ell_{0}+1}^{\omega}\). For the second condition, note that by definition, \(L(G)=\{i_{\mathrm{L}}\}\) with \(i_{\mathrm{L}}:=\min I_{(1,1)}\). There exists exactly one directed path from \(i_{j}\) to \(i_{\mathrm{L}}\), unless \(j=1\) and \(|I_{(1,1)}|=1\). Thus, for any \(j\in[q]\), it follows that \(k_{j}:=\min\bigl{\{}k\in[m]\cup\{0\}:\alpha_{\ell_{0}+1,k}^{\omega}(i_{j})>0\bigr{\}}\) is the maximiser of \(k\mapsto\alpha_{\ell_{0}+1,k}^{\omega}(i_{j})\) over \(k\in[m]\cup\{0\}\).
We now claim that \(\alpha_{\ell_{0}+1,k_{j}}^{\omega}(i_{j})=\alpha/2^{j-1}\) and show this by another induction, this time on \(j\in[q]\). First, each node in \(I_{(1,1)}\setminus\{i_{1}\}\) has exactly one parent and this parent is itself contained in \(I_{(1,1)}\), so that \(\alpha_{\ell_{0}+1,k_{1}}^{\omega}(i_{1})=\alpha_{\ell_{0}+1,0}^{\omega}(i_{\mathrm{L}})=\alpha\) for any \(\mathbf{v}\). If \(q=1\), this establishes the claim; otherwise, fix \(j_{0}\in[q-1]\) for which \(\alpha_{\ell_{0}+1,k_{j_{0}}}^{\omega}(i_{j_{0}})=\alpha/2^{j_{0}-1}\). By construction, \(\mathrm{pa}_{G}(i_{j_{0}})=\{\min I_{(1,j_{0}+1)},\min I_{(2,j_{0})}\}\) and each node in \(I_{(1,j_{0}+1)}\setminus\{i_{j_{0}+1}\}\) has again exactly one parent, which is contained in \(I_{(1,j_{0}+1)}\), while at the same time each node in \(I_{(1,j_{0}+1)}\setminus\{\min I_{(1,j_{0}+1)}\}\) has exactly one child, which is also contained in \(I_{(1,j_{0}+1)}\). Hence \(\alpha_{\ell_{0}+1,k_{j_{0}+1}}^{\omega}(i_{j_{0}+1})=\alpha_{\ell_{0}+1,k_{j_{0}}}^{\omega}(i_{j_{0}})/2=\alpha(1/2)^{j_{0}}\), which completes the induction on \(j\in[q]\). Since for any \(i\in I_{A}\), any directed path in \(G\) from \(i\) to \(i_{\mathrm{L}}\) necessarily contains \(i_{q}\), we deduce that \[\max_{i\in I_{A}}\alpha_{\ell_{0}+1}^{\omega}(i)\leq\sum_{i\in I_{A}}\alpha_{\ell_{0}+1}^{\omega}(i)\leq\alpha_{\ell_{0}+1,k_{q}}^{\omega}(i_{q})=\frac{\alpha}{2^{q-1}},\] which completes the induction on \(\ell\in[m]\cup\{0\}\) and hence the proof.

For \(m\in[n]\), let \(\hat{\mathcal{A}}_{n,m}^{\mathrm{U}}(\tau,\alpha,\mathcal{P})\subseteq\hat{\mathcal{A}}_{n}(\tau,\alpha,\mathcal{P})\) denote the subfamily of data-dependent selection sets that control the Type I error at level \(\alpha\) over \(\mathcal{P}\) and for which \(\hat{A}(\mathcal{D})\) is almost surely the upper hull of a subset of \(\mathcal{D}_{X,m}\). Thus, for example, \(\hat{A}_{\sigma,\tau,\alpha,m}^{\mathrm{ISS},\omega,\mathbf{v}}(\mathcal{D})\in\hat{\mathcal{A}}_{n,m}^{\mathrm{U}}\big{(}\tau,\alpha,\mathcal{P}_{\mathrm{Mon},d}(\sigma)\big{)}\) and \(\hat{A}_{\sigma,\tau,\alpha,m}^{\mathrm{ISS}}(\mathcal{D})\in\hat{\mathcal{A}}_{n,m}^{\mathrm{U}}\big{(}\tau,\alpha,\mathcal{P}_{\mathrm{Mon},d}(\sigma)\big{)}\).

**Proposition 44**.: _Fix \(d\in\mathbb{N}\), \(\alpha\in(0,1/4]\), \(n\in\mathbb{N}\), \(m\in[n]\), \(\tau\in\mathbb{R}\), \(\sigma,\gamma,\lambda>0\) and \(\theta>1\). There exists \(c\in(0,1)\), depending only on \(d\), such that_ \[\sup_{P\in\mathcal{P}^{\prime}}\inf_{\hat{A}\in\hat{\mathcal{A}}_{n,m}^{\mathrm{U}}(\tau,\alpha,\mathcal{P}^{\prime})}\mathbb{E}_{P}\big{\{}\mu\big{(}\mathcal{X}_{\tau}(\eta)\setminus\hat{A}(\mathcal{D})\big{)}\big{\}}\geq c\cdot\frac{1}{m^{1/d}},\] _where \(\mathcal{P}^{\prime}:=\mathcal{P}_{\mathrm{Mon},d}(\sigma)\cap\mathcal{P}_{\mathrm{Reg},d}(\tau,\theta,\gamma,\lambda)\)._

Proof.: For \(q\in\mathbb{N}\), let the antichain \(\mathbb{W}_{q,d}\), hypercubes \(\mathcal{H}_{\mathbf{j}}^{q}\) for \(\mathbf{j}\in\mathbb{W}_{q,d}\) as well as \(P_{S}\), \(\eta_{S}\) for \(S\subseteq\mathbb{W}_{q,d}\) be defined as in Section A.3 and let \(\mu:=\mathrm{Unif}\big{(}[0,1]^{d}\big{)}\). For ease of notation, we write \(P_{*}:=P_{S}\) and \(\eta_{*}:=\eta_{S}\) when \(S=\mathbb{W}_{q,d}\).
Note first that for any \(\hat{A}\in\hat{\mathcal{A}}_{n,m}^{\mathrm{U}}(\tau,\alpha,\mathcal{P}^{\prime})\) we have on \(\{\hat{A}(\mathcal{D})\subseteq\mathcal{X}_{\tau}(\eta_{*})\}\) that \[\hat{S}:=\{\mathbf{j}\in\mathbb{W}_{q,d}:\hat{A}(\mathcal{D})\cap\mathcal{H}_{\mathbf{j}}^{q}\neq\emptyset\}\subseteq\{\mathbf{j}\in\mathbb{W}_{q,d}:\mathcal{D}_{X,m}\cap\mathcal{H}_{\mathbf{j}}^{q}\neq\emptyset\}=:\tilde{S}.\] Now, for any \(q\in\mathbb{N}\) and \(\hat{A}\in\hat{\mathcal{A}}_{n,m}^{\mathrm{U}}(\tau,\alpha,\mathcal{P}^{\prime})\), we have \(\mu\big{(}\mathcal{X}_{\tau}(\eta_{*})\setminus\hat{A}(\mathcal{D})\big{)}\geq|\mathbb{W}_{q,d}\setminus\hat{S}|/q^{d}\) and \(|\mathbb{W}_{q,d}|\geq q^{d-1}/d\), so that \[\mathbb{E}_{P_{*}}\big{\{}\mu\big{(}\mathcal{X}_{\tau}(\eta_{*})\setminus\hat{A}(\mathcal{D})\big{)}\big{\}}\geq\frac{|\mathbb{W}_{q,d}|}{q^{d}}\cdot\mathbb{E}_{P_{*}}\bigg{(}\frac{|\mathbb{W}_{q,d}\setminus\hat{S}|}{|\mathbb{W}_{q,d}|}\bigg{)}\geq\frac{1}{d\cdot q}\cdot\mathbb{E}_{P_{*}}\bigg{(}\frac{|\mathbb{W}_{q,d}\setminus\hat{S}|}{|\mathbb{W}_{q,d}|}\bigg{)}. \tag{18}\] On the other hand, \[\mathbb{E}_{P_{*}}\big{(}|\mathbb{W}_{q,d}\setminus\tilde{S}|\big{)}=\mathbb{E}_{P_{*}}\bigg{(}\sum_{\mathbf{j}\in\mathbb{W}_{q,d}}\mathbbm{1}_{\{\mathcal{H}_{\mathbf{j}}^{q}\cap\mathcal{D}_{X,m}=\emptyset\}}\bigg{)}=|\mathbb{W}_{q,d}|\Big{(}1-\frac{1}{q^{d}}\Big{)}^{m}.\] Hence, when setting \(q=\lceil(2m)^{1/d}\rceil\) and writing \(S_{*}:=\mathbb{W}_{\lceil(2m)^{1/d}\rceil,d}\), we find that \(\mathbb{E}_{P_{*}}\big{(}|S_{*}\setminus\tilde{S}|/|S_{*}|\big{)}\geq\big{(}1-1/(2m)\big{)}^{m}\geq 1/2\), so that \(\mathbb{P}_{P_{*}}\big{(}|S_{*}\setminus\tilde{S}|/|S_{*}|\geq 1/5\big{)}\geq 3/8\). Since \(\mathbb{P}_{P_{*}}\big{(}\hat{A}(\mathcal{D})\subseteq\mathcal{X}_{\tau}(\eta_{*})\big{)}\geq 3/4\) as \(\alpha\in(0,1/4]\), it follows that \[\mathbb{P}_{P_{*}}\bigg{(}\frac{|S_{*}\setminus\hat{S}|}{|S_{*}|}\geq\frac{1}{5}\bigg{)}\geq\mathbb{P}_{P_{*}}\bigg{(}\bigg{\{}\frac{|S_{*}\setminus\tilde{S}|}{|S_{*}|}\geq\frac{1}{5}\bigg{\}}\cap\big{\{}\hat{A}(\mathcal{D})\subseteq\mathcal{X}_{\tau}(\eta_{*})\big{\}}\bigg{)}\geq\mathbb{P}_{P_{*}}\bigg{(}\frac{|S_{*}\setminus\tilde{S}|}{|S_{*}|}\geq\frac{1}{5}\bigg{)}-\mathbb{P}_{P_{*}}\big{(}\hat{A}(\mathcal{D})\nsubseteq\mathcal{X}_{\tau}(\eta_{*})\big{)}\geq\frac{3}{8}-\frac{1}{4}=\frac{1}{8}.\] Combining with (18), and since \(q=\lceil(2m)^{1/d}\rceil\leq 2\cdot 2^{1/d}\cdot m^{1/d}\), we conclude that \[\mathbb{E}_{P_{*}}\big{\{}\mu\big{(}\mathcal{X}_{\tau}(\eta_{*})\setminus\hat{A}(\mathcal{D})\big{)}\big{\}}\geq\frac{1}{d\cdot q}\cdot\frac{1}{5}\cdot\frac{1}{8}\geq\frac{1}{80\cdot 2^{1/d}\cdot d\cdot m^{1/d}},\] as required.

We are now in a position to prove Proposition 37.

Proof of Proposition 37.: Fix \(\delta\in(0,1)\). To begin with, we consider cases arising when either \(n\) or \(m\) is small, before the main part of the proof deals with \(m\) and \(n\) sufficiently large.
First, suppose that \[n<\exp\Bigl{(}\frac{\sigma^{2}}{\lambda^{2}}\cdot 2^{2\gamma-13}\Bigr{)}\lor \biggl{\{}2^{d+1}\Bigl{(}\frac{\lambda^{2}}{\sigma^{2}}\Bigr{)}^{1/(2\gamma+d )}\log\Bigl{(}\frac{4d}{\delta}\Bigr{)}\biggr{\}}^{(2\gamma+d+1)/(2\gamma+d)} \lor 2^{(2\gamma+d+1)/d}.\] Then, by Theorem 17, there exists \(c_{1}^{\prime}(\delta)\equiv c_{1}^{\prime}(\delta,\alpha,d,\sigma,\lambda, \gamma)>0\) such that \[\sup_{P\in\mathcal{P}^{\prime}}\mathbb{E}_{P}\bigl{\{}\mu\bigl{(}\mathcal{X}_{ \tau}(\eta)\setminus\hat{A}_{\sigma,\tau,\alpha,m}^{\mathrm{ISS},\omega, \boldsymbol{v}}(\mathcal{D})\bigr{)}\bigr{\}}\geq c_{1}^{\prime}(\delta).\] Second, if \(m<2^{15}(1\lor\lambda^{2}/\sigma^{2})^{(d-1)/(2\gamma+d)}\cdot n^{d/(2\gamma+ d+1)}\log_{+}^{2}\bigl{(}n/(\alpha\wedge\delta)\bigr{)}\), then we have by the proof of Proposition 44 that there exists \(c_{2}^{\prime}(\delta)\equiv c_{2}^{\prime}(\delta,\alpha,d,\sigma,\lambda, \gamma)>0\) such that \[\sup_{P\in\mathcal{P}^{\prime}}\mathbb{E}_{P}\bigl{\{}\mu\bigl{(}\mathcal{X}_{ \tau}(\eta)\setminus\hat{A}_{\sigma,\tau,\alpha,m}^{\mathrm{ISS},\omega, \boldsymbol{v}}(\mathcal{D})\bigr{)}\bigr{\}}\geq\frac{1}{80\cdot 2^{1/d}dm^{1/d}} \geq\frac{c_{2}^{\prime}(\delta)}{n^{1/(2\gamma+d+1)}(\log_{+}n)^{2/d}}.\] Hence, we may suppose for the remainder of the proof that \[m\geq 2^{15}\Bigl{(}1\lor\frac{\lambda^{2}}{\sigma^{2}}\Bigr{)}^{(d-1)/(2 \gamma+d)}\cdot n^{d/(2\gamma+d+1)}\log_{+}^{2}\Bigl{(}\frac{n}{\alpha\wedge \delta}\Bigr{)}\geq n^{d/(2\gamma+d+1)}\] and \[n\geq\exp\Bigl{(}\frac{\sigma^{2}}{\lambda^{2}}\cdot 2^{2\gamma-13}\Bigr{)} \lor\biggl{\{}2^{d+1}\Bigl{(}\frac{\lambda^{2}}{\sigma^{2}}\Bigr{)}^{1/(2 \gamma+d)}\log\Bigl{(}\frac{4d}{\delta}\Bigr{)}\biggr{\}}^{(2\gamma+d+1)/(2 \gamma+d)}\lor 2^{(2\gamma+d+1)/d}.\] Write \[\rho_{0}:=\frac{1}{\log n}\log\biggl{(}\frac{m}{2^{15}(1\lor\lambda^{2}/ \sigma^{2})^{(d-1)/(2\gamma+d)}\log^{2}\bigl{(}n/(\alpha\wedge\delta)\bigr{)} }\biggr{)}\] and \(\rho:=(1-\rho_{0})\cdot(2\gamma+d)/(2\gamma+1)\). By our assumption on \(m\), we have \(\rho_{0}\geq d/(2\gamma+d+1)\) and hence \(\rho\leq(2\gamma+d)/(2\gamma+d+1)\). 
Moreover, by definition of \(\rho_{0}\) we have that \[q:=\biggl{\lceil}484\Bigl{(}1\lor\frac{\lambda^{2}}{\sigma^{2}}\Bigr{)}^{(d-1)/(2\gamma+d)}\cdot n^{\rho_{0}}\log\Bigl{(}\frac{n}{\alpha\wedge\delta}\Bigr{)}\biggr{\rceil}\leq 2^{9}\Bigl{(}1\lor\frac{\lambda^{2}}{\sigma^{2}}\Bigr{)}^{(d-1)/(2\gamma+d)}\cdot n^{\rho_{0}}\log\Bigl{(}\frac{n}{\alpha\wedge\delta}\Bigr{)}\leq\frac{m}{64\log\bigl{(}n/(\alpha\wedge\delta)\bigr{)}}\leq\biggl{\lfloor}\frac{m}{32\log(m/\delta)}\biggr{\rfloor}.\] Next, let \[s:=\biggl{(}\frac{2\sigma^{2}}{n^{\rho}\lambda^{2}}\log\Bigl{(}\frac{n}{\alpha\wedge\delta}\Bigr{)}\biggr{)}^{1/(2\gamma+d)}\geq\biggl{(}\frac{\sigma^{2}}{n^{\rho}\lambda^{2}}\biggr{)}^{1/(2\gamma+d)}.\] Note also that \[s\leq\bigg{(}\frac{2\sigma^{2}}{\lambda^{2}}\bigg{)}^{1/(2\gamma+d)}n^{-(1-\rho_{0})/(2\gamma+1)}\log^{1/(2\gamma+1)}\bigg{(}\frac{n}{\alpha\wedge\delta}\bigg{)}=\bigg{(}\frac{2\sigma^{2}}{\lambda^{2}}\bigg{)}^{1/(2\gamma+d)}\bigg{\{}\frac{m/n}{2^{15}(1\vee\lambda^{2}/\sigma^{2})^{(d-1)/(2\gamma+d)}\log^{2}\!\big{(}n/(\alpha\wedge\delta)\big{)}}\cdot\log\!\Big{(}\frac{n}{\alpha\wedge\delta}\Big{)}\bigg{\}}^{1/(2\gamma+1)}\leq 2^{1/(2\gamma+d)-15/(2\gamma+1)}\bigg{(}\frac{(\sigma^{2}/\lambda^{2})^{(2\gamma+1)/(2\gamma+d)}}{(\lambda^{2}/\sigma^{2})^{(d-1)/(2\gamma+d)}\log n}\bigg{)}^{1/(2\gamma+1)}\leq 2^{1/(2\gamma+d)-15/(2\gamma+1)}\bigg{(}\frac{\sigma^{2}}{\lambda^{2}\log n}\bigg{)}^{1/(2\gamma+1)}\leq 2^{1/(2\gamma+d)-15/(2\gamma+1)-(2\gamma-13)/(2\gamma+1)}\leq 1/2.\] Writing \(w_{n,m,\delta}:=173.13\big{(}\log_{+}\log n+\log_{+}(m/\delta)\big{)}\leq 173.13\cdot 2\log\!\big{(}n/(\alpha\wedge\delta)\big{)}\), we claim that \[\frac{8}{3n}\log\!\Big{(}\frac{4d}{\delta}\Big{)}\leq\frac{s}{2^{d-1}}\leq\frac{\sigma^{2}}{2n\lambda^{2}s^{2\gamma}}\big{(}0.72q-w_{n,m,\delta}\big{)}. \tag{19}\] To see this, first note for the lower bound that \[ns\geq n\bigg{(}\frac{\sigma^{2}}{n^{\rho}\lambda^{2}}\bigg{)}^{1/(2\gamma+d)}\geq\bigg{(}\frac{\sigma^{2}}{\lambda^{2}}\bigg{)}^{1/(2\gamma+d)}n^{(2\gamma+d)/(2\gamma+d+1)}\geq 2^{d+1}\log\!\Big{(}\frac{4d}{\delta}\Big{)}\geq 2^{d-1}\cdot\frac{8}{3}\log\!\Big{(}\frac{4d}{\delta}\Big{)}.\] As for the upper bound, note first that, using \(n^{\rho_{0}}\geq 2\) and the lower bound on \(s\), we have \[0.72q\geq\bigg{\{}173.13+\bigg{(}\frac{\lambda^{2}}{\sigma^{2}}\bigg{)}^{(d-1)/(2\gamma+d)}n^{\rho_{0}}\bigg{\}}\cdot 2\log\!\Big{(}\frac{n}{\alpha\wedge\delta}\Big{)}\geq w_{n,m,\delta}+\frac{4n^{1-\rho}}{(2s)^{d-1}}\log\!\Big{(}\frac{n}{\alpha\wedge\delta}\Big{)}.\] Hence \[2\log\!\Big{(}\frac{n}{\alpha\wedge\delta}\Big{)}\leq\frac{(2s)^{d-1}}{2n^{1-\rho}}(0.72q-w_{n,m,\delta}).\] This yields that \[\frac{s}{2^{d-1}}=\frac{1}{2^{d-1}s^{2\gamma+d-1}}\cdot\frac{2\sigma^{2}}{n^{\rho}\lambda^{2}}\log\!\Big{(}\frac{n}{\alpha\wedge\delta}\Big{)}\leq\frac{1}{2n}\cdot\frac{\sigma^{2}}{\lambda^{2}s^{2\gamma}}\big{(}0.72q-w_{n,m,\delta}\big{)},\] thus establishing the claim (19). Now fix \(M:=1.7\sigma\sqrt{\log(41.6/\delta)}\). By Lemma 39, we have \(\mathbb{P}_{P_{q,M}}\big{(}\Omega_{1}\big{)}\geq 1-\delta/4\) for \(\Omega_{1}\) defined as in that lemma. For \(\Omega_{2}\) defined as in Lemma 40, it then holds that \(\mathbb{P}_{P_{q,M}}\big{(}\Omega_{2}|\mathcal{D}_{X}\big{)}\geq 1-\delta/4\) on \(\Omega_{1}\) by that same lemma. Let \((B_{j})_{j\in[d]}\) and \(\Omega_{3}\) be as in Lemma 41, so that \(\mathbb{P}_{P_{q,M}}\big{(}\Omega_{3}\big{)}\geq 1-\delta/4\).
Moreover, let \(1\leq i_{1}<\ldots<i_{K}\leq m\) be such that \(\{i_{1},\ldots,i_{K}\}:=\{i\in[m]:X_{i}\in\mathcal{X}_{\tau}(\eta_{q,M})\setminus\mathcal{X}_{\tau+\lambda s^{\gamma}}(\eta_{q,M})\}\) as in Lemma 42. Note that \(\mathcal{X}_{\tau+\lambda s^{\gamma}}(\eta_{q,M})=\{x\in\mathbb{R}^{d}:x\succcurlyeq(0,1/2,0,\ldots,0)^{\top}+s\cdot\mathbf{1}_{d}\}\), so that \(B_{1},\ldots,B_{d}\) define a covering of \(\mathcal{X}_{\tau}(\eta_{q,M})\setminus\mathcal{X}_{\tau+\lambda s^{\gamma}}(\eta_{q,M})\). Hence, for every \(k\in[K]\), there exists \(j_{k}\in[d]\) such that \(X_{i_{k}}\in B_{j_{k}}\) and \[\big{|}\{i\in[n]:X_{i}\in A,X_{i}\preccurlyeq X_{i_{k}}\}\big{|}\leq\big{|}\{i\in[n]:X_{i}\in B_{j_{k}}\}\big{|},\] so that \[\Omega_{3}\subseteq\bigcap_{k\in[K]}\bigg{\{}\big{|}\{i\in[n]:X_{i}\in A,X_{i}\preccurlyeq X_{i_{k}}\}\big{|}\leq\frac{\sigma^{2}}{\lambda^{2}s^{2\gamma}}\big{(}0.72q-w_{n,m,\delta}\big{)}\bigg{\}}=:\Omega_{3}^{*}.\] Moreover, we have by Lemma 42 that the set \(\Omega_{4}\) defined therein satisfies \(\mathbb{P}_{P_{q,M}}\big{(}\Omega_{4}|\mathcal{D}_{X}\big{)}\geq 1-\delta/4\) on \(\Omega_{3}^{*}\). Hence, by Lemma 43, we have on \(\Omega_{1}\cap\Omega_{2}\cap\Omega_{3}\cap\Omega_{4}\) that \(\hat{A}^{\mathrm{ISS},\omega,\boldsymbol{v}}_{\sigma,\tau,\alpha,m}(\mathcal{D})\subseteq\mathcal{X}_{\tau+\lambda s^{\gamma}}(\eta_{q,M})\), so \[\mu_{q}\big{(}\mathcal{X}_{\tau}(\eta_{q,M})\setminus\hat{A}^{\mathrm{ISS},\omega,\boldsymbol{v}}_{\sigma,\tau,\alpha,m}(\mathcal{D})\big{)}=\frac{1}{2^{d}}-\mu_{q}\big{(}\hat{A}^{\mathrm{ISS},\omega,\boldsymbol{v}}_{\sigma,\tau,\alpha,m}(\mathcal{D})\big{)}\geq\frac{1}{2^{d}}-\mu_{q}\big{(}\mathcal{X}_{\tau+\lambda s^{\gamma}}(\eta_{q,M})\big{)}=\frac{1-(1-2s)^{d}}{2^{d}}\geq\frac{s}{2^{d-1}},\] where the final inequality uses the fact that \(s\leq 1/2\). It follows that \[\mathbb{P}_{P_{q,M}}\bigg{\{}\mu_{q}\big{(}\mathcal{X}_{\tau}(\eta)\setminus\hat{A}^{\mathrm{ISS},\omega,\boldsymbol{v}}_{\sigma,\tau,\alpha,m}(\mathcal{D})\big{)}\geq\frac{s}{2^{d-1}}\bigg{\}}\geq\mathbb{P}_{P_{q,M}}\big{(}\Omega_{1}\cap\Omega_{2}\cap\Omega_{3}\cap\Omega_{4}\big{)}\geq 1-\delta.\] Setting \(\delta=1/2\), we see that there exists \(c_{3}>0\), depending only on \(d\), \(\sigma\), \(\lambda\) and \(\gamma\), such that \[\mathbb{E}_{P_{q,M}}\big{\{}\mu_{q}\big{(}\mathcal{X}_{\tau}(\eta)\setminus\hat{A}^{\mathrm{ISS},\omega,\boldsymbol{v}}_{\sigma,\tau,\alpha,m}(\mathcal{D})\big{)}\big{\}}\geq\frac{s}{2^{d}}\geq\frac{c_{3}}{n^{1/(2\gamma+d+1)}},\] so that the result follows for \(c:=c_{1}^{\prime}(1/2)\wedge c_{2}^{\prime}(1/2)\wedge c_{3}\).

## Appendix D Auxiliary Results

**Lemma 45** (Howard et al., 2021).: _Let \((Z_{j})_{j\in\mathbb{N}}\) be a sequence of independent, sub-Gaussian random variables with variance parameter 1._

_(a)_ _For any_ \(\alpha\in(0,1)\)_,_ \[\mathbb{P}\bigg{(}\bigcup_{k=1}^{\infty}\biggl{\{}\sum_{j=1}^{k}Z_{j}\geq u_{\alpha}(k)\biggr{\}}\bigg{)}\leq\alpha,\] _where_ \(u_{\alpha}(k):=1.7\sqrt{k\bigl{\{}\log\log(2k)+0.72\log(5.2/\alpha)\bigr{\}}}\)_._

_(b)_ _For any_ \(\alpha\in(0,1)\) _and_ \(\rho>0\)_,_ \[\mathbb{P}\bigg{(}\bigcup_{k=1}^{\infty}\biggl{\{}\sum_{j=1}^{k}Z_{j}\geq u_{\alpha,\rho}^{\mathrm{NM}}(k)\biggr{\}}\bigg{)}\leq\alpha,\] _where_ \(u_{\alpha,\rho}^{\mathrm{NM}}(k):=\sqrt{2(k+\rho)\log\Bigl{(}\frac{1}{2\alpha}\sqrt{\frac{k+\rho}{\rho}}+1\Bigr{)}}\)_._

The following simple lemma on testing Gaussian distributions is used in the proof of one of our minimax lower bounds (Proposition 31).
**Lemma 46**.: _Fix \(\alpha\in(0,1/4]\), \(\sigma>0\), \(n\in\mathbb{N}\) and \(\Delta\in\big{(}0,\frac{\sigma}{\sqrt{1.6n}}\log^{1/2}\bigl{(}\frac{1}{2\alpha }\bigr{)}\big{]}\). Let \(P_{0}=\mathcal{N}(0,\sigma^{2})\) and \(P_{1}=\mathcal{N}(\Delta,\sigma^{2})\), and let \(Z_{1},\ldots,Z_{n}\stackrel{{\mathrm{iid}}}{{\sim}}P\) for some \(P\in\{P_{0},P_{1}\}\). If \(\psi:\mathbb{R}^{n}\to\{0,1\}\) is a Borel measurable function satisfying \(\mathbb{P}_{P_{0}}\big{(}\psi(Z_{1},\ldots,Z_{n})=1\big{)}\leq\alpha\), then \(\mathbb{P}_{P_{1}}\big{(}\psi(Z_{1},\ldots,Z_{n})=1\big{)}\leq 1/2\)._ Proof.: Write \(\Phi\) and \(\phi\) for the standard normal distribution and density function respectively. By Gordon (1941), \[1-\Phi(z)>\frac{\phi(z)}{z+1/z}\geq\frac{\phi(z)}{3.2z}\] for all \(z\geq\sqrt{5/11}\). Hence for all \(z\geq\sqrt{5/11}\), we have \(G(z):=\big{(}1-\Phi(z)\big{)}e^{1.6z^{2}}>0\) and \[G^{\prime}(z)=3.2ze^{1.6z^{2}}\Big{(}1-\Phi(z)-\frac{\phi(z)}{3.2z}\Big{)}>0,\] so that \(1-\Phi(z)\geq G(\sqrt{5/11})\exp(-1.6z^{2})\). Since \(G(\sqrt{5/11})\geq 1/2\), it follows with \(z_{\alpha}:=1.6^{-1/2}\log^{1/2}\!\left(\frac{1}{2\alpha}\right)\geq\sqrt{5/11}\) that \(1-\Phi(z_{\alpha})\geq\alpha\). By the Neyman-Pearson lemma, for any Borel measurable function \(\psi:\mathbb{R}^{n}\to\{0,1\}\) satisfying \(\mathbb{P}_{P_{0}}\big{(}\psi(Z_{1},\ldots,Z_{n})=1\big{)}\leq\alpha\), we deduce that \[\mathbb{P}_{P_{1}}\big{(}\psi(Z_{1},\ldots,Z_{n})=1\big{)} \leq\mathbb{P}_{P_{1}}\big{(}n^{1/2}\bar{Z}>\sigma\Phi^{-1}(1- \alpha)\big{)}=1-\Phi\bigg{(}\Phi^{-1}(1-\alpha)-\frac{n^{1/2}\Delta}{\sigma} \bigg{)}\] \[\leq 1-\Phi\Big{(}\Phi^{-1}(1-\alpha)-z_{\alpha}\Big{)}\leq\frac{ 1}{2},\] as required. **Corollary 47**.: _Suppose that \(\alpha\in(0,2/3]\), \(\sigma>0\), \(t\in\mathbb{R}\), \(n\in\mathbb{N}\), \(p\in[8/n,1]\) and \(\Delta\in\big{(}0,\frac{\sigma}{\sqrt{3.2np}}\log_{+}^{1/2}\!\left(\frac{1}{5 \alpha}\right)\big{]}\), and let \(S\subseteq[0,1]^{d}\) be a Borel set. Let \(P_{0}\) and \(P_{1}\) denote Borel probability distributions over random pairs \((X,Y)\) taking values in \([0,1]^{d}\times\mathbb{R}\). For \(\omega\in\{0,1\}\), let \(P_{X}^{\omega}\) denote the corresponding marginal distribution over \(X\) and for \(x\in[0,1]^{d}\) let \(P_{Y|X=x}^{\omega}\) denote the corresponding conditional distribution of \(Y\) given \(X=x\). Assume that \(P_{X}^{0}=P_{X}^{1}\) and \(p=P_{X}^{0}(S)=P_{X}^{1}(S)\), and that for all \(x\in[0,1]^{d}\setminus S\) we have \(P_{Y|X=x}^{0}=P_{Y|X=x}^{1}\). Suppose further that \(P_{Y|X=x}^{\omega}=\mathcal{N}(t+\omega\cdot\Delta,\sigma^{2})\) for \(\omega\in\{0,1\}\) and \(x\in S\). Let \(\mathcal{D}=\big{(}(X_{1},Y_{1}),\ldots,(X_{n},Y_{n})\big{)}\sim P^{n}\) for some \(P\in\{P_{0},P_{1}\}\). 
If \(\psi:([0,1]^{d}\times\mathbb{R})^{n}\to\{0,1\}\) is a Borel measurable function satisfying \(\mathbb{P}_{P_{0}}\big{(}\psi(\mathcal{D})=1\big{)}\leq\alpha\), then \(\mathbb{P}_{P_{1}}\big{(}\psi(\mathcal{D})=0\big{)}\geq 1/20\)._ Proof.: First suppose that \(\alpha\in(0,1/10]\) and define Borel subsets \(A_{0}\), \(A_{1}\subseteq([0,1]^{d})^{n}\) by \[A_{0} :=\bigg{\{}(x_{i})_{i\in[n]}\in([0,1]^{d})^{n}:\sum_{i=1}^{n} \mathbb{1}_{\{x_{i}\in S\}}\leq 2np\bigg{\}},\] \[A_{1} :=\bigg{\{}(x_{i})_{i\in[n]}\in([0,1]^{d})^{n}:\mathbb{P}_{P_{0}} \big{\{}\psi(\mathcal{D})=1\big{|}\ (X_{i})_{i\in[n]}=(x_{i})_{i\in[n]}\big{\}}\leq\frac{5\alpha}{2}\bigg{\}}.\] Now, for \((x_{i})_{i\in[n]}\in([0,1]^{d})^{n}\), the Radon-Nikodym derivative of the conditional distribution of \(\mathcal{D}\sim P_{1}^{n}\) given \((X_{i})_{i\in[n]}=(x_{i})_{i\in[n]}\) with respect to the corresponding conditional distribution for \(\mathcal{D}\sim P_{0}^{n}\) is equal to the product of \(\sum_{i=1}^{n}\mathbb{1}_{\{x_{i}\in S\}}\) likelihood ratios between two univariate Gaussians with common variance \(\sigma^{2}\) and mean differing by \(\Delta\). Hence, for \((x_{i})_{i\in[n]}\in A_{0}\cap A_{1}\), an application of Lemma 46 with \(\sum_{i=1}^{n}\mathbb{1}_{\{x_{i}\in S\}}\leq 2np\) in place of \(n\) and \(5\alpha/2\) in place of \(\alpha\) yields \[\mathbb{P}_{P_{1}}\big{\{}\psi(\mathcal{D})=1|(X_{i})_{i\in[n]}=(x_{i})_{i\in[ n]}\big{\}}\leq\frac{1}{2}. \tag{20}\] Moreover, by the multiplicative Chernoff bound (McDiarmid, 1998, Theorem 2.3(b)), we have \[\mathbb{P}_{P_{1}}\big{\{}(X_{i})_{i\in[n]}\notin A_{0}\big{\}}\leq e^{-3np/8} \leq\frac{1}{20}.\] Moreover, by Markov's inequality we have \(\mathbb{P}_{P_{1}}\{(X_{i})_{i\in[n]}\notin A_{1}\}\leq 2/5\). Combining with (20) yields \(\mathbb{P}_{P_{1}}\big{(}\psi(\mathcal{D})=1\big{)}\leq 19/20\), as required. Now suppose that \(\alpha\in(1/10,2/3]\). By Pinsker's inequality we have \[\mathrm{TV}\left(P_{0}^{n},P_{1}^{n}\right)\leq\sqrt{\frac{n}{2}\cdot\mathrm{ KL}(P_{0},P_{1})}=\sqrt{\frac{np}{2}\cdot\mathrm{KL}\big{(}\mathcal{N}(t,\sigma^{2}), \mathcal{N}(t+\Delta,\sigma^{2})\big{)}}\leq\frac{\sqrt{np}\cdot\Delta}{2 \sigma}\leq\frac{1}{2\sqrt{3.2}}.\] Hence, for any Borel measurable function satisfying \(\mathbb{P}_{P_{0}}\big{(}\psi(\mathcal{D})=1\big{)}\leq\alpha\leq 2/3\), we have \[\mathbb{P}_{P_{1}}\big{(}\psi(\mathcal{D})=0\big{)}\geq\mathbb{P}_{P_{0}}\big{(} \psi(\mathcal{D})=0\big{)}-\mathrm{TV}\left(P_{0}^{n},P_{1}^{n}\right)\geq 1- \alpha-\frac{1}{2\sqrt{3.2}}\geq\frac{1}{20},\] as required. **Lemma 48**.: 1. _Let_ \(P,Q\) _denote probability measures on a measurable space_ \((\mathcal{X},\mathcal{A})\)_. Then_ \[\mathrm{TV}(P,Q)=\inf_{(X,Y)\sim(P,Q)}\mathbb{P}(X\neq Y),\] _where the infimum is taken over all pairs of random variables_ \(X\sim P\) _and_ \(Y\sim Q\) _defined on the same probability space, and where the infimum is attained._ 2. _For_ \(i\in[n]\)_, let_ \(P_{i},Q_{i}\) _denote probability measures on a measurable space_ \((\mathcal{X}_{i},\mathcal{A}_{i})\)_, and let_ \(P:=\times_{i=1}^{n}P_{i}\) _and_ \(Q:=\times_{i=1}^{n}Q_{i}\) _denote the corresponding product measures. Then_ \[\mathrm{TV}(P,Q)\leq\sum_{i=1}^{n}\mathrm{TV}(P_{i},Q_{i}).\] Proof.: _(a)_ Let \(X\sim P\) and \(Y\sim Q\) be defined on the same probability space. 
Then for any \(A\in\mathcal{A}\), \[P(A)-Q(A)=\mathbb{P}(X\in A)-\mathbb{P}(Y\in A)\leq\mathbb{P}(X\in A,Y\notin A )\leq\mathbb{P}(X\neq Y).\] Similarly, \(Q(A)-P(A)\leq\mathbb{P}(X\neq Y)\), so since \(A\in\mathcal{A}\) was arbitrary, we see that \(\mathrm{TV}(P,Q)\leq\mathbb{P}(X\neq Y)\). This bound holds for all couplings of \(X\sim P\) and \(Y\sim Q\), so \[\mathrm{TV}(P,Q)\leq\inf_{(X,Y)\sim(P,Q)}\mathbb{P}(X\neq Y).\] To see that this bound is in fact an equality, and that the infimum is attained, let \(p,q\) denote the respective densities of \(P\) and \(Q\) with respect to \(P+Q\). We construct a coupling of \(X\sim P\) and \(Y\sim Q\) as follows: with probability \(1-\mathrm{TV}(P,Q)\), sample \(X=Y\) from a distribution having density \((p\wedge q)/\big{(}1-\mathrm{TV}(P,Q)\big{)}\) with respect to \(P+Q\), and otherwise sample \(X\) and \(Y\) independently from distributions having respective densities \((p-q)\mathbb{1}_{\{p>q\}}/\mathrm{TV}(P,Q)\) and \((q-p)\mathbb{1}_{\{q>p\}}/\mathrm{TV}(P,Q)\) with respect to \(P+Q\). The fact that the given expressions are indeed densities with respect to \(P+Q\) follows because \(\operatorname{TV}(P,Q)=\frac{1}{2}\int_{\mathcal{X}}|p-q|\,d(P+Q)\). We then have for any \(A\in\mathcal{A}\) that \[\mathbb{P}(X\in A)=\int_{A}(p\wedge q)\,d(P+Q)+\int_{A}(p-q)\mathbb{1}_{\{p>q \}}\,d(P+Q)=\int_{A}p\,d(P+Q)=P(A),\] and similarly \(\mathbb{P}(Y\in A)=Q(A)\). Thus \(X\sim P\) and \(Y\sim Q\), and since \(\mathbb{P}(X\neq Y)\leq\operatorname{TV}(P,Q)\), the result follows. _(b)_ By _(a)_, there exist independent pairs \((X_{1},Y_{1}),\ldots,(X_{n},Y_{n})\) with \(X_{i}\sim P_{i}\), \(Y_{i}\sim Q_{i}\) and \(\mathbb{P}(X_{i}\neq Y_{i})=\operatorname{TV}(P_{i},Q_{i})\) for \(i\in[n]\). Then \(X:=(X_{1},\ldots,X_{n})\sim P\) and \(Y:=(Y_{1},\ldots,Y_{n})\sim Q\), and by a union bound, \[\operatorname{TV}(P,Q)\leq\mathbb{P}(X\neq Y)=\mathbb{P}\left(\bigcup_{i=1}^{ n}\{X_{i}\neq Y_{i}\}\right)\leq\sum_{i=1}^{n}\mathbb{P}(X_{i}\neq Y_{i})= \sum_{i=1}^{n}\operatorname{TV}(P_{i},Q_{i}).\] **Lemma 49**.: _The following inequalities hold:_ 1. \(\log_{+}(xy)\leq y\cdot\log_{+}x\) _for_ \(x>0\) _and_ \(y\geq 1\)_._ 2. \(\log_{+}(x^{a})\leq a\cdot\log_{+}x\) _for_ \(x>0\) _and_ \(a\geq 1\)_._ 3. \(\log(xy)\leq y\cdot\log x\) _for_ \(x\geq 2\) _and_ \(y\geq 2\)_._ 4. \(\log(xy)\leq y\cdot\log x\) _for_ \(x\geq e\)_,_ \(y\geq 1\)_._ 5. \(\log_{+}(xy)\leq\log_{+}x+\log_{+}y\) _for_ \(x,y>0\)_._ Proof.: _(i)_ Suppose first that \(x\in(0,e)\). Then, \(y\log_{+}x=y\geq 1+\log y=\log_{+}(ey)\geq\log_{+}(xy)\). If on the other hand \(x\geq e\), then \(xy\geq e\) and the result is a consequence of _(iv)_ below. _(ii)_ We have \(\log_{+}(x^{a})=\log(x^{a}\lor e)\leq\log(x^{a}\lor e^{a})=a\log_{+}x\). _(iii)_ As \(z\mapsto(z-1)/\log z\) is increasing for \(z>1\), we have \((y-1)/\log y\geq 1/\log 2\). Thus \(\log(xy)\leq\log x+(y-1)\log 2\leq y\log x\). _(iv)_ Since \(\log x\geq 1\), we have \(\log(xy)\leq\log x+(y-1)\leq y\log x\). _(v)_ We have \(\log_{+}(xy)=\log(xy\lor e)\leq\log\bigl{(}(x\lor e)(y\lor e)\bigr{)}=\log_{+} x+\log_{+}y\). **Acknowledgements:** The research of TIC was supported by Engineering and Physical Sciences Research Council (EPSRC) New Investigator Award EP/V002694/1. The research of RJS was supported by EPSRC Programme grant EP/N031938/1 and ERC Advanced Grant 101019498.
2305.14499
NAIL: Lexical Retrieval Indices with Efficient Non-Autoregressive Decoders
Neural document rerankers are extremely effective in terms of accuracy. However, the best models require dedicated hardware for serving, which is costly and often not feasible. To avoid this serving-time requirement, we present a method of capturing up to 86% of the gains of a Transformer cross-attention model with a lexicalized scoring function that only requires 10^{-6}% of the Transformer's FLOPs per document and can be served using commodity CPUs. When combined with a BM25 retriever, this approach matches the quality of a state-of-the-art dual-encoder retriever, which still requires an accelerator for query encoding. We introduce NAIL (Non-Autoregressive Indexing with Language models) as a model architecture that is compatible with recent encoder-decoder and decoder-only large language models, such as T5, GPT-3 and PaLM. This model architecture can leverage existing pre-trained checkpoints and can be fine-tuned for efficiently constructing document representations that do not require neural processing of queries.
Livio Baldini Soares, Daniel Gillick, Jeremy R. Cole, Tom Kwiatkowski
2023-05-23T20:09:52Z
http://arxiv.org/abs/2305.14499v2
# NAIL: Lexical Retrieval Indices with Efficient Non-Autoregressive Decoders

###### Abstract.

Neural document rerankers are extremely effective in terms of accuracy. However, the best models require dedicated hardware for serving, which is costly and often not feasible. To avoid this serving-time requirement, we present a method of capturing up to 86% of the gains of a Transformer cross-attention model with a lexicalized scoring function that only requires \(10^{-6}\%\) of the Transformer's FLOPs per document and can be served using commodity CPUs. When combined with a BM25 retriever, this approach matches the quality of a state-of-the-art dual-encoder retriever, which still requires an accelerator for query encoding. We introduce nail (**N**on-**A**utoregressive **I**ndexing with **L**anguage models) as a model architecture that is compatible with recent encoder-decoder and decoder-only large language models, such as T5, GPT-3 and PaLM. This model architecture can leverage existing pre-trained checkpoints and can be fine-tuned for efficiently constructing document representations that do not require neural processing of queries.

## 1. Introduction

We attempt to answer the following question: to what extent can the computationally-intensive inference in modern neural retrieval systems be pushed entirely to indexing time? Neural networks have revolutionized information retrieval, both with powerful reranking models that cross-attend to query and document, and with dual-encoder models that map queries and documents to a shared vector space, leveraging approximate nearest neighbor search for top-k retrieval. The strongest systems typically use a dual-encoder for retrieval followed by a cross-attention reranker to improve the ordering. However, both of these components tend to be built on increasingly large Transformers (Sohn et al., 2015; Kwiatkowski et al., 2016; Kwiatkowski et al., 2017; Kwiatkowski et al., 2018) and thus rely on dedicated accelerators to process queries quickly at serving time. In many application settings, this may be impractical or costly, and as we will show, potentially unnecessary. In particular, we explore a retrieval paradigm where documents are indexed by predicted query token scores. As a result, scoring a query-document pair \((q,d)\) simply involves looking up the scores for the tokens in \(q\) associated with \(d\) in the index. While the scores are predicted by a neural network, the lookup itself involves no neural network inference, so it can be far faster than other approaches. However, this naturally means that there can be no cross-attention between a specific query and document, or even a globally learned semantic vector space. Given these shortcomings, it would seem surprising that such a model, which offloads all neural network computation to indexing time, could be a practical alternative to its more expensive neural counterparts. In addition, while we want to make use of large pre-trained language models, which have been shown to generalize well over a number of language and retrieval tasks (Bahdan et al., 2015; Kwiatkowski et al., 2016; Kwiatkowski et al., 2017; Kwiatkowski et al., 2018; Kwiatkowski et al., 2018), a key challenge is that they have universally adopted a sequence-to-sequence architecture which is not obviously compatible with precomputing query scores. Naive approaches are either computationally infeasible (scoring all possible queries) or rely on sampling a small, incomplete set of samples (such as in Lewis et al. (Lewis et al., 2018)).
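To make the lookup-based scoring paradigm concrete, here is a minimal illustration (the vocabulary, documents, and scores are invented for the example) of how a precomputed token-score index serves a query with no query-time neural inference:

```python
# Illustrative only: score(q, d) is a lookup-and-sum over the query's token
# ids in a score vector precomputed offline (by a neural model) for each doc.
import numpy as np

vocab = {"who": 0, "wrote": 1, "hotel": 2, "california": 3, "recipe": 4}

def featurize_query(query):                     # cheap, CPU-only featurization
    return [vocab[t] for t in query.lower().split() if t in vocab]

# Precomputed at indexing time; here filled with made-up numbers.
index = {
    "doc_eagles": np.array([1.9, 2.1, 2.4, 2.6, 0.1]),
    "doc_cooking": np.array([0.2, 0.1, 0.4, 0.0, 2.8]),
}

def score(query, doc_id):
    return index[doc_id][featurize_query(query)].sum()

print(score("who wrote hotel california", "doc_eagles"))   # high
print(score("who wrote hotel california", "doc_cooking"))  # low
```

The open challenge, taken up next, is how to produce those per-document score vectors with a sequence-to-sequence language model without enumerating queries.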
To overcome this challenge, we introduce a novel use of non-autoregressive decoder architecture that is compatible with existing Transformer-based language models (whether Encoder-Decoder or Decoder-only (Kwiatkowski et al., 2016)). It allows the model, in a single decode step, to score all vocabulary items in parallel. This makes document indexing with our model approximately as expensive as indexing with document encoders used in recent dual-encoder retrieval systems (Kwiatkowski et al., 2016; Kwiatkowski et al., 2018; Kwiatkowski et al., 2018). We call the retrieval system based on this proposed model nail (Non-Autoregressive **I**ndexing with **L**anguage models). We summarize our contributions as follows: 1. We advance prior work on learned sparse retrieval by leveraging pretrained encoder-decoder LMs with a novel non-autoregressive decoder. 2. We describe a range of experiments using the BEIR benchmark (Kwiatkowski et al., 2018) that explore the performance and efficiency of our model as a reranker and as a retriever compared with a variety of existing systems. As a reranker, nail can recover 86% of the performance of a large cross-attention reranker (Kwiatkowski et al., 2018), while requiring \(10^{-6}\%\) of the inference-time FLOPS per query. As a retriever, nail has an extremely high upper bound for recall--exceeding the performance of all other retrievers in the zero-shot setting. Finally, by using BM25 as a retriever and nail as a reranker, we can match state-of-the-art dual-encoders (Kwiatkowski et al., 2018; Kwiatkowski et al., 2018) with \(10^{-4}\%\) of the inference-time FLOPS. 3. We propose our model as a preferred solution when significant compute is available at indexing time, but not on-demand at serving time, and we provide a cost analysis that illustrates when our approach could be preferred to previous work that harnesses LLMs. ## 2. Related Work There has been much work in information retrieval leveraging neural networks in recent years, which we cannot adequately cover in this paper. For a comprehensive overview, we refer the reader to the survey by Hambarde and Proenca (Hambarde and Proenca, 2018). In this section we describe methods that minimize the use of expensive neural methods at query inference time which are typically methods of _sparse retrieval_, focusing on those that leverage large language models. ### LM-based Term Weighting Bag-of-words models, such as TF-IDF and BM25 (Wang et al., 2017), use term weighting based on corpus statistics to determine relevance of document terms to query terms. Our work can be seen as a way to construct document term weights that are both (1) unconditional with respect to the query, and (2) indexed using lexicalized features (specifically, we use a vector of token scores). As a result, this type of document representation can be precomputed (at indexing time) and does not require expensive computation at query-time. Prior work on leveraging language models to produce such lexicalized term weighting can be roughly divided into two groups: those with just document-side encoders, and those with query-side and document-side encoders. Examples of the first group include DeepCT (Chen et al., 2016), DeepTR (Wang et al., 2017), and DeepImpact (Wang et al., 2018), Tilde v2 (Wang et al., 2018), and Splade-doc (Chen et al., 2017). These systems are examples of the model paradigm we are exploring, in which all neural network computation happens at indexing time. 
Our work can be seen as an attempt to update these systems (which use word2vec embeddings or encoder-only language models) to modern encoder-decoder architectures. Splade-doc is the most recent (and most performant) of these, so it is in many cases the most useful point of comparison for our work. We include results for the best version of Splade-doc (Chen et al., 2018). Examples of the second group include SPARTA (Wang et al., 2018), ColBERT (Chen et al., 2018), ColBERT v2 (Wang et al., 2018), COIL (Chen et al., 2018), Splade (Chen et al., 2018), and Splade v2 (Chen et al., 2017). These sparse dual-encoders have proven themselves competitive with dense dual-encoders, and have some advantages like improved interpretability. We demonstrate comparable performance without the need for any query-side encoder.

### LM-based Document Expansion

Another way to improve retrieval indices with the help of language models is to perform document expansion. This consists of augmenting a document with terms that do not occur in its original text but are likely to be useful for retrieval. When used in combination with a lexicalized retrieval index, document expansion can be implemented without additional query-time computational requirements. Recent examples of LM-based document expansion systems include Doc2Query (Wang et al., 2018) and Doc2Query-T5 (Wang et al., 2018). Other forms of document expansion include the _Probably Asked Questions_ database (Zhu et al., 2019) which, via an expensive offline system, uses a generative language model to produce lists of questions for every document in the corpus. We agree with Lin and Ma that document expansion typically improves the quality of retrieval systems, irrespective of the representation used (Lin and Ma, 2019). Our approach, however, makes no assumptions about which terms should be used to index a document, allowing the model to score all tokens in the vocabulary.

### Non-autoregressive decoders

Non-autoregressive sequence-to-sequence models have been previously proposed and studied, particularly in the context of machine translation (Lin and Ma, 2019; Chen et al., 2019; Wang et al., 2019), motivated by the computational complexity of standard auto-regressive decoding, which requires a decode step per generated token. Non-autoregressive decoding breaks the inter-step dependency and thus provides two computational benefits: (1) a single step through the decoder can produce outputs for more than one position, and (2) computation can be easily parallelized, since there are no time-wise dependencies between computations. While these systems use non-autoregressive decoding to perform iterative generation of text, we know of no existing work that uses non-autoregressive decoding to produce document representations or for retrieval purposes.

## 3. Nail Model

A major goal of this work is to investigate retrieval methods that forego neural computation and the need for specialized accelerator hardware _at query time_. As such, we focus on a method that uses a large neural model to precompute the required representations of the retrieval items (documents) ahead of time. Then, at retrieval time, the method performs only basic featurization (e.g., tokenization) of the queries. Specifically, we investigate query-document scoring functions that score the compatibility of a query-document pair with the inner-product of separate featurizations of the query \(\phi_{q}(q)\) and document \(\phi_{d}(d)\).
\[\text{score}(q,d)=\langle\phi_{q}(q),\phi_{d}(d)\rangle \tag{1}\]

This form is familiar from both traditional lexicalized retrieval and more recent work on dense retrieval. In lexicalized retrieval (e.g., TF-IDF and BM25) (Wang et al., 2017; Wang et al., 2017), \(\phi_{q}\) and \(\phi_{d}\) assign non-zero scores to sub-strings of \(q\) and \(d\). On the other hand, in dense retrieval (Lin and Ma, 2019; Chen et al., 2017; Wang et al., 2017), \(\phi_{q}\) and \(\phi_{d}\) are neural networks that map \(q\) and \(d\) to dense vectors. Note that this formulation does not allow for deeper interactions between \(d\) and \(q\), such as those in typical cross-encoder scorers, as these cannot be computed efficiently, and without an accelerator, at query time. We investigate an instantiation of Equation 1 that differs from both traditional lexicalized retrieval and dense retrieval. In this formulation, \(\phi_{d}\) can be an arbitrarily complex neural network, but \(\phi_{q}\) must be a sparse featurization that can be quickly computed on commodity CPUs. This way, it is possible to push all costly neural network inference to indexing time, and avoid the need for accelerators at serving time. For this paper, we choose \(\phi_{q}\) to be a simple tokenizer, but we believe that our results could also extend to more complex sparse featurizations.

\begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline System & \multicolumn{3}{c}{Cross-Attention Enc.} & \multicolumn{3}{c}{Query/Dual Enc.} & \multicolumn{3}{c}{Lexical} \\ \cline{2-10} & MonoT5-3B & MiniLM-L6 & TinyBERT-L6 & GTR-XXL & Contriever & Splade-v2 & BERT-tiny & Splade-doc & NAIL \\ \hline FLOPS & \(10^{11}\) & \(10^{10}\) & \(10^{10}\) & \(10^{11}\) & \(10^{9}\) & \(10^{9}\) & \(10^{8}\) & \(10^{2}\) & \(10^{2}\) \\ \hline \hline \end{tabular} \end{table} Table 1. Estimated FLOPS required to score a (_query_, _document_) pair (Chen et al., 2016). Note that for dual-encoder and lexical systems, document representations are precomputed. _query_ is assumed to be of length 16 tokens, and _document_ is assumed to be of length 128 tokens. Note that the standard versions of Splade-v2 and Contriever are based on BERT-base.

### Independent prediction of query tokens

Given the choice of \(\phi_{q}\) described above, we need to learn a function \(\phi_{d}\) that can assign high scores to tokens that are likely to occur in a query associated with the input document and low scores to tokens that are unlikely to appear in such a query. This goal differs from related work on query prediction for document expansion (Zhu et al., 2017; Chen et al., 2017), where only a few likely query terms are added to the set of document terms. Instead of aiming to predict a small number of queries that are related to \(d\), we aim to predict a featurization of \(d\) that can be used to score _any_ query. Given that an important motivation of this work is to make use of large pretrained language models, we must also investigate how best to adapt the sequence-to-sequence generative architecture that most such models have adopted (Chen et al., 2017; Chen et al., 2017; Chen et al., 2017; Chen et al., 2017). In particular, Transformer-based language models adopt an autoregressive decoding strategy, where the model predicts a single token position at a time, conditioned on the output of previous predictions. A naive decoding strategy, of decoding every possible target query ahead of time, is not computationally feasible, requiring \(32\text{k}^{16}\approx 10^{72}\) decode steps.
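As a quick back-of-the-envelope check of that count (assuming a vocabulary of roughly 32,000 tokens and a query length of 16 tokens):

```python
# Number of distinct length-16 token sequences over a 32k vocabulary,
# in base-10 orders of magnitude.
import math
vocab_size, query_len = 32_000, 16
print(query_len * math.log10(vocab_size))  # ~72.1, i.e. about 10^72 sequences
```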
How do we generate document representations, using a sequence-to-sequence architecture, in a computationally efficient way? To do this, while also making use of pre-trained Transformer language models, we modify the decoder stack to support independent predictions of the output tokens (also known in the literature as _non-autoregressive decoding_). In principle, a single decoder position suffices to predict scores for each token. However, we found that the model was able to represent a more diverse and better-performing distribution of query tokens when it could distribute their predictions over multiple output positions.

### Contrastive training

Similar to previous work that has trained dual encoders for retrieval, we utilize negative training examples in order to do contrastive learning. In particular, we assume training data of the form \(\mathcal{D}=\{(q_{0},d_{0}^{+},\mathbf{d}_{0}^{-}),\ldots,(q_{n},d_{n}^{+},\mathbf{d}_{n}^{-})\}\) made up of triples that associate a query \(q_{i}\) with a positive passage \(d_{i}^{+}\) and a set of \(k\) negative passages \(\mathbf{d}_{i}^{-}=\{d_{i,1}^{-},\ldots,d_{i,k}^{-}\}\).

## 4. Experimental Setup

We evaluate on the BEIR benchmark (Kwiatkowski et al., 2018), which is designed to test retrieval systems in the zero-shot setting. While neural models have made huge gains over BM25 on _in-domain_ data, BEIR shows that a variety of neural retrievers underperform relative to BM25 on _out-of-domain_ data. BEIR results are typically presented as two separate tasks, where most systems are only evaluated on either the _reranking_ variant or the _full retrieval_ variant. In the full retrieval variant, systems must retrieve over the provided corpus of document passages, which range from a few thousand to a few million, and they are generally evaluated based both on their recall@100 and their nDCG@10 (Liu et al., 2011), providing a view into their ability to retrieve the gold passages into the top 100 and the ordering of the top ten passages, respectively. In the reranking variant, models do not have to do retrieval, and the recall@100 is fixed to the performance of an off-the-shelf BM25 system, so only nDCG@10 is reported.

## 5. Experimental Evaluation

In this section, we compare the proposed nail system to other systems that have published results on BEIR. To compare with some sparse systems that have not been evaluated on BEIR datasets, we also make use of the MS-MARCO passage ranking task.
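Before turning to the comparisons, the following schematic sketch (our illustration, not the authors' released implementation) makes the model machinery of Section 3 concrete: a non-autoregressive decoder head that scores every vocabulary token in one parallel step across a few output positions, and a softmax contrastive loss over one positive and \(k\) negative passages. The max-pooling across positions and the exact loss form are assumptions here, chosen as standard instantiations of the ideas described above.

```python
# Schematic sketch of non-autoregressive vocabulary scoring plus a standard
# contrastive objective; shapes and pooling are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonAutoregressiveScorer(nn.Module):
    def __init__(self, vocab_size=32000, d_model=256, num_positions=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2)
        # Learned "slots": output positions decoded in parallel, with no
        # left-to-right dependency between them.
        self.slots = nn.Parameter(torch.randn(num_positions, d_model))
        self.to_vocab = nn.Linear(d_model, vocab_size)

    def forward(self, doc_token_ids):                  # (batch, doc_len)
        mem = self.encoder(self.embed(doc_token_ids))  # (batch, doc_len, d)
        # One parallel cross-attention step from the slots to the document.
        attn = torch.softmax(self.slots @ mem.transpose(1, 2), dim=-1)
        slot_states = attn @ mem                       # (batch, positions, d)
        # Per-position vocabulary logits, max-pooled over positions so each
        # token keeps its best-scoring position (assumed pooling choice).
        return self.to_vocab(slot_states).max(dim=1).values  # (batch, vocab)

def contrastive_loss(query_ids, pos_scores, neg_scores):
    """query_ids: (qlen,) ids; pos_scores: (vocab,); neg_scores: (k, vocab)."""
    s_pos = pos_scores[query_ids].sum()
    s_neg = neg_scores[:, query_ids].sum(dim=1)        # (k,)
    logits = torch.cat([s_pos.view(1), s_neg]).view(1, -1)
    return F.cross_entropy(logits, torch.zeros(1, dtype=torch.long))

model = NonAutoregressiveScorer()
doc = torch.randint(0, 32000, (1, 128))        # one tokenized passage
doc_repr = model(doc)[0]                       # (vocab,) scores, built offline
neg_docs = torch.randint(0, 32000, (4, 128))
loss = contrastive_loss(torch.tensor([17, 4096, 88]), doc_repr, model(neg_docs))
```

At serving time only the `(vocab,)` score vectors are kept; scoring reduces to the lookup-and-sum shown earlier.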
We focus on answering the following questions:

* How does nail perform as a reranker, particularly when compared to much more expensive neural reranker systems?
* What is the potential for using nail for full corpus retrieval? Can nail representations be effectively sparsified?
* How does nail compare to recent term weighting retrieval systems that use neural or language models?
* How does nail compare with a similarly trained dual-encoder system that uses an expensive query-side encoder?

### Reranking

In the _reranking_ BEIR task, each system must rerank the 100 passages returned by an off-the-shelf BM25 system. **Baselines** In this section we divide approaches into two types of systems: lexical-based approaches and cross-encoders. In the cross-encoder category, we compare to MonoT5-3B (Meng et al., 2017) and MiniLM-L6 1. MiniLM-L6 is a BERT-based model trained on MS-MARCO using a cross-encoder classifier. MonoT5-3B uses a T5-based model fine-tuned on MS-MARCO, using a generative loss for reranking. Footnote 1: [https://huggingface.co/cross-encoder/ms-marco-MiniLM-L6-v2](https://huggingface.co/cross-encoder/ms-marco-MiniLM-L6-v2) **Results** Table 2 shows the reranking results. The baseline comparison for nail's performance here is BM25 alone: using BM25 without a reranker is the only other method that does not need to run a neural network for each query. We see that nail improves over BM25 fairly consistently. The improvement on MS-MARCO, which has in-domain training data, is especially striking. On BEIR, nail improves performance on 10 out of the 12 datasets, increasing the average score by over 5 points. While cross-encoder models are much more powerful, they are also much more expensive. Cross-encoder models have to run inference on all 100 documents for each query. Thus, nail uses 8 to 9 orders of magnitude fewer FLOPS than the cross-encoder models, corresponding to almost 1 trillion fewer FLOPS for a single query. Moreover, nail significantly closes the gap between the BM25 baseline and the top-performing cross-encoder rerankers, capturing 86% of the gains on MS-MARCO and 45% of the gains on the broader suite of BEIR tasks. Thus, it presents an attractive alternative to expensive rerankers when compute is limited.

### Full Corpus Retrieval

In the _full corpus retrieval_ task, each system must retrieve and rank over each dataset's entire passage corpus. Because nail is very cheap to run as a reranker, it is reasonable to compare the BM25+nail results from Section 5.1 to direct retrieval systems that do not include a reranking step, but typically consume many orders of magnitude more FLOPs at query time. Table 3 presents this comparison.
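As a quick arithmetic check of the "fraction of gains captured" figures quoted above, using the nDCG@10 averages reported in Table 2:

```python
# Fraction of the BM25 -> best-cross-encoder gain that nail recovers,
# computed from the Table 2 numbers.
def fraction_of_gains(bm25, nail, cross):
    return (nail - bm25) / (cross - bm25)

print(fraction_of_gains(0.228, 0.377, 0.401))  # MS-MARCO vs MiniLM-L6 -> ~0.86
print(fraction_of_gains(0.420, 0.465, 0.521))  # BEIR avg vs MonoT5-3B  -> ~0.45
```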
\begin{table} \begin{tabular}{l c c c c} \hline \hline **nDCG@10** & \multicolumn{2}{c}{Cross Enc.} & \multicolumn{2}{c}{Lexical} \\ \cline{2-5} & MonoT5-3B & MiniLM-L6 & BM25 & **nail** \\ \hline MS-Marco (in-domain) & 0.398 & **0.401** & 0.228 & 0.377 \\ \hline Arguana & 0.288 & 0.415 & 0.472 & **0.522** \\ Climate-Fever & **0.28** & 0.24 & 0.186 & 0.206 \\ DBPedia-entity & 0.478 & **0.542** & 0.320 & 0.376 \\ Fever & **0.85** & 0.802 & 0.650 & 0.692 \\ FiQA-2018 & **0.514** & 0.334 & 0.254 & 0.411 \\ HotPotQA & **0.756** & 0.712 & 0.602 & 0.644 \\ NFCorpus & **0.384** & 0.36 & 0.343 & 0.367 \\ Natural Questions & **0.633** & 0.53 & 0.326 & 0.487 \\ SciDocs & **0.197** & 0.164 & 0.165 & 0.160 \\ SciFact & **0.777** & 0.682 & 0.691 & 0.710 \\ Trec-Covid & **0.795** & 0.722 & 0.688 & 0.766 \\ Touché 2020 & 0.3 & 0.349 & 0.347 & 0.240 \\ \hline Avg & **0.511** & 0.481 & 0.405 & 0.458 \\ Avg. w/o MS-Marco & **0.521** & 0.488 & 0.420 & 0.465 \\ \hline Total FLOPS & \(10^{13}\) & \(10^{12}\) & 0 & \(10^{4}\) \\ \hline \hline \end{tabular} \end{table} Table 2. BEIR results on reranking task (top 100 results from BM25). Note that we use the BM25 candidates from the ElasticSearch system. Results for all systems excluding nail are copied from the BEIR reranking leaderboard.

Figure 2. Effect of sparsification of document representation on recall@100, using a top-k strategy.

As nail could be used to populate an inverted index, we investigate how well nail works when scoring all candidates in the corpus, which is an upper-bound for a nail-only retrieval system. These results are presented as nail-exh in Table 3. We later present a brief investigation into the effect of sparsification of the nail output, to further understand the potential for using nail to populate a sparse inverted index for retrieval.

**Baselines** For full retrieval, we compare nail to lexical-based and dual-encoder systems. GTR-XXL (Zhou et al., 2019) is one of the largest and best-performing dual-encoder systems publicly available. It is pre-trained on a large, non-public corpus of 2 billion question-answer pairs scraped from the web, and fine-tuned on MS-MARCO. Contriever is another dual-encoder system, which employs a novel self-supervised pretraining task (Zhou et al., 2019), and is also fine-tuned on MS-MARCO; we describe it in more detail in Section 5.4. SPLADE v2 (Zhou et al., 2019) develops a query encoder and a document encoder to produce sparse representations, differing from dense dual-encoder systems. The query and document representations in SPLADE v2 are used for slightly different objectives. The query encoder is used to perform query expansion into the BERT vocabulary, and the document encoder is used to produce sparse document representations for indexing. This system is trained via distillation of a cross-encoder reranker, and finally fine-tuned on MS-MARCO. ColBERT v2 adopts a late-interaction model that produces multi-vector representations for both queries and documents. In this model, per-token affinities between query and document tokens are scored using per-token representations. This model is also trained via distillation of a cross-encoder reranker. Besides BM25 and nail, SPLADE-doc\({}^{+}\) is the only other retriever that does not require neural network inference at query time. This model is a variant of SPLADE v2 where the query encoder is dropped, and only the document encoder is used (Kang et al., 2019).
As with SPLADE v2, SPLADE-doc\({}^{+}\) is trained using distillation of a cross-encoder reranker, with additional fine-tuning on MS-MARCO.

**Results** Table 3 shows the results for nDCG@10 and recall@100 on BEIR full corpus retrieval for all systems that report it. We stratify the results into two sets: (1) MS-MARCO, which, with the exception of BM25, is used as a training dataset, and (2) the average over all the other BEIR datasets, which are evaluated as zero-shot. On the out-of-domain BEIR tasks, BM25+nail beats all but one of the neural retrieval systems, despite not needing to encode the query with a neural network at query time and being limited in recall to BM25. Additionally, we note that nail-exh outperforms all other retrieval systems according to the recall@100 metric, suggesting potential for a nail-based retriever that uses nail to populate an inverted index. However, given the lower nDCG@10 than BM25+nail, this may only be worthwhile to implement if combined with a different reranker. Note that while this recall@100 result is highest for nail on the out-of-domain BEIR tasks, nail does worse than other models like GTR-XXL on the in-domain MS-MARCO task. This is likely to be, in part, due to the training recipes used by other work to optimize for MS-MARCO performance, including model distillation and large non-public corpora of QA pairs. When comparing to the other system that does not require query-time use of an encoder, SPLADE-doc, we observe that nail is lagging behind on the in-domain evaluation, but outperforms SPLADE-doc on both metrics of the zero-shot datasets in BEIR. As with many of the other retrievers, the SPLADE-doc model was distilled from a cross-attention reranker teacher that is trained on MS-MARCO, which may account for this in-domain gain in performance (Zhou et al., 2019; Zhou et al., 2019).

\begin{table} \begin{tabular}{l l l l l l l l l l} \hline \hline **Metric** & & \multicolumn{2}{c}{Dual encoder} & \multicolumn{2}{c}{Query encoder} & \multicolumn{4}{c}{Lexical (no inf. net.)} \\ \cline{3-10} & & GTR-XXL & Contriever & SPLADE v2 & ColBERT v2 & BM25 & SPLADE-doc\({}^{+}\) & nail-exh & BM25+nail \\ \hline MS-MARCO & nDCG@10 & **0.442** & 0.407 & 0.433 & \(-\) & 0.228 & 0.431 & 0.396 & 0.377 \\ & recall@100 & **91.6** & 89.1 & \(-\) & \(-\) & 66.0 & 88.4 & 89.5 & 66.0 \\ \hline Other BEIR & nDCG@10 & 0.459 & 0.445 & \(-\) & **0.469** & 0.420 & 0.429 & 0.432 & 0.465 \\ (avg. over 12 datasets) & recall@100 & 64.4 & 64.4 & \(-\) & \(-\) & 64.6 & 61.8 & **66.5** & 64.6 \\ \hline Pt. w/ large QA corpus & & Yes & No & No & No & No & No & No & No \\ Pt. w/ distillation & & No & No & Yes & Yes & No & Yes & No & No \\ Pt. w/ self-supervision & & No & Yes & No & No & No & Yes & Yes & Yes \\ \hline \hline \end{tabular} \end{table} Table 3. BEIR nDCG@10 and recall@100 results on the _full retrieval_ task. The SPLADE-doc\({}^{+}\) results are previously unpublished, corresponding to the model described in (Kang et al., 2019), and obtained via correspondence with the authors. Other numbers are obtained from their respective publications.

**Sparsification** To further explore the potential for using nail for full retrieval, we experiment with a naive approach to sparsifying nail document representations. Specifically, we simply order tokens by their scores and keep the _top-k_ scoring tokens. Figure 2 demonstrates the effect on the recall@100 metric of reducing the number of terms per document from the original vocabulary of 32 thousand tokens down to 100 tokens. For both MS-MARCO and other BEIR datasets, recall@100 falls considerably when using only the top 100 tokens. Nonetheless, with only two thousand tokens we are able to maintain the same level of performance for MS-MARCO and roughly 97% of the recall performance on BEIR. This observation, along with the results in Table 3, suggests that nail could be used to populate an efficient inverted index for retrieval, with little loss of recall. Such an index could serve as a more powerful alternative to BM25. We leave this to future work.

### Comparison to Term Weighting Models

In this work we are primarily interested in zero-shot retrieval evaluations, which is why we focus on BEIR. However, there are a few recently proposed retrieval systems that also make use of language models to compute term weights. In this section, we compare these systems to nail using the MS-MARCO passage retrieval task. The metrics typically used in this task are the mean reciprocal rank with a cutoff of 10 (MRR@10), measuring the ranking of the top ten results, and recall@1000. Table 4 contains the results. For nail, we report both the version which uses BM25 retrievals (in that case, the recall metric is derived from the BM25 system) and the system described in the previous section which uses exhaustive scoring. The results demonstrate that both nail-exh and BM25+nail outperform the other term weighting models presented on the MRR@10 metric for the MS-MARCO passage ranking task. With respect to the recall metric, nail-exh clearly improves over the previous systems. Exhaustive scoring is much more expensive than the other systems shown; however, given the sparsification results shown in Figure 2, we believe a sparse version of nail would be competitive with the models presented.

### Comparison to Contriever

There are several confounding factors in comparing the systems presented in Tables 2 and 3. As mentioned, each system uses different training recipes and training data while also having slightly different architectures. Training techniques used by the baselines presented in this work include unsupervised pretraining, hard negative mining, and distillation from a cross-attention teacher. These factors can make it difficult to pinpoint the cause of the variance in performance across models. However, nail and Contriever share training recipes to a large extent, both having a similar pretraining stage followed by fine-tuning on MS-MARCO. Contriever is a recently introduced dual-encoder model that inspired the pretraining task in this work. However, architecturally, nail and Contriever are quite different. nail's query representation is not learned and is tied to the fixed set of vocabulary terms; this approach is potentially less powerful than a fully learned dense representation. The summary of the comparison is available in Table 5. We observe that on the BEIR reranking task, nail matches both the in-domain and zero-shot performance of the Contriever model, despite lacking a query-time neural network. Without using BM25 for initial retrievals, both methods perform slightly worse on nDCG@10 for the zero-shot BEIR tasks, but they remain comparable.

### Performance versus query-time FLOPS

We have motivated this work by asking how much we can leverage large language models at indexing time while making query-time computational costs small enough for a commodity CPU.
As the results in this section show, there are tradeoffs between reranking accuracy improvements and computational costs. To illustrate this tradeoff, we present results of percentage nDCG@10 improvement over BM25 versus query-time FLOPS in Figure 3. In general, we think lexicalized approaches like nail provide another interesting point on this curve, where much higher performance than BM25 can be achieved for only a small amount more compute. Note that Lassance and Clinchant (2019) discuss smaller versions of Splade; see Table 1 for the approximate reduction.

\begin{table} \begin{tabular}{l l l l l l l l} \hline \hline **metric** & DeepCT (Chen et al., 2017) & DeepImpact (Chen et al., 2017) & COIL-tok (Chen et al., 2017) & uniCOIL (Liu et al., 2018) & SPLADE-doc (Chen et al., 2017) & BM25+nail & nail-exh \\ \hline MRR@10 & 0.243 & 0.326 & 0.341 & 0.315 & 0.322 & **0.363** & 0.356 \\ Recall@1000 & 0.913 & 0.948 & 0.949 & - & 0.946 & 0.814 & **0.981** \\ \hline \hline \end{tabular} \end{table} Table 4. Evaluation on the MS-MARCO dev set for the passage ranking task. Numbers reported are taken from the corresponding publications. Except for the DeepImpact system, all results are obtained with no document expansion, while DeepImpact results include doc2query-T5 (Kumar et al., 2018) document expansion terms.

Figure 3. Improvement over BM25 and extra FLOPS to score one query on the BEIR retrieval task. The nail and MonoT5 systems use BM25 retrievals; SPLADE-v2 uses its own retrievals over the full corpus. Note that the vast majority of the computation for SPLADE and dual encoders is in encoding the query; reranking BM25 retrievals would not reduce computation.

## 6. Alternate Training Recipes

Our primary goal has been to determine the extent to which the performance of an expensive neural network can be captured in a fast, sparse featurization for general-purpose retrieval. Accordingly, we have prioritized a training recipe that is aligned with previous work and well suited to the multi-domain BEIR task. However, the performance of learned retrievers as rerankers is very sensitive to the exact nature of the training recipe, and in this section we present

### Effects of hard-negative selection during fine-tuning

One key choice in contrastive learning is the distribution of negative examples used in Equation 2. This is commonly a combination of hard negatives, which are chosen to be challenging for a single example, and batch negatives, which are drawn from the distribution of all positive and hard-negative candidates across training examples (Han et al., 2016; Wang et al., 2017; Wang et al., 2018). Our pretraining task (described in Section 4.1) does not use hard negatives; however, the MS-MARCO fine-tuning task includes hard negatives created by running BM25 retrieval over the set of candidate passages. Table 6 shows how BEIR and MS-MARCO results change as we change the number of MS-MARCO hard negatives that we sample during fine-tuning. As this number increases, the MS-MARCO performance also increases, until it matches the performance of the cross-attention rerankers in Table 2 when 63 hard negatives are sampled for each training example. However, increasing the number of MS-MARCO hard negatives also hurts BEIR performance.

### Effects of pretraining and fine-tuning

The training recipe, presented in Section 4.1, has two stages beyond the language model training from Raffel et al. (Raffel et al., 2017).
Table 7 shows that both stages benefit both the BEIR and MSMARCO results. However, NAIL still yields a nice improvement over BM25 across the BEIR tasks using only the pre-training task. This is encouraging because these data are heuristically generated rather than relying on human relevance labels, so they can be trivially applied to new domains. The MS-MARCO results are unsurprisingly more dependent on fine-tuning on MS-MARCO. Pre-trained nail does not outperform BM25 on MS-MARCO without fine-tuning. More sophisticated methods of synthetic training data generation, such as Promptagator(Bordes et al., 2016), could also help improve nail further, but we leave this to future work. ## 7. Qualitative Analysis In this section, we present a qualitative analysis of the tokens that score highest according to the nail model for a given input. We choose the Natural Questions (NQ) subset of the BEIR benchmark for this analysis, as the queries tend to be complete questions that are easily interpretable. Table 8 shows the percentage of nail's top predicted tokens that appear in the passage input to the nail model along with the gold query that is paired with this passage in the NQ development set. Figure 4 presents the top predicted terms for a randomly sampled set of passages. Almost all of the tokens in both the input passages and the unseen query are present in nail's top 1000 predictions (Table 8). However, tuning towards MS-MARCO significantly increases the number of query tokens predicted in the top 100 and 1000 positions, while simultaneously reducing the number of passage tokens predicted. This is unsurprising: the fine-tuning stage represents a domain shift from the pre-training task, which is predicting document tokens, toward predicting query tokens. One indication of this shift is the increase in the prevalence of 'wh' words (what, who, where) in the top terms from the finetuned model in Figure 4. Figure 4 also illustrates some other interesting shifts in nail's output during fine-tuning. For example, in Example (3) the pre-trained model predicts many dates associated with the Eagles (e.g., album release years). These are likely to occur in adjacent passages in the same document as the input passage, so they are good predictions for the pre-training task (Section 4.1). However, they are very unlikely to occur in queries associated with the input passage, and thus they are replaced in the fine-tuned predictions with terms that are more likely to occur in queries targeting the passage ('sang','sing', 'wrote', 'who','released'). Figure 4 also illustrates nail's ability to predict the type of query that is likely to be paired with a given passage. Passages containing definitions, such as the one presented in Example (1), are highly associated with the wh-word 'what'. On the other hand, passages about individuals or groups of individuals (Examples (3) and (4)) are more highly associated with 'who'. Finally, the predicted terms in Figure 4 contain a lot of small surface-form variations of the same root word, with different segmentations and capitalizations treated separately by the query tokenizer. For example, the tokens 'chic', 'chi', 'CHI', 'Ch', 'ch', 'CH' in Example (2) are all probably coming from different forms of the word 'Chicago' presented in different contexts. This redundancy illustrates a drawback of our featurization: unlike neural models, \begin{table} \begin{tabular}{l l l} \hline \hline **\# of hard** **negatives** & **MS-MARCO** **nDCG@10** & **Avg. 
BEIR** **nDCG@10** \\ \hline 3 & 0.377 & 0.465 \\ 7 & 0.378 & 0.461 \\ 15 & 0.391 & 0.460 \\ 31 & 0.394 & 0.457 \\ 63 & 0.397 & 0.457 \\ \hline \hline \end{tabular} \end{table} Table 6. Effect of varying the number of hard negatives on reranking evaluation for MS-MARCO and BEIR. The BEIR average is computed without MS-MARCO. \begin{table} \begin{tabular}{l l l l l} \hline \hline & \multicolumn{2}{c}{top-100} & \multicolumn{2}{c}{top-1000} \\ & pretrained & tuned & pretrained & tuned \\ \hline query & 53 & 74 & 85 & 94 \\ passage & 65 & 54 & 90 & 88 \\ \hline \hline \end{tabular} \end{table} Table 8. Percent of NQ query and gold passage tokens contained in the top 100 and 1000 scores from nail. it does not abstract over diverse surface forms. Future work could examine more efficient and discriminative featurizations than the tokenization used in this work.

## 8. Concluding Remarks

We introduce a new model for sparse, lexicalized retrieval, called nail. With nail, we are able to adapt expensive pretrained sequence-to-sequence language models that use Transformer architectures (e.g., T5, PaLM, GPT-3) for document indexing. The main elements of nail are (1) the use of a non-autoregressive decoder, and (2) the use of a vocabulary-based representation for documents and queries. We train nail using a query prediction task, finding that pretraining on self-supervised retrieval is critical for good performance. With nail we study the tradeoffs of offloading expensive neural computation wholly to indexing time, allowing serving to operate cheaply and without the use of accelerators. Evaluating retrieval on BEIR, we show that the nail approach is as effective as recent dual-encoder systems and captures up to 86% of the performance gains of a cross-attention model on MS-MARCO while being able to serve requests on commodity CPUs.
2310.10449
Text Summarization Using Large Language Models: A Comparative Study of MPT-7b-instruct, Falcon-7b-instruct, and OpenAI Chat-GPT Models
Text summarization is a critical Natural Language Processing (NLP) task with applications ranging from information retrieval to content generation. Leveraging Large Language Models (LLMs) has shown remarkable promise in enhancing summarization techniques. This paper embarks on an exploration of text summarization with a diverse set of LLMs, including MPT-7b-instruct, falcon-7b-instruct, and OpenAI ChatGPT text-davinci-003 models. The experiment was performed with different hyperparameters and evaluated the generated summaries using widely accepted metrics such as the Bilingual Evaluation Understudy (BLEU) Score, Recall-Oriented Understudy for Gisting Evaluation (ROUGE) Score, and Bidirectional Encoder Representations from Transformers (BERT) Score. According to the experiment, text-davinci-003 outperformed the others. This investigation involved two distinct datasets: CNN Daily Mail and XSum. Its primary objective was to provide a comprehensive understanding of the performance of Large Language Models (LLMs) when applied to different datasets. The assessment of these models' effectiveness contributes valuable insights to researchers and practitioners within the NLP domain. This work serves as a resource for those interested in harnessing the potential of LLMs for text summarization and lays the foundation for the development of advanced Generative AI applications aimed at addressing a wide spectrum of business challenges.
Lochan Basyal, Mihir Sanghvi
2023-10-16T14:33:02Z
http://arxiv.org/abs/2310.10449v2
# Text Summarization Using Large Language Models: A Comparative Study of MPT-7b-instruct, Falcon-7b-instruct, and OpenAI Chat-GPT Models

###### Abstract

Text summarization is a critical Natural Language Processing (NLP) task with applications ranging from information retrieval to content generation. Leveraging Large Language Models (LLMs) has shown remarkable promise in enhancing summarization techniques. This paper embarks on an exploration of text summarization with a diverse set of LLMs, including MPT-7b-instruct, falcon-7b-instruct, and OpenAI ChatGPT text-davinci-003 models. The experiment was performed with different hyperparameters and evaluated the generated summaries using widely accepted metrics such as the Bilingual Evaluation Understudy (BLEU) Score, Recall-Oriented Understudy for Gisting Evaluation (ROUGE) Score, and Bidirectional Encoder Representations from Transformers (BERT) Score. According to the experiment, text-davinci-003 outperformed the others. This investigation involved two distinct datasets: CNN Daily Mail and XSum. Its primary objective was to provide a comprehensive understanding of the performance of Large Language Models (LLMs) when applied to different datasets. The assessment of these models' effectiveness contributes valuable insights to researchers and practitioners within the NLP domain. This work serves as a resource for those interested in harnessing the potential of LLMs for text summarization and lays the foundation for the development of advanced Generative AI applications aimed at addressing a wide spectrum of business challenges.

Text Summarization, MPT-7b-instruct, Falcon-7b-instruct, OpenAI ChatGPT

## I Introduction

In the era of Big Data, the abundance of textual information has underscored the importance of efficient text summarization techniques. Text summarization, the task of distilling long documents or articles into concise, coherent summaries while preserving the core meaning and essential information, holds immense value across various domains. From aiding in information retrieval to content generation, summarization has emerged as a pivotal component of Natural Language Processing (NLP) applications. Recent advancements in NLP have been characterized by the rise of Large Language Models (LLMs), such as OpenAI ChatGPT[1], MPT-7b-instruct[2, 3], flan-t5-xl[4], falcon-7b-instruct[5, 6], and others, which have demonstrated remarkable capabilities in understanding and generating human-like text. These LLMs have opened new avenues for text summarization by providing powerful generative capabilities and the ability to adapt to diverse tasks through fine-tuning. The 'text-davinci-003 (Legacy)'[1] model represents a remarkable leap in the field of Natural Language Processing (NLP). It exhibits an unparalleled ability to handle a wide range of language tasks with exceptional precision and quality. Notably, it surpasses its predecessors, including the Curie, Babbage, and Ada models, in terms of generating text of higher quality, offering longer outputs, and consistently following provided instructions. This legacy model has a token capacity of 4,097, enabling it to handle extensive text generation with ease. Moreover, 'text-davinci-003 (Legacy)' introduces innovative features, such as the capability to insert text seamlessly into generated content, thereby expanding its utility for diverse text manipulation tasks.
The 'MPT-7B-Instruct' model, as cited in[2, 3], is specifically designed for short-form instruction-following tasks, making it an ideal choice for a wide range of instruction-based applications. It is created through the fine-tuning process of a base model, MPT-7B, using a dataset sourced from the Databricks Dolly-15k and the Anthropic Helpful and Harmless (HH-RLHF) datasets. This tailored approach results in a model that excels at understanding and following instructions with precision and accuracy. The model follows a modified decoder-only transformer architecture, optimized for superior performance in instruction-following tasks. Falcon-7B-Instruct', as cited in[5, 6], represents a formidable 7 billion-parameter causal decoder-only model meticulously crafted by the Technology Innovation Institute (TII). This model is built upon the robust foundation of Falcon-7B and undergoes a fine-tuning process using a composite dataset sourced from both chat and instruct domains. 'Falcon-7B-Instruct' is generously made available under the Apache 2.0 license. The focus of this paper is to delve into the world of text summarization with LLMs, offering a comprehensive exploration of their potential and limitations. Specifically, we investigate various LLMs, experiment with different hyperparameters, and evaluate the quality of summaries generated by these models. To ensure a robust evaluation, we employ well-established metrics such as BLEU Score, Rouge Score, and Bert Score. This paper serves as a vital resource for those seeking to harness the power of LLMs for NLP applications and lays the groundwork for the development of advanced Generative AI solutions to address a wide range of business challenges. In the following sections, the paper provides detailed explanations of the text summarization methods discussed in Section II, supervised and unsupervised summarization in Section III, datasets and evaluation metrics presented in Section IV, inference with different LLMs in Section V, and offers a roadmap for future enhancements, concluding with Section VI. Lastly, the author acknowledges the support received during the research and experiments. ## II Text Summarization Methods Text summarization is a fundamental task in Natural Language Processing (NLP) that aims to condense large volumes of text into shorter, coherent representations while preserving the essential information. There are primarily two approaches to text summarization: abstractive and extractive summarization. ### _Abstractive Text Summarization_ Abstractive summarization involves generating a concise summary that may contain words, phrases, or sentences not present in the source text. This approach relies on understanding the context and generating human-like language to convey the central ideas. Abstractive summarization methods often use advanced language models, such as Large Language Models (LLMs), to rewrite and rephrase content in a more concise form. ### _Extractive Text Summarization_ Extractive summarization, on the other hand, aims to select and extract the most important sentences or phrases directly from the source text to form the summary. It does not involve rephrasing or generating new sentences. Extractive summarization methods use various techniques, such as sentence scoring and ranking, to identify and extract the most salient content. 
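To make the extractive approach concrete, here is a toy illustration (ours, not from the paper): sentences are scored by the average corpus frequency of their words, and the top-scoring sentences are extracted verbatim in their original order.

```python
# Toy extractive summarizer: sentence scoring by content-word frequency.
import re
from collections import Counter

def extractive_summary(text, n_sentences=2):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    def score(sentence):
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)
    top = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
    # Preserve the original order of the selected sentences.
    return " ".join(s for s in sentences if s in top)

doc = ("Text summarization condenses long documents. Extractive methods select "
       "salient sentences directly. Abstractive methods generate new sentences. "
       "Extractive methods do not rephrase the source sentences.")
print(extractive_summary(doc))
```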
## III Supervised and Unsupervised Summarization Text summarization techniques can be broadly categorized into two main approaches based on dataset labeling: supervised and unsupervised summarization. Each approach has its methodologies and advantages, serving different use cases and data availability scenarios. ### _Supervised Summarization_ Supervised summarization is a method that relies on labeled training data, where human annotators provide summaries for a given set of source texts. Machine learning models are then trained on this data to learn the mapping between source texts and their corresponding summaries. This approach is particularly effective when high-quality, domain-specific summaries are available for training. ### _Unsupervised Summarization_ Unsupervised summarization, on the other hand, does not require labeled training data. Instead, it seeks to extract the most relevant information from the source text using algorithms that consider factors like sentence importance, coherence, and redundancy. Unsupervised methods are often employed when labeled summarization datasets are scarce or costly to obtain. ## IV Datasets and Evaluation Metrics In our study, we conducted experiments and evaluations on two distinct datasets, CNN/Daily Mail 3.0.0[7] and the Extreme Summarization (XSum)[8] to assess the performance of various Large Language Models (LLMs) in the context of text summarization. These datasets serve as the foundation for our evaluation and comparison of LLM-generated summaries. ### _Datasets_ * **CNN/Daily Mail 3.0.0 Dataset:** The CNN/Daily Mail 3.0.0 Dataset, a valuable resource in the realm of natural language processing, comprises more than 300,000 unique news articles authored by journalists from CNN and the Daily Mail. Originally designed to facilitate machine reading and comprehension, this English-language dataset has since evolved to support both extractive and abstractive summarization tasks. The dataset provides three key data fields for each entry: 'id,' which contains the hexadecimal-formatted SHA1 hash of the URL from which the story was retrieved; 'article,' which contains the body of the news article itself; and 'highlights,' featuring the article's highlights as written by the original author. * **XSum Dataset:** XSum dataset is a valuable resource tailored for extreme summarization tasks. It consists of news articles with three key features: the 'document,' serving as the input news article, the'summary' providing a one-sentence summary of the article, and the 'id,' which uniquely identifies each article using the BBC ID. The inclusion of these diverse datasets allows us to evaluate the performance of LLMs across various content types, ensuring that our study provides a holistic view of their summarization capabilities. ### _Evaluation Metrics_ To assess the quality and effectiveness of the generated summaries, we employed a set of widely accepted evaluation metrics: * **BLEU Score[9]:** BLEU is a metric employed to assess the quality of machine translations. It operates by measuring the similarity between n-grams present in machine-translated sentences and those in human-translated sentences. It is generally noted that the BLEU score tends to decrease with longer sentence lengths, although variations in this trend can occur depending on the translation model in use. * **ROUGE Score[10, 12]:** The ROUGE Score assesses the overlap of n-grams (sequences of words) between the generated summary and reference summaries. 
It considers metrics such as ROUGE-N (unigrams, bigrams, etc.) and ROUGE-L (longest common subsequence) to evaluate content overlap.
* **BERT Score [11, 12]:** The BERT Score utilizes contextual embeddings from the BERT model to measure the similarity between the generated summary and reference summaries. It is designed to capture the nuances of language and context, providing a robust evaluation metric.

By calculating these metrics for summaries generated with different LLMs, we aim to provide a comprehensive assessment of their performance, enabling researchers and practitioners to make informed decisions when choosing an LLM and fine-tuning their summarization models for specific tasks and datasets.

## V Inference with Different LLMs

In this section, the results of the experiments are presented, wherein a variety of Large Language Models (LLMs) were utilized to generate summaries for two distinct datasets. The LLMs employed for these experiments include falcon-7b-instruct, mpt-7b-instruct, and text-davinci-003. The primary objective is to offer a comparative analysis of their performance concerning text summarization.

### _Experiment Setup_

For each LLM, experiments were conducted using a temperature value of 0.1 and a maximum token length of 100. These experiments involved summarizing 25 test samples of each dataset. The process of generating the text summary entailed the utilization of LangChain and Hugging Face pipelines for prompt engineering, ensuring precision and efficiency in the summarization process. The experiments were executed on custom Google Compute Engine Virtual Machine (GCE VM) instances equipped with NVIDIA T4 Graphics Processing Units (GPUs) on the Google Cloud Platform (GCP).

### _Results_

The performance of different LLMs on the two datasets, using the specified temperature value, is reported. Metrics were computed for each LLM, offering a comprehensive perspective on their summarization capabilities, as available on the GitHub repository cited in this paper [13]. These tables, referenced as Table I and Table II, present a comprehensive evaluation of various Large Language Models (LLMs) for text summarization across two distinct datasets: CNN/Daily Mail 3.0.0 and XSum. The performance of each LLM is assessed using several key metrics, including BLEU, ROUGE, and BERT. The tables highlight varying performance across LLMs and datasets. Notably, the OpenAI model, text-davinci-003, consistently exhibits strong performance, achieving high BLEU, ROUGE, and BERT Scores. This exceptional performance can be attributed to davinci being the largest and most powerful model, with 175 billion parameters and 45TB of text data. When comparing the two 7b-parameter fine-tuned models, MPT-7b-instruct performed slightly better than Falcon-7b-instruct. However, their overall performance was somewhat similar. These findings underscore the significance of model architecture and size in text summarization tasks, as well as the potential of OpenAI's model for achieving state-of-the-art results in diverse NLP applications.
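The following sketch shows how such an inference-plus-scoring loop can be wired together with the Hugging Face transformers and evaluate libraries. The prompt wording and the generation flags other than the temperature of 0.1 and the 100-token cap are assumptions; the snippet is illustrative rather than the exact pipeline used in the experiments:

```python
# pip install transformers evaluate rouge_score bert_score
from transformers import pipeline
import evaluate

article = "..."    # an input article from CNN/Daily Mail or XSum
reference = "..."  # its gold summary

summarizer = pipeline(
    "text-generation",
    model="mosaicml/mpt-7b-instruct",
    trust_remote_code=True,  # MPT ships custom modeling code
    device_map="auto",
)
prompt = f"Summarize the following article:\n{article}\nSummary:"
out = summarizer(prompt, max_new_tokens=100, temperature=0.1, do_sample=True)
prediction = out[0]["generated_text"][len(prompt):].strip()

bleu = evaluate.load("bleu").compute(predictions=[prediction],
                                     references=[[reference]])
rouge = evaluate.load("rouge").compute(predictions=[prediction],
                                       references=[reference])
bert = evaluate.load("bertscore").compute(predictions=[prediction],
                                          references=[reference], lang="en")
print(bleu["bleu"], rouge["rougeL"], sum(bert["f1"]) / len(bert["f1"]))
```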
## VI Conclusion and Future Enhancements

This research embarked on a comprehensive exploration of text summarization techniques using various Large Language Models (LLMs), with the goal of shedding light on their performance in different settings and scenarios. The study encompassed the evaluation of LLMs such as mpt-7b-instruct, falcon-7b-instruct, and text-davinci-003, as well as their summarization capabilities across two diverse datasets, 'CNN/Daily Mail 3.0.0' and 'XSum'. The experiment results, as indicated by the model performance table and human evaluation of the generated text summaries, highlight the exceptional performance of OpenAI's model, text-davinci-003, in comparison to other models. This model consistently demonstrated a superior ability to produce high-quality summaries across various datasets and temperature settings. In future work, this study can be extended to leverage inferences from larger samples using higher-parameter models, such as mosaicml/mpt-30b-instruct and tiiuae/falcon-40b-instruct, potentially leading to even more robust and accurate summarization results. Additionally, the human evaluation metrics and inferences can be generated from datasets with varying word counts and output token lengths. The continual advancement of Large Language Models (LLMs) with increasing model size and capabilities offers an exciting opportunity to explore how these models can further enhance the quality of text summarization, translation, and content generation. Moreover, the fine-tuning of LLMs on specific domains and datasets could unlock the potential for domain-specific summarization models with exceptional performance. In conclusion, this research contributes valuable insights into the field of text summarization with LLMs and offers a glimpse into future research directions. As the NLP landscape continues to evolve, leveraging the capabilities of LLMs, especially those offered by OpenAI, holds great promise for the development of advanced Generative AI applications across diverse business domains.

## Acknowledgment

The author would like to express heartfelt gratitude to Mihir Sanghvi, Mentor of the KaggleX BIPOC Mentorship Program Cohort 3. Mihir's invaluable guidance, mentorship, and insights have significantly contributed to the success of this research. Additionally, appreciation is extended to Kaggle for providing the opportunity to participate in the KaggleX BIPOC Mentorship Program, which facilitated the collaboration and learning experiences that enriched this work. Furthermore, the support provided by Kaggle in the form of the Kaggle-KaggleX Google Cloud Platform (GCP) Coupon is acknowledged. This support enabled access to essential computing resources on Google Cloud, which was instrumental in conducting the experiments for this research. Gratitude is expressed for the collective efforts of the Kaggle community, which continues to foster a collaborative and innovative environment for data science and machine learning research.
2310.08206
Long-Tailed Classification Based on Coarse-Grained Leading Forest and Multi-Center Loss
Long-tailed (LT) classification is an unavoidable and challenging problem in the real world. Most existing long-tailed classification methods focus only on solving the class-wise imbalance while ignoring the attribute-wise imbalance. The deviation of a classification model is caused by both class-wise and attribute-wise imbalance. Due to the fact that attributes are implicit in most datasets and the combination of attributes is complex, attribute-wise imbalance is more difficult to handle. For this purpose, we proposed a novel long-tailed classification framework, aiming to build a multi-granularity classification model by means of invariant feature learning. This method first unsupervisedly constructs a Coarse-Grained Leading Forest (CLF) to better characterize the distribution of attributes within a class. Depending on the distribution of attributes, one can customize suitable sampling strategies to construct different imbalanced datasets. We then introduce multi-center loss (MCL) that aims to gradually eliminate confusing attributes during the feature learning process. The proposed framework does not necessarily couple to a specific LT classification model structure and can be integrated with any existing LT method as an independent component. Extensive experiments show that our approach achieves state-of-the-art performance on both existing benchmarks ImageNet-GLT and MSCOCO-GLT and can improve the performance of existing LT methods. Our codes are available on GitHub: \url{https://github.com/jinyery/cognisance}
Jinye Yang, Ji Xu, Di Wu, Jianhang Tang, Shaobo Li, Guoyin Wang
2023-10-12T10:51:23Z
http://arxiv.org/abs/2310.08206v3
# Long-Tailed Classification Based on Coarse-Grained Leading Forest and Multi-Center Loss

###### Abstract

Long-tailed (LT) classification is an unavoidable and challenging problem in the real world. Most of the existing long-tailed classification methods focus only on solving the inter-class imbalance, in which there are more samples in the head class than in the tail class, while ignoring the intra-class imbalance, in which the number of samples of the head attribute within the same class is much larger than the number of samples of the tail attribute. The deviation in the model is caused by both of these factors, and due to the fact that attributes are implicit in most datasets and the combination of attributes is very complex, the intra-class imbalance is more difficult to handle. For this purpose, we proposed a long-tailed classification framework, known as Cognisance, which is founded on Coarse-Grained Leading Forest (CLF) and Multi-Center Loss (MCL), aiming to build a multi-granularity joint solution model by means of invariant feature learning. In this method, we designed an unsupervised learning method, i.e., CLF, to better characterize the distribution of attributes within a class. Depending on the distribution of attributes, we can flexibly construct sampling strategies suitable for different environments. In addition, we introduce a new metric learning loss, i.e., MCL, which aims to gradually eliminate confusing attributes during the feature learning process. More importantly, this approach does not depend on a specific model structure and can be integrated with existing LT methods as an independent component. We have conducted extensive experiments and our approach has state-of-the-art performance on both existing benchmarks ImageNet-GLT and MSCOCO-GLT, and can improve the performance of existing LT methods. Our codes are available on GitHub: [https://github.com/jinyery/cognisance](https://github.com/jinyery/cognisance)

Imbalanced learning, long-tailed learning, invariant feature learning.

## I Introduction

In real-world applications, training samples typically exhibit a long-tailed distribution, especially for large-scale datasets [1, 2]. Long-tailed distribution means that a small number of head categories contain a large number of samples, while the majority of tail categories have a relatively limited number of samples. This imbalance can lead to traditional classification algorithms preferring to handle head categories and performing poorly when dealing with tail categories. Solving the problem of long-tailed classification is crucial, as tail categories may contain important information such as rare diseases, critical events, or characteristics of minority groups [3, 4, 5, 6]. In order to effectively address this challenge, researchers have proposed various methods. However, this article emphasizes that the long-tailed distribution problem that truly troubles the industry is not only the inter-class long-tailed problem, which is currently the most studied. What truly hinders the further deployment of machine learning methods in industry is the intra-class long-tailed problem: for example, in the long-tailed distribution of unmanned-vehicle training data, the weather distribution, the day and night distribution [7], etc., are not the targets predicted by the model.
Therefore, the long tail here is not among classes but among attributes, where an attribute represents any factor that causes intra-class changes, including object-level attributes (such as the specific vehicle model, brand, color, etc.) and image-level attributes (such as lighting, climate conditions, etc.); this emerging task is named **Generalized Long-Tailed Classification** (GLT) [8]. In Fig. 1, it is evident that there is a significant difference in the number of samples among different categories, especially for a head class such as "Coast", which has a much larger number of samples than a tail class such as "Escalator". However, even within a category, there is a significant difference in the number of samples corresponding to different attributes. For example, in the category "Coast", the number of samples during the day is much greater than that at night, and the number of samples on sunny days is also much greater than that on cloudy days.

Fig. 1: Inter-class long-tailed distribution and intra-class long-tailed distribution.

The imbalance in sample size among attributes within a category is fundamentally more difficult to avoid than the former. As shown in Fig. 2, this imbalance in attributes undermines the performance of the model in two ways [8]: I) Firstly, it weakens the accuracy of images with tail attributes. For example, black swans are more likely to be misclassified than white swans in the category "Swan", even though they both belong to the same category. II) It leads to some attributes being mistakenly associated with certain categories, e.g., the attribute "White" may be falsely correlated with "Swan". Therefore, when "White" appears in images of other birds (e.g., "Cock"), there is a high risk that the sample will be misclassified as a "Swan". Current research mostly focuses on solving the inter-class long-tailed problem, where resampling [9, 10] or loss reweighting [11, 12] methods are often used when dealing with imbalanced data, aiming to rebalance the training process. However, most of these methods sacrifice the performance of the head class to improve the performance of the tail class, which is like playing a performance seesaw, making it difficult to fundamentally improve the performance of all classes. In addition, some methods believe that data imbalance does not affect feature learning, so the training of the model is divided into two stages by decoupling feature learning and classifier learning [13, 14]. However, this adjustment is only a trade-off between accuracy and precision [8, 15], and the confusing regions of similar attributes in the feature space learned by the model do not change. As mentioned earlier in this paper, it is precisely the long tail among attributes within a category that leads to spurious correlations between the head attributes of certain categories and that category. This means that these attributes correspond to the spurious features (i.e., a confusing region in the feature space) of the category. The method proposed in this paper aims to eliminate the confusing region based on invariant feature learning, which ultimately achieves the goal of improving the model's precision and accuracy at the same time [16, 17, 18].
In the framework proposed in this paper, inter-class and intra-class long-tailed problems are handled simultaneously by constructing different environments, for which we design a new sampling method, the **Coarse-Grained Leading Forest (CLF)**, an unsupervised construction that can characterize the attribute distributions within a class and guide the data sampling in different environments during the training process. In the experimental setup of this paper, two environments are constructed: one is the original environment without special treatment, and in the other the distributions of categories and attributes tend to be balanced. Finally, in order to gradually eliminate confusing pseudo-features during the training process, we design a new metric learning loss, the **Multi-Center Loss (MCL)**, which is inspired by [8] and [19], extends the center loss to its Invariant Risk Minimization (IRM) version, and further improves the robustness of the model compared to the previous two, giving the model the ability to learn invariant features. In addition, this method is not coupled with a specific backbone model or loss function, and can be seamlessly integrated into other LT methods for performance enhancement on top of the original. Our contributions can be summarized as follows:

* We designed a novel unsupervised learning-based sampling scheme to guide the sampling of different environments in the IRM process, dealing simultaneously with the inter-class long-tailed problem and the intra-class long-tailed problem among attributes, which is often neglected in traditional methods.
* We combined the idea of invariant feature learning to design a new metric learning loss, which enables the model to gradually remove the influence of pseudo-features during the training process, further improving the robustness of the model and taking into account both the precision and the accuracy of the prediction.
* We conducted extensive experiments on two existing benchmarks, ImageNet-GLT and MSCOCO-GLT, to validate the effectiveness of this framework while improving the performance of all popular LT lineups.

## II Related Work

### _Long-Tailed Classification_

The key challenge of long-tailed classification is to effectively deal with the imbalance of the data distribution to ensure that excellent classification performance can be achieved on both the head and the tail. Current treatments for long-tailed classification can be broadly categorized into three groups [1]: 1) Class Re-balancing, which is the mainstream paradigm in long-tailed learning, aims to enhance the influence of tail samples on the model by means of re-sampling [20, 10, 21, 22], re-weighting [23, 24, 25, 26, 27] or logit adjustment [28] during the model training process; some of these methods [13, 14] consider that the unbalanced samples do not affect the learning of the features, and thus divide feature learning and classifier learning into two phases, performing operations such as resampling only in the classifier learning phase. 2) Information Augmentation: information-augmentation-based approaches seek to introduce additional information into model training in order to improve model performance in long-tailed learning. There are two approaches of this type: transfer learning [29, 30, 31] and data augmentation [32, 33, 34].
3) Module Improvement: in addition to class rebalancing and information augmentation, researchers have explored ways to improve network modules in long-tailed learning, such as RIDE [35] and TADE [36], both of which deal with long-tailed recognition independently of the test distribution by introducing ensemble learning over multi-expert models in the network. In addition, a recent study proposed the concept of Generalized Long-Tailed Classification (GLT) [8], which first pointed out the problem of the attribute-wise long tail within a class. That study observed that traditional long-tailed classification methods represent the classification model as \(p(Y|X)\), i.e., predicting the label \(Y\) from the input image \(X\), which can be further decomposed as \(p(Y|X)\propto p(X|Y)\cdot p(Y)\); this formula identifies the cause of class bias as \(p(Y)\). However, the distribution of \(p(X|Y)\) also changes across different domains, so the classification model is extended to the form of Equation 1 based on the Bayesian Theorem in that study.

\[p(Y=k|z_{c},z_{a})=\frac{p(z_{c}|Y=k)}{p(z_{c})}\cdot\underbrace{\frac{p(z_{a}|Y=k,z_{c})}{p(z_{a}|z_{c})}}_{attribute\ bias}\cdot\underbrace{p(Y=k)}_{class\ bias}, \tag{1}\]

where \(z_{c}\) is the invariant component present in the category, and the attribute-related variable \(z_{a}\) is the domain-specific knowledge in different distributions. Taking the mentioned "swan" as an example, the attribute "color" of "Swan" belongs to \(z_{a}\), while attributes of "Swan" such as feathers and shape belong to \(z_{c}\). It is worth noting that in practical applications the formula does not impose the disentangling assumption, i.e., it does not assume that a perfect feature vector \(z=[z_{c};z_{a}]\) can be obtained in which \(z_{a}\) and \(z_{c}\) are separated.

Fig. 2: Spurious correlation of the "White" attribute with the "Swan" category.

### _Invariant Risk Minimization_

IRM [17] was proposed by Arjovsky et al. in 2019, and its main goal is to build robust learning models that have the same performance on different data distributions. In machine learning, we usually hope that the trained model can perform well on future data, which is called Risk Minimization. However, in practice, there may be distributional differences between the training data and the test data, known as Domain Shift, which causes the model to perform poorly on new domains. The core idea of IRM is to solve the domain adaptation problem by encouraging models to learn features that are invariant across data domains. This means that the model should focus on those shared features that are present in all data domains rather than overfitting a particular data distribution.

\[\min_{\Phi:\mathcal{X}\rightarrow\mathcal{H},\ w:\mathcal{H}\rightarrow\mathcal{Y}}\ \sum_{e\in\varepsilon_{tr}}R^{e}(w\circ\Phi)\quad\mathrm{subject\ to}\quad w\in\operatorname*{arg\,min}_{\overline{w}:\mathcal{H}\rightarrow\mathcal{Y}}R^{e}(\overline{w}\circ\Phi),\ \mathrm{for\ all}\ e\in\varepsilon_{tr} \tag{2}\]

As shown in Equation 2, where \(\varepsilon_{tr}\) represents all training environments, \(\mathcal{X}\), \(\mathcal{H}\), and \(\mathcal{Y}\) represent inputs, feature representations, and prediction results, respectively, \(\Phi\) and \(w\) are the feature learner and the classifier, respectively, and \(R^{e}\) denotes the risk under the environment \(e\), the goal of IRM is to find a common solution that can perform stably in all environments, thus improving the model's generalization ability.
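Since the bilevel problem in Equation 2 is intractable in practice, Arjovsky et al. optimize the well-known IRMv1 surrogate, which penalizes the gradient of each environment's risk with respect to a fixed dummy classifier scale. The following is a minimal PyTorch sketch of that surrogate (shown for context only; as discussed in Section III-C, Cognisance replaces this loss with MCL):

```python
import torch
import torch.nn.functional as F

def irmv1_penalty(logits: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Squared gradient norm of the environment risk w.r.t. a dummy scale w = 1.0."""
    scale = torch.ones(1, requires_grad=True, device=logits.device)
    loss = F.cross_entropy(logits * scale, y)
    grad = torch.autograd.grad(loss, [scale], create_graph=True)[0]
    return (grad ** 2).sum()

# Overall objective over environments e with trade-off lam:
#   total = sum_e( F.cross_entropy(model(x_e), y_e)
#                  + lam * irmv1_penalty(model(x_e), y_e) )
```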
### _Optimal Leading Forest_

The CLF proposed in this paper starts from OLeaF [37], so we provide a brief introduction to its ideas and algorithms here. The concept of the optimal leading forest originates from a clustering method based on density peaks [38], and the two most critical factors in the construction of OLeaF are the density of the data points and the distance of the data points to their nearest neighbors with higher density. Let \(I=\{1,2,...,N\}\) be the index set of dataset \(\mathcal{X}\), let \(d_{i,j}\) represent the distance between data points \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\) (any distance metric can be used), and let \(\rho_{i}=\sum_{j\in I\setminus\{i\}}exp(-(d_{i,j}/d_{c})^{2})\) be the density of data point \(\mathbf{x}_{i}\), where \(d_{c}\) is the cut-off distance. If there exists \(\xi_{i}=argmin_{j}\left\{d_{i,j}|\rho_{j}>\rho_{i}\right\}\), then \(\mathbf{x}_{\xi_{i}}\) is said to be the leading node of \(\mathbf{x}_{i}\). Based on this, a partial order relation can be established, i.e., if there exists \(\xi_{i}=\eta(\mathbf{x}_{i})\), then \(\mathbf{x}_{i}\prec\mathbf{x}_{\xi_{i}}\); connecting every \(\mathbf{x}_{i}\) and \(\mathbf{x}_{\xi_{i}}\) based on this partial order relation establishes a tree structure known as the Leading Tree (for \(r=argmax_{1\leq i\leq N}\left\{\rho_{i}\right\}\), \(\mathbf{x}_{r}\) is the root node). Then let \(\delta_{i}=d_{\xi_{i},i}\) and \(\gamma_{i}=\rho_{i}\times\delta_{i}\); the larger \(\gamma_{i}\) is, the higher the potential of the data point \(\mathbf{x}_{i}\) to be selected as a cluster center. Intuitively, if an object \(\mathbf{x}_{i}\) has a large \(\rho_{i}\), it has many close neighbors, and if \(\delta_{i}\) is also large, it is far away from any data point with a larger \(\rho_{i}\) value, so \(\mathbf{x}_{i}\) has a good chance of becoming the center of a cluster, and root nodes can be selected based on the ordering of \(\gamma_{i}\). The Leading Tree is then partitioned at these selected roots, and the resulting collection of Leading Trees is called an Optimal Leading Forest (OLeaF).

## III The proposed Cognisance framework

In the previous introduction of GLT [8], attribute bias and class bias were introduced; traditional LT methods tend to focus only on the latter, and the resampling, reweighting, etc. designed on that basis tend to focus only on the distribution of classes. As a matter of fact, as in the example given earlier, attribute bias not only impairs the model's performance on tail-attribute samples within a class; it is also the spurious correlation between certain classes and some head attributes carrying pseudo-features that leads to the imbalance of the model's performance among classes. Therefore, in order to improve the generalization ability of the model on data with different category distributions and different attribute distributions, this paper proposes a framework named Cognisance based on invariant feature learning, which first uses the Coarse-Grained Leading Forest to construct different environments with different data distributions; then, under the Multi-Center Loss, the model is allowed to learn invariant features in each data domain instead of overfitting a certain distribution, thereby solving the multi-domain adaptation problem.

### _Coarse-Grained Leading Forest_

In the concept of IRM [17], the construction of different environments is a prerequisite for training, and the challenge in this paper is to construct controllable environments with different attribute distributions.
Environments with different category distributions are easy to construct because the labels of the categories are explicit in the dataset, while the attributes are implicit in most datasets, and even if the category imbalance is completely eliminated, the attribute imbalance still exists. At the same time, because attributes can be continuously superimposed and combined, the boundaries of attributes are also complex; we therefore design a sampling method based on unsupervised learning, which can portray the distribution of attributes within the same category and can control the granularity of this portrayal through hyperparameter settings. Our motivation is based on a reasonable assumption that the differences between samples within the same category are the result of the gradual separation and evolution of attributes. This is somewhat similar to a biological evolutionary tree, where the gap from the root node to the leaf nodes does not happen all at once, but is the result of constant evolution and branching; moreover, the evolution is not only reflected at a coarse-grained level, and within the same category the transitions are more subtle. As shown in Fig. 3, even within the "human" category, the samples of different ages are clustered and transition gradually. Of course, in a manually collected dataset, not all changes within a class may be gradual, and if the differences among more granular groups within a class are too large, then the class cannot be portrayed by a single tree. In other words, when there is a common ancestor node, the samples within a class can be viewed as a whole with the characteristics of a streaming transformation, but when the ancestor node is removed, the gap between two sub-branches may be too large; in that case the class should actually be portrayed with two or more trees instead of being forcibly connected together. Based on the above analysis, we designed a new clustering algorithm, the Coarse-Grained Leading Forest, following the construction of OLeaF [37]; its construction process is shown in Algorithm 1. Firstly, the distance matrix between the sample points in the dataset \(X\) is computed using an arbitrary distance metric, and then the density of each sample point is computed, where the density of sample point \(i\) is given by Equation 3:

\[\rho_{i}=\sum_{j\in I\setminus\{i\}\setminus O_{i}}\exp\Big(-(d_{i,j}/d_{max})^{2}\Big), \tag{3}\]

where \(I=\{1,2,...,N\}\) is the index set of dataset \(X\), \(O_{i}\) is the set of nodes whose distance from node \(i\) exceeds \(d_{max}\), \(d_{i,j}\) is the distance between node \(i\) and node \(j\), and \(d_{max}\) is the cut-off distance. Next, sort \(I\) in descending order according to the density value, denoted as \(S\), i.e., \(S_{i}\) is the index of the data point with the \(i\)-th largest density value.
The next step is to perform small-scale clustering on the points in \(S\) sequentially, which relies on the hyperparameter \(d_{min}\): if the data point \(S_{i}\) has not yet been merged into a coarse-grained node, then the data point \(S_{i}\) and the points within distance \(d_{min}\) of it form a coarse-grained node, i.e.:

\[C_{mem}=\{S_{i}\}\cup K\setminus A,\quad S_{i}\notin A, \tag{4}\]

where \(C_{mem}\) is the member set of the newly generated coarse-grained node, \(S_{i}\) acts as the agent node of that coarse-grained node, \(K\) is the set of nodes within distance \(d_{min}\) of \(S_{i}\), i.e., \(K=\{j\mid j\in I,\ d_{S_{i},j}<d_{min}\}\), and \(A\) is the set of visited nodes, i.e., the set of nodes that have already been merged into some coarse-grained node. Note that if \(S_{i}\) itself is already in \(A\), the creation of that new coarse-grained node is skipped and the next node in \(S\) is processed.

```
Input: All training samples X of a given category.
Output: A Coarse-Grained Leading Forest clf for the given category.
Parameters: d_min, the radius of a coarse-grained node; d_max, the cut-off distance.

dist = calculate_distance(X);
density = calculate_density(dist, d_max);
density_argsort = argsort(density, descend=True);
// Record the index of samples that have been visited
accessed = initial_vector(length(density), False);
for i in range(length(density)) do
    node_idx = density_argsort[i];
    if accessed[node_idx] then
        continue;
    end if
    // Combine points that are within d_min and have not been
    // visited into one coarse-grained node.
    c_members = where(accessed == False and dist[node_idx] <= d_min);
    accessed[c_members] = True;
    coarse_node = CoarseNode(id=Autoincrement, agent=node_idx, members=c_members);
    // A node becomes a root node if no denser leading node
    // can be found within d_max.
    leader_node = find_leader(node_idx, d_max);
    if leader_node is not null then
        coarse_node.leader = leader_node;
    else
        clf.root.append(coarse_node);
    end if
end for
return clf;
```

**Algorithm 1** Construction of CLF

The next step is to find the leading node of the newly constructed coarse-grained node. Here, the problem is transformed into finding the leading node of the agent node \(S_{i}\) of the coarse-grained node, denoted as \(l_{i}\); the coarse-grained node where \(l_{i}\) is located is then used as the leading node of the current coarse-grained node.
The process of finding the leading node \(l_{i}\) of \(S_{i}\) follows Equation 5, where \(O_{S_{i}}\) is the set of nodes whose distance from \(S_{i}\) exceeds \(d_{max}\):

\[l_{S_{i}}=argmin_{j}\left\{d_{S_{i},j}|\rho_{j}>\rho_{S_{i}}\right\},\ j\in I\setminus\{S_{i}\}\setminus O_{S_{i}}, \tag{5}\]

Note that \(l_{S_{i}}\) may not exist; when \(l_{S_{i}}\) is not found, the coarse-grained node where \(S_{i}\) is located automatically becomes a root in the whole coarse-grained leading forest. Also, since the density of the nodes in \(S\) is decreasing, when \(l_{S_{i}}\) is found, it must already have been processed and merged into a coarse-grained node, since \(l_{S_{i}}\) is denser.

Fig. 3: Evolution of intra-class samples; the samples within the class show delicate transitions.
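To make the construction above concrete, the following is a compact NumPy sketch of Algorithm 1 under the stated definitions. It is only an illustration, not the authors' released implementation; Euclidean distance and dense matrices are assumed for brevity:

```python
import numpy as np

def build_clf(X: np.ndarray, d_min: float, d_max: float):
    """Greedy CLF construction over one class. Returns (nodes, leader):
    nodes[k] holds the member indices of coarse node k and leader[k]
    is the index of its leading coarse node (-1 for roots)."""
    n = len(X)
    dist = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    # Equation 3: density, ignoring neighbours farther than d_max.
    mask = dist <= d_max
    rho = (np.exp(-(dist / d_max) ** 2) * mask).sum(1) - 1.0  # exclude self
    order = np.argsort(-rho)

    accessed = np.zeros(n, dtype=bool)
    nodes, agent, node_of = [], [], np.full(n, -1)
    for i in order:
        if accessed[i]:
            continue
        members = np.where(~accessed & (dist[i] <= d_min))[0]  # Equation 4
        accessed[members] = True
        node_of[members] = len(nodes)
        nodes.append(members)
        agent.append(i)

    leader = []
    for i in agent:
        # Equation 5: the nearest denser point within d_max leads this node.
        cand = np.where((rho > rho[i]) & (dist[i] <= d_max))[0]
        leader.append(int(node_of[cand[np.argmin(dist[i][cand])]])
                      if len(cand) else -1)
    return nodes, leader
```

Because points are processed in descending density, every candidate leading point has already been assigned to a coarse node by the time it is looked up, mirroring the observation made after Equation 5.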
### _Sampling based on CLF_

By constructing the CLF we can simulate the portrayal of attribute distributions within a category, and by adjusting the hyperparameters \(d_{min}\) and \(d_{max}\) we can control the fineness of the portrayal. Next, we can construct environments with different attribute distributions based on the CLF. As shown in Fig. 4, because each branch represents a new direction of attribute evolution, we categorize all the data points on each path from the root node to a leaf node as members of a certain attribute branch. When we obtain the samples of each attribute, we can refer to the idea of inter-class resampling [13] to resample over the attributes, as shown in Equation 6. When inter-class sampling is performed, \(j\) represents the class index, \(n_{j}\) represents the number of samples of class \(j\), and \(C\) represents the total number of classes, where \(q\in[0,1]\), and inter-class balanced sampling is performed when \(q=0\).

\[p_{j}=\frac{n_{j}^{q}}{\sum_{i=1}^{C}n_{i}^{q}} \tag{6}\]

As shown in Algorithm 2, when sampling different attributes within a category, the idea is the same as in inter-class sampling; one only needs to regard \(j\) as the attribute index, \(n_{j}\) as the number of samples of attribute \(j\), and \(C\) as the total number of attributes. In addition, it should be noted that in intra-class sampling, unlike inter-class sampling, the same sample may have more than one attribute (e.g., the root node appears in all branches of the same tree, so it has all the attributes that the tree can represent), so the weight of such a sample point needs to be penalized to some extent. At the same time, due to the concept of the coarse-grained node in our algorithm, as shown in Fig. 4, the variance of the members within a coarse-grained node is extremely small, i.e., the representativeness of each sample decreases and the value of the information it provides decreases, so the sampling weight of the members within a coarse-grained node should be appropriately reduced. In this paper, all members within a CoarseNode share the weight of that CoarseNode equally. Taking the trunk tree of the CLF in Fig. 4 as an example (ignoring isolated samples for demonstration purposes), we now describe attribute-balanced sampling in detail. Firstly, different attributes can be separated based on the CLF; as shown in Fig. 4, each path from the root node to a leaf node can be considered an attribute, which means that if you want to sample each attribute evenly, you only need to assign equal sampling weights to each attribute. For nodes that are repeated in multiple paths, we simply divide their sampling weight by the number of repetitions as a penalty. Take the weight of the root node as an example: since there are three attribute groups, each group receives \(weight=\frac{1}{3}\), and the node's share within the three attribute groups is \(\frac{1}{5}\), \(\frac{1}{4}\), and \(\frac{1}{5}\), so the root node's sampling weights globally are \(\frac{1}{15}\), \(\frac{1}{12}\), and \(\frac{1}{15}\); summing the three weights yields \(weight=\frac{13}{60}\), and finally the repetition penalty is applied, i.e., \(weight=\frac{13/60}{3}=\frac{13}{180}\). Furthermore, for a CoarseNode containing multiple samples, the penalty is similar, setting the sampling weight of each sample to \(weight=CoarseNode.weight/CoarseNode.length\). Taking the second coarse-grained node in Attribute 2 as an example, the sampling weight of this node is \(weight=(\frac{1}{3}\times\frac{1}{4}+\frac{1}{3}\times\frac{1}{5})/2=\frac{3}{40}\), and the sampling weight of each sample in this coarse-grained node is \(weight=\frac{3/40}{2}=\frac{3}{80}\).

Fig. 4: The left is an example of a CLF constructed for category "sand", while the right is an example of attribute splitting using the CLF. Each path from the root node to a leaf node can be considered an attribute, and the samples within the coarse-grained nodes are extremely similar, requiring an appropriate reduction in sampling weights. In addition, the samples within the red and pink boxes demonstrate the potential of this method for noise recognition.
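The weighting scheme just described can be written down in a few lines. The sketch below assumes hypothetical inputs (a list of root-to-leaf paths over coarse-node ids and the member count of each coarse node) and reproduces the worked example above, including the \(\frac{13}{180}\) and \(\frac{3}{80}\) values:

```python
def clf_sample_weights(paths, node_sizes, q_attr: float = 0.0):
    """Attribute-balanced sampling weights on one CLF tree.
    paths      : list of root-to-leaf paths, each a list of coarse-node ids
    node_sizes : dict mapping coarse-node id -> number of member samples
    Returns the per-sample weight for each coarse node (Equation 6 over
    attributes with q = q_attr; q = 0 gives attribute-balanced sampling)."""
    sizes = [sum(node_sizes[v] for v in p) for p in paths]  # samples per attribute
    z = sum(s ** q_attr for s in sizes)
    attr_w = [s ** q_attr / z for s in sizes]

    node_w, seen_in = {}, {}
    for w, path in zip(attr_w, paths):
        for v in path:
            node_w[v] = node_w.get(v, 0.0) + w / len(path)  # equal share along path
            seen_in[v] = seen_in.get(v, 0) + 1
    # Repetition penalty, then share the node's weight among its members.
    return {v: node_w[v] / seen_in[v] / node_sizes[v] for v in node_w}
```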
### _Multi-Center Loss_

In the previous step, multiple training environments for invariant feature learning were constructed, and the next step is to train using the objective function of IRM [17]. However, since the original IRM loss has convergence problems on realistic datasets, we designed a new objective, the Multi-Center Loss, based on the idea of IRM and the center loss in IFL [8], which can be formulated as the following optimization problem:

\[\begin{split}&\min_{\theta,\;\omega}\sum_{e\in\varepsilon}\sum_{i\in e}L_{cls}(f(x_{i}^{e};\;\theta),\;y_{i}^{e};\;\omega)\\ &\mathrm{s.t.}\quad\theta\in\operatorname*{arg\,min}_{\Theta}\sum_{e\in\varepsilon}\sum_{i\in e}\left\|f(x_{i}^{e};\;\Theta)-C(x_{i})\right\|_{2},\end{split} \tag{7}\]

where \(\Theta\) and \(\omega\) are the learnable parameters of the backbone and classifier, respectively, \(x_{i}^{e}\) and \(y_{i}^{e}\) are the \(i\)-th instance in environment \(e\) and its label, respectively, \(\varepsilon\) is the set of all training environments, \(f(x_{i}^{e};\;\theta)\) is the feature extracted by the backbone from \(x_{i}^{e}\), \(L_{cls}(f(x_{i}^{e};\;\theta),\;y_{i}^{e};\;\omega)\) is the classification loss under environment \(e\) (an arbitrary loss function), and \(C(x_{i})\) is the center to which \(x_{i}^{e}\) belongs across all environments \(\varepsilon\). Note that the number of centers in each category satisfies \(n_{c_{y_{i}}}\geq 1\), and the center to which \(x_{i}^{e}\) belongs depends on which tree of that category's CLF \(x_{i}^{e}\) is located in, i.e., \(n_{c_{y_{i}}}=n_{t_{y_{i}}}\), where \(n_{t_{y_{i}}}\) is the number of trees in the CLF constructed from all samples of that category, and the initial value of \(C(x_{i})\) is the value of the _agent_ of the root node of the tree where \(x_{i}^{e}\) is located. The practical version of this optimization problem is shown in Equation 8, where \(L_{IFL}=\left\|f(x_{i}^{e};\;\theta)-C(x_{i})\right\|_{2}\) is the constraint loss for invariant feature learning and \(\alpha\) is the trade-off parameter:

\[\begin{split}\min_{\theta,\;\omega}\sum_{e\in\varepsilon}\sum_{i\in e}L_{mc}&=\min_{\theta,\;\omega}\sum_{e\in\varepsilon}\sum_{i\in e}L_{cls}+\alpha\cdot L_{IFL}\\ &=\min_{\theta,\;\omega}\sum_{e\in\varepsilon}\sum_{i\in e}L_{cls}(f(x_{i}^{e};\;\theta),\;y_{i}^{e};\;\omega)\\ &\quad+\alpha\cdot\left\|f(x_{i}^{e};\;\theta)-C(x_{i})\right\|_{2},\end{split} \tag{8}\]

This loss is the IRM version of the center loss; on the other hand, relative to the original center loss it increases robustness. As in the previous introduction of the CLF, in some artificial datasets, even within the same category there may be situations where the gap between samples is extremely large, i.e., the number of trees in the CLF is greater than 1. As shown in Fig. 5, the category "garage" can actually be divided into three subcategories, "Outside the garage", "Inside the garage with car", and "Inside the garage without car", and the features of these three subcategories vary greatly. If only one center were used, making each category's features gradually approach a single center during training would actually damage the learning of features, and this is the starting point for using multiple centers.

Fig. 5: Multiple trees in the CLF of category "garage" (only part of it is shown); there are multiple subclasses in the category "garage", e.g., outside the garage, inside the garage (parked cars), and inside the garage (no parked cars), and there is a huge disparity among these subclasses.
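A minimal PyTorch sketch of Equation 8 for one environment's mini-batch is given below. Batch-mean reduction and the center-lookup interface are assumptions made for illustration; the centers themselves are maintained from the CLF tree roots as described above:

```python
import torch
import torch.nn.functional as F

def multi_center_loss(feats, logits, y, centers, center_id, alpha: float = 0.01):
    """Equation 8 on one environment's mini-batch.
    feats     : (B, D) backbone features f(x; theta)
    logits    : (B, K) classifier outputs
    y         : (B,)  class labels
    centers   : (M, D) every center of every CLF tree, across all classes
    center_id : (B,)  index into `centers` of the tree each sample belongs to
    """
    l_cls = F.cross_entropy(logits, y)
    l_ifl = (feats - centers[center_id]).norm(dim=1).mean()  # pull to own center
    return l_cls + alpha * l_ifl
```

Summing this loss over all constructed environments (e.g., the i.i.d. environment and the class/attribute-balanced one) gives the overall training objective.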
### _Overall Framework_

The overall framework of this scheme is shown in Fig. 6. Each environment has a pair \((q_{cls},q_{attr})\), with the former being the balance factor for inter-class sampling and the latter being the balance factor for intra-class sampling. Different environments can be constructed through different \((q_{cls},q_{attr})\) pairs, and model parameters can be shared for training under the constraint of MCL. The portrayal of the attribute distribution during sampling is achieved through the CLF, while the number of centers in MCL is determined by the number of trees in the CLF of the corresponding category.

Fig. 6: Overall framework diagram, where different environments have different sampling strategies; \(q_{cls}\) and \(q_{attr}\) are balancing factors for inter-class sampling and intra-class sampling, respectively.

The algorithmic process of the overall framework is shown in Algorithm 3, which is divided into two stages. Firstly, since the initial features of the samples need to be used for clustering when constructing the CLF, \(M\) rounds of normal sampling training are required to obtain an initial model with imperfect predictions. Next, the initial features are used to construct the CLF, and different environments are constructed through the CLF and different balance factor pairs. For example, in the experimental phase, this article sets up two environments with balance factor pairs \((q_{cls}^{e_{i}}=1,q_{attr}^{e_{i}}=1)\) and \((q_{cls}^{e_{i}}=0,q_{attr}^{e_{i}}=0)\), where the former is a normal i.i.d. sampling environment and the latter is a balanced sampling environment for both categories and attributes. Then, the feature learner is continuously updated to further update the centers in the CLF and MCL. Of course, the number of epoch steps between updates in the second stage can be adjusted, rather than being fixed to one update per epoch.

```
Input: the original training set {(x, y)}; balance parameter pairs {(q_cls^e, q_attr^e)} for different environments.
Output: backbone f(.; theta), classifier g(.; w)

Initialize: backbone f(.; theta), classifier g(.; w);
for M warm-up epochs do
    // optimize the model with the cross-entropy classification loss
    theta, w <- arg min_{theta, w} L_cls(g(f(x; theta); w), y);
end for
{F_y} = ClfConstruct({(x, y)}, theta, w);
{e_n} = EnvConstruct({(q_cls^e, q_attr^e)}, {F_y});
for N epochs do
    {(x^e, y^e)} = DataLoader({e_n});
    {C_y} = ReadCenters({F_y});
    theta, w <- arg min_{theta, w} L_cls(g(f(x^e; theta); w), y^e)
                + alpha * L_mc(f(x^e; theta), {C_y});
    {F_y} = ClfConstruct({(x, y)}, theta, w);
    {e_n} = EnvConstruct({(q_cls^e, q_attr^e)}, {F_y});
end for
```

**Algorithm 3** The overall procedure of the proposed Cognisance algorithm

## IV Experiment

### _Evaluation Protocols_

Before conducting the experiment, it is necessary to first introduce two new evaluation protocols, the CLT Protocol and the GLT Protocol, both of which were proposed in the first baseline of GLT [8]:

* **Class-wise Long Tail (CLT) Protocol**: The samples in the training set follow a long-tailed distribution, which means that they are normally sampled from the LT dataset, while the samples in the test set are balanced by class. Note that the issue of attribute distribution is not considered in CLT, as the training and testing sets of CLT have the same attribute distribution and different class distributions, so the effectiveness of class-wise long-tailed classification can be evaluated.
* **Generalized Long Tail (GLT) Protocol**: Compared with the former, the difference in attribute distribution is taken into account, that is, the attribute bias in Equation 1 is introduced. The training set in GLT is the same as that in CLT and conforms to the LT distribution, while the attribute distribution in the test set tends to be balanced. Since the train set and test set in GLT have different attribute distributions and different class distributions, it is possible to evaluate the model's ability to handle both inter-class long-tailed classification and intra-class long-tailed classification.

### _Datasets and Metrics_

In total, we evaluated and compared the LT methods on two benchmarks, MSCOCO-GLT and ImageNet-GLT, which were proposed in the first baseline of GLT [8]. **ImageNet-GLT** is a long-tailed subset of ImageNet [39], where the train set contains 113k samples of 1k classes, with the number of samples per class ranging from 570 to 4. The test set in both the CLT and GLT protocols contains 60k samples and is divided into three subsets according to the following class frequencies: #sample \(>\) 100 for \(Many_{C}\), 100 \(\geq\) #sample \(\geq\) 20 for \(Medium_{C}\), and #sample \(<\) 20 for \(Few_{C}\). Note that in constructing the attribute-balanced test set for this dataset, the images in each category were simply clustered into 6 groups using KMeans, and then 10 images were sampled for each group in each category.
**MSCOCO-GLT** is a long-tailed subset of MSCOCO-Attribute [40], a dataset explicitly labeled with 196 different attributes, where each object with multiple labels is cropped as a separate image. The train set contains 144k samples from 29 classes, with the number of samples per class ranging from 61k to 0.3k; the test set contains 5.8k samples in both the CLT and GLT protocols and is divided into three subsets according to the following category frequencies: \(Index_{C}\leq 10\) for \(Many_{C}\), \(10<Index_{C}\leq 22\) for \(Medium_{C}\), and \(Index_{C}>22\) for \(Few_{C}\), where \(Index_{C}\) is the index of the category in ascending order of size. **Evaluation Metrics**. In our experiments we use two metrics to evaluate the performance of the methods: 1) Accuracy: \(\frac{\#CorrectPredictions}{\#AllSamples}\), which is the Top-1 Accuracy that has been used in traditional long-tailed methods; 2) Precision: \(\frac{1}{\#class}\cdot\sum_{class}\frac{\#CorrectPredictions}{\#SamplesPredictedAsThisClass}\); this metric is introduced to better reveal the precision-accuracy trade-off problem [8] that has not received attention in traditional inter-class long-tailed methods.

### _Comparisons with LT Line-up_

This scheme handles both the inter-class long-tailed problem and the intra-class long-tailed problem by eliminating the false correlations caused by attributes, and it can be seamlessly combined with other LT methods. In the following comparison experiments we follow the classification of current long-tailed research in [1] and [8], classify current long-tailed methods into three categories, 1) Class Re-balancing, 2) Information Augmentation, and 3) Module Improvement, and take two effective methods from each of these three categories for comparison and enhancement. We chose two methods, **BLSoftmax** [23] and **Logit-Adj** [6], in the Class Re-balancing category. For Information Augmentation we chose two methods, **Mixup** [34] and **RandAug** [32], and for the Module Improvement category we chose two methods, **RIDE** [35] and **TADE** [36], which adopt the idea of ensemble learning and are both current SOTA methods. In addition, we have included the first strong baseline GLTv1 [8] in the GLT domain as a comparison. As shown in Table I and Table II, where the methods with an asterisk are the ones that add our component and the bolded numbers are the optimal results within each method category, our proposed method obtains the best results in all the classifications; in particular, when combined with the method RandAug or the method RIDE, it achieves the best results in almost all evaluation metrics on both datasets. In addition, this method aims to deal with both the intra-class and the inter-class long tail: although the starting point of this method is to solve the problem of the attribute long tail within a class, it also meets the challenge of the inter-class long tail by eliminating the false correlations caused by long-tailed attributes [8]. From Fig. 7, we can see more clearly the improvements that Cognisance achieves over the existing LT approaches on the two protocols of the two benchmarks; the performance of all the methods degrades from CLT to GLT, which also shows that the long-tailed problem is not purely inter-class, while the intra-class long tail is much more challenging, but Cognisance can still successfully improve all existing popular LT methods.

Fig. 7: Cognisance enhancements to existing LT methods.
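For reference, the two evaluation metrics defined in Section IV-B can be computed as in the following sketch; the handling of classes that are never predicted (they are skipped in the mean) is an assumption made here for numerical safety:

```python
import numpy as np

def accuracy_and_mean_precision(y_true, y_pred, num_classes):
    """Top-1 accuracy and class-mean precision as defined in Section IV-B."""
    acc = float((y_true == y_pred).mean())
    per_class = []
    for c in range(num_classes):
        predicted_as_c = y_pred == c
        if predicted_as_c.any():  # classes never predicted are skipped
            per_class.append(float((y_true[predicted_as_c] == c).mean()))
    return acc, float(np.mean(per_class))
```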
Finally, Table III records the experimental results of the various methods on a test set whose long-tailed distribution is consistent with that of the training set, and we can see that, compared with the other methods, our methods still achieve the best results on all evaluation metrics.

## V Discussion and Further Analyses

### _Why not directly apply the OLeaF?_

In this scheme, CLF is more appropriate than OLeaF because of two very important characteristics. 1) Automatic tree-splitting mechanism: the number of parameters of CLF is relatively small, and the fineness of tree-splitting can be controlled simply by adjusting the parameter \(d_{max}\). The tree-splitting scheme in OLeaF is more rigorous and detailed but requires a certain degree of manual analysis, whereas in this framework the feature learner needs to be iterated, clustering may be re-triggered after each iteration, and the targeted datasets are often large image datasets, so a fully automated scheme is required. 2) CoarseNode: each node of CLF is a coarse-grained node; this mechanism is designed mainly to deal with the long-tailed problem of attributes within a category. Since the head attribute may contain a large number of similar samples, if left uncontrolled the length of the path occupied by the head attribute may skyrocket; as each node in the same path is sampled with the same weight, a large number of extremely similar nodes lose their representativeness, and at the same time the sampling weights of bottom nodes are compromised. It is therefore necessary to perform small-range clustering for control; this clustering process is mainly controlled by the parameter \(d_{min}\), and all the members of the nodes within a CoarseNode equally share the node's sampling weight.

### _Why not directly apply IRM loss?_

In this scheme, there are two reasons why the IRM loss is not used directly: 1) the original IRM loss has convergence problems on real-world large-scale image datasets; 2) the core of the Multi-Center Loss lies in the multiple centers, a mechanism that makes the model's learning more robust, because the samples within the same category of an artificial dataset may differ tremendously, and in this case simply adding the regularization of a single center would instead impair the learning of the features.

### _About Parameter Adjustment_

In this scheme, there are two types of parameters that can be tuned: 1) overall process-related, e.g., the number of epochs for warm-up, the number of epoch steps between updates of the sampling weights of samples, and the number of environments; and 2) clustering-related, the two parameters \(d_{min}\) and \(d_{max}\) in CLF clustering, the former controlling the radius of the CoarseNode and the latter controlling the fineness of the tree splitting. The algorithm is preset with two relatively general default values, but they can still be adjusted according to the specific dataset. The above parameters have little impact on the overall effect within a certain range, but there is still room for further optimization.
### _About Distance Measure_

In this scheme, the distance metric is used in two places: 1) CLF construction, where the distance matrix needs to be calculated when CLF clustering is carried out, and the distance metric here can be switched arbitrarily; 2) Multi-Center Loss, where the distance from each sample to the center it belongs to is calculated when optimizing the MCL; the distance metric here is consistent with that of the CLF construction, and it can also be switched freely. The Euclidean distance is used as the default distance metric in this paper; switching to other metrics may give better results. Due to the relatively large amount of work involved, only one metric is used here, but this does not affect the elaboration of the main idea.

### _About Eliminating Noise_

Any real-world dataset is imperfect, and noisy samples such as sensory glitches (e.g., low-quality or corrupted images) and human errors (e.g., mislabeling or ambiguous annotations) may impair model training. In this scheme, the proposed CLF actually has the potential to perform noise identification. As shown in Fig. 4, if a tree in the CLF contains only one node and that node contains only one sample, or the node is a leaf node of a very deep tree, then the sample corresponding to this node has a high probability of being a noise sample, which can be further inspected to determine whether it is noise or not; this idea can be explored further in future work.

## VI Conclusion

In this study, we provide insights into the long-tailed problem at two levels of granularity, inter-class and intra-class, and propose two important components: the CLF (Coarse-Grained Leading Forest) and the MCL (Multi-Center Loss). The CLF, as an unsupervised learning method, aims to capture the distribution of attributes within a class in order to lead the construction of multiple environments, thus supporting invariant feature learning. Meanwhile, MCL, as an evolved version of the center loss, aims to replace the traditional IRM loss and further enhance the robustness of the model on real-world datasets. Through extensive experiments on the existing benchmarks MSCOCO-GLT and ImageNet-GLT, we exhaustively demonstrate the significant results of our method. Finally, we would also like to point out the advantage of the two components, CLF and MCL, of being designed as low-coupling plug-ins that can be organically integrated with other long-tailed classification methods, bringing new possibilities for model performance improvement.

## Acknowledgment

This work has been supported by the National Natural Science Foundation of China under grants 61966005, 61936001 and 62366008, the National Key Research and Development Program of China under grant 2020YFB1713300, the Natural Science Foundation of Chongqing (cstc2019jcyjcxttX0002, cstc2021ycjh-bgzxm0013), and the Key Collaboration Project of Chongqing Municipal Education Commission (HZ2021008).
2302.07324
READIN: A Chinese Multi-Task Benchmark with Realistic and Diverse Input Noises
For many real-world applications, the user-generated inputs usually contain various noises due to speech recognition errors caused by linguistic variations or typographical errors (typos). Thus, it is crucial to test model performance on data with realistic input noises to ensure robustness and fairness. However, little study has been done to construct such benchmarks for Chinese, where various language-specific input noises happen in the real world. In order to fill this important gap, we construct READIN: a Chinese multi-task benchmark with REalistic And Diverse Input Noises. READIN contains four diverse tasks and requests annotators to re-enter the original test data with two commonly used Chinese input methods: Pinyin input and speech input. We designed our annotation pipeline to maximize diversity, for example by instructing the annotators to use diverse input method editors (IMEs) for keyboard noises and recruiting speakers from diverse dialectical groups for speech noises. We experiment with a series of strong pretrained language models as well as robust training methods, and we find that these models often suffer significant performance drops on READIN even with robustness methods like data augmentation. As the first large-scale attempt in creating a benchmark with noises geared towards user-generated inputs, we believe that READIN serves as an important complement to existing Chinese NLP benchmarks. The source code and dataset can be obtained from https://github.com/thunlp/READIN.
Chenglei Si, Zhengyan Zhang, Yingfa Chen, Xiaozhi Wang, Zhiyuan Liu, Maosong Sun
2023-02-14T20:14:39Z
http://arxiv.org/abs/2302.07324v2
# Readin: A Chinese Multi-Task Benchmark with Realistic and Diverse Input Noises

###### Abstract

For many real-world applications, the user-generated inputs usually contain various noises due to speech recognition errors caused by linguistic variations1 or typographical errors (typos). Thus, it is crucial to test model performance on data with realistic input noises to ensure robustness and fairness. However, little study has been done to construct such benchmarks for Chinese, where various language-specific input noises happen in the real world. In order to fill this important gap, we construct **Readin**: a Chinese multi-task benchmark with **RE**alistic **A**nd **D**iverse **I**nput **N**oises. **Readin** contains four diverse tasks and requests annotators to re-enter the original test data with two commonly used Chinese input methods: Pinyin input and speech input. We designed our annotation pipeline to maximize diversity, for example by instructing the annotators to use diverse input method editors (IMEs) for keyboard noises and recruiting speakers from diverse dialectical groups for speech noises. We experiment with a series of strong pretrained language models as well as robust training methods, and we find that these models often suffer significant performance drops on **Readin** even with robustness methods like data augmentation. As the first large-scale attempt in creating a benchmark with noises geared towards user-generated inputs, we believe that **Readin** serves as an important complement to existing Chinese NLP benchmarks. The source code and dataset can be obtained from [https://github.com/thunlp/READIN](https://github.com/thunlp/READIN).

Footnote 1: Note that linguistic variations themselves are not noises or errors, but they can lead to noises in the data processing, for example due to failure of speech recognition.

## 1 Introduction

User-generated inputs in real-world applications often contain noises where wrong characters or words are used instead of the intended ones (Xu et al., 2021). This is especially true when users type fast or are using speech input in noisy environments or with less common accents that cause errors in post-processing systems. However, most benchmarks used in academic research do not explicitly try to capture such real-world input noises (Naplava et al., 2021), leaving doubt as to whether models performing well on standard clean test sets can transfer well onto real-world user-generated data. To evaluate the performance on noisy data for languages like English, existing work typically generates typos via character-level perturbation such as randomly sampled or adversarial character swaps or deletions (Belinkov and Bisk, 2018; Pruthi et al., 2019; Jones et al., 2020; Ma et al., 2020), or automatic back-translation and speech conversion (Peskov et al., 2019; Ravichander et al., 2021). However, there are many factors not considered in the automatic approaches, for example, the keyboard design of users' devices and speakers' phonetic and phonological variations. These overlooked factors have a large impact on the types of noises possible in keyboard and speech inputs. One notable exception to the above is NoiseQA (Ravichander et al., 2021). Apart from automatic approaches, they also collected test sets with noises produced by annotators. Their dataset only considered the question answering task and is only in English. In this paper, we focus on Chinese instead and present a multi-task benchmark with **RE**alistic **A**nd **D**iverse **I**nput **N**oises, named **Readin**.
Compared to the case of English, Chinese input noises have very different patterns due to the very different nature of the two languages. Chinese is a pictographic language without the morphological inflections that are common in Indo-European languages. Also, the tone system is a unique and integral part of Chinese phonology but not of English. Such differences cause different types of input noises in both keyboard typing and speech input. To comprehensively study the effect of real-world noises, we cover four diverse tasks: paraphrase identification, machine reading comprehension, semantic parsing (text2SQL) and machine translation, all of which represent important real-life applications. We consider noises occurring in two widely used Chinese input methods, keyboard input and speech input, and provide an example in Table 1. For keyboard input, Chinese users need to use an input method editor (IME) to convert the raw transliteration2 sequences into Chinese characters. In such cases, noises can either occur in the transliteration input, or occur when users are choosing the intended word from the candidate list suggested by the IME. It is different from the case of English where typos and spelling variations are expected to happen on the character level. The noise patterns are further coupled with the typing habits of individual users; for example, whether they type the full Pinyin transliteration or just the abbreviations results in different noise patterns. In order to capture these nuances, we recruit annotators with different typing habits and instruct them to use different IMEs for typing. Footnote 2: There are also IMEs that convert radical sequences into characters. We focus on transliteration-based IMEs in this paper (in particular the Pinyin input method) since they are more commonly used among Chinese users (Fong and Minett, 2012). For speech input, noises could arise when the speakers' accents or background noises lead to failures of the post-processing automatic speech recognition (ASR) systems. To capture these, we recruit 10 speakers from different regions of China to cover diverse accents and use a commonly used Chinese commercial ASR system for post-processing. For instance, in Table 1, the speech noise occurs because the speaker has different tones in their accent, leading the ASR system to produce different characters than the original ones. Ensuring that models are robust across these accent variations has important implications for fairness. We take many additional measures in the annotation process in order to capture the real-world input noise distribution, as detailed in Section 2. In Section 3, we provide more statistics and analysis of the collected data. In Section 4, we train strong baseline models on the clean training data and test the models on our Readin test sets. The results indicate that these models suffer significant performance drops on the real-world input noises, leaving ample room for future improvement. ## 2 Annotation Process Our annotation asks crowdworkers to re-enter clean test data from existing NLP datasets. Our goal is to induce realistic and diverse input noises in the annotation. We collect data using two different types of input methods: keyboard (Pinyin) input and speech input, both of which are commonly used among Chinese users (Fong and Minett, 2012). All examples are annotated with both input methods and we keep two separate tracks for data collected with these two different input methods.
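To make the second keyboard noise source concrete, here is a toy simulation of candidate-selection errors: the Pinyin is typed correctly, but the user occasionally picks the wrong word from the IME's candidate list (cf. the "shi shi" example in Figure 1). The candidate list below is hypothetical; a real IME ranks thousands of candidates:

```python
import random

# Hypothetical candidate list for one Pinyin input; real IMEs rank many
# more candidates and order them differently across products.
IME_CANDIDATES = {
    "shishi": ["事实", "试试", "实时", "适时"],  # all pronounced "shi shi"
}

def type_with_selection_noise(pinyin: str, intended: str,
                              p_wrong: float, rng: random.Random) -> str:
    """Simulate a selection error: correct Pinyin, wrong candidate pick."""
    candidates = IME_CANDIDATES[pinyin]
    wrong = [c for c in candidates if c != intended]
    if wrong and rng.random() < p_wrong:
        return rng.choice(wrong)   # user selects a different homophone
    return intended

rng = random.Random(0)
print([type_with_selection_noise("shishi", "试试", 0.5, rng) for _ in range(4)])
```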
In the following subsections, we first introduce the four tasks and the original datasets that our annotations are based on, and then introduce the annotation process for keyboard input and speech input respectively. **Paraphrase Identification** requires the model to judge whether two input sentences have the same meaning. We use the AFQMC dataset and report accuracy as the metric. **Machine Reading Comprehension** requires the model to answer a question given a passage. We use the CMRC2018 dataset and report answer exact match as the metric. During annotation, we only annotate the questions and keep the passages clean. This simulates the realistic setting where users enter their queries potentially with typos. **Semantic Parsing** requires the model to convert natural language queries into logical forms. We use the CSPider dataset (Min et al., 2019), which is a dataset for the natural language to SQL query task and is the Chinese version of the Spider dataset (Yu et al., 2018). We use exact match as the metric. During annotation, we annotate the natural language questions to induce typos and use the original SQL queries as the gold reference. **Machine Translation** requires the model to translate the input in the source language into the target language. We use the news translation shared task from WMT2021 (Akhbardeh et al., 2021) as our original data source. Following the standard practice of the MT community, we use SacreBLEU (Post, 2018) to compute the BLEU score as the metric. During annotation, we only annotate the Chinese sentence and preserve the original English translation as the gold reference. ### Pinyin Input Annotation We present each annotator with a set of input data and ask them to re-type it with the Pinyin input method. We implement the following restrictions in the annotation.3 Footnote 3: We also record the typing interface during the annotations to facilitate future analysis. **Different IMEs** There are many commercial IMEs available for the Pinyin input method. To maximize diversity, every input sentence is annotated by three different annotators, where each annotator uses a different IME. We specified three commonly-used commercial Pinyin IMEs: Microsoft4, QQ5, and Sogou6. The main difference among these IMEs is that when users type the same Pinyin transliteration input, different IMEs suggest different candidate words in different orders, as illustrated in Figure 1. The use of different IMEs captures a wider range of possible typing noises. Footnote 4: [https://en.wikipedia.org/wiki/Microsoft_Pinyin_IME](https://en.wikipedia.org/wiki/Microsoft_Pinyin_IME) Footnote 5: [http://qq.pinyin.cn/](http://qq.pinyin.cn/) Footnote 6: [https://pinyin.sogou.com/mac/](https://pinyin.sogou.com/mac/) **Speed Limit** Through our pilot run, we find that some annotators like to double-check their typed sequences. This is against our intention to collect more diverse noises for stress testing models, and we prefer to simulate cases where users type at a much faster pace. Therefore, we set a speed limit of 40 characters per minute, which is the average rate of several runs of pilot annotation. We include a timer in the annotation pipeline, and annotations with significantly slower typing speed are requested for re-annotation at a faster pace. **Disallow Post-Editing** In pilot runs, we also find that some annotators like to correct their typos when they double-check their inputs, which again goes against our purpose. To complement the speed limit restriction, we also implement an additional constraint where post-editing is not allowed in the annotation pipeline. ### Speech Input Annotation For speech input, we present each annotator with a set of input data and ask them to read and record it. The recordings are then converted to text data with ASR. We implement the following measures to ensure the diversity of speech input noises.
**Setup** To represent realistic settings, all recordings are done with mobile devices (the annotators' phones) at a 16kHz sampling rate, which is high enough for ASR. We also instruct the annotators to record in environments with natural background noises, for example in their offices with some light background talking or street noises. **Diversity** There are large phonetic and phonological variations among different users, especially since there are many accents across Chinese speakers. To capture such variation, we recruited a total of 10 different annotators for this speech input task (4 males and 6 females). They are selected from a larger pool of annotators through our trial run to maximally diversify accents. They come from different parts of China with different dialectal groups (more annotator details are in the appendix). Their ages range from 32 to 64. We instruct the annotators to speak Mandarin while preserving their accents. Each input sentence is annotated by 3 different annotators from different dialectal groups to maximize diversity. Figure 1: A screenshot of two different Pinyin IMEs. Given the exact same Pinyin input (“_shi shi_”), different IMEs suggest different words in different orders for users to select from. We use three different IMEs in keyboard annotation for wider coverage. **ASR** The collected speech data are converted to text with a commercial automatic speech recognition (ASR) software, iFlytek7. We choose this commercial software because it is optimized for Mandarin and outperforms the other open-source toolkits that we explored in the pilot run in terms of character-level error rates. We also release the raw audio recordings so that future work can explore other alternative ASR choices as well. Footnote 7: [https://global.xfyun.cn/products/real-time-asr](https://global.xfyun.cn/products/real-time-asr) Throughout the paper, we report results separately for the keyboard and speech noisy test sets for more fine-grained comparisons. We introduce more details of the annotated test sets in the next section. ## 3 Dataset Overview In this section, we analyze the annotated noisy test sets, including data statistics, our proposed metrics for robustness evaluation, a manual quality assessment of the annotated data, as well as a qualitative analysis of the diverse types of input noises. ### Corpus Statistics The keyboard and speech noise data have the same sizes.8 We only perform noise annotation on the test data; the training and dev sets remain clean. This serves our purpose to stress test models' robustness. Since the original datasets did not publicly release their test sets, we use their original dev splits as our test sets, re-split the existing training data into our new train and dev splits, and only annotate the test splits. We present the statistics of our data splits in Table 2. Footnote 8: We performed some minimal filtering on the speech noise data to remove nonsensical outputs from ASR, which only involves about 50 examples in total and is omitted in the table. To gauge the amount of noise in our annotated test sets, we report the character-level error rates for each noisy test set. Since the noise data could involve various changes like character deletion, insertion, or substitution, we use Levenshtein distance to measure the level of noise.
Specifically, given a clean sentence \(s\) and its annotated noisy version \(t\), we define its error rate as: \[\mathrm{error}=\frac{\mathrm{levenshtein}(s,t)}{\mathrm{len}(s)}\] We measure the micro-average (averaged over all annotations) as well as the worst-average (considering only the highest-error-rate annotation for each example) error rate across all three annotations over all examples. These two measures are further explained in the next section. The error rates are presented in Table 3. We find that speech noises generally incur larger error rates except on CSpider, and in all cases, the error rates are well below 50%. ### Evaluation Metrics Apart from the individual metrics as introduced in section 2.1, we introduce two other benchmark-level metrics to account for the variations across the three different annotations per test example. Suppose for the \(i\)-th example, the performance of the model (by its task-specific metric) on the three typo annotations are \(p_{1}^{i},p_{2}^{i},p_{3}^{i}\) respectively. \begin{table} \begin{tabular}{l r r r} \hline \hline Dataset & Train & Dev & Test \\ \hline AFQMC & 18,000 & 2,000 & 4,317 \\ CMRC2018 & 8,871 & 1,271 & 3,219 \\ CSpider & 7,500 & 1,159 & 1,034 \\ WMT2021 & – & – & 1,948 \\ \hline \hline \end{tabular} \end{table} Table 2: Sizes of our four datasets. For CMRC2018, we report the number of questions (multiple questions can correspond to the same passage). For WMT2021, we directly use the mBART50 model trained for multilingual translation without any additional finetuning on English-Chinese data, so there are no additional train or dev data involved. \begin{table} \begin{tabular}{c c c c c} \hline \hline & \multicolumn{2}{c}{Keyboard} & \multicolumn{2}{c}{Speech} \\ & Average & Worst & Average & Worst \\ \hline AFQMC & 18.8 & 27.5 & 30.9 & 44.1 \\ CMRC2018 & 17.4 & 26.9 & 25.1 & 38.1 \\ CSpider & 17.4 & 25.7 & 13.3 & 21.8 \\ WMT2021 & 17.7 & 25.1 & 21.6 & 30.8 \\ \hline \hline \end{tabular} \end{table} Table 3: Micro-average and worst-average error rates on our annotated test sets. Micro-average (‘Average’) is the mean of the average error rate among all three annotations for all examples. Worst-average (‘Worst’) takes the mean of the maximum error rate among all three annotations for all examples. We define the following two measures: **Micro-Average** takes the average of all performance across the three annotations, and then averages across all examples, \[MA=\frac{1}{N}\sum_{i=1}^{N}\Big(\frac{1}{3}\sum_{j=1}^{3}p_{j}^{i}\Big)=\frac{1}{3}\Big(\frac{1}{N}\sum_{i=1}^{N}p_{1}^{i}+\frac{1}{N}\sum_{i=1}^{N}p_{2}^{i}+\frac{1}{N}\sum_{i=1}^{N}p_{3}^{i}\Big).\] In other words, this is equivalent to taking the average of the per-annotator performance. **Worst-Average** takes the minimum of the performance among all three annotations per example, and then averages across all examples, \[WA=\frac{1}{N}\sum_{i=1}^{N}\min(p_{1}^{i},p_{2}^{i},p_{3}^{i}).\] This is a more challenging setting where we examine the worst-case performance across the annotation variations for each example. ### Data Quality Analysis In order to analyze the quality of our annotated data, we design a human evaluation experiment. We compare our noisy test sets with the automatically constructed input noise test sets as in Si et al. (2023). Specifically, they replace characters in the original sentences with randomly sampled homophones based on an existing Chinese homophone dictionary (Zeng et al., 2021).
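A minimal sketch of this kind of homophone substitution is below. As an assumption for illustration, we group characters by their toneless Pinyin syllables using the `pypinyin` package instead of the exact dictionary of Zeng et al. (2021):

```python
import random
from collections import defaultdict
from pypinyin import lazy_pinyin

def build_homophone_groups(charset):
    """Group characters that share a (toneless) Pinyin syllable."""
    groups = defaultdict(list)
    for ch in charset:
        groups[lazy_pinyin(ch)[0]].append(ch)
    return groups

def homophone_substitute(sentence, groups, p=0.1, seed=0):
    """Replace each character, with probability p, by a random homophone."""
    rng = random.Random(seed)
    out = []
    for ch in sentence:
        alternatives = [c for c in groups.get(lazy_pinyin(ch)[0], []) if c != ch]
        if alternatives and rng.random() < p:
            out.append(rng.choice(alternatives))
        else:
            out.append(ch)
    return "".join(out)
```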
We replicate their approach as a baseline and add an additional constraint that we only allow simplified Chinese characters in the character substitution process, since our data focus on simplified Chinese. We aim to compare whether our crowdsourced noise data are more likely to occur in the real world. Towards this goal, we conduct a human preference selection experiment, where we present pairs of sentences to two annotators (different from the ones who did the noisy input annotation). Each pair consists of a sentence with automatic typos and another with our crowdsourced input noise, and the ordering is randomly shuffled for all pairs. We instruct the annotators to select the sentence that is more likely to occur in real user input settings (_i.e._, more plausible). We perform such annotation on 160 randomly sampled sentence pairs, for both keyboard input noises and speech input noises. Annotators overall judge our crowdsourced noises to be more plausible than the automatically constructed ones. We show some qualitative examples to compare our real-world noises and automatically constructed ones in Table 4, where we see that automatic noises involve substitutions that are unlikely to happen in the real world (for example, changing only a single character). To qualitatively analyze the noise patterns, we also manually inspect keyboard noise examples from six different annotators. We find that Readin examples cover different typing habits and noise patterns. For example, 69% of the time annotators type the full Pinyin sequences while in 31% of cases annotators only type the abbreviated sequences; 56% of these noises are due to selection errors (where the Pinyin input is right but the annotators selected the wrong word from IMEs) while the other 44% are due to wrong Pinyin input.9 Footnote 9: More details are in the Appendix. Overall, our analysis highlights that Readin covers realistic and diverse input noises, posing greater challenges for existing models. ## 4 Experiments We benchmark several pretrained language models and examine whether their performance stays strong on Readin. ### Baseline Setups We use RoBERTa-wwm Cui et al. (2021) and MacBERT Cui et al. (2020) as baselines for classification tasks. RoBERTa-wwm is a Chinese version of RoBERTa Liu et al. (2019), where whole-word-masking is used during pretraining. MacBERT is a modification to BERT Devlin et al. (2019) where replaced word correction is used as a pretraining objective. Both of these models, like the original Chinese BERT, directly use the WordPiece Wu et al. (2016) tokenizer on Chinese characters. We use the base scale checkpoint for both models. For machine translation, we adopt mBART50 Tang et al. (2020) as the baseline, which is a multilingual Transformer model that consists of 12 encoder layers and 12 decoder layers and is trained based on mBART Liu et al. (2020) for multilingual translation. For semantic parsing, we use DG-SQL Wang et al. (2021), a competitive baseline on CSPider based on multilingual BERT Devlin et al. (2019). For experiments on AFQMC, CMRC2018, and CSpider, we finetune the pretrained checkpoints on the corresponding clean training sets. For WMT2021, we directly take mBART50 for inference without additional finetuning on Chinese-English parallel data, since mBART50 itself is already trained on parallel translation data including Chinese-to-English. ### Robustness Methods Apart from standard finetuning, we also experiment with several robust training and data processing methods in order to assess how well existing robustness methods can solve our benchmark. We briefly introduce these methods below.
**Adversarial Data Augmentation** ADA (Si et al., 2021) is commonly used to enhance robustness against adversarial examples. We perform ADA by creating synthetic noisy training examples through random homophone substitution as in Si et al. (2023) and add these examples to the original training examples. We double the number of total training examples through ADA. **Typo Correction** Inspired by previous work that used a word recognition model to restore misspelled words in English (Pruthi et al., 2019), we use a highly optimized commercial Chinese typo correction software10 to pre-process data in READIN and then perform evaluation on the corrected data. We only perform this step on the noisy test sets, not the clean sets. Footnote 10: [https://console.xfyun.cn/services/text_check](https://console.xfyun.cn/services/text_check) [Table 4: Qualitative CMRC2018 examples contrasting the original text with automatically constructed noises and our crowdsourced noises; the Chinese example text is not recoverable from this copy.]
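A minimal sketch of the ADA recipe described above, assuming a `make_noisy` function such as the homophone substitution sketched earlier (the function name and the `(text, label)` data format are illustrative, not the authors' code):

```python
from typing import Callable, List, Tuple

def adversarial_data_augmentation(
    train_examples: List[Tuple[str, int]],
    make_noisy: Callable[[str], str],
) -> List[Tuple[str, int]]:
    """Pair every clean training example with a synthetically noised
    copy that keeps the original label, doubling the training set."""
    augmented = []
    for text, label in train_examples:
        augmented.append((text, label))            # original clean example
        augmented.append((make_noisy(text), label))  # noised counterpart
    return augmented
```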
**SubChar Tokenization Models** Si et al. (2023) released a series of BERT-style models trained with SubChar tokenization, which use sub-character units such as radicals and syllables to compose Chinese characters. In particular, their SubChar-Pinyin model has the advantage of being robust to homophone typos. We adopt their model and also consider performing ADA on top of the SubChar-Pinyin model. ### Results We present results of the baseline models in Table 5 (for NLU tasks) and Table 6 (for NLG tasks). We highlight several main findings below. **Input Noises Cause Large Drops** We first compare performance of the same models on the clean test sets and the noisy test sets. We see a clear trend that model performance drops significantly when evaluated on the noisy test sets as compared to the clean test sets. As expected, the worst-average performance is much worse than the micro-average, \begin{table} \begin{tabular}{c|c c c c c|c c c c} \hline \hline & \multicolumn{4}{c|}{CSpider} & \multicolumn{4}{c}{WMT2021} \\ \hline & \multicolumn{3}{c}{Keyboard} & \multicolumn{3}{c|}{Speech} & \multicolumn{3}{c}{Keyboard} & \multicolumn{3}{c}{Speech} \\ & Clean & Average & Worst & Average & Worst & Clean & Average & Worst & Average & Worst \\ \hline DG-SQL / mBART50 & 44.87 & 28.85 & 11.99 & 33.40 & 24.18 & 23.19 & 16.35 & 9.37 & 16.74 & 10.82 \\ w/ Word Correction & 44.87 & 30.24 & 13.73 & 33.40 & 24.47 & 23.19 & 17.59 & 10.24 & 16.89 & 10.97 \\ \hline \hline \end{tabular} \end{table} Table 6: DG-SQL performance on CSpider and mBART50 performance on WMT2021 test sets. We compare model performance on the original clean test set (‘Clean’) and our new noisy test sets. For results on noisy test sets, we report both micro-average (‘Average’) and worst-average (‘Worst’) performance. For CSpider, we report exact match with the gold reference; for WMT2021, we report BLEU. \begin{table} \begin{tabular}{c|c c c c c c} \hline \hline & \multicolumn{3}{c}{Keyboard} & \multicolumn{3}{c}{Speech} \\ & Clean & Average & Worst & Average & Worst \\ \hline Subword & 75.81 & 49.63 & 22.03 & 42.21 & 19.31 \\ w/ ADA & 69.76 & 49.39 & 25.67 & 46.35 & 22.97 \\ \hline SubChar-Pinyin & 73.99 & 50.88 & 23.42 & 45.24 & 21.21 \\ w/ ADA & 73.73 & 54.16 & 29.43 & 52.93 & 28.06 \\ \hline \hline \end{tabular} \end{table} Table 7: Finetuning results of BERT models trained with subword and SubChar tokenizers on the AFQMC (pos) subset. SubChar models are more robust than subword models, especially after performing data augmentation.
\begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{AFQMC (pos)} & \multicolumn{3}{c}{AFQMC (neg)} & \multicolumn{3}{c}{CMRC2018} \\ & Clean & Average & Worst & Clean & Average & Worst & Clean & Average & Worst \\ \hline \multicolumn{8}{c}{_Keyboard_} \\ \hline RoBERTa-wwm & 78.92 & 42.75 & 15.17 & 65.75 & 81.87 & 65.85 & 69.78 & 60.84 & 46.69 \\ w/ ADA & 76.76 & 48.31 & 19.88 & 63.50 & 76.56 & 58.23 & 59.30 & 53.04 & 42.00 \\ w/ Word Correction & 78.92 & 39.96 & 12.78 & 65.75 & 82.91 & 67.29 & 69.78 & 60.84 & 46.69 \\ \hline MacBERT & 80.04 & 48.33 & 18.83 & 62.09 & 76.77 & 58.29 & 67.69 & 56.71 & 41.29 \\ w/ ADA & 77.88 & 53.21 & 24.66 & 64.30 & 74.41 & 55.34 & 59.24 & 54.05 & 43.99 \\ w/ Word Correction & 80.04 & 44.52 & 16.22 & 62.09 & 78.51 & 60.41 & 67.69 & 56.72 & 41.29 \\ \hline \multicolumn{8}{c}{_Speech_} \\ \hline RoBERTa-wwm & 78.92 & 27.75 & 5.68 & 65.75 & 87.80 & 73.81 & 69.78 & 55.97 & 40.73 \\ w/ ADA & 76.76 & 39.76 & 13.30 & 63.50 & 78.26 & 58.93 & 59.30 & 48.32 & 36.35 \\ w/ Word Correction & 78.92 & 27.75 & 5.68 & 65.75 & 87.80 & 73.81 & 69.78 & 55.97 & 40.73 \\ \hline MacBERT & 80.04 & 26.68 & 5.16 & 62.09 & 87.88 & 73.77 & 67.69 & 51.81 & 35.94 \\ w/ ADA & 77.88 & 45.44 & 16.59 & 64.30 & 75.68 & 54.53 & 59.24 & 48.96 & 36.63 \\ w/ Word Correction & 80.04 & 26.68 & 5.16 & 62.09 & 87.77 & 73.77 & 67.69 & 51.81 & 35.94 \\ \hline \hline \end{tabular} \end{table} Table 5: Baseline performance on AFQMC and CMRC2018 test sets. We compare model performance on the original clean test set (‘Clean’) and our new typo test sets. For results on typo test sets, we report both micro-average (‘Average’) and worst-average (‘Worst’) performance. For AFQMC, we report accuracy on positive and negative pairs separately. For CMRC2018, we report answer exact match. showing that robustness across annotator variations is challenging. Moreover, we find that speech noises cause larger performance drops than keyboard noises (except on CSpider), which corresponds to the character error rates of these different test sets (Table 3). One notable result is on AFQMC, where we observe drastic performance drop on the positive paraphrase pairs but marginal drop or even performance increase for negative pairs. The reason is that models are exploiting spurious correlation in the training data such as lexical overlap as cues for positive pairs (McCoy et al., 2019; Zhang et al., 2019). When we introduce input noises to the data, the lexical overlap decreases, thus models exploiting spurious features become more likely to predict negative labels. Better performance on the positive examples in AFQMC (without significant sacrifice on the clean tests) can be taken as a sign for better robustness. We also present results on AFQMC as measured by the F1 metric in the appendix, and the results also indicate a drop in F1 on the noisy tests. **Robustness Methods Have Inconsistent Gains** For the adversarial data augmentation (ADA) and word correction pre-processing methods, we find that they have inconsistent gains on different datasets. For example, ADA improves performance on the noisy test sets on the AFQMC (pos) set, but not on the CMRC2018 dataset. On the other hand, word correction improves performance on the keyboard noise test sets of CSpider and WMT2021, but not on the other datasets. **SubChar Tokenization Helps** Lastly, in Table 7, we show results for finetuning models with SubChar tokenization. 
We find that the SubChar-Pinyin model outperforms the Subword model (which uses conventional subword tokenization). Moreover, the gain is much larger after training SubChar-Pinyin with ADA. ## 5 Related Work **Spelling Errors** Previous works have recognized the impact of spelling and grammatical errors in multiple languages. Several typo and grammatical-error corpora have been collected (Hagiwara and Mita, 2020), notably by tracking Wikipedia edits (Grundkiewicz and Junczys-Dowmunt, 2014; Tanaka et al., 2020). The major difference with our work, apart from the language used, is that we focus on real-world downstream applications with diverse input settings. There is also effort on spelling error correction (SEC) (Wu et al., 2013; Cheng et al., 2020). While SEC aims to restore the spelling errors, our goal is to make sure models perform well on downstream applications even in the presence of input noises. Applying an SEC model as pre-processing could be one way to improve performance on our Readin benchmark. Other alternatives for training robust models against spelling errors include noise-aware training (Namysl et al., 2020) and learning typo-resistant representations (Edizel et al., 2019; Schick and Schutze, 2020; Ma et al., 2020). We leave such modeling explorations to future work. **Linguistic Variations** Our Readin not only relates to spelling errors or typos, but also to linguistic variations, especially phonological variations. Previous works have examined linguistic variations such as non-standard English (Tan et al., 2020, 2020; Groenwold et al., 2020) and dialect disparity (Ziems et al., 2022). Such works have important implications for building equitable NLP applications, especially for minority language groups in society. Yet, such effort is absent in Chinese NLP and our benchmark is a first attempt towards incorporating linguistic variations in model evaluation. **Adversarial Robustness** Work on adversarial robustness often involves adversarially optimized character or word perturbations in an attempt to minimize model performance (Ebrahimi et al., 2018, 2018; Jones et al., 2020). Corresponding defenses have also been proposed, such as adversarial training or data augmentation (Belinkov and Bisk, 2018; Si et al., 2021, 2020). Our work differs from this adversarial robustness line of work because we are not measuring worst-case attacks, but rather more realistic input noises that would actually occur in real-world user-generated inputs. ## 6 Conclusion In this work, we present Readin - the first Chinese multi-task benchmark with realistic and diverse input noises. Our annotation is carefully designed to elicit realistic and diverse input noises for both keyboard Pinyin input and speech input. Through both quantitative and qualitative human evaluation, we show that our crowdsourced input noises are much more plausible and diverse than existing automatically created ones. Our experiments on strong pretrained language model baselines show that models suffer significant drops on our noisy test sets, indicating the need for more robust methods against input noises that would happen in the real world. ## Ethics and Broader Impact We use this additional section to discuss potential ethical considerations as well as the broader impact of our work. **Ethical Consideration** This work involves human annotation. We made sure that all annotators are properly paid.
We discussed extensively with all annotators involved to set a compensation that all agree on before starting the annotation, and the total cost of annotation for the project is about 30K RMB. We also explicitly informed all annotators about how the collected data will be used and made adjustments in the data collection and release protocol to avoid any privacy concerns. Overall, we believe that there is no harm involved in this project's annotation jobs. **Positive Societal Impact** This project tackles the real-world problem of input noises. We believe that our work will have a positive societal impact because we collected test data from annotators with diverse backgrounds. Our benchmark will facilitate the development of models that can perform well across all these variations, which has important implications for ensuring the accessibility of our language technologies to users from diverse backgrounds. This fairness and inclusion aspect is often under-valued in the Chinese NLP community, and we hope that our work can push the community to put more work on this front. **Limitations** While we tried our best to maximize the diversity and coverage of our benchmark, it is practically impossible to cover all possible input noises. We acknowledge aspects that we did not get to cover, for example, the impact of different input devices (phones and tablets, as compared to the keyboards used in our annotation). Also, while we tried to reconstruct the real-world input settings as much as possible, there may still be subtle differences between real-world input and our annotation process; for example, we imposed speed limits during the keyboard input annotation, and this may not capture exactly how users type in real applications. We encourage future work to consider how to increase the coverage of such benchmarks and also possible innovations in the data collection procedure to collect fully realistic user data.
2310.07135
Comparing Styles across Languages
Understanding how styles differ across languages is advantageous for training both humans and computers to generate culturally appropriate text. We introduce an explanation framework to extract stylistic differences from multilingual LMs and compare styles across languages. Our framework (1) generates comprehensive style lexica in any language and (2) consolidates feature importances from LMs into comparable lexical categories. We apply this framework to compare politeness, creating the first holistic multilingual politeness dataset and exploring how politeness varies across four languages. Our approach enables an effective evaluation of how distinct linguistic categories contribute to stylistic variations and provides interpretable insights into how people communicate differently around the world.
Shreya Havaldar, Matthew Pressimone, Eric Wong, Lyle Ungar
2023-10-11T02:16:12Z
http://arxiv.org/abs/2310.07135v2
# Comparing Styles across Languages ###### Abstract Understanding how styles differ across languages is advantageous for training both humans and computers to generate culturally appropriate text. We introduce an explanation framework to extract stylistic differences from multilingual LMs and compare styles across languages. Our framework (1) generates comprehensive style lexica in any language and (2) consolidates feature importances from LMs into comparable lexical categories. We apply this framework to compare politeness, creating the first holistic multilingual politeness dataset and exploring how politeness varies across four languages. Our approach enables an effective evaluation of how distinct linguistic categories contribute to stylistic variations and provides interpretable insights into how people communicate differently around the world. ## 1 Introduction Communication practices vary across cultures. Inherent differences in how people think and behave Lehman et al. (2004) influence cultural norms, which, in turn, have a significant impact on communication Moorjani and Field (1988). One key way that cultural variation influences communication is through _linguistic style_. In this work, we introduce an explanation framework to extract stylistic differences from multilingual language models (LMs), enabling cross-cultural style comparison. Style is a complex and nuanced construction. In communication, style is heavily used to convey certain personal or social goals depending on the speakers' culture Kang and Hovy (2021). For example, cultures that are _high-context_1 tend to use more indirect language than those that are _low-context_ (Kim et al., 1998), and cultures that have a high _power-distance_2 tend to have more formal interactions in a workplace setting (Khatri, 2009). Footnote 1: Conversations in _high-context_ cultures have a lot of subtlety and require more collective understanding. Hall (1976) Footnote 2: _Power distance_ refers to the strength of a society's social hierarchy. Hofstede (2005) For example, consider the two conversation snippets in Figure 1. Though these two utterances are direct translations of each other, a multilingual LM fine-tuned for politeness classification outputs different labels, with the Chinese utterance labeled as impolite. We ask a bilingual English/Chinese speaker to provide further insights, and she observes that the same request to "stop editing as soon as possible" sounds more aggressive (and thus, impolite) in Chinese than in English, as cultural norms in China do not typically condone giving such harsh, direct instructions to a stranger. Stylistically appropriate language is crucial to successful communication within a certain culture. However, multilingual LMs sometimes struggle to generate language that is stylistically appropriate in non-English languages Hershcovich et al. (2022); Ersoy et al. (2023); Zhang et al. (2022). Figure 1: Explaining how politeness differs between parallel sentences in English and Chinese. Though these two sentences are identical in _content_, they differ in _style_. Our framework provides a way to quantitatively compare politeness in English and Chinese by comparing the importance of interpretable lexical categories. Standard training methods for multilingual models lead to little stylistic variation in generated text across languages Pires et al. (2019); Libovicky et al. (2020); Muller et al. (2021), and multilingual systems rarely address these socially-driven factors of language Hovy and Yang (2021).
As a result, downstream applications of these systems, like chatbots, are not as usable or beneficial to a non-American audience Bawa et al. (2020); Havaldar et al. (2023). One step towards correcting this is to understand _how styles differ across languages_. Though modern multilingual LMs struggle with _generating_ stylistically appropriate language, they are generally quite successful at _classifying_ stylistic language Briakou et al. (2021); Plaza-del Arco et al. (2020); El-Alami et al. (2022); Srinivasan and Choi (2022). Psychological or linguistic studies to analyze language are resource-heavy and time-intensive; alternatively, we can utilize these trained LMs to computationally capture both the overt and subtle ways a language reflects a certain style. Most current methods to extract feature importances from multilingual LMs are at the token-level Lundberg and Lee (2017); Ribeiro et al. (2016); Sundararajan et al. (2017) and specific to each language; as a result, there is no "common language" for comparison. In addition, most explanations are not easily human-interpretable, and it is difficult to extract useful takeaways from them. In this work, we present a framework to extract differences in style from multilingual LMs and explain these differences in a functional, interpretable way. Our framework consists of two components: 1. **Multilingual Lexica Creation:** We utilize embedding-based methods to translate and expand style lexica in any language. 2. **Feature Set Aggregation:** We extract feature importances from LMs and consolidate them into comparable lexical categories. To study how styles differ across languages, we create a holistic politeness dataset that encompasses a rich set of linguistic variations in four languages. Politeness varies greatly across languages due to cultural influences, and it is necessary to understand this variation in order to build multilingual systems with culturally-appropriate politeness. Trustworthy and successful conversational agents for therapy, teaching, customer service, etc. must be able to adapt levels and expressions of politeness to properly reflect the cultural norms of a user. Previous work by Danescu-Niculescu-Mizil et al. (2013) and Srinivasan and Choi (2022) uses NLP to study politeness, but their datasets reflect only a small subset of language -- the conversational utterances they analyze consist of only questions on either extremes of the impolite/polite spectrum. Our dataset is the first _holistic_ multilingual politeness dataset that includes _all types of sentences_ (i.e. dialogue acts) across the _full impolite/polite spectrum_. We include English, Spanish, Japanese, and Chinese -- we select these languages as they are all high-resource (and therefore well-supported by modern LMs) and each has a unique way of expressing politeness. For instance, politeness in Japan frequently arises from acknowledging the place of others (Spencer-Oatey and Kadar, 2016), while politeness in Spanish-speaking countries often relies on expressing mutual respect (Placencia and Garcia-Fernandez, 2017). Figure 2: Our two-part explanation framework. Note that we use standard ISO language codes: En, Ja, Es, and Zh for English, Japanese, Spanish, and Chinese respectively. This global variation in politeness (Leech, 2007; Pishghadam and Navari, 2012) makes it an important style to understand for effective cross-cultural communication. Our contributions are as follows: 1.
We present an explanation framework to extract differences in styles from multilingual LMs and meaningfully compare styles across languages. 2. We provide the first holistic politeness dataset that reflects a realistic distribution of conversational data across four languages. 3. We use this framework and dataset to show differences in how politeness is expressed (e.g. words like "bro" and "mate" are rude in English, but polite in Japanese, and yes/no questions are rude in Chinese but not in English.) Figure 1 shows an example comparison of English and Chinese politeness using this framework. We explain differences in politeness using both lexical categories and dialogue acts, with Figure 1 highlighting three chosen lexical categories. We make all code and data available publicly.3 Footnote 3: [https://github.com/shreyahavaldar/multilingual_politeness](https://github.com/shreyahavaldar/multilingual_politeness) ## 2 A Framework for Multilingual Style Comparison In this section, we detail our two-part framework for multilingual style comparison. **(1) Multilingual Lexica Creation** takes a curated style lexica and uses embedding-based methods to refine lexica translation into another language, creating a set of parallel lexical categories. **(2) Feature Set Aggregation** maps extracted feature importances from trained LMs to these parallel lexical categories across languages. This framework helps us to interpret what multilingual LMs learn about style during training, and allows us to meaningfully compare how style is expressed across languages. ### Multilingual Lexica Creation (MLC) Lexica provide an interpretable grouping of words into meaningful categories. The traditional use of lexica in NLP relies on a simple bag-of-words representation, but allows humans to easily visualize which lexical categories appear in a dataset. Much work has already been done in curating theory-grounded lexica that classify style. Danescu-Niculescu-Mizil et al. (2013) curate lexical strategies that inform politeness in English, and Li et al. (2020) extend these strategies to Chinese. Though they provide insight into how politeness is expressed, lexica have limited predictive power; lexica-based models are drastically outperformed by modern LMs. Rather than relying on lexica to classify style, we instead use lexica to curate _a common language_ for interpretable multilingual comparison. We present our method, Multilingual Lexica Creation (MLC), to expand a curated style lexica into multiple languages. **Motivation.** When using standard 1:1 machine translation to translate a style lexica, a number of issues arise. Sometimes, there is no 1:1 mapping between words -- a word in one language may have 0 words or 2+ words that express it in another. Additionally, context and culture influence how linguistic style is expressed (Moorjani and Field, 1988) -- a word that reflects politeness in one language may not reflect politeness the same way in another. To combat these issues, MLC uses word embeddings to improve the translation process. **Step 1: Expansion.** In the expansion step, we tackle the flawed assumption that there always exists a 1:1 mapping between words in different languages. First, we machine translate a curated style lexica into the target language. We refer to the words in this machine translated lexica as our set of _seed words_. Figure 3: Multilingual Lexica Creation (MLC). We use word embeddings to expand and purify a style lexica translated from one language into another. This allows for maximum coverage per lexical category and corrects issues with standard 1:1 machine translation.
Next, we embed each seed word using FastText Bojanowski et al. (2017) and perform _synonym expansion_ on each seed word and _concept expansion_ on each lexical category. Figure 3 details this two-stage expansion process. For synonym expansion, we find the nearest neighbors of each seed word in embedding space (within a tunable distance threshold) and append them to the corresponding lexical category. This adds any synonyms of seed words that may have been bypassed during machine translation. For instance, "sorry" in English is expanded to "lo siento", "perdón", and "perdona" in Spanish. For concept expansion, we average the embeddings of all seed words within a lexical category and find the centroid embedding of the category. We then append the nearest neighbors of this centroid (within a tunable distance threshold) to the overall category. This adds any additional words conceptually similar to the lexical category that were not included via machine translation. For example, the Hedge category in Figure 3 is expanded to additionally include "evidente", "aparente", and "señala" in Spanish. We choose FastText over other embedding models as it has a fixed vocabulary size, and thus, efficient nearest neighbors functionality. Additionally, FastText performs well in uncontextualized settings Laville et al. (2020) and supports 157 languages. However, embeddings from any model can be used. **Step 2: Purification.** In the purification step, we tackle the issue that a word reflecting a style in one language may not reflect that style in the same way when translated to another language. So, after combining the words returned from synonym and concept expansion, we ensure that each category of the expanded lexica contains words that are both _pertinent_ and _internally correlated._ We first filter out rare words (i.e. any words below a given usage frequency). This addresses issues where machine translation results in words not commonly used in day-to-day conversation. Next, we ensure that the words in each lexical category reflect a given style similarly in the target language. Style is highly influenced by culture - a word that might indicate rudeness in English (e.g. "stubborn", "bossy", etc.) may not necessarily do so in other languages. To remove uncorrelated words, we first apply our lexica on any large corpus, and use a pre-trained LM to calculate a style score (e.g. politeness level) for each utterance in the corpus. Then, for each word \(w\) within a lexical category \(C\), we correlate the style scores of all utterances containing \(w\) against the style scores of all utterances containing any word in \(C\). We then remove all words that do not correlate positively with their category (product-moment correlation \(<0.15\)). For example, "señala" does not have a similar role to other Hedge words when indicating politeness in Spanish (\(r=-0.08\)), and so, we remove it from the final lexica. This guarantees that all words within a lexical category play a similar role in determining an utterance's style score, ensuring internal correlation within each category. MLC takes a curated style lexica and creates a parallel style lexica in a target language, correcting issues with 1:1 machine translation. Though the lexical categories are parallel, the words within each category are selected to best reflect how a style is expressed in the target language.
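The expansion and purification steps can be sketched as follows. This is an illustrative implementation, not the authors' code: the vector file name and similarity thresholds are placeholders, and the purification test is one reasonable reading of the product-moment correlation described above (correlating a word's presence with utterance style scores over the utterances the category covers):

```python
import numpy as np
from gensim.models import KeyedVectors

# Pretrained target-language FastText vectors in word2vec text format.
kv = KeyedVectors.load_word2vec_format("cc.es.300.vec")

def synonym_expand(seed_words, topn=10, min_sim=0.7):
    """Add nearest neighbors of each seed word to the category."""
    expanded = set(seed_words)
    for w in seed_words:
        if w in kv:
            expanded.update(n for n, sim in kv.most_similar(w, topn=topn)
                            if sim >= min_sim)     # tunable threshold
    return expanded

def concept_expand(seed_words, topn=10, min_sim=0.6):
    """Add nearest neighbors of the category's centroid embedding."""
    vecs = [kv[w] for w in seed_words if w in kv]
    centroid = np.mean(vecs, axis=0)
    return {n for n, sim in kv.similar_by_vector(centroid, topn=topn)
            if sim >= min_sim}

def purify(category_words, utterances, style_scores, min_corr=0.15):
    """Keep words whose presence correlates positively with the style
    scores of the utterances covered by the category."""
    covered = [i for i, u in enumerate(utterances)
               if any(w in u for w in category_words)]
    kept = []
    for w in category_words:
        presence = np.array([float(w in utterances[i]) for i in covered])
        scores = np.array([style_scores[i] for i in covered])
        if presence.std() > 0 and np.corrcoef(presence, scores)[0, 1] >= min_corr:
            kept.append(w)
    return kept
```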
### Feature Set Aggregation We now seek to leverage the fact that trained multilingual LMs do successfully learn to encode how multiple languages reflect a certain style. **Motivation.** However, traditional feature attribution methods to explain what LMs learn cannot be used for multilingual comparison, as extracted features are always specific to a single language. We bypass this limitation by aggregating extracted attributions into directly comparable categories. Specifically, we extract feature attributions from trained multilingual style LMs and aggregate them into the parallel lexical categories from MLC. This enables an interpretable _common language_ for comparison. We detail aggregation with token-level Shapley values Lundberg and Lee (2017), but any additive feature attribution method can be used. **Token-to-word grouping.** Given a word \(w\) consisting of tokens \(w=[x_{1},x_{2},\dots,x_{K}]\), let \([v_{1},v_{2},\dots,v_{K}]\) be the corresponding token-level Shapley values. Equation (1) first aggregates token-level Shapley values into word-level importance scores. \[\mathrm{Imp}(w)=\sum_{k\;:\;x_{k}\in w}v_{k} \tag{1}\] **Category-level importances.** Next, we derive category-level importance scores for each lexical category by aggregating local word-level importance scores. For each category \(C\), we iterate over all utterances and sum the word-level importance scores for all words in \(C\). Finally, we divide by \(N\), or the total number of times a word in \(C\) appears in the dataset. This process is detailed in Equation (2). Let \(w_{ij}\) denote the \(j\)th word in the \(i\)th utterance. \[\mathrm{Imp}(C)=\frac{1}{N}\sum_{ij}\mathbbm{1}_{[w_{ij}\in C]}\mathrm{Imp}(w_{ij}) \tag{2}\] This gives us an importance score for each lexical category across languages. Now, we can easily compare how important certain categories are at linguistically reflecting a style in different languages. ## 3 A Holistic Politeness Dataset Politeness, like all linguistic styles, is a complex, nuanced construction. In order to compare how politeness differs across languages, it is necessary to analyze the full distribution of conversational data, without any oversimplifying assumptions. We follow the process of the Stanford Politeness Corpus (Danescu-Niculescu-Mizil et al., 2013) and TyDiP (Srinivasan and Choi, 2022) when creating and evaluating our politeness dataset, but with three key differences: * We include _all dialogue acts_ to replicate the real distribution of conversational data. (Both previous datasets only include questions.) * We include _all annotated data_ in model training and evaluation, as we want to compare language along the _full politeness spectrum_. (Both previous evaluations only consider the highest and lowest 25% of politeness utterances, eliminating all neutral utterances.) * We treat politeness as a _regression task_ rather than a binary classification task. (Both previous evaluations classify an utterance as "polite" or "impolite", destroying the nuance between slight and strong politeness.) ### Data Collection & Annotation Overview We provide a high-level overview of our dataset construction and annotation process, and give specific details in Appendix A. **Dataset.** Our dataset contains 22,800 conversation snippets, or utterances, scraped from Wikipedia Talk Pages4 in English, Spanish, Chinese, and Japanese (5,700 utterances per language). Each utterance is 2-3 sentences long, and randomly sampled from all 41,000 scraped talk pages in each language.
Note that we only scrape talk pages of articles that exist in all four languages, ensuring a similar distribution of topics across our dataset. Footnote 4: Wikipedia Talk Pages are used by editors of the platform to communicate about proposed edits to articles. This is the same domain as the Stanford Politeness Corpus and TyDiP. **Annotation.** To label the dataset, we use Prolific to source annotators. Respondents are required to be native speakers of the language they annotate, as well as to indicate that it is their primary spoken language day-to-day. We include attention and fluency checks to ensure our annotations are of high quality. Annotators use a 5-point scale for labeling: "Rude", "Slightly Rude", "Neutral", "Slightly Polite", and "Polite", with three annotators labeling each utterance. We observe an average annotator agreement (Fleiss' kappa) of 0.186 across languages. Given the highly subjective nature of politeness, we expect to see a score in this range. Additionally, we convert the Stanford Politeness Corpus (Danescu-Niculescu-Mizil et al., 2013) to a 5-point scale and observe a Fleiss' kappa of 0.153, indicating our agreement aligns with past work. A key distinction between our dataset and the Stanford Politeness Corpus/TyDiP is that we do _not_ normalize the scores of each annotator to be centered at neutral. Levels of politeness vary culturally [13, 14]; we therefore do not make any assumptions about the politeness level of an average utterance. Figure A6 provides a visualization of score distributions in our final annotated dataset. We observe that the average utterance from English, Spanish, and Chinese is closest to neutral, while the average Japanese utterance is closer to slightly polite. ## 4 Comparing Politeness via PoliteLex PoliteLex (Li et al., 2020) consists of curated lexica to measure politeness in English and Chinese, based on the twenty politeness strategies introduced by Danescu-Niculescu-Mizil et al. (2013). We use MLC to expand PoliteLex to cover all four of our languages. Chinese PoliteLex has some additional lexical categories, such as taboo words and honorifics, to account for cultural and linguistic norms in Chinese that do not have an English equivalent. As politeness is expressed more similarly within Eastern and Western cultures than between them [15], we use English PoliteLex as the seed for Spanish and Chinese PoliteLex as the seed for Japanese, to create a set of four lexica with parallel, comparable categories. When purifying our expanded lexica in Spanish and Japanese, we use the full set of 41,000 scraped talk pages to calculate internal correlation and remove uncorrelated words from each category. Next, we fine-tune XLM-RoBERTa models Conneau et al. (2020) on our holistic politeness dataset (see Appendix A.4 for training details) and use the SHAP library Lundberg and Lee (2017) to extract Shapley values for each utterance. Finally, we apply Feature Set Aggregation to calculate importance scores for each PoliteLex category. **Dataset coverage.** Table 1 analyzes our generated politeness lexica in Spanish and Japanese. We measure _dataset coverage_ for each language - an utterance is "covered" if it contains at least one word in a lexical category, and we define dataset coverage as the percent of covered utterances. In both cases, the lexica generated by MLC has better coverage than 1:1 machine translation using Google Translate.
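Concretely, the Feature Set Aggregation step applied above (Equations 1 and 2) can be sketched as follows; the data structures are illustrative assumptions, not the authors' code, and we assume token-level Shapley values and token-to-word alignments have already been extracted:

```python
from collections import defaultdict

def word_importances(shap_values, word_ids):
    """Equation (1): sum token-level Shapley values over each word.
    word_ids[k] is the index of the word containing token k."""
    imp = defaultdict(float)
    for k, v in enumerate(shap_values):
        imp[word_ids[k]] += v
    return imp

def category_importances(corpus, lexica):
    """Equation (2): for each category C, average Imp(w) over every
    occurrence of a word of C. `corpus` holds, per utterance, a list
    of (word, word_importance) pairs; `lexica` maps category -> words."""
    total = defaultdict(float)
    count = defaultdict(int)
    for utterance in corpus:
        for word, imp in utterance:
            for cat, vocab in lexica.items():
                if word in vocab:
                    total[cat] += imp
                    count[cat] += 1
    return {cat: total[cat] / count[cat] for cat in total}
```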
**Results.** Figure 4 details the resulting category-level importances from select PoliteLex categories, with the frequency of words in each category given in parentheses. Categories with positive importance are indicators of politeness, as the extracted Shapley values are highest for words within those categories. Similarly, categories with negative importance scores indicate rudeness. For certain categories, we see a strong similarity across languages. Apologetic expressions (e.g. "I'm sorry", "my bad") and expressions of gratitude (e.g. "thank you", "I appreciate it") tend to universally indicate politeness. Interestingly, we see Japanese speakers using expressions of gratitude and apology with the highest frequency across languages. We also notice interesting differences. \begin{table} \begin{tabular}{l l r} \hline \hline **Language** & **Lexica** & \begin{tabular}{c} **\% of Dataset** \\ **Covered** \\ \end{tabular} \\ \hline English & PoliteLex & 98.0\% \\ Chinese & PoliteLex & 83.7\% \\ \hline \multirow{2}{*}{Spanish} & MT & 94.4\% \\ & **MLC** & **96.9\%** \\ \hline \multirow{2}{*}{Japanese} & MT & 57.0\% \\ & **MLC** & **90.2\%** \\ \hline \hline \end{tabular} \end{table} Table 1: Lexical coverage of our holistic politeness dataset. We define _dataset coverage_ as the percent of utterances containing a word in at least one lexical category. MLC produces lexica with better coverage than MT (machine translation). Figure 4: PoliteLex category-level importances across languages. Each importance score indicates the category’s average numerical contribution to an utterance’s politeness label, where \(-2=\mathrm{Rude}\), \(0=\mathrm{Neutral}\), and \(2=\mathrm{Polite}\). We additionally show the frequency of each category (% of total sentences that contain a word from the category.) The word "please" in English, Spanish, and Chinese does not indicate politeness, despite being used with similar frequency in all four languages. For example, the following English and Spanish utterances "This has been debated to death; please read the archives." "Antes de cuestionar si lo que digo es verdad, por favor trate de corroborarlo usted mismo." (_"Before you question whether what I say is true, please try to verify it yourself."_) are both labeled by annotators as quite rude, despite containing the word "please". In Japanese however, "please" strongly indicates politeness. Additionally, contrary to the findings of Li et al. (2020), expressions of in-group identity (e.g. "bro", "mate") are indicators of rudeness in English and Chinese, but indicators of politeness in Japanese. This may be because these terms are uncomfortably familiar, and so taken as rude, or due to sarcastic uses of these terms in English and Chinese. This phenomenon does not appear to be paralleled in Japanese, as terms of in-group identity are very polite. We give examples of top words in each PoliteLex category in Table A6 and our full set of results for all categories in Figure A7. ## 5 Comparing Politeness via Dialogue Acts Given our dataset is the first politeness dataset to include multiple types of sentences (i.e. dialogue acts), we additionally apply the second part of our framework to dialogue act groupings as categories. In the previous section, we compared how _linguistic expressions_ of politeness differ across languages. In this section, we seek to compare how the _linguistic form_ of politeness differs as well. To classify the dialogue acts of each utterance, we machine translate our dataset to English.
We then run a trained English dialogue act classifier provided by Omitaomu et al. (2022) on the translated dataset and label each sentence of an utterance with one of 42 dialogue acts Stolcke et al. (2000). Table 2 shows examples for select dialogue acts. As dialogue acts are sentence-level, we modify Equation (1) to aggregate over all tokens in a given sentence, as opposed to all tokens in a given word. Finally, we treat each dialogue act as a unique category (analogous to a lexical category) and use our feature set aggregation method to map sentence-level SHAP values to their corresponding dialogue acts across our four languages.

Results. Figure 5 shows the average importance of each dialogue act, with the frequency of each dialogue act given in parentheses. Once again, we observe some similarities across languages: conventional openings, conventional closings, and sentences of thanks are strong indicators of politeness across languages. However, statements appear to have differing roles across languages. Declarative statements mostly lean polite across languages, while statements expressing an opinion lean slightly rude in English, Spanish, and Chinese. Surprisingly, yes/no questions only indicate politeness in English, and are viewed as mildly rude in all other languages, particularly Chinese. Consider the following yes/no questions in English and Chinese: "To be pedantic, are we sure that he was born in Milton, West Dunbartonshire?" and (translated from the Chinese) "In the last application section, is it necessary to add so many pictures?" To an English speaker, both the English sentence and the Chinese translation appear to be similar levels of politeness. However, American annotators label the English question as "Neutral" while Chinese annotators label the Chinese question as "Slightly Rude," highlighting the ways in which cultural norms influence perceptions of politeness. Interestingly, we do not observe any major differences in the frequency of dialogue acts across languages; conversations in all four languages appear to have a similar distribution of dialogue acts, though the average politeness of each dialogue act often varies based on language. Results for all dialogue acts are shown in Figure A8.

Figure 5: Dialogue act importances across languages. Each dialogue act importance score indicates the act's average numerical contribution to an utterance's politeness label, where \(-2=\mathrm{Rude}\), \(0=\mathrm{Neutral}\), and \(2=\mathrm{Polite}\). We additionally show the frequency of each dialogue act (% of total sentences classified as that act.)

## 6 Ablation Analysis

The PoliteLex category-level importances in Figure 4 and dialogue act importances in Figure 5 are dependent on the Shapley values extracted from fine-tuned XLM-RoBERTa models. In this section, we analyze the effect of using alternate models and training paradigms.

Effect of model size. To investigate the role of LM size and architecture, we fine-tune Llama-2-7b [23] to analyze politeness. Comparing the results from Llama-2-7b to the results from XLM-RoBERTa, we notice stability in the _direction_ of importance score (i.e. positive, negative, and neutral lexical categories/dialog acts are stable across both LMs). Interestingly, we observe differences in the _magnitude_ of importance score (e.g. Llama-2-7b sees the "Greeting" category in Chinese to be a larger indicator of politeness than XLM-RoBERTa does).
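The direction-vs-magnitude comparison can be made quantitative. The sketch below is our own illustration (the importance vectors are made-up placeholders, not numbers from the paper): it measures sign agreement and score correlation between two models' category-level importances.

```python
import numpy as np

# Hypothetical category-level importances from two fine-tuned LMs
# (e.g. XLM-RoBERTa vs. Llama-2-7b), one entry per PoliteLex category,
# on the -2 (rude) ... +2 (polite) contribution scale.
model_a = np.array([0.61, -0.35, 0.08, 0.44, -0.52])
model_b = np.array([0.90, -0.20, 0.05, 0.71, -0.33])

# Direction agreement: fraction of categories with the same sign.
direction_agreement = np.mean(np.sign(model_a) == np.sign(model_b))

# Magnitude similarity: Pearson correlation of the raw scores.
magnitude_corr = np.corrcoef(model_a, model_b)[0, 1]

print(f"sign agreement: {direction_agreement:.0%}")
print(f"Pearson r of magnitudes: {magnitude_corr:.2f}")
```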
Effect of language-specific pretraining. To investigate the role of language-specific pretraining, we fully fine-tune four RoBERTa models trained on only their respective languages. Similar to Llama-2-7b, we observe high similarity in the _direction_ of the importance score compared to XLM-RoBERTa. We also notice much more similarity in the _magnitude_ of importance scores. This may be due to inherent similarities between all RoBERTa models (parameter size, training methods, training data, etc.), which do not exist between XLM-RoBERTa and Llama-2-7b. Our ablation analysis reveals that different language models pay attention to the same markers when learning to predict politeness, but learn to weigh these markers in different ways. Overall, we notice stability in which lexical categories and dialog acts indicate politeness vs. rudeness. This suggests that our groupings for feature set aggregation are both stable and successful. Section C contains additional details.

\begin{table} \begin{tabular}{l l} \hline \hline **Dialogue Act** & **Example** \\ \hline Conventional Opening & “Hi all, this article is in desperate need of attention.” \\ Declarative Statement & “As it stands, we do not know the place or date of Shapiro’s birth.” \\ Opinion Statement & “There is no need to be combative and attack over typos.” \\ Forward-Function: Thanking & “Looks good, thanks for the help!” \\ Yes/No Question & “Is the second picture an actual representation of the BCG vaccine?” \\ Conventional Closing & “Happy to discuss anything further.” \\ \hline \hline \end{tabular} \end{table} Table 2: Examples for selected dialogue acts from our holistic politeness dataset.

## 7 Related Work

Multilingual style. Previous work on multilingual style predominantly focuses on training LMs to perform cross-lingual and multilingual style classification and style transfer. Key styles studied include formality [11, 13, 14] and emotion [15, 16, 17], with another body of work focusing on style-aware multilingual generation with any subset of chosen styles Niu et al. (2018); Garcia et al. (2021).

Explaining style. One line of work builds on existing techniques Lundberg and Lee (2017); Ribeiro et al. (2016) to explain style within a single LM Aubakirova and Bansal (2016); Wang et al. (2021). Another line of work interprets style LMs by comparing learned features to those humans would consider important Hayati et al. (2021), mapping feature attributions to topics Havaldar et al. (2023), and training models to output relevant features alongside their predictions Hayati et al. (2023).

Politeness. Danescu-Niculescu-Mizil et al. (2013) presents one of the earliest quantitative analyses of linguistic politeness, with Srinivasan and Choi (2022) following in a multilingual setting. Other computational work focusing on politeness uses LMs to generate or modify text with a specified politeness level Niu and Bansal (2018); Fu et al. (2020); Mishra et al. (2022); Silva et al. (2022). Previous work focused on multilingual style has little emphasis on investigating how style differs amongst languages. Additionally, most work on explaining style LMs is English-scoped, and thus, developed methods do not easily allow for cultural comparison. We are the first to present a method to compare styles across languages and provide quantitative, human-interpretable insights into how communication differs globally.

## 8 Conclusion

In this work, we present a framework to extract the knowledge implicit in trained multilingual LMs.
This knowledge enables comparison of styles across languages via a human-interpretable common language for explanation. We also provide the first holistic multilingual politeness dataset, which we hope will encourage future research exploring cultural differences in style. Our framework provides insights into how style is expressed differently across languages. These insights can improve multilingual LMs -- understanding _how_ and _why_ an LM generation is not stylistically appropriate can inform culturally-adaptable models. These insights can also help people learning a second language become more aware of the language's culturally-informed stylistic nuances. At a higher level, our framework provides a general methodology for explaining LMs; using feature attributions such as Shapley values to provide explanations in terms of human-interpretable categories (e.g. lexica and dialogue acts) gives explanations that are both grounded in the model and useful to humans. ## Limitations When detailing our framework, we used FastText for Multilingual Lexica Generation and Partition SHAP Lundberg and Lee (2017) for Feature Set Aggregation. Our results are dependent on these two choices. Because FastText only supports word embeddings, we could only apply our MLC framework to words in the lexica. A contextual embedding could be used to additionally expand phrases. Additionally, we did not use native speakers to further purify our final lexica, as is the case in most past work curating multilingual lexica. Individuals view politeness differently even within cultures, and as such, it is a highly subjective task. The subjectivity of politeness has been studied previously Leech (2007); Pishghadam and Navari (2012); Spencer-Oatey and Kadar (2016); Placcencia and Garcia-Fernandez (2017) and as a result, there is no "correct" label for an utterance; this subjectivity contributes heavily to our annotator agreement scores and fine-tuned LM accuracy. We also draw conclusions about politeness in various cultures from a single context. We only study Wikipedia Talk Page data, which reflects politeness in workplace communication, but expressions of politeness are likely to differ in other settings: non-work conversations, communication between friends or family, social media, etc. ## Ethics Statement When comparing styles across languages in this study, we treat language and culture as monolithic. However, we recognize different communities of people who speak the same language will use it differently. For example, politeness is likely expressed differently in Spain vs. Mexico, even though they are both Spanish-speaking countries. In studying politeness, we recognize that it is highly contextual - calling a stranger "bro" can be perceived as an insult, while calling a close friend "bro" can be viewed as an expression of bonding and closeness. Age, gender, and other demographics also play an important role in perceived po liteness in a conversation. Politeness is a deeply interpersonal act, but it is decontextualized when studied computationally; NLP studies of politeness destroy key components of it. We additionally recognize the implications, both good and bad, of working towards culturally-appropriate stylistic language generation. Our method can be used to inform more culturally attuned LMs - multilingual polite agents are important for beneficial uses like teaching and therapy, but these polite agents could potentially also be used for more effective manipulation or misinformation tactics.
2304.05865
Floquet-engineered nonlinearities and controllable pair-hopping processes: From optical Kerr cavities to correlated quantum matter
This work explores the possibility of creating and controlling unconventional nonlinearities by periodic driving, in a broad class of systems described by the nonlinear Schr\"odinger equation (NLSE). By means of a parent quantum many-body description, we demonstrate that such driven systems are well captured by an effective NLSE with emergent nonlinearities, which can be finely controlled by tuning the driving sequence. We first consider a general class of two-mode nonlinear systems - relevant to optical Kerr cavities, waveguides and Bose-Einstein condensates - where we find an emergent four-wave mixing nonlinearity, which originates from pair-hopping processes in the parent quantum picture. Tuning this drive-induced nonlinearity is shown to modify the phase-space topology, which can be detected through relative population and phase measurements. We then couple individual (two-mode) dimers in view of designing extended lattice models with unconventional nonlinearities and controllable pair-hopping processes. Following this general dimerization construction, we obtain an effective lattice model with drive-induced interactions, whose ground-state exhibits orbital order, chiral currents and emergent magnetic fluxes through the spontaneous breaking of time-reversal symmetry. We analyze these intriguing properties both in the weakly-interacting (mean-field) regime, captured by the effective NLSE, and in the strongly-correlated quantum regime. Our general approach opens a route for the engineering of unconventional optical nonlinearities in photonic devices and controllable drive-induced interactions in ultracold quantum matter.
Nathan Goldman, Oriana K. Diessel, Luca Barbiero, Maximilian Prüfer, Marco Di Liberto, Lucila Peralta Gavensky
2023-04-12T13:56:27Z
http://arxiv.org/abs/2304.05865v2
# Floquet-engineered nonlinearities and controllable pair-hopping processes: From optical Kerr cavities to correlated quantum matter

###### Abstract

This work explores the possibility of creating and controlling unconventional nonlinearities by periodic driving, in a broad class of systems described by the nonlinear Schrodinger equation (NLSE). By means of a parent quantum many-body description, we demonstrate that such driven systems are well captured by an effective NLSE with emergent nonlinearities, which can be finely controlled by tuning the driving sequence. We first consider a general class of two-mode nonlinear systems - relevant to optical Kerr cavities, waveguides and Bose-Einstein condensates - where we find an emergent four-wave mixing nonlinearity, which originates from pair-hopping processes in the parent quantum picture. Tuning this drive-induced nonlinearity is shown to modify the phase-space topology, which can be detected through relative population and phase measurements. We then couple individual (two-mode) dimers in view of designing extended lattice models with unconventional nonlinearities and controllable pair-hopping processes. Following this general dimerization construction, we obtain an effective lattice model with drive-induced interactions, whose ground-state exhibits orbital order, chiral currents and emergent magnetic fluxes through the spontaneous breaking of time-reversal symmetry. We analyze these intriguing properties both in the weakly-interacting (mean-field) regime, captured by the effective NLSE, and in the strongly-correlated quantum regime. Our general approach opens a route for the engineering of unconventional optical nonlinearities in photonic devices and controllable drive-induced interactions in ultracold quantum matter.

## I Introduction

Physical systems can be controlled and enriched by subjecting them to a time-periodic drive. Widely explored in the context of atomic physics since the 70's [1; 2; 3; 4; 5], this general approach recently became the leitmotif of a vast and pluridisciplinary program known as Floquet engineering [5; 6; 7; 8]. Today, it concerns a wide range of physical platforms, including ultracold quantum gases [5; 8], solid-state materials [6; 7], universal quantum simulators and computers [9; 10], mechanical [11] and acoustical [12] systems, and photonic devices [13; 14; 15; 16]. More specifically, Floquet engineering can be applied to modify the band structure of lattice systems [5; 7], generate artificial gauge fields [17; 18] and design complex interaction processes [19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38]. These remarkable possibilities open new avenues for the experimental exploration of a broad range of intriguing physical phenomena, such as light-induced high-temperature superconductivity [39; 40], magnetism [41; 42; 43], topological physics [7; 8; 16], many-body localization [44; 45], chaos-assisted tunneling [46; 47], and lattice gauge theories [48; 49]. Floquet engineering has recently entered the realm of photonics, where various settings and periodic-driving scenarios have been proposed and experimentally realized. In laser-written optical waveguide arrays [50], where waveguides can be finely modulated along the propagation direction, Floquet schemes were implemented in view of generating topological band structures [51; 52; 53; 54; 13], synthetic dimensions [56] and artificial magnetic fields [57] for light.
In the context of optical resonators, electro-optical modulators were used to resonantly couple different cavity modes and realize synthetic dimensions [58; 59; 60; 61; 62], while non-planar geometries were designed to create stroboscopic dynamics reflecting an effective magnetic field for photons [14]. In circuit-QED, time-modulated couplers connecting superconducting qubits were exploited to create artificial magnetic fields for strongly-interacting photons hopping on a lattice [15]. Finally, drive-induced optical nonlinearities recently emerged as an exciting avenue in the context of polaritons [63; 64], insulating materials [65], and high-Q microwave cavities coupled to transmon qubits [66]. The scope of this work is two-fold. First, we introduce a general and practical theoretical method to treat a broad class of periodically-driven nonlinear systems described by the nonlinear Schrodinger equation (NLSE). By exploiting a parent quantum many-body description, we show that such driven nonlinear systems are well captured by an effective NLSE with emergent nonlinearities, which can be finely controlled by tuning the driving sequence [Fig. 1]. This approach is first analyzed for a generic two-mode nonlinear system subjected to a repeated pulse sequence that mixes the two modes periodically in time. In this case, an emergent nonlinearity known as four-wave mixing [67; 68; 69] is shown to originate from drive-induced pair-hopping processes in the parent quantum picture. This framework captures a broad range of nonlinear optical settings, including two-mode optical Kerr cavities [70; 71; 72; 73], optical waveguide couplers [50; 74] and coupled superconducting microwave cavities [15], but also ultracold atomic gases trapped in double-well potentials [75; 76; 77; 78; 79] and two-component Bose-Einstein condensates (BEC) [80; 81; 82]. Building on these results, we then couple individual (two-mode) dimers in view of designing extended lattice models with unconventional nonlinearities and controllable pair-hopping processes. Following this general dimerization construction, we obtain an effective lattice model with drive-induced interactions, whose ground-state exhibits orbital order, chiral currents and emergent magnetic fluxes through the spontaneous breaking of time-reversal symmetry (TRS). This rich model is analyzed both in the weakly-interacting (mean-field) regime, captured by the NLSE, and in the strongly-interacting (quantum) regime, through various analytical and numerical methods. We discuss how the exotic properties and phase transitions of this peculiar lattice model could be detected in practice, through static and dynamical probes, in realistic settings. Our general construction leads to controllable Hubbard-type models and quantum spin models, well suited for the exploration of exotic quantum phases of matter emerging from unconventional interactions.

### Theoretical approach and outline of the article

The first Sections II-III explore how unconventional nonlinearities can emerge in driven nonlinear systems described by the two-mode NLSE. Our theoretical approach uses a parent quantum many-body Hamiltonian, which describes interacting bosons subjected to a periodic drive. Within this Hamiltonian framework, we derive an effective (time-independent) quantum Hamiltonian that well describes the stroboscopic dynamics in the high-frequency regime of the periodic drive [17; 83; 84; 26; 27; 85].
We then take the classical limit of this effective quantum description [86; 87; 85; 88] to finally obtain an effective NLSE. This approach, which explicitly reveals the emergent nonlinearities generated and controlled by the drive, is summarized in Fig. 1. Effective nonlinearities can be tuned by simply adjusting the driving sequence. Section IV analyzes how this control over nonlinearities can lead to modifications of the classical phase-space topology. We describe these transitions through a "fixed-point phase diagram", which we explain using a simple pendulum analogy. Interestingly, the control over drive-induced nonlinearities is directly reflected in the phase-space topology, which can be detected through the dynamics of the relative population and phase in the two modes. Section V explores the validity of our two central approximations: the high-frequency approximation related to the drive and the mean-field approximation associated with the classical limit. Here we perform numerical simulations of the quantum and classical dynamics, comparing the full time dynamics generated by the drive to the effective descriptions. As a by-product, this numerical analysis further illustrates how effective nonlinearities can be unambiguously detected through the dynamics of the relative population and phase in the two modes. We then design lattice systems with controllable drive-induced interactions in Section VI. Using a dimerization construction, by which we couple individual (two-mode) dimers, we derive two classes of lattice models with effective pair-hopping processes. In Section VII, we set the focus on the ground-state properties of a specific dimerized lattice model with pair hopping, which gives rise to orbital order, chiral currents and emergent magnetic fluxes through the spontaneous breaking of TRS. These intriguing properties are analyzed using various analytical and numerical methods, both in the weakly-interacting (mean-field) regime captured by the NLSE and in the strongly-correlated quantum regime. As a by-product, we derive effective spin models, deep in the strongly-interacting regime, which are shown to feature peculiar (Dzyaloshinskii-Moriya-type) interactions. We conclude this work in Section VIII, by proposing possible experimental implementations and detection schemes in optics and cold atoms.

## II Two-mode nonlinear systems and drive-induced nonlinearities

We start by considering a broad class of two-mode nonlinear systems, described by the nonlinear Schrodinger equation (NLSE) \[i\frac{\partial\psi_{1}}{\partial t} =\left(-\gamma\frac{\partial^{2}}{\partial x^{2}}+|\psi_{1}|^{2}+\beta|\psi_{2}|^{2}\right)\psi_{1}-\frac{\Omega_{0}}{2}\psi_{2},\] \[i\frac{\partial\psi_{2}}{\partial t} =\left(-\gamma\frac{\partial^{2}}{\partial x^{2}}+|\psi_{2}|^{2}+\beta|\psi_{1}|^{2}\right)\psi_{2}-\frac{\Omega_{0}}{2}\psi_{1}. \tag{1}\] Here, \(\psi_{1,2}(x,t)\) denote the complex amplitude of the fields corresponding to the two modes \(s=1,2\); they depend on the evolution "time" \(t\) and the "spatial" coordinate \(x\).

Figure 1: Schematic of the approach. We consider a general class of nonlinear systems, driven by a periodic driving sequence and described by the nonlinear Schrödinger equation (NLSE). To analyse these settings, we introduce a parent quantum many-body Hamiltonian, \(\hat{H}_{0}+\hat{V}(t)\), which describes interacting bosons subjected to a periodic drive. From this, we derive an effective quantum Hamiltonian \(\hat{H}_{\rm eff}\) in the high-frequency limit of the drive (\(\omega\rightarrow\infty\)). We then derive the effective nonlinear Schrödinger equation upon taking the classical limit, \(N\rightarrow\infty\), where \(N\) is the number of bosons, hence revealing the effective nonlinearities generated by the driving sequence; see also Fig. 8 in Section V regarding the numerical validation of this approach.

The focus of this work is set on the "internal" dynamics associated with the two modes, such that the "spatial" coordinate \(x\) [and the related kinetic-energy term \(\sim\gamma\) in Eq. (1)] does not play any role in the following. For the sake of generality, the equations of motion (1) contain two types of nonlinearities, which are generically present in optical cavities [70; 71; 72; 73]: the so-called self-phase modulation and the cross-phase modulation, whose respective strengths are set by the parameter \(\beta\); we have also included a static linear coupling of strength \(\Omega_{0}/2\). We point out that the nonlinear equations (1) are decoupled in the limit \(\Omega_{0}=\beta=0\), i.e. in the absence of linear coupling and cross-phase modulation. While Eq. (1) naturally describes the two polarization modes \(\psi_{1,2}\) of a light field propagating in a lossless cavity [70; 71; 72; 73], or light propagating in a pair of adjacent waveguides [50; 74], it should be noted that Eq. (1) equally captures the physics of bosonic atomic gases trapped in a double well potential, as well as two-component Bose-Einstein condensates [76; 82; 80]; see Fig. 2 for an illustration of these four possible realizations. A more detailed discussion on experimental aspects is provided in Section VIII.

In order to modify the nonlinearities of a system described by Eq. (1), we now introduce a time-periodic pulse sequence of period \(T\), which mixes the two modes in a fast and stroboscopic manner. As illustrated in Fig. 3(a), the sequence is characterized by four successive steps (within each period \(T\)):

* Step 1: Free evolution according to the NLSE (1) for a duration \(t=\alpha T\), where \(\alpha\) is a tunable parameter defined between \([0,1]\).
* Step 2: the two components suddenly undergo the mixing operation (Pulse \(\oplus\)) \[\psi_{1} \rightarrow (1/\sqrt{2})\left(\psi_{1}+i\psi_{2}\right),\] (2) \[\psi_{2} \rightarrow (1/\sqrt{2})\left(i\psi_{1}+\psi_{2}\right).\]
* Step 3: Free evolution according to the NLSE (1) for a duration \(t=(1-\alpha)T\).
* Step 4: the two components undergo the reverse mixing operation (Pulse \(\ominus\)) \[\psi_{1} \rightarrow (1/\sqrt{2})\left(\psi_{1}-i\psi_{2}\right),\] (3) \[\psi_{2} \rightarrow (1/\sqrt{2})\left(\psi_{2}-i\psi_{1}\right).\]

For certain devices, the mixing operations Eqs. (2)-(3) can be performed readily, on arbitrarily short time scales. For instance, in a two-mode optical cavity [70; 71; 72], these operations would correspond to a coupling between the two polarization eigenmodes of the cavity, as directly realized by means of quarter-wave plates [90; 91]; see Fig. 2(a) and Section VIII.
More generally, when the mixing processes in Eqs. (2)-(3) cannot be directly performed by a device, they can be realized by activating a linear coupling between the two modes, during a short pulse duration \(\tau\ll T\), such that the equations of motion of the driven system can be written in the form \[i\frac{\partial\psi_{1}}{\partial t} = \left(-\gamma\frac{\partial^{2}}{\partial x^{2}}+|\psi_{1}|^{2}+\beta|\psi_{2}|^{2}\right)\psi_{1}-\frac{\Omega(t)}{2}\psi_{2}, \tag{4}\] \[i\frac{\partial\psi_{2}}{\partial t} = \left(-\gamma\frac{\partial^{2}}{\partial x^{2}}+|\psi_{2}|^{2}+\beta|\psi_{1}|^{2}\right)\psi_{2}-\frac{\Omega(t)}{2}\psi_{1}.\] Here, the function \(\Omega(t)=\Omega_{0}+f_{\text{pulse}}(t)\) includes the pulse sequence defined by the function \[f_{\text{pulse}}(t) = (+\pi/2+2\pi\mathfrak{p})/\tau\quad\text{for }t_{\text{n}}^{\oplus}-\tau\leq t\leq t_{\text{n}}^{\oplus}, \tag{5}\] \[= (-\pi/2+2\pi\mathfrak{p})/\tau\quad\text{for }t_{\text{n}}^{\ominus}-\tau\leq t\leq t_{\text{n}}^{\ominus},\] \[= 0\quad\text{otherwise},\] where \(t_{\text{n}}^{\oplus}=(\mathfrak{n}+\alpha)T\) and \(t_{\text{n}}^{\ominus}=(\mathfrak{n}+1)T\) denote the successive pulse activation times, with \(\mathfrak{n}=0,1,2\dots\); see Fig. 2(b) and Fig. 3(a). The pulse function in Eq. (5) also includes an arbitrary integer, \(\mathfrak{p}\in\mathbb{Z}\), which can be chosen based on practical constraints; for instance, it can be set such that the linear coupling \(\Omega(t)\) never changes sign over time, which can be convenient for certain physical realizations; see Figs. 2(b)-(d) and Section VIII.

Figure 2: Possible realizations in optics and cold atomic gases: (a) Two modes in an optical ring cavity (1 and 2), repeatedly undergoing mixing operations (\(\oplus\) and \(\ominus\)) along the ring. These operations correspond to a coupling between the two polarization eigenmodes of the cavity, as realized by means of quarter-wave plates; see Eqs. (2)-(3). (b) Two optical waveguides (1 and 2) with modulated inter-waveguide separation, realizing a “time-periodic” linear coupling \(\Omega(t)\) between the two optical modes [Eq. (4)]. In both cases (a)-(b), the “time” coordinate corresponds to the propagation direction [50; 89]. (c) Two-component BEC involving two atomic internal states and a time-dependent (microwave) coupling \(\Omega(t)\). (d) Bosonic gas in a double-well potential, with a time-modulated tunneling strength \(\Omega(t)\).

To verify that the drive in Eqs. (4)-(5) indeed realizes the mixing operations in Eqs. (2)-(3), we restrict ourselves to the (linear) driving terms in the coupled Schrodinger equations (4) and we obtain the time-evolution operators corresponding to the first and second pulses, respectively: \[\hat{U}(t_{\text{n}}^{\oplus};t_{\text{n}}^{\oplus}-\tau) = e^{i\frac{\pi}{4}\hat{\sigma}_{x}}\equiv\hat{U}_{\text{mix}}, \tag{6}\] \[\hat{U}(t_{\text{n}}^{\ominus};t_{\text{n}}^{\ominus}-\tau) = e^{-i\frac{\pi}{4}\hat{\sigma}_{x}}=\hat{U}_{\text{mix}}^{\dagger},\] where \(\hat{\sigma}_{x}\) is the standard Pauli matrix. The operators \(\hat{U}_{\text{mix}}\) and \(\hat{U}_{\text{mix}}^{\dagger}\) in Eq. (6) indeed realize the mixing operations in Eqs. (2)-(3), respectively. We note that these mixing operations are known as \(\pi/2\) pulses in quantum optics [92, 81, 93, 94].
In the limit of a fast pulse sequence, namely, when the period of the drive \(T\ll T_{\text{eff}}\) is much smaller than the effective "time" scale of the system (to be discussed below), we find that the stroboscopic time-evolution of the nonlinear system is well described by an _effective_ NLSE with modified nonlinearities. Following the method detailed in Section III, this effective NLSE reads \[i\frac{\partial\psi_{1}}{\partial t} =\left(-\gamma\frac{\partial^{2}}{\partial x^{2}}+U_{1}|\psi_{1}|^{2}+U_{2}|\psi_{2}|^{2}\right)\psi_{1}\] \[\quad+U_{3}\psi_{1}^{*}\psi_{2}^{2}-\frac{\Omega_{0}}{2}\psi_{2},\] \[i\frac{\partial\psi_{2}}{\partial t} =\left(-\gamma\frac{\partial^{2}}{\partial x^{2}}+U_{1}|\psi_{2}|^{2}+U_{2}|\psi_{1}|^{2}\right)\psi_{2}\] \[\quad+U_{3}\psi_{2}^{*}\psi_{1}^{2}-\frac{\Omega_{0}}{2}\psi_{1}, \tag{7}\] where the three types of nonlinearities are controlled by the parameters \[U_{1} =(3\alpha-1)/2,\] \[U_{2} =\beta(3\alpha-1)/2,\] \[U_{3} =(\alpha-1)(1-\beta)/2. \tag{8}\] In this framework, the system is assumed to be measured stroboscopically at times \(t\!=\!T\!\times\!\mathsf{n}\), with \(\mathsf{n}\!\in\!\mathbb{N}\). Comparing Eqs. (7)-(8) with the original Eq. (1), we find that the repeated mixing processes in Eqs. (2)-(3) effectively produce a new form of nonlinearity, commonly known in optics as four-wave mixing [67, 68, 69]. The drive also renormalizes the initial nonlinearities (self-phase and cross-phase modulations) by a same factor \((3\alpha-1)/2\). We point out that the effective four-wave mixing is induced even in the limit of two initially decoupled modes (\(\beta\!=\!\Omega_{0}\!=\!0\)). We also remark that the NLSE in Eq. (1) is recovered in the limit \(\alpha=1\), corresponding to a non-driven system. We stress that the nonlinear system described by the NLSE in Eq. (7) is assumed to be lossless, such that \(|\psi_{1}|^{2}\!+\!|\psi_{2}|^{2}\!=\!N\) is a constant. Under this constraint, one can add any arbitrary constant \(\mathsf{c}\) to the self-phase and cross-phase modulations, \((U_{1},U_{2})\longrightarrow(U_{1}+\mathsf{c},U_{2}+\mathsf{c})\), without affecting the physics. In particular, this implies that the pathological case \(\beta\!=\!1\) always trivializes to a linear problem. As another technical note, we point out that the mixing processes in Eqs. (2)-(3) do not modify the kinetic-energy terms in Eq. (1). For the sake of presentation, we henceforth set \(\gamma\!=\!0\) (except otherwise stated), but we do keep in mind that these terms can be readily added in the description without affecting the results [95]. It is the aim of the following Sections III-V to demonstrate the effective description displayed in Eq. (7) and to explore its regimes of validity, using analytical and numerical methods. Section IV analyzes how tuning the relative strengths of effective nonlinearities [Eq. (8)] can induce topological changes in phase space, hence leading to strong modifications of the dynamics. We then generalize our approach to lattice systems in Section VI.

## III Quantum many-body approach to drive-induced nonlinearities

Our approach consists in three successive steps [Fig. 1]:

* We introduce a parent quantum many-body Hamiltonian, whose semiclassical dynamics reproduces the time evolution of the driven nonlinear system in Eq. (4);
* Within this quantum framework, we derive the effective (Floquet) Hamiltonian that well captures the long time dynamics in the high-frequency limit (\(2\pi/T\rightarrow\infty\));
* We then obtain the effective classical equations of motion (i.e. the effective NLSE) from the effective quantum Hamiltonian.

The validity of this approach will then be verified in Section V, through numerical studies of both quantum and classical dynamics. We point out that the periodically-driven NLSE has been widely explored in optics [96, 74, 97] and in cold atoms [98, 99, 100, 101, 102, 103] using other theoretical methods.

### The parent quantum many-body system

Our starting point is the quantum many-body Hamiltonian \[\hat{H}_{0}= \frac{1}{2}\left(\hat{a}_{1}^{\dagger}\hat{a}_{1}^{\dagger}\hat{a}_{1}\hat{a}_{1}+\hat{a}_{2}^{\dagger}\hat{a}_{2}^{\dagger}\hat{a}_{2}\hat{a}_{2}\right)\] \[+\beta\hat{a}_{1}^{\dagger}\hat{a}_{2}^{\dagger}\hat{a}_{1}\hat{a}_{2}-\frac{\Omega_{0}}{2}\left(\hat{a}_{1}^{\dagger}\hat{a}_{2}+\hat{a}_{2}^{\dagger}\hat{a}_{1}\right), \tag{9}\] where \(\hat{a}_{s}^{\dagger}\) (resp. \(\hat{a}_{s}\)) creates (resp. annihilates) a boson in the mode \(s=1,2\). These operators satisfy the bosonic commutation relations, \([\hat{a}_{s},\hat{a}_{s^{\prime}}^{\dagger}]=\delta_{s,s^{\prime}}\). The first line in Eq. (9) describes intra-mode (Hubbard) interactions, while the second line describes inter-mode (cross) interactions of strength \(\beta\); the Hamiltonian also includes single-particle hopping processes of amplitude \(\Omega_{0}/2\); see Fig. 4(a)-(c) for a sketch of the processes and Refs. [104; 105]. Henceforth, the bare Hubbard interaction strength sets our unit of energy and time.

Figure 3: (a) The pulse sequence involves free evolution, described by the NLSE in Eq. (1), interrupted by two pulses \((\oplus,\ominus)\) described by Eqs. (2)-(3). (b) The same pulse sequence, now expressed in terms of the time-evolution operator in Eq. (20), which involves the mixing operations \(\hat{U}_{\text{mix}}^{(\dagger)}\) and the “free” time evolution operator \(\hat{H}_{0}\).

First of all, we note that the classical equations of motion (NLSE) in Eq. (1) are readily obtained from Heisenberg's equations, \(d\hat{a}_{s}/dt=i[\hat{H}_{0},\hat{a}_{s}]\), upon taking the classical limit \(\hat{a}_{1,2}\to\psi_{1,2}\); see Refs. [85; 86; 87; 88]. Specifically, the self-phase modulation in Eq. (1) stems from the intra-mode (Hubbard) interaction terms in Eq. (9), while the cross-phase modulation stems from the inter-mode (cross) interaction term. Hence, this justifies the choice of Eq. (9) as a proper parent quantum Hamiltonian for our initial (non-driven) system. Note that we set \(\hbar\!=\!1\) throughout this work. In fact, for the sake of later convenience, it is instructive to derive the NLSE in Eq. (1) using a different approach. Indeed, this will allow us to introduce central notions and quantities, which will be used throughout this work. Let us introduce a set of angular momentum (Schwinger) operators [106], defined as \[\hat{J}_{x} =\frac{1}{2}\left(\hat{a}_{1}^{\dagger}\hat{a}_{2}+\hat{a}_{2}^{\dagger}\hat{a}_{1}\right),\quad\hat{J}_{y}=\frac{1}{2i}\left(\hat{a}_{2}^{\dagger}\hat{a}_{1}-\hat{a}_{1}^{\dagger}\hat{a}_{2}\right),\] \[\hat{J}_{z} =\frac{1}{2}\left(\hat{a}_{2}^{\dagger}\hat{a}_{2}-\hat{a}_{1}^{\dagger}\hat{a}_{1}\right),\quad\hat{N}=\hat{a}_{1}^{\dagger}\hat{a}_{1}+\hat{a}_{2}^{\dagger}\hat{a}_{2}.
\tag{10}\] These angular-momentum operators satisfy the spin commutation relations \([\hat{J}_{\mu},\hat{J}_{\nu}]=i\varepsilon_{\mu\nu\lambda}\hat{J}_{\lambda}\), and the operator \(\hat{N}\) simply counts the total number of bosons in the system (assumed to be constant). We note that these operators also satisfy the sum rule \[\hat{J}_{x}^{2}+\hat{J}_{y}^{2}+\hat{J}_{z}^{2}=\frac{\hat{N}}{2}\left(\frac{ \hat{N}}{2}+1\right), \tag{11}\] which is a conserved quantity. For a single boson (\(N\!=\!1\)), we have \(\hat{J}_{\mu}\!=\!\hat{\sigma}_{\mu}/2\), where \(\hat{\sigma}_{x,y,z}\) denote the Pauli matrices. Using the operators in Eq. (10), the parent Hamiltonian in Eq. (9) simply reads \[\hat{H}_{0}=\chi\hat{J}_{z}^{2}-\Omega_{0}\hat{J}_{x}+\text{constant},\qquad \chi\!=\!1-\beta. \tag{12}\] We henceforth neglect the constant terms, which are proportional to \(\hat{N}\) and \(\hat{N}^{2}\); see Appendix A. We note that the Hamiltonian in Eq. (12) has been extensively studied in the context of the bosonic Josephson effect [76; 77; 80; 107; 108; 109; 110] and nuclear physics [111]. From Eq. (12), we also recover that the pathological case \(\beta\!=\!1\) trivializes to a non-interacting problem (\(\chi\!=\!0\)), as previously noted in Section II. The equations of motion associated with Eq. (12) are readily obtained from Heisenberg's equations \[\frac{d\hat{J}_{z}(t)}{dt} =i[\hat{H}_{0},\hat{J}_{z}(t)]=-\Omega_{0}\hat{J}_{y}(t), \tag{13}\] \[\frac{d\hat{J}_{y}(t)}{dt} =i[\hat{H}_{0},\hat{J}_{y}(t)]=\Omega_{0}\hat{J}_{z}(t)\] \[\qquad\qquad\qquad+\chi\left(\hat{J}_{z}(t)\hat{J}_{x}(t)+\hat{J} _{x}(t)\hat{J}_{z}(t)\right).\] In order to connect Eq. (13) to the classical NLSE in Eq. (1), we take the classical limit and introduce the Bloch-Poincare sphere representation \((\theta,\varphi)\) through the mapping [110] \[\hat{J}_{x} \to\frac{N}{2}\sqrt{1-z^{2}}\cos\varphi,\quad\hat{J}_{y}\to-\frac {N}{2}\sqrt{1-z^{2}}\sin\varphi,\] \[\hat{J}_{z} \to-\frac{N}{2}z,\qquad\qquad\qquad z=\cos\theta. \tag{14}\] We note that this Bloch-sphere representation relies on Eq. (11) and particle conservation. Injecting this Eq. (14) into Eq. (13), one obtains the classical equations of motion \[\dot{z} =-\Omega_{0}\sqrt{1-z^{2}}\sin\varphi,\] \[\dot{\varphi} =N\chi z+\Omega_{0}\frac{z}{\sqrt{1-z^{2}}}\cos\varphi, \tag{15}\] for the two canonical conjugate variables \(z(t)\) and \(\varphi(t)\)[76; 80; 108]. We point out that Eq. (15) is equivalent to the NLSE in Eq. (1) upon representing the complex amplitudes \(\psi_{1,2}\) on the Bloch-Poincare sphere [88; 112] \[\psi_{1} =\sqrt{N}\cos(\theta/2)=\sqrt{\frac{N}{2}+n},\] \[\psi_{2} =\sqrt{N}\sin(\theta/2)\,e^{i\varphi}=\sqrt{\frac{N}{2}-n}\,e^{i \varphi}, \tag{16}\] where we introduced the relative phase \(\varphi\) between the two modes, the relative population (or relative light intensity) \[z=\cos\theta=\frac{2n}{N}=\left(|\psi_{1}|^{2}-|\psi_{2}|^{2}\right)/N, \tag{17}\] and the total population (or total light intensity) \[N=|\psi_{1}|^{2}+|\psi_{2}|^{2}. \tag{18}\] Figure 4: Processes in the Hamiltonian in Eq. (9): (a) Intra-mode (Hubbard) interactions; (b) inter-mode (cross) interactions; and (c) single-particle hopping processes. (d) The effective Hamiltonian in Eq. (34) includes pair-hopping processes, by which two interacting particles in the same mode simultaneously change mode. In this illustration, the two modes \(1\) and \(2\) correspond to the low-energy orbitals of a double-well potential, and the bosons are represented by green spheres. 
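As a concrete illustration, the classical trajectories governed by Eq. (15) can be obtained numerically. The following Python sketch (our own minimal example, with arbitrarily chosen parameters, not code from the paper) integrates \((z(t),\varphi(t))\) and uses conservation of the classical energy (19) as a sanity check:

```python
# Minimal sketch: integrate the classical equations of motion (15) for the
# relative population z(t) and relative phase phi(t) of the non-driven system.
import numpy as np
from scipy.integrate import solve_ivp

chiN, Omega0 = 1.0, 0.5   # mean-field interaction chi*N and linear coupling (assumed values)

def eom(t, y):
    z, phi = y
    root = np.sqrt(max(1.0 - z**2, 1e-12))   # guard the |z| -> 1 limit
    dz   = -Omega0 * root * np.sin(phi)
    dphi =  chiN * z + Omega0 * (z / root) * np.cos(phi)
    return [dz, dphi]

sol = solve_ivp(eom, (0.0, 40.0), y0=[0.6, 0.0], max_step=0.01)
z, phi = sol.y

# The classical energy H0(z, phi) of Eq. (19) is conserved along trajectories,
# so its residual drift measures the integration error:
H0 = 0.5 * chiN * z**2 - Omega0 * np.sqrt(1.0 - z**2) * np.cos(phi)
print("energy drift:", H0.max() - H0.min())   # should be ~ 0
```

The same routine, with the right-hand side replaced by the driven equations of motion (35) derived below, can be used to explore the drive-induced dynamics.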
We emphasize that the dynamics in phase space, i.e. the trajectories (\(z(t),\varphi(t)\)) resulting from Eq. (15), can be simply monitored in experiments by measuring the relative population (intensity) and relative phase of the two modes; see also Section VIII for a more detailed discussion. For the sake of completeness, we note that the equations of motion in Eq. (15) can be derived from Hamilton's equation, using the classical Hamiltonian [113, 76, 80] \[\mathcal{H}_{0}(z,\varphi)=\frac{\chi N}{2}z^{2}-\Omega_{0}\sqrt{1-z^{2}}\cos\varphi. \tag{19}\] The classical dynamics of the non-driven system hence relies on a competition between the "mean-field" interaction parameter \(g=\chi N\) and the linear coupling \(\Omega_{0}\). This competition is at the core of bifurcations and symmetry breaking in bosonic Josephson junctions [108, 70, 80, 76]. These concepts will be further discussed in Section IV. ### The pulse sequence and the effective Floquet Hamiltonian We now introduce the quantum-many-body analogue of the pulse sequence introduced in Section II; see Fig. 3. We write the time-evolution operator over one period \(T\) in the form [Fig. 3(b)] \[\hat{U}(T;0)=\hat{U}^{\dagger}_{\text{mix}}\,e^{-i(1-\alpha)T\hat{H}_{0}}\hat {U}_{\text{mix}}\,e^{-i\alpha T\hat{H}_{0}}, \tag{20}\] where the mixing operator is defined as \[\hat{U}_{\text{mix}}=e^{i\frac{\pi}{2}\hat{J}_{x}}, \tag{21}\] and we remind that \(\alpha\!\in\![0,1]\) is a tunable parameter. We note that the operator \(\hat{U}_{\text{mix}}\) in Eq. (21) indeed corresponds to the \(\pi/2\)-pulse operator in Eq. (6) for a single boson (\(N\!=\!1\)), which is consistent with the fact that the mixing operation is a single-particle process. When writing Eq. (20), we explicitly took the limit \(\tau\!\to\!0\), where \(\tau\) is the pulse duration; see Eq. (5). The state of the quantum many-body system at time \(t_{\text{n}}=T\!\times\!\text{n}\) is then obtained as \[|\psi(t_{\text{n}})\rangle=\hat{U}(t_{\text{n}};0)|\psi(0)\rangle=\left(\hat{U }(T;0)\right)^{\text{n}}|\psi(0)\rangle, \tag{22}\] where \(|\psi(0)\rangle\) denotes the initial state of the system. We now derive the effective (Floquet) Hamiltonian [17, 83, 26], which captures the stroboscopic dynamics of the driven system, and hence, its time evolution over long time scales \(t_{\text{n}}\gg T\). The effective Hamiltonian is defined through the time-evolution operator over one period [17, 114] \[\hat{U}(T;0)=e^{-iT\hat{H}_{\text{eff}}}, \tag{23}\] and it can be evaluated explicitly through a \(1/\omega\)-expansion, where \(\omega\!=\!2\pi/T\) denotes the drive frequency; see Refs. [17, 83, 84, 26, 27, 17]. In order to reach convergence of this infinite series expansion, we partially resum the series [17] by splitting the time-evolution operator in Eq. (20) into two parts \[\hat{U}(T;0)=e^{-i(1-\alpha)T\hat{H}_{1}}e^{-i\alpha T\hat{H}_{0}}, \tag{24}\] where we introduced the operator \(\hat{H}_{1}\) defined as \[e^{-it\hat{H}_{1}}\equiv e^{-i\frac{\pi}{2}\hat{J}_{x}}e^{-it\hat{H}_{0}}e^{i \frac{\pi}{2}\hat{J}_{x}}. \tag{25}\] The time-evolution operator in Eq. (25) has a simple interpretation: it corresponds to free time-evolution in a rotated basis. Then, assuming that \(T\omega_{\text{eff}}\!\ll\!1\), where \(\omega_{\text{eff}}\) is the characteristic frequency associated with the processes included in the Hamiltonians \(\hat{H}_{0}\) and \(\hat{H}_{1}\), we apply the Trotter approximation to Eq. 
(24), \[\hat{U}(T;0)\approx e^{-iT\left(\alpha\hat{H}_{0}+(1-\alpha)\hat{H}_{1}\right)}, \tag{26}\] from which we directly obtain the effective Hamiltonian [Eq. (54)] \[\hat{H}_{\text{eff}}=\alpha\hat{H}_{0}+(1-\alpha)\hat{H}_{1}+\mathcal{O}(T). \tag{27}\] Our problem of finding the effective Hamiltonian thus reduces to the calculation of \(\hat{H}_{1}\) defined in Eq. (25). This step can be performed exactly, by noting that \[\hat{H}_{1}=e^{-i\frac{\pi}{2}\hat{J}_{x}}\hat{H}_{0}\,e^{i\frac{\pi}{2}\hat{J }_{x}}=\chi\left(e^{-i\frac{\pi}{2}\hat{J}_{x}}\hat{J}_{2}^{2}\,e^{i\frac{\pi} {2}\hat{J}_{x}}\right)-\Omega_{0}\hat{J}_{x},\] where we used the definition of \(\hat{H}_{0}\) in Eq. (12). Using the Baker-Campbell-Hausdorff formula, one obtains [115] \[e^{-i\frac{\pi}{2}\hat{J}_{x}}\hat{J}_{z}^{2}\,e^{i\frac{\pi}{2}\hat{J}_{x}}= \hat{J}_{y}^{2}, \tag{28}\] such that \[\hat{H}_{1}=\chi\hat{J}_{y}^{2}-\Omega_{0}\hat{J}_{x}. \tag{29}\] The effective Hamiltonian in Eq. (27) finally reads \[\hat{H}_{\text{eff}}=\chi\left(\alpha\hat{J}_{z}^{2}+(1-\alpha)\hat{J}_{y}^{2} \right)-\Omega_{0}\hat{J}_{x}+\mathcal{O}(T). \tag{30}\] From this result, we find that the Trotter approximation [Eq. (26)] is valid for a sufficiently short driving period satisfying \(T\!\ll\!1/\chi\) and \(T\!\ll\!1/\Omega_{0}\). The Hamiltonian displayed in Eq. (30) involves unconventional interactions, which are known as two-axis twisting interactions in the context of quantum optics. They have been proposed in view of creating squeezed spin states with optimal squeezing [116, 117, 118, 119, 93, 120]. #### iii.2.1 A few limiting cases At this stage, it is insightful to analyze the effective Hamiltonian in Eq. (30) for a few limiting cases: * When \(\alpha=1\), one finds \(\hat{H}_{\text{eff}}=\hat{H}_{0}\), which reflects the triviality of the sequence in Eq. (20) in this case. * When \(\alpha\!=\!0\), one finds the effective Hamiltonian \[\hat{H}_{\text{eff}}=\chi\hat{J}_{y}^{2}-\Omega_{0}\hat{J}_{x}=e^{-i\frac{\pi}{2 }\hat{J}_{x}}(\hat{H}_{0})\,e^{i\frac{\pi}{2}\hat{J}_{x}},\] (31) which is thus strictly equivalent to the non-driven Hamiltonian \(\hat{H}_{0}\) up to a unitary transformation [Eq. (28)]: the Hamiltonians \(\hat{H}_{0}\) and \(\hat{H}_{\text{eff}}\) share the same spectrum. In this case, the driving sequence simply generates an initial and final kick [17], as can be deduced by explicitly writing the time-evolving state at some arbitrary stroboscopic time \(t\!=\!t_{\text{n}}\) [Eq. (22)] \[|\psi(t_{\text{n}})\rangle =\left(\hat{U}_{\alpha=0}(T;0)\right)^{\text{n}}|\psi(0)\rangle,\] \[=e^{-i\frac{\pi}{2}\hat{J}_{x}}e^{-it_{\text{n}}\hat{H}_{0}}e^{i \frac{\pi}{2}\hat{J}_{x}}|\psi(0)\rangle.\] (32) The long-time dynamics in Eq. (32) is indeed dictated by the static Hamiltonian \(\hat{H}_{0}\), but it is also affected by the initial and final kicks, \(e^{\pm i\frac{\pi}{2}\hat{J}_{x}}\), associated with the change of basis (rotation on the Bloch-Poincare sphere) [121]. In Section VI, we will see that this situation can nonetheless lead to intriguing phenomena upon coupling individual dimers in a time-periodic manner [Fig. 14(b)]. * When \(\alpha=1/2\), the effective Hamiltonian reads \[\hat{H}_{\text{eff}} =\frac{\chi}{2}\left(\hat{J}_{y}^{2}+\hat{J}_{z}^{2}\right)- \Omega_{0}\hat{J}_{x}+\mathcal{O}(T),\] \[=-\frac{\chi}{2}\hat{J}_{x}^{2}-\Omega_{0}\hat{J}_{x}+\mathcal{O }(T),\] (33) where we used the sum rule (11) and omitted constant terms. 
In this case, the system displays the special symmetry \([\hat{H}_{\text{eff}},\hat{J}_{x}]=0\), such that the many-body eigenstates and energies can be written exactly.

#### iii.2.2 The effective Hamiltonian in the bosonic representation

It is instructive to rewrite the effective Hamiltonian in Eq. (30) using the original bosonic operators [Appendix A], \[\hat{H}_{\text{eff}} =\frac{U_{1}}{2}\left(\hat{a}_{1}^{\dagger}\hat{a}_{1}^{\dagger}\hat{a}_{1}\hat{a}_{1}+\hat{a}_{2}^{\dagger}\hat{a}_{2}^{\dagger}\hat{a}_{2}\hat{a}_{2}\right)\] \[+U_{2}\left(\hat{a}_{1}^{\dagger}\hat{a}_{2}^{\dagger}\hat{a}_{1}\hat{a}_{2}\right)\] \[+\frac{U_{3}}{2}\left(\hat{a}_{1}^{\dagger}\hat{a}_{1}^{\dagger}\hat{a}_{2}\hat{a}_{2}+\hat{a}_{2}^{\dagger}\hat{a}_{2}^{\dagger}\hat{a}_{1}\hat{a}_{1}\right)\] \[-\frac{\Omega_{0}}{2}\left(\hat{a}_{1}^{\dagger}\hat{a}_{2}+\hat{a}_{2}^{\dagger}\hat{a}_{1}\right)+\mathcal{O}(T), \tag{34}\] where the interaction strengths are given in Eq. (8). A comparison with the initial Hamiltonian \(\hat{H}_{0}\) in Eq. (9) indicates that the driving pulse sequence has effectively generated novel interaction terms; see the third line of Eq. (34). These pair-hopping terms [122, 123, 124, 125, 126, 127], which stem from the \(\hat{J}_{y}^{2}\) interactions in Eq. (30), describe processes by which two particles in mode \(s\) collide and end up in the other mode \(s^{\prime}\neq s\); see Fig. 4(d). As we now discuss below in Section III.3, these pair-hopping terms are at the origin of the four-wave mixing nonlinearity announced in Eq. (7). As a technical remark, we remind the reader that adding a constant shift to the intra-mode (Hubbard) and inter-mode interactions, \((U_{1},U_{2})\longrightarrow(U_{1}+\epsilon,U_{2}+\epsilon)\), does not modify the physics, due to the number-conserving nature of the system.

### Effective classical equations of motion

First of all, we find that the effective NLSE in Eq. (7) is directly obtained from the effective Hamiltonian \(\hat{H}_{\text{eff}}\) in Eq. (34), using Heisenberg's equations \(d\hat{a}_{s}/dt=i[\hat{H}_{\text{eff}},\hat{a}_{s}]\), and upon taking the classical limit \(\hat{a}_{1,2}\rightarrow\psi_{1,2}\). In particular, the effective four-wave mixing in Eq. (7) originates from the effective pair-hopping terms in Eq. (34). In analogy with Eqs. (13)-(15), we now explicitly derive the classical equations of motion for the two canonically conjugate variables \(z(t)\) and \(\varphi(t)\), describing the relative population and phase of the two modes. Using the effective Hamiltonian in Eq. (30) and Heisenberg's equations, we find \[\frac{d\hat{J}_{z}(t)}{dt}=i[\hat{H}_{\text{eff}},\hat{J}_{z}(t)] =-\Omega_{0}\hat{J}_{y}(t)\] \[-(1-\alpha)\chi\left(\hat{J}_{y}(t)\hat{J}_{x}(t)+\hat{J}_{x}(t)\hat{J}_{y}(t)\right),\] \[\frac{d\hat{J}_{y}(t)}{dt}=i[\hat{H}_{\text{eff}},\hat{J}_{y}(t)] =\Omega_{0}\hat{J}_{z}(t)\] \[+\alpha\chi\left(\hat{J}_{z}(t)\hat{J}_{x}(t)+\hat{J}_{x}(t)\hat{J}_{z}(t)\right).\] Applying the Bloch-Poincare-sphere mapping [Eq. (14)], we obtain the classical equations of motion \[\dot{z} =-\chi N(1-\alpha)(1-z^{2})\cos\varphi\sin\varphi-\Omega_{0}\sqrt{1-z^{2}}\sin\varphi,\] \[\dot{\varphi} =\chi Nz\left(\alpha-(1-\alpha)\sin^{2}\varphi\right)+\Omega_{0}\frac{z}{\sqrt{1-z^{2}}}\cos\varphi. \tag{35}\] The classical equations of motion in Eq. (35) are physically equivalent to the effective NLSE announced in Eq. (7), through the mapping provided by Eq. (16).
One verifies that Eq. (35) reduces to the equations of motion of the undriven system [Eq. (15)] in the limit \(\alpha\!=\!1\). Drive-induced nonlinearities, which are controlled by the parameter \(\alpha\), strongly affect the dynamics of the two-mode system, as we now analyze in the following Section IV.

## IV Drive-induced nonlinear dynamics and the pendulum analogy

### Symmetries, phase-space topology and transitions

First of all, we find that the equations of motion in Eq. (35) can be derived from Hamilton's equation, using the classical Hamiltonian \[\mathcal{H}_{\text{eff}}(z,\varphi;\alpha)=-\Omega_{0}\sqrt{1-z^{2}}\cos\varphi \tag{36}\] \[\qquad\qquad+\frac{\chi N}{2}\alpha z^{2}+\frac{\chi N}{2}(1-\alpha)(1-z^{2})\,\sin^{2}\varphi.\] It is useful to note that the Hamiltonian in Eq. (36), and the resulting equations of motion in Eq. (35), satisfy, for any value of the parameters, the following discrete symmetries: \[S_{1} :z\rightarrow-z,\] \[S_{2} :\varphi\rightarrow-\varphi. \tag{37}\] We remind that \(\varphi\) is defined modulo \(2\pi\), given its angular nature, and that \(z=\cos\theta\) is defined in the interval \([-1,1]\). We also note that additional symmetries exist for specific values of the parameters [127]. The classical Hamiltonian \(\mathcal{H}_{\text{eff}}(z,\varphi;\alpha)\) describes an energy landscape over the Bloch sphere, from which one can readily deduce all possible trajectories and related fixed points (\(\dot{z}=\dot{\varphi}=0\)) [80]. In order to reveal the impact of drive-induced nonlinearities on the dynamics, we determine a "fixed-point phase diagram", as a function of the dimensionless parameters \(\alpha\) and \(\tilde{\Omega}_{0}=\Omega_{0}/(\chi N)\). Here, we identify a "phase" as a region in parameter space that is characterized by a distinctive phase-space topology (fixed-point configuration); see Fig. 5. By performing a stability analysis on the classical Hamiltonian (36), we obtain the following fixed points: \[\text{FP}_{0}: z=0,\varphi=0,\] \[\text{FP}_{\pi}: z=0,\varphi=\pi,\] \[\text{FP}_{*}: z=0,\varphi=\pm\arccos\left[\widetilde{\Omega}_{0}/(1-\alpha)\right],\] \[\text{FP}_{\pm}: z=\pm\sqrt{1-\widetilde{\Omega}_{0}^{2}/\alpha^{2}},\varphi=\pi. \tag{38}\] These fixed points can be stable or unstable, depending on the values of \(\alpha\) and \(\widetilde{\Omega}_{0}\). In particular, the emergence of certain fixed points can be associated with a spontaneous breaking of the aforementioned symmetries [Eq. (37)]: the fixed points \(\text{FP}_{\pm}\) break \(S_{1}\), while the fixed points \(\text{FP}_{*}\) break \(S_{2}\). We note that neither \(\text{FP}_{0}\) nor \(\text{FP}_{\pi}\) break a symmetry. From the stability analysis, we identify five distinct phases: \[\text{Phase I}: \text{FP}_{0},\text{FP}_{\pi}\,\text{stable}\] \[\text{Phase II}: \text{FP}_{0},\text{FP}_{\pm}\,\text{stable}\quad(S_{1}\text{ broken})\] \[\text{Phase III}: \text{FP}_{0},\text{FP}_{*}\,\text{stable}\quad(S_{2}\text{ broken})\] \[\text{Phase IV}: \text{FP}_{0},\text{FP}_{\pi},\text{FP}_{\pm}\,\text{stable}\quad(S_{1}\text{ broken})\] \[\text{Phase V}: \text{FP}_{0},\text{FP}_{\pi},\text{FP}_{*}\,\text{stable}\quad(S_{2}\text{ broken}) \tag{39}\] We note that a spontaneous symmetry breaking (involving either \(S_{1}\) or \(S_{2}\)) occurs for every phase, except for Phase I. The complete phase diagram is displayed in Fig. 5.
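The stability analysis behind Eq. (39) is easy to reproduce numerically. The sketch below is our own construction (not the authors' code): it tests whether a candidate fixed point of the classical Hamiltonian (36) is an extremum of the energy landscape (a center, hence stable) or a saddle (unstable), using a finite-difference Hessian, in units where \(\chi N=1\).

```python
import numpy as np

def H_eff(z, phi, alpha, W):
    """Effective classical Hamiltonian (36), with W = Omega0/(chi*N)."""
    return (-W * np.sqrt(1.0 - z**2) * np.cos(phi)
            + 0.5 * alpha * z**2
            + 0.5 * (1.0 - alpha) * (1.0 - z**2) * np.sin(phi)**2)

def is_stable(z, phi, alpha, W, h=1e-4):
    """A fixed point is stable (a center) iff the Hessian is definite there."""
    f = lambda dz, dp: H_eff(z + dz, phi + dp, alpha, W)
    Hzz = (f(h, 0) - 2*f(0, 0) + f(-h, 0)) / h**2
    Hpp = (f(0, h) - 2*f(0, 0) + f(0, -h)) / h**2
    Hzp = (f(h, h) - f(h, -h) - f(-h, h) + f(-h, -h)) / (4*h**2)
    return Hzz * Hpp - Hzp**2 > 0   # det > 0: extremum; det < 0: saddle

# FP_pi = (z=0, phi=pi): stable in Phase I, destabilized in Phase III.
print(is_stable(0.0, np.pi, alpha=0.5, W=0.2))  # True  (Phase I)
print(is_stable(0.0, np.pi, alpha=0.3, W=0.5))  # False (Phase III)
```

Scanning such tests over all candidate fixed points (38) on a grid of \((\alpha,\tilde{\Omega}_{0})\) reproduces the phase boundaries of Fig. 5.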
At this stage, it is worth considering three limiting cases:

* When \(\alpha=1\), the classical Hamiltonian \(\mathcal{H}_{\text{eff}}(z,\varphi;\alpha)\) reduces to the bosonic Josephson Junction (BJJ) Hamiltonian \(\mathcal{H}_{0}(z,\varphi)\) in Eq. (19). The BJJ model features a bifurcation point at \(\widetilde{\Omega}_{0}=1\): the fixed point \(\text{FP}_{\pi}\), which is stable for \(\widetilde{\Omega}_{0}>1\) (Phase I), becomes unstable for \(\widetilde{\Omega}_{0}<1\), giving rise to two new stable fixed points \(\text{FP}_{\pm}\) (Phase II); see Fig. 5. This transition, which is associated with the spontaneous breaking of the \(S_{1}\) symmetry, was observed in cold atoms [80] and microresonators [70].
* When \(\alpha=0\), the system corresponds to the BJJ model in a rotated basis; see Eq. (31). The aforementioned bifurcation then corresponds to a transition from Phase I (\(\widetilde{\Omega}_{0}>1\)) to Phase III (\(\widetilde{\Omega}_{0}<1\)), characterized by the two stable fixed points \(\text{FP}_{*}\) and the spontaneous breaking of the \(S_{2}\) symmetry; see Fig. 5.
* When \(\alpha=1/2\), the effective Hamiltonian satisfies the symmetry \([\hat{H}_{\text{eff}},\hat{J}_{x}]=0\); see Eq. (33). Classically, this corresponds to the following constant of motion \[C=\sqrt{1-z(t)^{2}}\cos\varphi(t).\] (40) One verifies that this special constant of motion implies that the system remains in Phase I for any arbitrary value of \(\widetilde{\Omega}_{0}\).

For any other values of \(\alpha\), the system is allowed to enter two new phases (Phases IV and V), which are both characterized by four stable fixed points but are associated with different symmetry breaking; see Fig. 5. Importantly, one can induce transitions between these various phases by simply tuning the drive-induced nonlinearity parameter \(\alpha\). This is illustrated in Fig. 6, which shows two successive transitions in the absence of linear coupling (\(\Omega_{0}=0\)): When \(\alpha=1/2\), the system is in Phase I, with two stable fixed points (\(\text{FP}_{0,\pi}\)) satisfying both symmetries \(S_{1,2}\). Reducing the nonlinearity parameter (\(\alpha<1/2\)) stabilizes two additional fixed points \(\text{FP}_{*}\), which break \(S_{2}\) symmetry (Phase V). Instead, setting \(\alpha>1/2\) stabilizes the two fixed points \(\text{FP}_{\pm}\), associated with the breaking of \(S_{1}\) symmetry (Phase IV).

Figure 5: Phase diagram associated with the effective classical Hamiltonian in Eq. (36), as a function of the drive-induced nonlinearity parameter \(\alpha\) and the dimensionless linear coupling \(\tilde{\Omega}_{0}=\Omega_{0}/(\chi N)\). Each phase is characterized by a distinctive phase-space topology (fixed-point configuration); see Eq. (39). A few trajectories are indicated as thin blue curves (equipotential lines of the energy landscape) for each representative case. The axis used in the phase-space diagrams is shown in Phase I; the colormap is the same as in Fig. 6, where it is explicitly defined. The black circle (resp. white square) at \((\alpha,\tilde{\Omega}_{0})=(0,0)\) [resp. \((\alpha,\tilde{\Omega}_{0})=(1,0)\)] indicates the singular point at which only \(\text{FP}_{*}\) (resp. \(\text{FP}_{\pm}\)) are stable fixed points.

From the quantum effective Hamiltonian in Eq. (30), we observe that these phase-space transitions (and related symmetry breaking) stem from a competition between the two types of interaction terms, \(\hat{J}_{z}^{2}\) and \(\hat{J}_{y}^{2}\).
From the microscopic point of view [Eq. (34)], this competition is between the intra-mode (Hubbard) interaction and the drive-induced pair-hopping processes. This is different from the transition discussed in the context of the BJJ model [108; 110], which involves a competition between the Hubbard interaction and the single-particle hopping (or linear coupling) \(\Omega_{0}\), and which is associated with the breaking of a single symmetry (\(S_{1}\)).

### The pendulum analogy

Tuning drive-induced nonlinearities can change the topology of phase space, hence radically altering nonlinear dynamics. Interestingly, this phenomenon can be captured by a simple pendulum analogy [108; 110; 128], as we now explain. Let us consider a standard pendulum of mass \(m\) and length \(l\), subjected to gravity. Defining the angular-displacement variable \(\varphi\) through [Fig. 7(a)] \[x=l\sin(\varphi),\quad y=l\cos(\varphi), \tag{41}\] we write the classical Hamiltonian of this simple pendulum as \[\mathcal{H}_{\text{pendulum}}(z,\varphi)=\frac{z^{2}}{2I}-mgl\cos\varphi. \tag{42}\] Here, \(z\) denotes the angular-momentum variable and \(I\) is the moment of inertia of the simple pendulum. As pointed out in Ref. [108], the Hamiltonian \(\mathcal{H}_{0}(z,\varphi)\) describing the BJJ in Eq. (19) is precisely of the form (42), upon establishing the following dictionary: \[I\to(\chi N)^{-1},\quad mg\to\Omega_{0},\quad l\to\sqrt{1-z^{2}}. \tag{43}\] In this sense, the BJJ model can be mapped onto a non-rigid pendulum, with momentum-dependent length [108; 110]. While the stable fixed point \(\text{FP}_{0}\) of the BJJ model is naturally associated with the position at rest of a rigid pendulum, the other stable fixed points \(\text{FP}_{\pi}\) (\(\widetilde{\Omega}_{0}>1\)) and \(\text{FP}_{\pm}\) (\(\widetilde{\Omega}_{0}<1\)) stem from the non-rigid nature of the pendulum; see Fig. 7(b) for a sketch of a typical trajectory around \(\text{FP}_{\pm}\). In the BJJ model, the angular-momentum variable \(z\) is restricted to take values in the interval \(z\!\in\![-1,1]\); see Eq. (17). Having reviewed the pendulum analogy for the BJJ model [108; 110], we now apply this analogy to the effective Hamiltonian \(\mathcal{H}_{\text{eff}}(z,\varphi;\alpha)\) in Eq. (36). Compared to the BJJ Hamiltonian \(\mathcal{H}_{0}(z,\varphi)\), the effective Hamiltonian in Eq. (36) features a new term \(\sim\sin^{2}\varphi\). Using the coordinates defined in Eq. (41), we find that this term can be interpreted as an additional contribution to the potential energy of the non-rigid pendulum, given by \[V_{\text{spring}}=\frac{\chi N}{2}(1-\alpha)\,x^{2}. \tag{44}\] Consequently, the driven nonlinear system described by the effective Hamiltonian \(\mathcal{H}_{\text{eff}}(z,\varphi;\alpha)\) in Eq. (36) can be mapped onto a non-rigid pendulum that is horizontally attached to a spring (allowed to slide along the y axis); see Fig. 7(a) for a sketch. We note that the strength of the spring scales as \((1-\alpha)\), such that it vanishes in the limit \(\alpha\!=\!1\) (BJJ model). The spring has an intuitive effect on the trajectories of the pendulum: activating the spring (\(\alpha\lesssim 1\)) naturally reduces the amplitude of oscillations around the equilibrium point (\(z\!=\!\varphi\!=\!0\)). Besides, for a sufficiently strong spring (\(\alpha\!<\!1/2\)), full rotations around the pendulum's pivot (i.e.
trajectories associated with a full scan of the \(\varphi\) axis and a well-defined chirality, \(\text{sign}(z)\)) become strictly forbidden. These intuitive effects are visible in the phase-space diagrams illustrated in Fig. 6. It is also interesting to note that the addition of the spring leads to two new fixed points \(\text{FP}_{*}\), which become stable for a sufficiently strong spring (\(\alpha\!<\!1/2\)); see Figs. 6 and 7. These new equilibrium points correspond to a finite angle \(\varphi\), set by the strength of the spring [Eq. (38)]. To gain further insight, let us focus on the case of vanishing linear coupling \(\Omega_{0}\!=\!0\), which is displayed in Fig. 6. In terms of the pendulum analogy, this corresponds to a vanishing force of gravity [Eq. (43)], implying that the pendulum is only subjected to the elastic force of the spring. For a weak spring (\(\alpha\lesssim 1\)) and small angular momentum (\(|z|\ll 1\)), one can assume that the length of the pendulum is constant, and one obtains a set of intuitive fixed points: the two stable fixed points (\(\text{FP}_{0}\) and \(\text{FP}_{\pi}\)) simply correspond to the two rest positions of the spring, while the two unstable fixed points \(\text{FP}_{*}\) correspond to the positions where the spring is maximally stretched and its force is exactly balanced by the constraint reaction of the rigid pendulum. As previously noted, the fixed points \(\text{FP}_{*}\) become stable for a sufficiently strong spring (\(\alpha\!<\!1/2\)), a peculiar effect which finds its origin in the non-rigid (momentum-dependent) length of the pendulum; see Fig. 7(b) for a sketch of a typical trajectory around \(\text{FP}_{*}\).

Figure 6: Energy landscape associated with the effective classical Hamiltonian \(\mathcal{H}_{\text{eff}}(z,\varphi;\alpha)\) displayed in Eq. (36), for vanishing linear coupling \(\Omega_{0}\!=\!0\), and three different values of the drive-induced nonlinearity parameter: \(\alpha\!=\!0.25\), \(\alpha\!=\!0.5\) and \(\alpha\!=\!0.75\). A few trajectories are indicated as thin blue curves (equipotential lines of the energy landscape) for each case. Note the emergence and disappearance of stable fixed points on the Bloch-Poincaré sphere, as the nonlinearity parameter \(\alpha\) is varied: Phase space undergoes a transition from Phase V (\(\alpha\!=\!0.25\)) to Phase I (\(\alpha\!=\!0.5\)) to Phase IV (\(\alpha\!=\!0.75\)); see Eq. (39).

## V Numerical validation of the effective-Hamiltonian approach

This Section aims at exploring the validity of the effective-Hamiltonian analysis developed in Section III.2 and its classical limit presented in Section III.3. In particular, this Section demonstrates that the stroboscopic dynamics of the driven NLSE in Eq. (4) is well described by the effective NLSE with emergent nonlinearities in Eq. (7), as announced in Section II. The outline of our numerical study is displayed in Fig. 8. We hereby set the drive-induced interaction parameter to the value \(\alpha\!=\!1/2\); we verified that our conclusions hold in general.

### Validating the effective quantum Hamiltonian

First, we demonstrate that the dynamics associated with the effective Hamiltonian in Eq. (34) [or equivalently Eq. (30)] reproduces the stroboscopic dynamics of the driven system described by Eqs. (20)-(22).
To this end, we choose a coherent spin state as an initial state [93; 110] \[|\psi(0)\rangle=|N,\theta,\varphi\rangle=\frac{1}{\sqrt{N!}}\left(\hat{a}^{\dagger}_{\theta,\varphi}\right)^{N}|\emptyset\rangle, \tag{45}\] which corresponds to a macroscopic occupation of the single-particle state, \[|\theta,\varphi\rangle=\cos(\theta/2)|1\rangle+\sin(\theta/2)e^{i\varphi}|2\rangle, \tag{46}\] defined on the Bloch sphere. Here we introduced the single-particle states \(|1\rangle\!=\!\hat{a}^{\dagger}_{1}|\emptyset\rangle\) and \(|2\rangle\!=\!\hat{a}^{\dagger}_{2}|\emptyset\rangle\), associated with the two modes, as well as the creation operator \(\hat{a}^{\dagger}_{\theta,\varphi}\), defined through \(\hat{a}^{\dagger}_{\theta,\varphi}|\emptyset\rangle=|\theta,\varphi\rangle\). We note that the chosen initial state in Eq. (45) behaves classically in the limit \(N\!\to\!\infty\) [110], which will be convenient for later purposes (i.e. when comparing quantum and classical dynamics). We analyze the quantum dynamics through the evaluation of the population imbalance \[\langle z(t_{\mathsf{n}})\rangle=(2/N)\langle\psi(t_{\mathsf{n}})|\hat{J}_{z}|\psi(t_{\mathsf{n}})\rangle,\quad t_{\mathsf{n}}\!=\!T\!\times\!\mathsf{n},\] where the time-evolved state \(|\psi(t_{\mathsf{n}})\rangle\) is obtained from: (i) the full time dynamics of the driven system [Eqs. (20)-(22)], and (ii) the effective Hamiltonian [Eq. (34)]. Figure 9 compares these two results for both \(N\!=\!10\) and \(N\!=\!50\) bosons, and the same "mean-field" interaction parameter \(g\!=\!\chi N\!=\!5\). In both cases, we find that the effective description well captures the stroboscopic dynamics when the driving period is sufficiently small, \(T\!\lesssim\!0.1\) in the current units [see Eq. (9)]. This analysis validates the effective Hamiltonian in Eq. (34) in the high-frequency regime.

Figure 8: Outline of the numerical study, which validates the approach originally displayed in Fig. 1.

Figure 7: Pendulum analogy for the effective Hamiltonian \(\mathcal{H}_{\text{eff}}(z,\varphi;\alpha)\) in Eq. (36): (a) Schematics of a classical pendulum of mass \(m\) and length \(l\), subjected to gravity, and horizontally attached to a spring (allowed to slide along the y axis); see Eqs. (42)-(44). The red crosses indicate the positions associated with several fixed points; see Eq. (38). (b) Due to the momentum-dependent length, \(l\!=\!\sqrt{1-z^{2}}\), two types of fixed points can be stabilized depending on the strength of the spring: FP\({}_{\pm}\) (weak spring, \(\alpha\!>\!1/2\)) and FP\({}_{\ast}\) (strong spring, \(\alpha\!<\!1/2\)). The sketch shows typical trajectories around the FP\({}_{\pm}\) and FP\({}_{\ast}\) stable fixed points. Figure inspired by Ref. [110].

Figure 9: Population imbalance \(\langle z\rangle\) as a function of time, as obtained from the quantum dynamics of the driven system (blue curve) and the effective-Hamiltonian quantum dynamics (red curve) for: (a) \(N\!=\!10\) bosons and (b) \(N\!=\!50\) bosons. For each case, the full time dynamics of the driven system is generated using the sequence in Eq. (20) with periods \(T\!=\!0.2\), \(T\!=\!0.1\) and \(T\!=\!0.05\). Here the interaction parameter is set to \(g\!=\!\chi N\!=\!5\), the static linear coupling is set to \(\Omega_{0}\!=\!0\) and \(\alpha\!=\!1/2\); the initial coherent spin state \(|N,\theta,\varphi\rangle\) corresponds to \(z\!=\!\cos\theta\!=\!0.4\) and \(\varphi\!=\!2.25\). In all plots, the time-evolved state is evaluated at stroboscopic times \(t_{\mathsf{n}}\!=\!T\!\times\!\mathsf{n}\).
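As a minimal illustration of the effective quantum dynamics (a sketch assuming the QuTiP library, not the code used for Fig. 9; the full pulse sequence of Eq. (20) is not simulated here), one can evolve a coherent spin state under the spin form of the effective Hamiltonian and record the population imbalance at stroboscopic times. Constant terms \(\propto\hat{N}\) are dropped, and QuTiP's Bloch-sphere convention for spin coherent states may differ from Eq. (46) by \(z\to-z\).

```python
import numpy as np
import qutip as qt

N = 50                       # number of bosons -> spin j = N/2
j = N / 2
chi, alpha, Omega0 = 5.0 / N, 0.5, 0.0   # g = chi*N = 5, as in Fig. 9
Jx, Jy, Jz = qt.jmat(j, 'x'), qt.jmat(j, 'y'), qt.jmat(j, 'z')

# Effective Hamiltonian in its spin form [Eq. (30)/(34)], up to constants
H_eff = chi * (alpha * Jz**2 + (1 - alpha) * Jy**2) - Omega0 * Jx

# Initial coherent spin state |N, theta, phi>, Eq. (45): z = cos(theta) = 0.4
theta, phi = np.arccos(0.4), 2.25
psi0 = qt.spin_coherent(j, theta, phi)

T, n_strobe = 0.1, 100       # sample at stroboscopic times t_n = T * n
tlist = T * np.arange(n_strobe + 1)
result = qt.sesolve(H_eff, psi0, tlist, e_ops=[Jz])
z_t = 2 * result.expect[0] / N           # population imbalance <z(t_n)>
print(z_t[:5])
```

For increasing \(N\) at fixed \(g=\chi N\), the curves produced this way should approach the classical trajectories of Eq. (35), as discussed below.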
### The effective semiclassical dynamics

As a next step, we now show that the effective Hamiltonian \(\hat{H}_{\text{eff}}\) in Eq. (34) well captures the classical dynamics generated by the equations of motion in Eq. (35). We recall that the latter classical description is associated with the Hamiltonian function \(\mathcal{H}_{\text{eff}}(z,\varphi;\alpha)\) displayed in Eq. (36), where \(z\) and \(\varphi\) describe the relative population and phase of the two modes; see Eqs. (16)-(17). The agreement between the quantum and classical descriptions is expected to be reached in the limit \(N\!\to\!\infty\), where quantum fluctuations become negligible [80; 110; 129; 130; 131; 132; 133]. We also remind the reader that the classical equations of motion in Eq. (35), which are analyzed in this Section, are equivalent to the effective NLSE in Eq. (7), through the mapping defined in Eq. (16). First of all, let us analyze the dynamics generated by the effective classical equations of motion in Eq. (35). In order to highlight the role of nonlinearities, we hereby set the static linear coupling to \(\Omega_{0}\!=\!0\) and we recall that \(\alpha\!=\!1/2\). In Fig. 10, we display a few representative trajectories over the energy landscape \(\mathcal{H}_{\text{eff}}(z,\varphi;\alpha)\) defined in Eq. (36). These trajectories reflect the presence of two stable fixed points at \(\text{FP}_{0}=(z\!=\!0,\varphi\!=\!0)\) and \(\text{FP}_{\pi}\!=\!(z\!=\!0,\varphi\!=\!\pi)\). We stress that this configuration of fixed points radically differs from that associated with the non-driven system [see \(\mathcal{H}_{0}(z,\varphi)\) in Eq. (19)], whose stable fixed points are located at \(z\!=\!\pm 1\) for the same choice of \(\Omega_{0}\!=\!0\); see Section IV. We now compare these classical predictions to the quantum dynamics associated with the effective Hamiltonian \(\hat{H}_{\text{eff}}\) in Eq. (34), using a coherent spin state \(|N,\theta,\varphi\rangle\) as an initial condition; see Eq. (45). Figure 11 shows the trajectories \(\langle z(t)\rangle\) for \(N\!=\!5,10,80,170\) bosons, while keeping the "mean-field" interaction parameter \(\chi N\!=\!5\) constant. From these results, we confirm that a good agreement between the effective classical and quantum descriptions is indeed obtained in the large \(N\) limit. In order to further appreciate the residual deviations between the quantum and classical dynamics in the small \(N\) regime, we depict the time-evolving Husimi function \(Q(z,\varphi;t)\) in Fig. 12 for the case \(N\!=\!80\). The Husimi function [80; 131; 132; 133; 134; 135] is obtained by evaluating the squared overlap of the time-evolving state \(|\psi(t)\rangle\) with the coherent spin states defined over the Bloch sphere (with same particle number \(N\)), \[Q(z,\varphi;t)=|\langle N,\theta,\varphi|\psi(t)\rangle|^{2},\quad z=\cos\theta. \tag{47}\] Here the state \(|\psi(t)\rangle\) is evolved according to the effective Hamiltonian \(\hat{H}_{\text{eff}}\) in Eq. (34), so that the evolution of the Husimi function in Fig. 12 is to be compared with the quantum dynamics displayed in Fig. 11(c) for \(N=80\) bosons. The time-evolution of the Husimi function \(Q(z,\varphi;t)\) shown in Fig. 12 indicates that the initial coherent spin state \(|\psi(0)\rangle\!=\!|N,\theta,\varphi\rangle\) becomes substantially squeezed [93] around time \(t\!\approx\!3\), which also corresponds to the time around which the classical trajectory starts deviating from the effective-Hamiltonian quantum dynamics in Fig. 11(c).
At later times, \(t\!\approx\!12\), the state becomes oversqueezed and it exhibits Majorana stars in the Husimi distribution [132; 134; 135]. We find that these non-classical features are postponed to later evolution times upon increasing the number of bosons \(N\) while keeping the interaction parameter \(g\!=\!\chi N\) fixed. Despite these non-classical features, the center of mass of the Husimi function is found to approximately follow a classical orbit around the stable fixed point \(\text{FP}_{\pi}\!=\!(z\!=\!0,\varphi\!=\!\pi)\), as depicted in Fig. 10.

Figure 11: Population imbalance \(\langle z\rangle\) as a function of time, as obtained from the effective-Hamiltonian quantum dynamics (red curve) and the effective classical equations of motion (blue curve). The number of bosons is: (a) \(N=5\); (b) \(N=10\); (c) \(N=80\); (d) \(N\!=\!170\). Here the interaction parameter is set to \(g\!=\!\chi N\!=\!5\), while the static linear coupling is set to \(\Omega_{0}\!=\!0\) and \(\alpha\!=\!1/2\). The initial coherent spin state \(|N,\theta,\varphi\rangle\) corresponds to \(z\!=\!\cos\theta\!=\!0\) and \(\varphi\!=\!2.7\); the same initial condition is chosen for the effective classical dynamics. The dynamics \(z(t)\) should be compared with the trajectories depicted in Fig. 10, close to the stable fixed point \(\text{FP}_{\pi}\!=\!(z\!=\!0,\varphi\!=\!\pi)\).

Figure 10: Energy landscape associated with the classical Hamiltonian \(\mathcal{H}_{\text{eff}}(z,\varphi;\alpha)\) displayed in Eq. (36), for \(\Omega_{0}\!=\!0\) and \(\alpha\!=\!1/2\). A few trajectories are indicated as thin blue curves (equipotential lines of the energy landscape).

### The driven nonlinear Schrödinger equation and its effective description

In this Section, we finally analyze the agreement between the classical dynamics associated with the driven NLSE [Eqs. (4)-(5)] and the dynamics generated by the _effective_ classical equations of motion [Eq. (35)], which derive from the Hamiltonian \(\mathcal{H}_{\text{eff}}(z,\varphi;\alpha)\) in Eq. (36). We remind the reader that these effective equations of motion are equivalent to the _effective_ NLSE announced in Eq. (7). In practice, we numerically solve the following classical equations of motion [Eq. (15)] \[\dot{z} =f_{\text{pulse}}(t)\sqrt{1-z^{2}}\sin\varphi,\] \[\dot{\varphi} =N\chi z-f_{\text{pulse}}(t)\frac{z}{\sqrt{1-z^{2}}}\cos\varphi, \tag{48}\] where the pulse function \(f_{\text{pulse}}(t)\) is defined in Eq. (5); here we again set the static coupling \(\Omega_{0}\!=\!0\). The equations of motion in Eq. (48) are equivalent to the driven NLSE in Eqs. (4)-(5) through the mapping provided by Eq. (16). The resulting dynamics is displayed in Fig. 13, together with the dynamics generated from the effective classical Hamiltonian \(\mathcal{H}_{\text{eff}}(z,\varphi;\alpha)\) in Eq. (36). The results in Fig. 13 confirm that the effective classical description very well captures the dynamics of the driven nonlinear system at stroboscopic times \(t\!=\!t_{\mathbf{n}}\), while a finite micromotion is observed at intermediate times \(t\!\neq\!t_{\mathbf{n}}\). We also emphasize that the trajectories \((z(t),\varphi(t))\) generated by the effective equations of motion [Fig. 13] reflect the presence of a stable fixed point at \(\text{FP}_{\pi}=(z=0,\varphi=\pi)\); see Fig. 10. Importantly, this fixed point is _unstable_ for the non-driven system described by \(\mathcal{H}_{0}(z,\varphi)\) in Eq. (19), hence leading to drastically different dynamics.
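Since the driving protocol of Eqs. (20)-(22) is not reproduced in this Section, the following classical sketch assumes a stand-in realization: each period \(T\) consists of free evolution under the bare nonlinearity for a time \(\alpha T\), an instantaneous \(\pi/2\) rotation of the Bloch vector about the \(x\) axis, free evolution for \((1-\alpha)T\), and the inverse rotation (the canonical construction that averages to \(\mathcal{H}_{\text{eff}}\) at leading order). The stroboscopic dynamics of this kicked spin then tracks the effective flow, with deviations shrinking as \(T\to 0\).

```python
import numpy as np
from scipy.integrate import solve_ivp

chiN, alpha = 5.0, 0.5
Rx = np.array([[1.0, 0.0, 0.0],      # instantaneous +pi/2 rotation about x
               [0.0, 0.0, -1.0],
               [0.0, 1.0, 0.0]])

def kerr(s, t):
    """Free mean-field evolution under (chiN/2) s_z^2: the Bloch vector
    precesses about z at the rate chiN * s_z (s_z itself is conserved)."""
    a = chiN * s[2] * t
    c, d = np.cos(a), np.sin(a)
    return np.array([c * s[0] - d * s[1], d * s[0] + c * s[1], s[2]])

def one_period(s, T):
    """Assumed stand-in for one driving period (see lead-in text)."""
    return Rx.T @ kerr(Rx @ kerr(s, alpha * T), (1 - alpha) * T)

def eff_rhs(t, s):
    """Effective flow of H_eff ~ (chiN/2)(alpha s_z^2 + (1-alpha) s_y^2)."""
    B = chiN * np.array([0.0, (1 - alpha) * s[1], alpha * s[2]])
    return np.cross(B, s)

z0, phi0 = 0.0, 2.7                   # initial condition of Fig. 11
s0 = np.array([np.cos(phi0), np.sin(phi0), z0])
t_final = 10.0
for T in (0.05, 0.01):
    n = int(round(t_final / T))
    s, zs = s0.copy(), [z0]
    for _ in range(n):
        s = one_period(s, T)
        zs.append(s[2])
    sol = solve_ivp(eff_rhs, (0.0, t_final), s0, t_eval=T * np.arange(n + 1),
                    rtol=1e-10, atol=1e-12)
    err = np.max(np.abs(np.array(zs) - sol.y[2]))
    print(f"T={T}: max |z_driven - z_eff| = {err:.3f}")
```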
Altogether, the numerical studies presented in this Section V validate the effective description announced in Eq. (7) [see also Sections III.2 and III.3], and hence, confirm the creation of effective interactions and nonlinearities through the driving sequence.

Figure 12: Time-evolving Husimi function \(Q(z,\varphi;t)\) for a state \(|\psi(t)\rangle\) that evolves according to the effective Hamiltonian \(\hat{H}_{\text{eff}}\) in Eq. (34). Here, the number of bosons is \(N\!=\!80\), and the other parameters are the same as in Fig. 11(c). The initial coherent spin state \(|\psi(0)\rangle=|N,\theta,\varphi\rangle\), at \(z\!=\!\cos\theta\!=\!0\) and \(\varphi\!=\!2.7\), becomes substantially squeezed around time \(t\!\approx\!3\), hence signaling the breakdown of its classical description. An oversqueezed state, exhibiting Majorana stars, appears around \(t\!\approx\!12\). The trajectory predicted by the effective classical equations of motion [Eq. (35)] is depicted in white.

Figure 13: The driven NLSE versus the effective NLSE descriptions: (a) Population imbalance \(z(t)\) as a function of time, as obtained from the driven NLSE in Eq. (48) (blue curve) and from the effective classical equations of motion in Eq. (35) (red curve). (b) Zoom in the panel (a): the blue dots highlight the stroboscopic dynamics at times \(t_{\mathbf{n}}\); note the micromotion at arbitrary times \(t\!\neq\!t_{\mathbf{n}}\). (c) Stroboscopic dynamics \(z(t_{\mathbf{n}})\) obtained from the driven NLSE (blue curve and dots), compared with the effective classical description (red curve). (d) Same as in panel (c) but for the other canonical variable \(\varphi\). In all panels, the period of the drive is set to \(T\!=\!0.1\) and the pulse duration to \(\tau\!=\!T/20\); the interaction parameter is set to \(g\!=\!\chi\!N\!=\!5\), the static linear coupling is set to \(\Omega_{0}\!=\!0\) and \(\alpha\!=\!1/2\); the initial condition corresponds to \(z\!=\!\cos\theta\!=\!0\) and \(\varphi\!=\!2.7\) as in Fig. 11. We note that the stroboscopic dynamics \((z(t_{\mathbf{n}}),\varphi(t_{\mathbf{n}}))\) reflects the _effective_ energy landscape depicted in Fig. 10, close to the stable fixed point \(\text{FP}_{\pi}\!=\!(z\!=\!0,\varphi\!=\!\pi)\).

## VI Designing lattice systems with controllable drive-induced interactions

We described in Section III how activating coupling processes between two modes (or sites), according to a well-designed pulse sequence, generates effective pair-hopping processes [Eq. (34)]. We now analyze the possibility of extending this scheme to lattice systems, both in the classical (mean-field) limit and in the regime of strongly-correlated quantum matter. Here, we set the focus on two types of sequences, illustrated in Fig. 14, which lead to different classes of lattice models. In the first scenario, one applies the pulse sequence in Eq. (20) in a dimerized manner in view of generating uniform pair-hopping processes over the entire lattice; see Fig. 14(a). In the second sequence, one first applies the pulse sequence on individual dimers and then activates hopping processes between them; see Fig. 14(b). Interestingly, this second approach leads to a class of models that share similarities with p-band systems [136], without resorting to higher bands; this aspect will be explored in Section VII. We note that subjecting a lattice to a time-periodic modulation generically leads to various types of correlated tunneling processes and higher-order interaction processes [26; 27; 28; 29; 31; 37].
In our approach, tunable pair-hopping processes are generated at the level of individual dimers, hence allowing for highly controllable exotic lattice models. Possible experimental implementations will be discussed in Section VIII. ### Generating uniform pair-hopping processes on a lattice Let us consider a dimerized lattice of \(N_{s}\) sites, which we label as \((n;s)\) with \[n=1,\ldots,N_{d},\quad s=1,2. \tag{49}\] Here, \(N_{d}\) denotes the number of dimers and \(s\) labels the two modes (or orbitals) within each dimer; see Fig. 14(a). Each dimer \(n\) is assumed to be described by a (static) Hamiltonian \(\hat{H}_{0}^{(n)}\) of the form given in Eq. (9). Physical realizations include arrays of two-mode optical cavities [137; 138], two-component Bose gases in an optical lattice [139], or quantum gases in tunable (dimerized) superlattices [140; 141]. We introduce the angular momentum operators [Eq. (10)] associated with each dimer \[\hat{J}_{x}^{(n)} =\frac{1}{2}\left(\hat{a}_{n,1}^{\dagger}\hat{a}_{n,2}+\hat{a}_{n,2}^{\dagger}\hat{a}_{n,1}\right),\] \[\hat{J}_{y}^{(n)} =\frac{1}{2i}\left(\hat{a}_{n,2}^{\dagger}\hat{a}_{n,1}-\hat{a}_ {n,1}^{\dagger}\hat{a}_{n,2}\right),\] \[\hat{J}_{z}^{(n)} =\frac{1}{2}\left(\hat{a}_{n,2}^{\dagger}\hat{a}_{n,2}-\hat{a}_ {n,1}^{\dagger}\hat{a}_{n,1}\right),\] \[\hat{N}^{(n)} =\hat{a}_{n,1}^{\dagger}\hat{a}_{n,1}+\hat{a}_{n,2}^{\dagger} \hat{a}_{n,2}, \tag{50}\] where \(\hat{a}_{n,s}^{\dagger}\) creates a boson at the lattice site \((n;s)\). We then write the total (undriven) Hamiltonian as [Eq. (12)] \[\hat{H}_{0} =\sum_{n}\hat{H}_{0}^{(n)} \tag{51}\] \[=\sum_{n}\left\{\chi\left[\hat{J}_{z}^{(n)}\right]^{2}-\Omega_{0} \,\hat{J}_{x}^{(n)}+\frac{\eta}{4}\left[\hat{N}^{(n)}\right]^{2}-\frac{1}{2} \hat{N}^{(n)}\right\},\] where \(\chi=1-\beta\) and \(\eta=1+\beta\). In the following, individual dimers will be coupled such that the number of particles \(\hat{N}^{(n)}\) will no longer be conserved at the level of each dimer. As a consequence, the interaction terms \(\sim(\hat{N}^{(n)})^{2}\) in Eq. (51) cannot be ignored. We now introduce the pulse sequence, which we split into two main steps: * Step 1: We apply the pulse sequence in Eq. (20) within each individual dimer over a duration \(T\). * Step 2: We consider the complementary dimerization, \[\ldots\quad(n-1;2)-(n,1)\quad(n;2)-(n+1,1)\quad\ldots\] and we apply the pulse sequence in Eq. (20) within those new dimers over a duration \(T\). The complete sequence of period \(\mathcal{T}=2T\) is illustrated in Fig. 14(a). Following the method of Section III, we readily obtain the effective Hamiltonian describing the evolution over Step 1: \[\hat{H}_{\text{eff}}^{\text{Step 1}} =\sum_{n}\hat{H}_{\text{eff}}^{(n)} \tag{52}\] \[=\sum_{n}\left\{\chi\left(\alpha\left[\hat{J}_{z}^{(n)}\right]^{ 2}+(1-\alpha)\left[\hat{J}_{y}^{(n)}\right]^{2}\right)\right.\] \[\quad-\Omega_{0}\,\hat{J}_{x}^{(n)}+\frac{\eta}{4}\left[\hat{N}^ {(n)}\right]^{2}-\frac{1}{2}\hat{N}^{(n)}\right\}. \tag{53}\] A similar expression can be derived for the complementary dimerization considered during Step 2. 
The total effective Hamiltonian is then obtained through the time-evolution operator over a period \(\mathcal{T}\) of the full sequence, \[e^{-i\mathcal{T}\hat{H}_{\text{eff}}}\equiv\hat{U}(\mathcal{T};0)=e^{-iT\hat{H}_{\text{eff}}^{\text{Step 2}}}e^{-iT\hat{H}_{\text{eff}}^{\text{Step 1}}}, \tag{54}\] which can be estimated using the Trotter approximation \[\hat{H}_{\text{eff}}\approx\frac{1}{2}\left(\hat{H}_{\text{eff}}^{\text{Step 1}}+\hat{H}_{\text{eff}}^{\text{Step 2}}\right). \tag{55}\] The pulse sequence strongly couples the original dimers in Eq. (49), hence, it is relevant to relabel the sites using a single index \(m=1,\ldots,2N_{d}\). We obtain the effective Hamiltonian \(\hat{H}_{\text{eff}}\) in Eq. (55) in terms of the bosonic operators \(\hat{a}_{m}^{(\dagger)}\) as \[\hat{H}_{\text{eff}} =\frac{U_{1}}{2}\sum_{m}\hat{a}_{m}^{\dagger}\hat{a}_{m}^{\dagger}\hat{a}_{m}\hat{a}_{m} \tag{56}\] \[+U_{2}\sum_{m}\left(\hat{a}_{m+1}^{\dagger}\hat{a}_{m}^{\dagger}\hat{a}_{m+1}\hat{a}_{m}+\hat{a}_{m-1}^{\dagger}\hat{a}_{m}^{\dagger}\hat{a}_{m-1}\hat{a}_{m}\right)\] \[+\frac{U_{3}}{2}\sum_{m}\left(\hat{a}_{m+1}^{\dagger}\hat{a}_{m+1}^{\dagger}\hat{a}_{m}\hat{a}_{m}+\hat{a}_{m-1}^{\dagger}\hat{a}_{m-1}^{\dagger}\hat{a}_{m}\hat{a}_{m}\right)\] \[-\frac{\Omega_{0}}{2}\sum_{m}\left(\hat{a}_{m+1}^{\dagger}\hat{a}_{m}+\hat{a}_{m-1}^{\dagger}\hat{a}_{m}\right)+\mathcal{O}(T),\] where the interaction parameters are given by \[U_{1} =\left(1-\alpha(\beta-1)+\beta\right)/2,\] \[U_{2} =\left(1+\alpha(\beta-1)\right)/2,\] \[U_{3} =(\alpha-1)(1-\beta)/4. \tag{57}\]

Figure 14: Designing effective interactions in lattice systems using two types of sequences. (a) We apply the pulse sequence in Eq. (20) within each individual dimer over a duration \(T\), and then apply that same sequence to the complementary dimerization. The total sequence, of period \(\mathcal{T}=2T\), realizes the extended Bose-Hubbard model in Eq. (56), which is characterized by drive-induced pair-hopping processes. (b) In the second type of sequence, one preserves the dimerized structure and activates hopping between the dimers during the second step. The resulting class of models exhibits orbital ordering, in direct analogy with \(p\)-band systems.

The effective Hamiltonian in Eq. (56) contains three types of tunable interaction terms: Hubbard (on-site) interactions of strength \(U_{1}\), nearest-neighbor interactions of strength \(U_{2}\) and pair-hopping processes of strength \(U_{3}\). We point out that all interaction terms are uniformly defined over the entire lattice. Such an extended Bose-Hubbard model is known to exhibit a rich phase diagram [124], which displays time-reversal-symmetry-broken superfluid phases, pair superfluid and supersolid phases, and unconventional Mott insulators. The driven setting described in this Section thus offers a realistic platform for the fine exploration of these intriguing phases of quantum matter. We note that the two-step sequence presented in this Section can be generalized in multiple ways. For instance, one could modulate the strength of the bare interactions between the various steps of the sequence, and possibly exploit this feature within additional Trotter steps. Such schemes would allow for independent control over all interaction processes in the effective Hamiltonian (56).
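To make this tunability explicit, the short sketch below simply tabulates the effective couplings of Eq. (57) for a few illustrative values of \((\alpha,\beta)\); note in particular that the pair-hopping strength \(U_{3}\) vanishes at \(\alpha=1\) and changes sign with \(1-\beta\).

```python
import numpy as np

def effective_couplings(alpha, beta):
    """Drive-induced couplings of the extended Bose-Hubbard model, Eq. (57)."""
    U1 = (1 - alpha * (beta - 1) + beta) / 2   # on-site (Hubbard)
    U2 = (1 + alpha * (beta - 1)) / 2          # nearest-neighbor
    U3 = (alpha - 1) * (1 - beta) / 4          # pair hopping
    return U1, U2, U3

for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
    for beta in (0.0, 2.0):
        U1, U2, U3 = effective_couplings(alpha, beta)
        print(f"alpha={alpha:4.2f} beta={beta:3.1f} -> "
              f"U1={U1:+.3f} U2={U2:+.3f} U3={U3:+.3f}")
```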
### Lattice system with drive-induced four-wave mixing

In the classical limit, \(\hat{a}_{m}\rightarrow\psi_{m}\), the driven lattice system described above [Eq. (56) and Fig. 14(a)] is effectively described by the coupled NLSE \[i\frac{\partial\psi_{m}}{\partial t}=U_{1}|\psi_{m}|^{2}\psi_{m}+U_{2}\left(|\psi_{m+1}|^{2}+|\psi_{m-1}|^{2}\right)\psi_{m}\] \[+U_{3}\psi_{m}^{*}\left(\psi_{m+1}^{2}+\psi_{m-1}^{2}\right)-\frac{\Omega_{0}}{2}\left(\psi_{m+1}+\psi_{m-1}\right), \tag{58}\] where the strengths of the various nonlinearities \(U_{1,2,3}\) (self-phase modulation, cross-phase modulation and four-wave mixing) are provided in Eq. (57). This driven nonlinear setting is well-suited to explore the impact of exotic nonlinearities on discrete solitons. In particular, preliminary studies suggest that the drive-induced four-wave mixing in Eq. (58) can stabilize inter-site solitons [142], which are generically unstable for on-site nonlinearities [143].

### Dimerized lattice with effective pair hopping

In this Section, we consider a slightly different driving sequence, which preserves the dimerized structure of Eqs. (49)-(51). As in the previous Section VI.1, the pulse sequence is split into two main steps:

* Step 1: We apply the pulse sequence in Eq. (20) within each individual dimer over a duration \(T\).
* Step 2: We activate single-particle hopping processes between neighboring dimers (to be specified), over a duration \(T\).

The complete sequence of period \(\mathcal{T}=2T\) is illustrated in Fig. 14(b). A broad class of models can be designed using this driving sequence. For the sake of concreteness, we focus our study on a specific model obtained by setting the parameters \(\alpha=\Omega_{0}=0\) at Step 1, such that the effective Hamiltonian describing the time-evolution over Step 1 reduces to \[\hat{H}_{\text{eff}}^{\text{Step 1}} =\sum_{n}\hat{H}_{\text{eff}}^{(n)}\] \[=U\sum_{n}\left(\left[\hat{N}^{(n)}\right]^{2}-\xi\left[\hat{J}_{y}^{(n)}\right]^{2}\right)-\frac{1}{2}\sum_{n}\hat{N}^{(n)}, \tag{59}\] where \(U=(\beta+1)/4\) and \(\xi=4(\beta-1)/(\beta+1)\). Moreover, we consider the single-particle processes activated in Step 2 to be of the form \[\hat{H}^{\text{Step 2}}= -2\Omega\sum_{n}\left(\hat{a}_{n+1,1}^{\dagger}\hat{a}_{n,1}+\hat{a}_{n+1,2}^{\dagger}\hat{a}_{n,2}+\text{h.c.}\right)\] \[-2\Omega_{12}\sum_{n}\left(\hat{a}_{n+1,1}^{\dagger}\hat{a}_{n,2}+\text{h.c.}\right), \tag{60}\] as we illustrate in Fig. 15(a); see also Section VIII on possible realizations. Altogether, the total effective Hamiltonian is obtained through the time-evolution operator over a period \(\mathcal{T}\) of the full sequence [Eq. (54)], and it reads \[\hat{H}_{\text{eff}} =\frac{U}{2}\sum_{n}\left(\left[\hat{N}^{(n)}\right]^{2}-\xi\left[\hat{J}_{y}^{(n)}\right]^{2}\right) \tag{61}\] \[-\Omega\sum_{n}\left(\hat{a}_{n+1,1}^{\dagger}\hat{a}_{n,1}+\hat{a}_{n+1,2}^{\dagger}\hat{a}_{n,2}+\text{h.c.}\right)\] \[-\Omega_{12}\sum_{n}\left(\hat{a}_{n+1,1}^{\dagger}\hat{a}_{n,2}+\text{h.c.}\right)\] \[-\left[\frac{U}{2}\left(1-\frac{\xi}{4}\right)+\mu\right]\sum_{n}\hat{N}^{(n)}+\mathcal{O}(T),\] where we introduced the chemical potential \(\mu\) in the last line for later convenience. This effective dimerized lattice model is illustrated in Fig. 15(a). The Hamiltonian in Eq. (61) features interaction terms of the form \(-\left[\hat{J}_{y}^{(n)}\right]^{2}\), which is reminiscent of the models describing interacting bosons in \(p\)-bands [144; 136]; see also Ref. [145].
Indeed, repulsive bosons in \(p_{x,y}\) orbitals experience a characteristic orbital-type coupling of the form \(-\hat{L}_{z}^{2}\), where \(\hat{L}_{z}=i(\hat{p}_{x}^{\dagger}\hat{p}_{y}-\hat{p}_{y}^{\dagger}\hat{p}_{x})\) is the angular-momentum operator. The operator \(\hat{J}_{y}^{(n)}=(i/2)(\hat{a}_{n,1}^{\dagger}\hat{a}_{n,2}-\hat{a}_{n,2}^{\dagger}\hat{a}_{n,1})\) entering the first line of Eq. (61) can thus be interpreted as a local angular momentum, with the two modes \(\hat{a}_{1,2}\) playing the role of \(p_{x,y}\) orbitals. It is the aim of the next Section VII to explore the consequences of these unconventional interactions and orbital structure on the ground-state properties of the dimerized lattice in Eq. (61).

## VII Bosonic phases in a dimerized lattice with effective pair hopping

### Orbital order and emergent magnetic fluxes

When setting \(\xi\!>\!0\), the peculiar interaction term \(-\xi\left[\hat{J}_{y}^{(n)}\right]^{2}\) in Eq. (61) favors an orbital-ordered ground state, which exhibits finite "angular momentum" at the level of each dimer: the ground state maximizes \(|J_{y}^{(n)}|\!\neq\!0\), hence leading to a spontaneous breaking of time-reversal symmetry (TRS). Indeed, the angular-momentum states \(|b_{\pm}^{(n)}\rangle\), which diagonalize the \(\hat{J}_{y}^{(n)}\) operator, \[\hat{J}_{y}^{(n)}|b_{\pm}^{(n)}\rangle=\left(\mp\tfrac{1}{2}\right)|b_{\pm}^{(n)}\rangle,\qquad|b_{\pm}^{(n)}\rangle=\hat{b}_{n,\pm}^{\dagger}|\emptyset\rangle, \tag{62}\] have a complex structure given by \[\hat{b}_{n,\sigma}^{\dagger}=\frac{1}{\sqrt{2}}\left(\hat{a}_{n,1}^{\dagger}+i\sigma\hat{a}_{n,2}^{\dagger}\right),\qquad\sigma\!=\!\pm, \tag{63}\] \[\mathsf{T}\,\hat{b}_{n,\sigma}^{\dagger}\,\mathsf{T}^{-1}\!=\!\hat{b}_{n,\overline{\sigma}}^{\dagger},\qquad\qquad\qquad\overline{\sigma}\!=\!-\sigma,\] where \(\mathsf{T}\) is the TRS operator. The spontaneous breaking of TRS leads to rich phases and chiral dynamics in the ground state, as we describe below. Henceforth, we set \(\xi\!>\!0\) unless otherwise stated. It is instructive to write the effective Hamiltonian (61) in the angular-momentum-state basis (\(b_{\pm}\)), \[\hat{H}_{\text{eff}} =\frac{1}{2}\sum_{n\sigma}\left[U_{\xi}\,\hat{n}_{n,\sigma}(\hat{n}_{n,\sigma}-1)+W_{\xi}\,\hat{n}_{n,\sigma}\hat{n}_{n,\overline{\sigma}}\right]\] \[-\sum_{n\sigma}\left(t_{\sigma}\,\hat{b}_{n+1,\sigma}^{\dagger}\hat{b}_{n,\sigma}+\text{h.c.}\right)\] \[-\sum_{n\sigma}\left(t_{\sigma\overline{\sigma}}\,\hat{b}_{n+1,\sigma}^{\dagger}\hat{b}_{n,\overline{\sigma}}+\text{h.c.}\right)-\mu\sum_{n\sigma}\hat{n}_{n,\sigma}, \tag{64}\] where \(\hat{n}_{n,\sigma}\!=\!\hat{b}_{n,\sigma}^{\dagger}\hat{b}_{n,\sigma}\). The Hamiltonian in Eq. (64) features intra-species and inter-species interactions of strength \[U_{\xi}=U(1-\xi/4),\qquad W_{\xi}=U(1+\xi/4), \tag{65}\] as well as _complex_ tunneling matrix elements given by \[t_{\sigma}=\left(\Omega+\frac{\Omega_{12}}{2}e^{i\sigma\pi/2}\right)=|t_{\sigma}|e^{i\sigma\Phi}, \tag{66}\] \[t_{\sigma\overline{\sigma}}=\frac{\Omega_{12}}{2}e^{-i\sigma\frac{\pi}{2}}. \tag{67}\] In this picture, the problem can be interpreted as a fictitious Creutz-Hubbard ladder [146; 147; 148], where each leg is entirely composed of either \(b_{+}\) or \(b_{-}\) orbitals; see the sketch in Fig. 15(b). For \(\xi\!>\!0\), the inter-species interaction \(W_{\xi}\) always dominates over the intra-species interaction \(U_{\xi}\), hence favoring the stabilization of an orbital ordering in the system.
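The emergent flux can be made concrete at the single-particle level. The sketch below diagonalizes a \(2\times 2\) Bloch Hamiltonian that we construct from the hopping terms of Eq. (64) with the amplitudes of Eqs. (66)-(67) (Fourier convention \(b_{n}\propto\sum_{k}e^{ikn}b_{k}\); illustrative parameters): the lower band develops degenerate minima near \(k\approx\pm\Phi\), anticipating Eq. (69) and Eq. (75) below.

```python
import numpy as np

Omega, Omega12 = 1.0, 0.3
Phi = np.arctan2(Omega12, 2.0 * Omega)          # emergent flux, cf. Eq. (69)
t_abs = abs(Omega + 0.5j * Omega12)             # |t_sigma|, Eq. (66)

# 2x2 Bloch Hamiltonian of the fictitious ladder (legs sigma = +/-):
# intra-leg dispersion -2|t_sigma| cos(k - sigma*Phi); the inter-leg
# amplitudes t_{sigma,sigma-bar} of Eq. (67) combine into i*Omega12*cos(k).
ks = np.linspace(-np.pi, np.pi, 4001)
Emin = np.empty_like(ks)
for i, k in enumerate(ks):
    offd = 1j * Omega12 * np.cos(k)
    Hk = np.array([[-2 * t_abs * np.cos(k - Phi), offd],
                   [np.conj(offd), -2 * t_abs * np.cos(k + Phi)]])
    Emin[i] = np.linalg.eigvalsh(Hk)[0]

k0 = abs(ks[np.argmin(Emin)])
print(f"flux Phi = {Phi:.3f}; lower-band minima at k ~ +/-{k0:.3f}")
```

In the interacting system, the orbital polarization selects one of the two minima, which is the mechanism behind the chiral currents discussed below.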
To capture this orbital order, we introduce the local orbital polarization, which is defined at each dimer as \[m_{0}^{(n)}\equiv-\frac{2}{\rho_{n}}\langle\hat{J}_{y}^{(n)}\rangle_{0}=\frac{1}{\rho_{n}}\left(\langle\hat{n}_{n,+}\rangle_{0}-\langle\hat{n}_{n,-}\rangle_{0}\right). \tag{68}\] Here, \(\rho_{n}\!=\!\langle\hat{N}^{(n)}\rangle_{0}\) denotes the local density of bosons, and we henceforth use the notation \(\langle\ldots\rangle_{0}\) to express the mean value of an operator in the ground state. The effective Hamiltonian \(\hat{H}_{\text{eff}}\) [Eqs. (61),(64)] displays two types of hopping terms. As we will demonstrate later in this Section, the kinetic terms \(\sim\Omega\) stabilize a uniform "ferromagnetic" ordering in the chain of dimers, while the terms \(\sim\Omega_{12}\) (which are absent in p-band models [136]) are responsible for the emergence of an effective magnetic flux and chiral currents. This can be intuitively grasped from the complex tunneling matrix elements in Eq. (66), which are illustrated in Fig. 15(b): Each leg of the fictitious ladder (\(\sigma\!=\!\pm\)) is associated with a magnetic flux, \[\Phi_{\sigma}=\sigma\times\Phi,\qquad\Phi=\text{atan}(\Omega_{12}/2\Omega), \tag{69}\] such that a macroscopic occupation of a single leg (through spontaneous orbital ordering) leads to the emergence of a chiral persistent current: a clear signature of TRS breaking in the system. This simple picture illustrates how the emergent chirality of the system is directly determined by the interaction-induced orbital polarization \(m_{0}\) in Eq. (68). These peculiar properties will now be explored in detail, both in the mean-field limit (relevant for nonlinear optics and weakly-interacting bosonic gases) and in the quantum (strongly-correlated) regime. We will also present a practical quench protocol, which dynamically reveals the presence of orbital polarization in this unconventional lattice system.

Figure 15: (a) Illustration of the dimerized lattice model in Eq. (61), with inter-dimer couplings \(\Omega\) and \(\Omega_{12}\). (b) Sketch of the fictitious Creutz-Hubbard ladder in Eq. (64), as obtained when considering the angular-momentum-state representation (\(b_{\pm}\)). The arrows depict hopping processes, and their color reflects the complex phase acquired upon tunneling (\(+\Phi\) in red, \(-\Phi\) in blue). When projected onto a single leg (i.e. when spontaneous orbital ordering occurs), the lattice exhibits an emergent flux \(\Phi\!=\!\pm\text{atan}(\Omega_{12}/2\Omega)\), giving rise to persistent currents.

### Mean-field regime: orbital polarization and chiral currents

We start by analyzing the mean-field (classical) regime of the Creutz-Hubbard ladder in Eq. (64), which is obtained by performing the substitution \[\hat{b}_{n,\sigma}\to\langle\hat{b}_{n,\sigma}\rangle\equiv\psi_{n,\sigma}. \tag{70}\] The corresponding NLSE (expressed in the \(b_{\pm}\) basis) reads \[i\frac{\partial\psi_{n,\sigma}}{\partial t}\!\!=\!\!\left[U_{\xi}|\psi_{n,\sigma}|^{2}+W_{\xi}|\psi_{n,\overline{\sigma}}|^{2}-\left(\frac{U_{\xi}}{2}+\mu\right)\right]\!\psi_{n,\sigma}\] \[-t_{\sigma}\,\psi_{n-1,\sigma}-t_{\sigma\overline{\sigma}}\,\psi_{n-1,\overline{\sigma}}-t_{\sigma}^{*}\,\psi_{n+1,\sigma}-t_{\sigma\overline{\sigma}}^{*}\,\psi_{n+1,\overline{\sigma}}. \tag{71}\] We aim at determining the ground-state properties of this system, setting the focus on the emergence of orbital order in the regime \(\xi>0\).
Following a self-consistent mean-field approach, we obtain analytical predictions for the orbital polarization and the chiral persistent current in terms of the system parameters. We hereby summarize our findings, and refer the reader to Appendix B for a detailed analysis.

#### The orbital polarization

First of all, we find that the ground-state orbital polarization \(m_{0}\) is directly related to the relative phase \(\varphi\) between the two components of the condensate within each dimer [see Eq. (16)], \[\varphi=\text{atan}\left(\frac{m_{0}}{\sqrt{1-m_{0}^{2}}}\right). \tag{72}\] Furthermore, we find that these local ground-state properties (\(m_{0}\), \(\varphi\)) are uniform over the entire dimerized lattice. Hence, a ground state with finite polarization \(m_{0}\neq 0\) defines a 'chiral' superfluid phase (CSF), which is characterized by a uniform twisting of the phase \(\varphi\) over the dimerized lattice. We note that similar twisted superfluid phases have been identified in other classes of models supporting pair-tunneling processes [124]. When the coupling \(\Omega_{12}=0\), condensation occurs at quasi-momentum \(k_{0}=0\) and the system exhibits two degenerate ground states with opposite orbital polarizations \(m_{0}=\pm 1\); according to Eq. (72), this corresponds to a relative phase \(\varphi=\pm\pi/2\) within each dimer. In this way, the ground state maximizes the "angular momentum" \(|J_{y}^{(n)}|\) at the level of each dimer and it spontaneously develops a "ferromagnetic" ordering throughout the dimerized lattice: orbital order emerges through the spontaneous breaking of TRS [Eq. (63)]. For a small finite coupling \(\Omega_{12}\), the ground-state polarization is found to decrease as \[m_{0}\approx\pm\left(1\mp\frac{1}{2}\left(\frac{\Omega_{12}}{\Omega_{c}}\right)^{2}\right),\qquad\text{for}\ \ \Omega_{12}\ll\Omega_{c}, \tag{73}\] where we introduced the critical value \[\Omega_{c}=\frac{\rho}{2}(W_{\xi}-U_{\xi})=U\rho\xi/4. \tag{74}\] Here, \(\rho=\sum_{\sigma}|\psi_{n,\sigma}|^{2}=N/N_{d}\) denotes the particle density, with \(N\) the total number of bosons and \(N_{d}\) the number of dimers, and we considered periodic boundary conditions. Furthermore, condensation is found to occur at finite quasi-momentum, which for \(\Omega_{12}\ll\Omega_{c}\) reads \[k_{0}\approx(\Omega_{12}/2\Omega)\,\text{sgn}(m_{0})\approx\Phi\,\text{sgn}(m_{0}), \tag{75}\] hence reflecting the emergence of an effective flux \(\Phi\) [Eq. (69)] dictated by orbital order. For small dimerized lattices, such that \(N_{d}\!<\!2\pi\Omega/|\Omega_{12}|\), condensation occurs at \(k_{0}=0\) for any value of the coupling \(\Omega_{12}\). In this case, the ground-state orbital polarization can be obtained analytically as \[m_{0}=\begin{cases}\pm\sqrt{1-\left(\frac{\Omega_{12}}{\Omega_{c}}\right)^{2}}&\Omega_{12}<\Omega_{c}\\ 0&\text{otherwise}.\end{cases} \tag{76}\] We have validated these predictions by numerically solving the coupled NLSE (71) and performing imaginary-time evolution to reach the ground state. In order to favor one of the two degenerate TRS-broken ground states, we used an initial seed privileging the \(+\) orbitals. Figure 16(a) shows the obtained orbital polarization \(m_{0}\) and relative phase \(\varphi\) as a function of \(\Omega_{12}/\Omega\). The analytical curve with \(m_{0}>0\) [Eq. (76)] is plotted with a solid red line, showing excellent agreement with the numerical results (red dots).
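The imaginary-time procedure can be sketched as follows (a minimal gradient-flow implementation with illustrative step sizes and iteration counts, not the production code; the constant term \(-(U_{\xi}/2+\mu)\psi_{n,\sigma}\) in Eq. (71) is absorbed by the norm constraint and dropped). For the parameters of Fig. 16, the resulting polarization should follow Eq. (76) up to convergence and finite-size effects.

```python
import numpy as np

Nd, rho = 30, 2.0
Omega, U, xi = 1.0, 0.2, 4.0 / 3.0
Uxi, Wxi = U * (1 - xi / 4), U * (1 + xi / 4)    # Eq. (65)
Omega_c = U * rho * xi / 4                       # Eq. (74)

def ground_state(Omega12, steps=20000, dtau=5e-3):
    tp = {+1: Omega + 0.5j * Omega12, -1: Omega - 0.5j * Omega12}  # Eq. (66)
    t12 = {+1: -0.5j * Omega12, -1: +0.5j * Omega12}               # Eq. (67)
    rng = np.random.default_rng(1)
    # initial seed privileging the + orbitals, as in the text
    psi = {s: (a + 0.1 * rng.standard_normal(Nd)).astype(complex)
           for s, a in ((+1, np.ones(Nd)), (-1, 0.3 * np.ones(Nd)))}
    for _ in range(steps):
        new = {}
        for s in (+1, -1):
            mf = (Uxi * abs(psi[s])**2 + Wxi * abs(psi[-s])**2) * psi[s]
            hop = (tp[s] * np.roll(psi[s], 1) + np.conj(tp[s]) * np.roll(psi[s], -1)
                   + t12[s] * np.roll(psi[-s], 1) + np.conj(t12[s]) * np.roll(psi[-s], -1))
            new[s] = psi[s] - dtau * (mf - hop)   # imaginary-time step on Eq. (71)
        norm = np.sqrt(sum(np.sum(abs(v)**2) for v in new.values()) / (rho * Nd))
        psi = {s: v / norm for s, v in new.items()}
    return np.sum(abs(psi[+1])**2 - abs(psi[-1])**2) / (rho * Nd)  # Eq. (68)

for Om12 in (0.0, 0.5 * Omega_c, 0.9 * Omega_c, 1.5 * Omega_c):
    pred = np.sqrt(max(0.0, 1 - (Om12 / Omega_c)**2))              # Eq. (76)
    print(f"Omega12/Omega_c={Om12/Omega_c:4.2f}: "
          f"m0={ground_state(Om12):+.3f} (Eq. 76: {pred:.3f})")
```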
We note that the angle \(\varphi\) evolves continuously from \(\pi/2\) to zero, as described by Eq. (72). We emphasize that the sharp transition displayed in Fig. 16(a), from \(m_{0}\neq 0\) to \(m_{0}=0\), is due to the condensation at \(k_{0}=0\), which is imposed by the small system size (\(N_{d}\!=\!30\) dimers). For sufficiently large lattices, we find that condensation occurs at a finite quasi-momentum \(k_{0}\) [even beyond the limit of validity of Eq. (75)], leading to a smoother behavior of the orbital polarization; see the inset in Fig. 17. This surprising behavior, which points to a crossover rather than a genuine phase transition, can be traced back to the peculiar form of the underlying mean-field functional; see Appendix B. Finally, it is worth noticing that the transition displayed in Fig. 16(a), by which the relative phase changes from \(\varphi\!\neq\!0\) to \(\varphi\!=\!0\), is analogous to the transition from Phase III to Phase I discussed in Section IV for a single dimer; see Fig. 5. In the present case, the fixed points \(\text{FP}_{*}\) are described by Eq. (72), the discrete \(S_{2}\) symmetry corresponds to TRS, and the role of the dimensionless coupling \(\tilde{\Omega}_{0}\) is played by the ratio \(\Omega_{12}/\Omega_{c}\).

#### Chiral persistent currents

The interplay of local orbital polarization and hopping processes gives rise to a chiral ground-state current on a ring geometry. In the mean-field regime, this chiral persistent current can be expressed in terms of the condensate's momentum and orbital polarization \(m_{0}\) according to \[J_{\text{MF}}(k_{0})=\rho\Bigg[\sin(k_{0})\left(2\Omega+\Omega_{12}\sqrt{1-m_{0}^{2}}\right)\] \[-2\Omega_{12}\cos(k_{0})m_{0}\Bigg]. \tag{77}\] For small system sizes, condensation occurs at zero momentum and we find a simple relation for the chiral current \[J_{\text{MF}}(k_{0}\!=\!0)\!=\!-2\rho\Omega_{12}m_{0}. \tag{78}\] We have validated this analytical prediction for the chiral current by numerically solving the coupled NLSE (71), as we show in Fig. 16(b).

#### Numerical validation beyond mean field

In order to validate the existence of the transition predicted by mean-field theory, we solved the full quantum many-body Hamiltonian in Eq. (64) using density-matrix renormalization group (DMRG) methods [149]. In practice, we select one of the TRS-broken ground states (with \(+\) polarization) by adding a small polarizing field (\(0.001\,\Omega\)), and we keep up to 512 DMRG states to ensure a truncation error \(\leq 10^{-6}\). Here, we consider a lattice containing \(N_{d}\!=\!100\) dimers, with open boundary conditions, and we calculate the average ground-state orbital polarization, \[\overline{m}_{0}=\frac{1}{\rho N_{d}}\sum_{n=1}^{N_{d}}\langle\hat{n}_{n,+}-\hat{n}_{n,-}\rangle_{0}, \tag{79}\] for various values of the ratio \(\Omega_{12}/\Omega_{c}\). This calculation is performed deep in the quantum regime (but still within the chiral superfluid phase), by setting \(\Omega/U_{\xi}\!=\!0.25\) and a filling \(\rho\!=\!2\). The resulting curve \(\overline{m}_{0}(\Omega_{12})\) is depicted in the inset of Fig. 17, together with the mean-field prediction. Interestingly, the transition from the chiral superfluid (\(\overline{m}_{0}\neq 0\)) to the conventional superfluid (\(\overline{m}_{0}=0\)) is still observed deep into the quantum regime.
We note that the transition is qualitatively similar in that regime, although the transition point is slightly below the mean-field prediction (\(\Omega_{12}\!=\!\Omega_{c}\)).

### A quench protocol to measure the orbital polarization

We have seen that the ground state of the system spontaneously breaks TRS by developing a finite orbital polarization \(m_{0}\), whose sign reflects the privileged orbital order. This order parameter can be measured through a simple quench protocol, as we now explain.

Figure 16: (a) Ground-state orbital polarization \(m_{0}\) and relative phase \(\varphi\) as a function of the hopping amplitude \(\Omega_{12}\), measured in units of the critical value \(\Omega_{c}=U\xi\rho/4\). The shaded region depicts the chiral superfluid (CSF) phase with \(\varphi\neq 0\), while the non-shaded region represents a more conventional superfluid (SF) phase. Note that the transition from \(\varphi\!\neq\!0\) to \(\varphi\!=\!0\) is analogous to the transition from Phase III to Phase I in Fig. 5; see Section IV. (b) Mean-field current as a function of \(\Omega_{12}\). In both panels, the points were obtained through imaginary-time evolution of the NLSE in Eq. (71), evaluating quantities in the ground state with \(m_{0}\!>\!0\). The solid lines represent the analytical mean-field predictions given by Eqs. (72), (76) and (78). The system contains \(N_{d}\!=\!30\) dimers at filling \(\rho\!=\!2\), and the interaction parameters are set to \(U\!=\!0.2\,\Omega\) and \(\xi\!=\!4/3\).

Figure 17: Phase diagram of the fictitious Creutz-Hubbard ladder in Eq. (64) as a function of the chemical potential and the ratio \(\Omega/U_{\xi}\). Blue shaded areas represent the Mott insulating phases, obtained within a strong-coupling perturbation (SCP) theory for \(\Omega_{12}=\Omega\) and \(\rho\geq 2\). Black solid lines show the Mott-superfluid boundaries for \(\Omega_{12}=0\). Filled points were obtained using a DMRG algorithm for \(\Omega_{12}=0\), with a filling fraction \(\rho=2\). The inset shows the evolution of the averaged ground-state orbital polarization \(\overline{m}_{0}\) as a function of \(\Omega_{12}\), for \(\Omega/U_{\xi}=0.25\) and filling \(\rho=2\), in a chain with open boundary conditions (\(N_{d}\!=\!100\) dimers). The DMRG result is compared to that obtained from the NLSE in Eq. (71).

We assume that the system is initialized in the ground state. At \(t=0\), all the dimers are suddenly decoupled from each other, so that the post-quench Hamiltonian (\(t>0\)) is of the form \[\hat{H}_{Q} =\frac{U_{\xi}}{2}\sum_{n\sigma}\hat{n}_{n,\sigma}(\hat{n}_{n,\sigma}\!-\!1)\!+\!W_{\xi}\sum_{n}\hat{n}_{n,+}\hat{n}_{n,-}\!-\!\mu\sum_{n\sigma}\hat{n}_{n,\sigma}. \tag{80}\] The Heisenberg equations of motion for the bosonic creation and annihilation operators can be simply written as \[\frac{d\hat{b}_{n,\sigma}}{dt} =-\frac{i}{\hbar}[\hat{b}_{n,\sigma}(t),\hat{H}_{Q}] \tag{81}\] \[=-\frac{i}{\hbar}\big(U_{\xi}\hat{n}_{n,\sigma}+W_{\xi}\hat{n}_{n,\overline{\sigma}}-\mu\big)\hat{b}_{n,\sigma}(t).\] We point out that the operators \(\hat{n}_{n,\sigma}(t)\!=\!\hat{n}_{n,\sigma}\) do not depend on time, because these occupation numbers are conserved by the Hamiltonian \(\hat{H}_{Q}\). Consequently, one can readily integrate these equations and find \[\hat{b}_{n,\sigma}(t)=e^{-\frac{i}{\hbar}\big(U_{\xi}\hat{n}_{n,\sigma}+W_{\xi}\hat{n}_{n,\overline{\sigma}}-\mu\big)t}\hat{b}_{n,\sigma}(t=0). \tag{82}\]
Taking the classical (mean-field) limit, and considering a ring geometry with a homogeneous density distribution, this translates into \[\psi_{n,\sigma}(t)=e^{-\frac{i}{\hbar}\big(U\rho-\sigma\Omega_{c}m_{0}-\mu\big)t}\psi_{n,\sigma}(t=0), \tag{83}\] where \(m_{0}\) is the ground-state orbital polarization (uniformly defined throughout the system). After the quench, the number of particles in the original orbitals \(a_{1,2}\), defined at each dimer, evolves according to \[|\psi_{n,1}(t)|^{2} =\frac{1}{2}\left(\rho+\psi_{n,+}^{*}(t)\psi_{n,-}(t)+\psi_{n,-}^{*}(t)\psi_{n,+}(t)\right),\] \[|\psi_{n,2}(t)|^{2} =\frac{1}{2}\left(\rho-\psi_{n,+}^{*}(t)\psi_{n,-}(t)-\psi_{n,-}^{*}(t)\psi_{n,+}(t)\right). \tag{84}\] Inserting Eq. (83) into Eq. (84), and using Eq. (23) to parameterize the ground state at \(t\!=\!0\), we find that the time evolution of the particle number at each dimer is described by \[|\psi_{n,1}(t)|^{2} =\frac{\rho}{2}\left[1+\sqrt{1-m_{0}^{2}}\sin\left(\frac{2\Omega_{c}m_{0}t}{\hbar}\right)\right], \tag{85}\] \[|\psi_{n,2}(t)|^{2} =\frac{\rho}{2}\left[1-\sqrt{1-m_{0}^{2}}\sin\left(\frac{2\Omega_{c}m_{0}t}{\hbar}\right)\right].\] As a consequence, the amplitude and frequency of these oscillations give direct information on the orbital polarization of the ground state, \(m_{0}\), which is also straightforwardly related to the twisted superfluid angle \(\varphi\) through Eq. (72). Figure 18 shows the population imbalance measured at each dimer, \[z_{n}(t)=\frac{|\psi_{n,1}(t)|^{2}-|\psi_{n,2}(t)|^{2}}{\rho}, \tag{86}\] after numerically performing the quench protocol for different values of \(\Omega_{12}\). For each value of the hopping amplitude, the initial ground state corresponds to that used in Fig. 16, i.e. an ordered state with \(m_{0}\geq 0\). We find that the dynamics obtained from these numerical simulations is perfectly described by the analytical prediction in Eq. (85). In the conventional superfluid phase, where the system is completely depolarized, the particle number at each dimer remains unaltered.

Figure 18: Time evolution of the relative population at each dimer \(z_{n}(t)\), as a function of the hopping amplitude \(\Omega_{12}\), upon performing the quench protocol described in the main text. For each value of the hopping parameter, the initial ground state corresponds to the one obtained in Fig. 16, i.e. a state with \(m_{0}\geq 0\).
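The analytical prediction in Eq. (85) is simple enough to sketch directly. The snippet below (illustrative values, with \(\Omega_{c}\) from Eq. (74) and \(\hbar=1\)) generates the post-quench imbalance \(z_{n}(t)\) and recovers its amplitude together with a rough zero-crossing estimate of its frequency, the two observables that determine \(m_{0}\).

```python
import numpy as np

Omega_c, hbar = 2.0 / 15.0, 1.0   # Omega_c = U*rho*xi/4 for U=0.2, rho=2, xi=4/3

def z_quench(t, m0):
    """Post-quench imbalance at each dimer, Eqs. (85)-(86)."""
    return np.sqrt(1.0 - m0**2) * np.sin(2.0 * Omega_c * m0 * t / hbar)

t = np.linspace(0.0, 150.0, 3001)
for m0 in (0.9, 0.5):
    z = z_quench(t, m0)
    amp = np.max(np.abs(z))                      # -> sqrt(1 - m0^2)
    # rough zero-crossing estimate of the oscillation frequency
    crossings = np.sum(np.abs(np.diff(np.sign(z))) > 0)
    freq = np.pi * crossings / t[-1]             # ~ 2*Omega_c*m0/hbar
    print(f"m0={m0}: amplitude={amp:.3f} (sqrt(1-m0^2)={np.sqrt(1-m0**2):.3f}), "
          f"frequency~{freq:.3f} (2*Omega_c*m0={2*Omega_c*m0:.3f})")
```

A fully depolarized state (\(m_{0}=0\)) produces no oscillation at all, consistent with the behavior reported for the conventional superfluid phase.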
### Strong-coupling regime and the transition to the chiral Mott phase

In the limit of strong interactions, \(U\gg\Omega,\Omega_{12}\), and for a commensurate filling factor, \(\rho=N/N_{d}>1\), the bosonic system described by Eq. (64) is found to form a "chiral" Mott insulating phase, characterized by an orbital ordering. As in the mean-field regime, this orbital order relies on having the interaction parameter \(\xi\!>\!0\). We hereby set \(\rho\!>\!1\), and treat the unit-filling case \(\rho\!=\!1\) in the next Section VII.5. When the hopping parameters are strictly zero, and when setting \(\xi\!>\!0\), the particles occupy either the \(b_{+}\) or the \(b_{-}\) orbitals, so as to maximize the local angular momentum \(|\hat{J}_{y}^{(n)}|\) at the level of each dimer. For \(\rho\!=\!N/N_{d}\!>\!1\) bosons in each dimer, these Fock states are described by \[|\sigma_{n}\rangle=\frac{(\hat{b}_{n,\sigma}^{\dagger})^{\rho}}{\sqrt{\rho!}}|0\rangle, \tag{87}\] where \(\hat{b}_{n,\sigma}\) is defined in Eq. (63). In the absence of kinetic terms in the Hamiltonian (64), there is a macroscopic degeneracy of \(2^{N_{d}}\) possible ground-state configurations \(|\{\sigma_{n}\}\rangle\), which may be written as product states \[|\{\sigma_{n}\}\rangle=\prod_{n=1}^{N_{d}}|\sigma_{n}\rangle. \tag{88}\] The tunneling terms in the Hamiltonian (64) do not couple these states at first order, but they do lift their degeneracy in second-order perturbation theory. Following the perturbative approach detailed in Appendix C, we obtain an effective Ising spin model \[\hat{H}_{\text{Ising}}^{\text{eff}}=K_{yy}\sum_{n}\hat{J}_{y}^{(n)}\hat{J}_{y}^{(n+1)}, \tag{89}\] with the exchange coupling \[K_{yy}=-\frac{4\Omega^{2}[W_{\xi}+\rho(W_{\xi}-U_{\xi})]}{U_{\xi}[U_{\xi}+\rho(W_{\xi}-U_{\xi})]}<0, \tag{90}\] where we assumed repulsive intraspecies interactions \(U_{\xi}>0\) in Eq. (64), i.e. \(U>0\) and \(\xi<4\) in the original Hamiltonian (61). We point out that the effective Hamiltonian in Eq. (89) only acts on the projected subspace spanned by the states in Eq. (88). Importantly, the exchange coupling \(K_{yy}<0\) in Eq. (90) favors a uniform "ferromagnetic" angular-momentum ordering. Moreover, the exchange coupling is found to be independent of the hopping parameter \(\Omega_{12}\), at this order of perturbation theory. This analysis suggests that the orbital order identified in the superfluid phase (mean-field regime) should be preserved in the strongly-interacting regime. In analogy with \(p\)-band systems, we refer to this ordered (TRS-broken) Mott insulator as a "chiral" Mott phase. We remark that \(K_{yy}\!>\!0\) in \(p\)-bands [136], such that the Mott phase is instead associated with a staggered ordering in that context. The perturbative expansion described above can be further exploited to elucidate the boundaries between the chiral Mott and superfluid phases; see Appendix C. Indeed, applying a strong-coupling perturbation (SCP) theory up to second order in the hopping amplitudes, we obtained approximate boundaries for particle-type (\(\mu_{+}\)) and hole-type (\(\mu_{-}\)) excitations in the Mott phase \[\frac{\mu_{+}}{U_{\xi}}=\rho-2(\rho+1)\frac{|t_{\sigma}|}{U_{\xi}}+\frac{|t_{\sigma}|^{2}}{U_{\xi}^{2}}\rho^{2}-\frac{4(\rho+1)|t_{\sigma\overline{\sigma}}|^{2}\cos^{2}(\Phi)}{\rho U_{\xi}(W_{\xi}\!-\!U_{\xi})}, \tag{91}\] together with an analogous second-order expression for the hole-type boundary \(\mu_{-}\); see Appendix C.
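The second-order expressions above are easily evaluated; the sketch below (illustrative parameters, chosen as in the SCP boundaries of Fig. 17 with \(\Omega_{12}=\Omega\)) computes the ferromagnetic exchange coupling of Eq. (90) and the particle-type Mott boundary \(\mu_{+}\) as transcribed in Eq. (91).

```python
import numpy as np

U, xi, rho = 1.0, 4.0 / 3.0, 2.0
Uxi, Wxi = U * (1 - xi / 4), U * (1 + xi / 4)          # Eq. (65)

def K_yy(Omega):
    """Second-order exchange coupling of the effective Ising model, Eq. (90)."""
    return (-4 * Omega**2 * (Wxi + rho * (Wxi - Uxi))
            / (Uxi * (Uxi + rho * (Wxi - Uxi))))

def mu_plus(Omega, Omega12):
    """Particle-type Mott boundary from SCP theory, Eq. (91)."""
    Phi = np.arctan2(Omega12, 2 * Omega)               # Eq. (69)
    ts = abs(Omega + 0.5j * Omega12)                   # |t_sigma|, Eq. (66)
    t12 = 0.5 * Omega12                                # |t_{sigma,sigma-bar}|, Eq. (67)
    return Uxi * (rho - 2 * (rho + 1) * ts / Uxi + (ts / Uxi)**2 * rho**2
                  - 4 * (rho + 1) * t12**2 * np.cos(Phi)**2
                  / (rho * Uxi * (Wxi - Uxi)))

for Om in (0.02, 0.05, 0.1):
    print(f"Omega/U_xi={Om/Uxi:.3f}: K_yy={K_yy(Om):+.5f}, "
          f"mu_+={mu_plus(Om, Om):.3f} (Omega12=Omega)")
```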
### The unit-filling case

interactions \(W_{\xi}\) in Eq. (64) and the single-particle coupling \(\Omega_{12}\), which effectively produces a Rashba-type spin-orbit coupling [154; 155]. We also note that the coupling constant \(K_{yy}\) entering the Ising model in Eq. (90) reduces to that in Eq. (93) in the limit \(\rho\!=\!1\). In the limit \(\Omega_{12}\!=\!0\), we obtain an XYX quantum spin-1/2 Heisenberg model \[\hat{H}_{\rm XYX} = \frac{K}{2}\sum_{n}\left(\hat{J}_{+}^{(n)}\hat{J}_{-}^{(n+1)}+\hat{J}_{-}^{(n)}\hat{J}_{+}^{(n+1)}\right)\] \[+\Delta K\sum_{n}\hat{J}_{y}^{(n)}\hat{J}_{y}^{(n+1)}, \tag{94}\] where we defined the operators \(\hat{J}_{\pm}^{(n)}\!=\!\hat{J}_{x}^{(n)}\!\pm\!i\hat{J}_{z}^{(n)}\), and where we introduced the ferromagnetic coupling \[K\!\equiv\!K_{xx}\!=\!K_{zz}\!=\!-4\Omega^{2}/W_{\xi}, \tag{95}\] and the anisotropy parameter \[\Delta=2(W_{\xi}/U_{\xi})-1. \tag{96}\] We note that a similar effective Hamiltonian was obtained for p-band bosons [158]. In the present context, the anisotropy parameter can satisfy \(\Delta>1\) upon setting \(W_{\xi}>U_{\xi}\). In this case, our system privileges ferromagnetic order along the \(y\)-axis, hence forming a chiral Mott insulating phase with one boson per dimer. We expect that a small finite value of \(\Omega_{12}\) will slightly depolarize this chiral Mott phase. Last but not least, we note that similar Heisenberg models can be mapped onto an interacting Kitaev chain [158; 159], which suggests an interesting route towards Floquet-engineered topological superconductors [160].

## VIII Experimental implementations and concluding remarks

### Optical cavities and photonic lattices

This work introduces a method to engineer and tune nonlinearities in optical devices, using a designed pulse sequence that couples the optical modes in a fast and periodic manner. These repeated mixing operations simply correspond to the pulsed activation of a linear coupling between two optical modes, and they can thus be implemented in a broad range of two-mode nonlinear systems, ranging from optical resonators [70; 71; 72; 88] and waveguide arrays [50; 74] to circuit-QED platforms [15]. In a two-mode optical cavity [70; 71; 72], the pulsed operations could correspond to a coupling between the two polarization eigenmodes of the cavity, which can be directly realized by means of quarter-wave plates [90; 91]; see the sketch in Fig. 2(a). In optical-waveguide arrays [50], the two modes (\(1\) and \(2\)) would describe light propagating in two adjacent waveguides. In this case, the pulsed linear couplings in Eqs. (4)-(5) can be realized by abruptly changing the spatial separation between the two waveguides; see Fig. 2(b) for a sketch and Refs. [51; 52; 54; 55] for experimental realizations using ultrafast-laser-inscribed waveguides. Such optical-waveguide settings could benefit from the state-recycling technique of Refs. [53; 161], where light is re-injected into the waveguides (and possibly modified) at every roundtrip; see also Refs. [162; 163; 164] regarding setups based on recirculating fiber loops. While we considered a generic setting that includes both self-phase and cross-phase modulations in the absence of the periodic drive [Eq. (1)], we found that effective nonlinearities emerge even when a single type of bare nonlinearity is present. Importantly, we demonstrated that the strength (and sign) of effective nonlinearities can be tuned by simply adjusting the pulse sequence; see Eqs. (7)-(8) and Eq. (57). We also emphasize that the parameter \(\beta\) [i.e. the relative strength and sign of the bare self-phase and cross-phase modulations in Eq. (1)] can vary across a large number of experimental configurations [68; 71; 165].
To detect the emergence of drive-induced nonlinearities, we proposed to study changes in the phase space's topology [80], which can be explored by monitoring the dynamics of the relative intensity \(z(t)\) and phase \(\varphi(t)\) of the two optical modes. According to our numerical studies, these properties could already be revealed over "time" scales of the order of \(5-10T\), where \(T\) denotes the period of the driving sequence. This is particularly appealing for waveguide settings [50], where the "evolution time" associated with the propagation distance - and hence the number of driving periods - is limited. In this context, it would be interesting to combine such driving schemes with a state-recycling protocol [53]. While we considered a simple pulse sequence, characterized by the alternation of linear mixing operations and "free" evolution [Fig. 3(a)], we note that more complicated protocols and configurations could be envisaged. For instance, different types of mixing processes could be activated within each period of the drive, including nonlinear processes. The lattice models explored in Sections VI-VII could be implemented in nonlinear optics, by engineering appropriate couplings between photonic dimers. In optical-cavity implementations, each dimer would be represented by a two-mode cavity; one would then couple many such dimers using mode-dependent couplings [166; 167; 168; 137], hence realizing the models illustrated in Fig. 14. These lattice models could also be realized in arrays of ultrafast-laser-inscribed waveguides [50], where the couplings between individual waveguides can be adjusted with high precision [51; 52; 53]. In this context, it would be exciting to study the interplay of drive-induced nonlinearities, solitons and topological band structures; see for instance Ref. [170], where edge solitons were studied in the presence of four-wave mixing. It would also be intriguing to explore the applicability of our scheme in the context of superconducting microwave cavities [171], where optical nonlinearities originate from the coupling to transmon ancillas. Indeed, it was recently shown that such optical nonlinearities can be modified by applying an off-resonant drive on the transmon ancillas [66]. Moreover, in circuit-QED platforms, the linear coupling between neighboring qubits can be modulated in a time-periodic manner [15]; applying our pulse protocol to such settings could thus modify the nonlinearity of the qubits, and hence, the interaction between microwave photons. In general, we anticipate that drive-induced nonlinearities, such as the effective four-wave mixing studied in this work, could be useful for nonlinear optics applications [67, 69]. We remark that the present work relies on a non-dissipative theoretical framework. Our scheme could nevertheless be applied to driven-dissipative optical devices [86], such as fiber ring cavities or microresonators described by the Lugiato-Lefever equation [72, 73, 174, 175, 176, 177], upon treating dissipation within the Floquet analysis [176, 103, 177].

### Ultracold atomic gases

The bosonic Josephson junction (BJJ) Hamiltonian in Eq. (12) can be experimentally realized by manipulating ultracold gases of bosonic atoms [78, 79, 80, 81, 82]. In the following paragraphs, we discuss possible implementations of the driving pulse sequence in Eqs. (20)-(21), for systems of cold atoms that either employ their internal or external degrees of freedom.
We also propose ways to probe the effects associated with drive-induced nonlinearities through various observables.

#### vi.2.1 Two-mode systems using atomic internal states

When using two internal states of an atom (e.g. \({}^{87}\)Rb) as a pseudo-spin, the interaction term \(\sim\chi\tilde{J}_{z}^{2}\) entering the BJJ Hamiltonian in Eq. (12) directly reflects atomic collisions in the two internal states; see Fig. 2(c). In this context, the linear coupling \(\sim\Omega_{0}\tilde{J}_{x}\) can be generated with high control, using coherent coupling with oscillatory (microwave) magnetic fields [80]. The mixing operator in Eq. (21) is implemented using the same microwave drive, with a Rabi frequency \(\Omega_{\tau}\) chosen such that \(\Omega_{\tau}\tau=\pi/2\), where \(\tau\) is the pulse duration [Fig. (3)]. The Rabi frequency \(\Omega_{\tau}\) can be made much larger than other frequency scales in the system, such that the Floquet pulses and the internal dynamics have well separated time scales. The strength of the non-linearity \(\chi\) is typically limited by the atomic properties; however, it can be tuned with the help of a Feshbach resonance [179]. The readout of the relevant observables (i.e. the relative population \(z\) and relative phase \(\varphi\) in the two internal states) is routinely performed using state-selective imaging of the atomic densities. To extract the relative phase, the imaging is combined with a \(\pi/2\)-rotation around the \(y\)-direction in order to map the phase onto measurable atomic densities.

#### vi.2.2 Atoms in double-well potentials

Two-mode atomic systems can also be implemented using external (spatial) degrees of freedom, namely, by loading the atoms into optical [77] or magnetic [79] double-well potentials; see Fig. 2(d). Here, the Hubbard interaction term in Eq. (9) [which is equivalent to \(\sim\chi\tilde{J}_{z}^{2}\) in the BJJ Hamiltonian in Eq. (12)] is directly generated by the on-site atomic interactions. The tunneling between the wells provides the coherent coupling term and can be tuned by either changing the separation of the wells or the height of the barrier. To implement a pulse of the driving sequence [Eqs. (20)-(21)], the tunneling term has to be made dominant during the pulse duration. We note that a different Floquet scheme has been recently applied in a double-well experiment to control the amplitude and phase of the tunneling matrix elements [180]. In the double-well system, the population imbalance \(z\) can be directly evaluated by measuring the number of atoms in the two wells. Moreover, the relative phase \(\varphi\) can be accessed by interference measurements [77, 178]. The Hamiltonian in Eq. (9), and the derivation that leads to the effective Hamiltonian in Eq. (34), assume that each well contains a single orbital: this is equivalent to the single-mode approximation in spinor condensates [181, 108]. This scheme thus requires very limited excitations within each well over the driving pulse sequence. This can be achieved by using sequences that are slow compared to trapping frequencies; we note that the high degree of experimental control over designed potentials allows for the implementation of optimal-control schemes to optimize the performance [182, 183].

#### vi.2.3 Arrays of dimers and engineered lattice models

The lattice models introduced in Section VI, and represented in Fig. (14), could be designed by assembling an array of dimers, e.g. using optical tweezer setups [184].
Alternatively, one could trap two internal states of an atom at each site of an optical lattice (a "dimer") and then activate state-dependent hopping over the lattice using laser-assisted tunneling methods [185, 186, 187, 126]. This scheme would allow for fine control over the inter-dimer couplings (i.e. the parameters \(\Omega\), \(\Omega_{12}\)), but also over the inter-particle interactions (i.e. the parameter \(\beta\)).

#### vi.2.4 Probing pair-hopping processes and orbital order

The phase space associated with the effective classical Hamiltonian in Eq. (36), which was analyzed in Section IV, can be finely studied using atomic Bose gases [108, 80, 110]. This can be readily performed by measuring the mean values of the relative population \(z(t)\) and phase \(\varphi(t)\) for different times and initial conditions. This would allow for the characterization of the effective Hamiltonian on a "classical" level. Ultracold atomic systems offer the possibility to access genuine quantum properties, such as coherent spin squeezing [188, 131, 133, 134, 189, 81]. In particular, generalized measurements can be used to evaluate non-commuting observables (such as the imbalance \(z\) and relative phase \(\varphi\)) within the same experimental realization [189]. Moreover, the Husimi distribution [Fig. (12)] can be reconstructed from projective measurements [134]. Drive-induced pair-hopping processes are a striking feature of the effective Hamiltonian in Eq. (34). To detect this effect, we propose to exploit an additional spatial degree of freedom ("tube" geometry), as we illustrate in Fig. 19(a). Specifically, we apply an energy offset \(\Delta_{0}\) to one of the wells (colored in blue), and assume that atoms are initially prepared at momentum \(k\approx 0\). When activating the driving sequence, pair-hopping processes are effectively generated, and atoms would then be allowed to hop by pairs to the other well (colored in red), where they would acquire a finite momentum \(\pm k_{0}\); see Figs. 19(a)-(b). The momentum correlation could then be revealed experimentally by letting the cloud expand for a long time-of-flight [183]: the finite momentum leads to a separation of the atom pairs, such that counting the number of atoms at \(\pm k_{0}\) would reveal the pair correlations through a reduced variance (compared to a binomial distribution) of the population imbalance. Finally, we note that the quench protocol in Section VII.3 could be directly implemented in a quantum-gas experiment, in view of revealing the orbital order and TRS-broken nature of the chiral superfluids and Mott phases analyzed in Section VII. As illustrated in Fig. 18, the finite orbital polarization in the ground state can be unambiguously detected by monitoring the time-evolving population imbalance \(z_{n}(t)\), locally defined at the level of each dimer, upon performing the quench protocol.

###### Acknowledgements.

This work was initiated through discussions with J. Fatome and S. Coen, who are warmly acknowledged. The authors also thank M. Bukov, I. Carusotto, N. R. Cooper, J. Dalibard, A. Eckardt, N. Englebert, M. Jurgensen, Yun Li, B. Mera, F. Petiziol, M. C. Rechtsman, J. Schmiedmayer, A. Schnell and H. Strobel for various discussions. We are very grateful to S. Coen, N. Englebert, J. Fatome, P. Kockaert and S. Mukherjee for their comments on an early version of this manuscript. N. G. and L. P. G. are supported by the FRS-FNRS (Belgium), the ERC Starting Grants TopoCold and LATIS and the EOS project CHEQS. O. K. D.
acknowledges funding from the International Max Planck Research School for Quantum Science and Technology (IMPRS - QST). L. B. acknowledges funding from Politecnico di Torino, starting package Grant No. 54 RSG21BL01. The work in Vienna was performed under the QUANTERA project MENTA (FWF: I-6006) and M. P. has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No 101032523. M. D. L. is supported by the Rita Levi Montalcini Program.

## Appendix A Useful formulas

Section III uses two families of operators: the bosonic operators \(\hat{a}^{(\dagger)}_{1}\) and \(\hat{a}^{(\dagger)}_{2}\) associated with the two modes, and which satisfy the canonical bosonic commutation relations, \([\hat{a}_{s},\hat{a}^{\dagger}_{s^{\prime}}]=\delta_{s,s^{\prime}}\), where \(s\!=\!1,2\); and the angular momentum (Schwinger) operators, defined as

\[\hat{J}_{x}=\frac{1}{2}\left(\hat{a}^{\dagger}_{1}\hat{a}_{2}+\hat{a}^{\dagger}_{2}\hat{a}_{1}\right),\quad\hat{J}_{y}=\frac{1}{2i}\left(\hat{a}^{\dagger}_{2}\hat{a}_{1}-\hat{a}^{\dagger}_{1}\hat{a}_{2}\right),\]
\[\hat{J}_{z}=\frac{1}{2}\left(\hat{a}^{\dagger}_{2}\hat{a}_{2}-\hat{a}^{\dagger}_{1}\hat{a}_{1}\right),\quad\hat{N}=\hat{a}^{\dagger}_{1}\hat{a}_{1}+\hat{a}^{\dagger}_{2}\hat{a}_{2}. \tag{101}\]

These operators satisfy the spin commutation relations \([\hat{J}_{\mu},\hat{J}_{\nu}]=i\varepsilon_{\mu\nu\lambda}\hat{J}_{\lambda}\), and the operator \(\hat{N}\) counts the total number of bosons in the system (assumed to be constant). In view of expressing interaction processes with Schwinger operators, it is useful to note that

\[\hat{J}^{2}_{z}=\frac{1}{4}\left(\hat{a}^{\dagger}_{1}\hat{a}^{\dagger}_{1}\hat{a}_{1}\hat{a}_{1}+\hat{a}^{\dagger}_{2}\hat{a}^{\dagger}_{2}\hat{a}_{2}\hat{a}_{2}-2\hat{a}^{\dagger}_{1}\hat{a}^{\dagger}_{2}\hat{a}_{1}\hat{a}_{2}+\hat{N}\right),\]
\[\hat{J}^{2}_{y}=\frac{1}{4}\left(2\hat{a}^{\dagger}_{1}\hat{a}^{\dagger}_{2}\hat{a}_{1}\hat{a}_{2}-\hat{a}^{\dagger}_{1}\hat{a}^{\dagger}_{1}\hat{a}_{2}\hat{a}_{2}-\hat{a}^{\dagger}_{2}\hat{a}^{\dagger}_{2}\hat{a}_{1}\hat{a}_{1}+\hat{N}\right),\]
\[\hat{N}^{2}=\left(\hat{a}^{\dagger}_{1}\hat{a}^{\dagger}_{1}\hat{a}_{1}\hat{a}_{1}+\hat{a}^{\dagger}_{2}\hat{a}^{\dagger}_{2}\hat{a}_{2}\hat{a}_{2}+2\hat{a}^{\dagger}_{1}\hat{a}^{\dagger}_{2}\hat{a}_{1}\hat{a}_{2}+\hat{N}\right). \tag{102}\]

Hence, both \(\hat{J}^{2}_{z}\) and \(\hat{N}^{2}\) contain intra-mode (Hubbard) and inter-mode (cross) interactions, while \(\hat{J}^{2}_{y}\) contains a combination of inter-mode interactions and pair-hopping processes [Fig. 4]. We point out that \(\hat{J}^{2}_{z}\) is related to \(\hat{J}^{2}_{y}\) through a unitary transformation; see Eq. (28). From Eq. (102), we can express the intra-mode (Hubbard) interaction terms as

\[\frac{1}{2}\left(\hat{a}^{\dagger}_{1}\hat{a}^{\dagger}_{1}\hat{a}_{1}\hat{a}_{1}+\hat{a}^{\dagger}_{2}\hat{a}^{\dagger}_{2}\hat{a}_{2}\hat{a}_{2}\right)=\hat{J}^{2}_{z}+\text{constant}, \tag{103}\]

where the irrelevant constant term reads \(\hat{N}(\hat{N}-2)/4\). Similarly, the inter-mode (cross) interaction term reads

\[\hat{a}^{\dagger}_{1}\hat{a}^{\dagger}_{2}\hat{a}_{1}\hat{a}_{2}=-\hat{J}^{2}_{z}+\text{constant}, \tag{104}\]

with the irrelevant constant term \(\hat{N}^{2}/4\). These expressions were used to derive the Hamiltonian in Eq. (12) from Eq. (9).
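As a sanity check, the operator identities in Eq. (102) can be verified numerically in a sector of fixed total particle number, where all the operators involved act. A minimal Python sketch follows (the sector size \(N=6\) is an arbitrary choice):

```python
import numpy as np
from math import sqrt

N = 6                                  # fixed total boson number sector
dim = N + 1                            # basis |n1, N - n1>, n1 = 0..N

def mat(op):
    """Matrix of a number-conserving operator, given as n1 -> [(amp, new_n1)]."""
    M = np.zeros((dim, dim), complex)
    for n1 in range(dim):
        for amp, m1 in op(n1):
            if 0 <= m1 <= N:
                M[m1, n1] += amp
    return M

n2 = lambda n1: N - n1
# a1^dag a2 : |n1,n2> -> sqrt(n1+1) sqrt(n2) |n1+1, n2-1>
A1dA2 = mat(lambda n1: [(sqrt(n1 + 1) * sqrt(n2(n1)), n1 + 1)])
A2dA1 = A1dA2.conj().T
Jy = (A2dA1 - A1dA2) / 2j                       # Eq. (101)
Jz = np.diag([(n2(n1) - n1) / 2 for n1 in range(dim)])

diag = lambda f: np.diag([float(f(n1)) for n1 in range(dim)])
intra = diag(lambda n1: n1 * (n1 - 1) + n2(n1) * (n2(n1) - 1))
cross = diag(lambda n1: n1 * n2(n1))
# pair hopping a1^dag a1^dag a2 a2 : |n1,n2> -> sqrt((n1+1)(n1+2) n2(n2-1)) |n1+2>
pair = mat(lambda n1: [(sqrt((n1 + 1) * (n1 + 2) * n2(n1) * max(n2(n1) - 1, 0)),
                        n1 + 2)])

rhs_Jz2 = (intra - 2 * cross + N * np.eye(dim)) / 4
rhs_Jy2 = (2 * cross - pair - pair.conj().T + N * np.eye(dim)) / 4

print(np.allclose(Jz @ Jz, rhs_Jz2))   # True
print(np.allclose(Jy @ Jy, rhs_Jy2))   # True
```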
Finally, it is useful to note that a combination of intra-mode (Hubbard) interactions and pair-hopping processes can be expressed as

\[\hat{J}_{z}^{2}+\hat{J}_{y}^{2}=\frac{1}{4}\left(\hat{a}_{1}^{\dagger}\hat{a}_{1}^{\dagger}\hat{a}_{1}\hat{a}_{1}+\hat{a}_{2}^{\dagger}\hat{a}_{2}^{\dagger}\hat{a}_{2}\hat{a}_{2}\right)-\frac{1}{4}\left(\hat{a}_{1}^{\dagger}\hat{a}_{1}^{\dagger}\hat{a}_{2}\hat{a}_{2}+\hat{a}_{2}^{\dagger}\hat{a}_{2}^{\dagger}\hat{a}_{1}\hat{a}_{1}\right)+\text{constant}. \tag{100}\]

Figure 19: Detecting pair-hopping processes in a driven double well potential. (a) An additional spatial degree of freedom, combined with an energy offset \(\Delta_{0}\) between the two wells, allows for specific pair-hopping processes, which result in pairs of atoms with opposite momentum \(\pm k_{0}\). (b) Dispersion relation associated with the two wells, which are shifted in energy by an amount \(\Delta_{0}\). A pair-hopping process converts two atoms in the 'blue' well with momentum \(k=0\), into a pair of atoms in the 'red' well with momentum \(\pm k_{0}\). These finite-momentum pairs can be detected after a long TOF, hence revealing the effective pair-hopping processes generated by the driving sequence.

## Appendix B Orbital order and phase transitions from a mean-field analysis

In this Appendix, we provide a detailed mean-field analysis of the extended Bose-Hubbard model in Eq. (64). Upon performing the mean-field substitution

\[\hat{b}_{n,\sigma}\rightarrow\langle\hat{b}_{n,\sigma}\rangle\equiv\psi_{n,\sigma}, \tag{101}\]

we obtain the mean-field functional

\[\mathcal{F}=\frac{1}{2}\sum_{n\sigma}\left[U_{\xi}|\psi_{n,\sigma}|^{2}(|\psi_{n,\sigma}|^{2}-1)+W_{\xi}|\psi_{n,\sigma}|^{2}|\psi_{n,\overline{\sigma}}|^{2}\right]-\sum_{n\sigma}\left[\left(\Omega+\frac{\Omega_{12}}{2}e^{i\sigma\pi/2}\right)\psi_{n+1,\sigma}^{*}\psi_{n,\sigma}+h.c.\right]-\frac{\Omega_{12}}{2}\sum_{n\sigma}\left(e^{-i\sigma\frac{\pi}{2}}\psi_{n+1,\sigma}^{*}\psi_{n,\overline{\sigma}}+h.c.\right)-\mu\sum_{n\sigma}|\psi_{n,\sigma}|^{2},\]

from which we derive the coupled NLSE in Eq. (71). When imposing periodic boundary conditions (ring geometry), and in the limit of weak interactions \(U\!\ll\!\Omega\), the mean-field ground state is expected to have a uniform density distribution over the entire chain. We can then propose Bloch states as stationary solutions of Eq. (71), namely

\[\psi_{n,\sigma}(t)=e^{-i(\varepsilon(k)-\mu)t/\hbar}\,e^{ikn}\,\phi_{k,\sigma},\qquad\phi_{k,\sigma}=\sqrt{\frac{\rho}{2}[1+\sigma m(k)]}e^{i\Theta_{k,\sigma}}. \tag{102}\]

In this way, the local density \(\rho_{n}=\rho=\sum_{\sigma}|\psi_{\sigma}|^{2}=N/N_{d}\) is constant, with \(N\) the total number of bosons and \(N_{d}\) the number of unit cells (dimers) in the ring. We note that the orbital polarization entering Eq. (102) is given by

\[m(k)=\frac{1}{\rho}(|\phi_{k,+}|^{2}-|\phi_{k,-}|^{2}). \tag{103}\]

Inserting the ansatz (102) into the NLSE in Eq. (71), we find that \(\phi_{k}=(\phi_{k,+},\phi_{k,-})^{T}\) should be an eigenstate of the Gross-Pitaevskii Hamiltonian written in Bloch representation,

\[\hat{H}_{\text{GP}}(k)=\left[\frac{U_{\xi}}{2}(\rho-1)+\frac{W_{\xi}}{2}\rho-2\Omega\cos(k)\right]\hat{1}-\left[\Omega_{c}m(k)+\Omega_{12}\sin(k)\right]\hat{\sigma}_{z}-\Omega_{12}\cos(k)\hat{\sigma}_{y}, \tag{104}\]

where \(m(k)\) should be determined self-consistently [Eq. (103)], and where \(\Omega_{c}\) is given by Eq. (74).
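To illustrate this self-consistency requirement, the following Python sketch diagonalizes the \(2\times 2\) Bloch Hamiltonian of Eq. (104) (dropping the constant term, which does not affect the eigenvectors), recomputes \(m(k)\) from the lowest band via Eq. (103), and iterates to convergence. The couplings are stand-in values in arbitrary units, not parameters from the paper.

```python
import numpy as np

sz = np.diag([1.0, -1.0])
sy = np.array([[0, -1j], [1j, 0]])

def m_selfconsistent(k, Omega_c, Omega_12, m=0.9, iters=100):
    """Fixed-point iteration for the orbital polarization m(k), Eq. (104)."""
    for _ in range(iters):
        # Spin part of H_GP(k); the k-dependent identity term is omitted.
        H = -(Omega_c * m + Omega_12 * np.sin(k)) * sz - Omega_12 * np.cos(k) * sy
        _, vecs = np.linalg.eigh(H)
        phi = vecs[:, 0]                            # lowest-energy branch
        m = abs(phi[0])**2 - abs(phi[1])**2         # Eq. (103), rho normalized out
    return m

Omega, Omega_c = 1.0, 1.0                           # stand-in couplings
for Omega_12 in (0.05, 0.2, 0.5):
    k0 = np.arctan(Omega_12 / (2 * Omega))          # k0 ~ Phi sgn(m0), cf. Eq. (69)
    print(f"Omega_12={Omega_12}: m(k0) = "
          f"{m_selfconsistent(k0, Omega_c, Omega_12):+.3f}")
```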
Note that, for each value of \(k\), this Hamiltonian exactly maps to the mean-field Hamiltonian of a transverse spin-\(1/2\) Ising model with an additional longitudinal magnetic field. Indeed, \(m(k)\) represents the self-consistent magnetization along the \(z\)-direction with \(\Omega_{c}\) the magnitude of the ferromagnetic coupling constant. The magnitudes of the transverse field along the \(y\)-direction and the longitudinal field along the \(z\)-direction are respectively given by \(\Omega_{12}\cos(k)\) and \(\Omega_{12}\sin(k)\). The solution with lowest eigenenergy determines the mean-field energy functional

\[\varepsilon_{\text{MF}}(k)=-\sqrt{\Omega_{12}^{2}+\Omega_{c}^{2}m(k)^{2}+2m(k)\Omega_{12}\Omega_{c}\sin(k)}-2\Omega\cos(k)+\frac{U_{\xi}}{2}(\rho-1)+\frac{W_{\xi}}{2}\rho, \tag{105}\]

which should be minimized. We will denote by \(k_{0}\) the value of \(k\) that achieves this minimization (to be specified below). By inserting the eigenstate of this low-energy branch into Eq. (103), we obtain that the orbital polarization \(m(k)\) should satisfy the self-consistent condition

\[m(k)=\frac{m(k)\Omega_{c}+\Omega_{12}\sin(k)}{\sqrt{\Omega_{12}^{2}+\Omega_{c}^{2}m(k)^{2}+2m(k)\Omega_{12}\Omega_{c}\sin(k)}}. \tag{106}\]

Moreover, we find that the phases \(\Theta_{k,\sigma}\!=\!\Theta_{\sigma}\) are independent of the wavevector, and that they are determined by

\[\Theta=\Theta_{+}-\Theta_{-}=-(\pi/2)\text{sgn}(\Omega_{12}). \tag{107}\]

It is insightful to analyze how these conditions on the nature of the ground state translate into the original basis of Eq. (61). In this representation, the mean-fields are given by \(\langle\hat{a}_{n,s}\rangle_{0}\equiv\psi_{n,s}\), with \(s\!=\!1,2\), and they read [Eqs. (63) and (102)]

\[\psi_{n,1}=e^{ik_{0}n}\sqrt{\rho}\left(\frac{\sqrt{1+m_{0}}e^{i\Theta_{+}}+\sqrt{1-m_{0}}e^{i\Theta_{-}}}{2}\right),\]
\[\psi_{n,2}=ie^{ik_{0}n}\sqrt{\rho}\left(\frac{\sqrt{1+m_{0}}e^{i\Theta_{+}}-\sqrt{1-m_{0}}e^{i\Theta_{-}}}{2}\right). \tag{108}\]

Here, we explicitly evaluated the fields at \(k\!=\!k_{0}\) and we introduced the notation \(m(k_{0})\!=\!m_{0}\); we also omitted the trivial dynamical phase. The condition in Eq. (107) then simply corresponds to having the \(a_{1,2}\) orbitals equally populated in the ground state, i.e. \(|\psi_{n,1}|^{2}=|\psi_{n,2}|^{2}=\rho/2\). Without loss of generality, we henceforth set \(\Omega_{12}>0\), and we express the relative phase \(\varphi\) between the components \(\psi_{n,2}\) and \(\psi_{n,1}\) according to

\[\frac{\psi_{n,2}}{\psi_{n,1}}=e^{i\varphi}=\sqrt{1-m_{0}^{2}}+im_{0}, \tag{109}\]

where we used Eqs. (107)-(108). We thus obtain a simple relation between the local relative phase (internal angle) and the ground-state orbital polarization given in Eq. (72). The interplay of local orbital polarization and hopping processes gives rise to a ground-state current on a ring geometry. This can be obtained by evaluating the current operator derived from Eq. (64),

\[\hat{J}=i\frac{\Omega}{\hbar N_{d}}\sum_{n\sigma}\left(\hat{b}_{n+1,\sigma}^{\dagger}\hat{b}_{n,\sigma}-\text{h.c.}\right)+\frac{\Omega_{12}}{2\hbar N_{d}}\sum_{n\sigma}\left(\sigma\hat{b}_{n+1,\sigma}^{\dagger}\hat{b}_{n,\overline{\sigma}}-\sigma\hat{b}_{n+1,\sigma}^{\dagger}\hat{b}_{n,\sigma}+\text{h.c.}\right). \tag{110}\]

In the mean-field solution, the current flowing through the ring is given by Eq. (77), where \(k_{0}\) and \(m_{0}\) are still to be determined below. Finding the minimum of Eq. (105)
by imposing the self-consistent condition given by Eq. (106) can be cumbersome from an analytical point of view. Hence, it is useful to consider particular limits. When \(\Omega_{12}=0\), the minimum energy is precisely reached at \(k_{0}=0\), which leads to two possible degenerate polarizations \(m_{0}=\pm 1\); this situation corresponds to a relative phase \(\varphi=\pm\pi/2\). A finite coupling \(\Omega_{12}\ll\Omega_{c}\) leads to a non-zero ground-state quasi-momentum \(k_{0}\approx(\Omega_{12}/2\Omega)\operatorname{sgn}(m_{0})\approx\Phi\operatorname{sgn}(m_{0})\) and to a small depolarization of the system [see Eq. (73)]. We recall that \(\Phi\) is the effective flux generated by the complex tunneling in Eq. (64); see Eq. (69) and Fig. 15. Importantly, condensation at a finite quasi-momentum activates an effective longitudinal field in the Ising picture (recall that this field scales as \(\Omega_{12}\sin(k)\)), leading to a smoothing of the transition to the unpolarized state and an eventual absence of critical behavior. As a technical note, we remark that reaching this finite \(k_{0}\) requires sufficiently large lattices satisfying \(N_{d}>2\pi\Omega/|\Omega_{12}|\). In any case, to lowest order in \(\Omega_{12}\), the ground-state polarization decreases according to Eq. (73). The chiral current flowing through the ring is activated by \(\Omega_{12}\); see Eq. (110). In the mean-field ground state, the leading-order contribution is given by

\[J_{\text{MF}}(k_{0})\approx-\text{sgn}(m_{0})\rho\Omega_{12}. \tag{104}\]

Importantly, the sign of the persistent current in Eq. (104) depends on the orbital order that spontaneously emerges in the system. This emergent chirality is a striking signature of the spontaneous breaking of TRS; see also the main text.

## Appendix C Orbital order in the strongly-correlated regime

In this Appendix, we derive the effective Ising spin model in Eq. (89) and we obtain the chiral superfluid-to-Mott phase diagram in the strong-coupling regime [Fig. 17]. In the absence of kinetic terms in the Hamiltonian (64), there is a macroscopic degeneracy of \(2^{N_{d}}\) possible ground-state configurations \(|\{\sigma_{n}\}\rangle\), which may be written as product states

\[|\{\sigma_{n}\}\rangle=\prod_{n=1}^{N_{d}}|\sigma_{n}\rangle, \tag{105}\]

with \(|\sigma_{n}\rangle\) the states having well defined angular momentum along the \(y\)-direction, namely \(J_{y}^{(n)}=\sigma_{n}\rho/2\) with \(\sigma_{n}=\pm 1\); see Eq. (87) in the main text. The corresponding ground-state energy reads

\[E_{N}^{(0)}(|\{\sigma_{n}\}\rangle)=U_{\xi}N(\rho-1)/2-\mu N. \tag{106}\]

The tunneling terms in the Hamiltonian (64) do not couple these states at first order, but they do lift their degeneracy in second-order perturbation theory. Indeed, the first non-trivial correction to the energy of these \(N\)-particle states is given by

\[\Delta E(|\{\sigma_{n}\}\rangle)=\sum_{l}\frac{|\langle l|\hat{H}_{T}|\{\sigma_{n}\}\rangle|^{2}}{E_{N}^{(0)}(|\{\sigma_{n}\}\rangle)-E_{N}^{(0)}(|l\rangle)}, \tag{107}\]

where \(\hat{H}_{T}\) contains all the tunneling terms of Eq. (64), and where \(|l\rangle\) is an excited state. Since the Hamiltonian (64) only couples first nearest neighbors, this expression can be further simplified as a sum of pair contributions,

\[\Delta E(|\{\sigma_{n}\}\rangle)=\sum_{n}\Delta E(|\sigma_{n}\rangle|\sigma_{n+1}\rangle). \tag{108}\]
The energy corrections for each pair are readily obtained as

\[\Delta E(|+\rangle|+\rangle)=\Delta E(|-\rangle|-\rangle)=-\frac{2|t_{\sigma}|^{2}\rho(\rho+1)}{U_{\xi}}-\frac{2|t_{\sigma\overline{\sigma}}|^{2}\rho}{[\rho(W_{\xi}-U_{\xi})+U_{\xi}]},\]
\[\Delta E(|+\rangle|-\rangle)=\Delta E(|-\rangle|+\rangle)=-\frac{2|t_{\sigma\overline{\sigma}}|^{2}\rho(\rho+1)}{U_{\xi}}-\frac{2|t_{\sigma}|^{2}\rho}{[\rho(W_{\xi}-U_{\xi})+U_{\xi}]}.\]

We note that this approach is valid whenever \(\xi<4\), which ensures repulsive intraspecies interactions \(U_{\xi}>0\) in Eq. (64). The correction to the energy of the manifold of states given by Eq. (105), up to second order, can hence be expressed as a constant shift (which is independent of the configuration \(\{\sigma_{n}\}\)) plus an orbital exchange interaction,

\[\Delta E(|\{\sigma_{n}\}\rangle)=\sum_{n}E_{0}+K_{yy}\sum_{n}J_{y}^{(n)}J_{y}^{(n+1)}, \tag{109}\]

with \(J_{y}^{(n)}=\sigma_{n}\rho/2\). The shift and exchange coupling are given by

\[E_{0}=-\frac{\left(|t_{\sigma}|^{2}+|t_{\sigma\overline{\sigma}}|^{2}\right)\rho[W_{\xi}\rho(\rho+1)-U_{\xi}(\rho^{2}-2)]}{U_{\xi}[U_{\xi}+\rho(W_{\xi}-U_{\xi})]},\]
\[K_{yy}=-\frac{4\left(|t_{\sigma}|^{2}-|t_{\sigma\overline{\sigma}}|^{2}\right)\left[W_{\xi}+\rho(W_{\xi}-U_{\xi})\right]}{U_{\xi}[U_{\xi}+\rho(W_{\xi}-U_{\xi})]}<0. \tag{110}\]

These results lead to the effective Ising spin model displayed in Eq. (89). We remark that the exchange coupling at this order only depends on the tunneling \(\Omega\) and that it favors a uniform 'ferromagnetic' ordering. In a one-dimensional ring geometry, the approximated ground-state energy can then be expressed as

\[E_{\text{GS}}(N)=E_{N}^{(0)}+N_{d}\left(E_{0}+K_{yy}\frac{\rho^{2}}{4}\right). \tag{111}\]

With the aim of comparing this analytical model with a more accurate numerical tool, we performed DMRG simulations to analyze the evolution of the ground-state orbital polarization within the chiral-Mott phase. The results are presented in Fig. 20 for both \(\Omega_{12}=0\) and \(\Omega_{12}/\Omega=1\). The orbital order is practically unaltered by the existence of a finite \(\Omega_{12}\), in agreement with what we expect from the effective spin model: the exchange coupling constant \(K_{yy}\) in Eq. (110) does not depend on \(\Omega_{12}\). On the other hand, as the system gets closer to the superfluid phase, the presence of this hopping term destabilizes the angular momentum ordering.

Figure 20: Evolution of the averaged ground-state orbital polarization \(\overline{m}_{0}\), within the Mott regime, for \(\rho\!=\!2\) and \(\Omega_{12}=0\) (red) and \(\Omega_{12}/\Omega=1\) (blue). The points were obtained using a DMRG algorithm in a chain with open boundary conditions and \(N_{d}=100\) dimers.

The perturbative expansion described above can be further used to elucidate the boundaries between the (chiral) Mott and superfluid phases, by additionally considering how the ground-state energy changes upon adding or removing a particle in the system: the phase boundary precisely occurs when this particle-hole excitation gap vanishes. Since we are interested in the regime \(W_{\xi}>U_{\xi}\), the relevant low-energy manifold to consider upon adding one extra particle is given by \(N_{d}\) states of the form

\[|n_{N+1}\rangle=\frac{\left(\hat{b}^{\dagger}_{1,\sigma}\right)^{\rho}\dots\left(\hat{b}^{\dagger}_{n,\sigma}\right)^{\rho+1}\dots\left(\hat{b}^{\dagger}_{N_{d},\sigma}\right)^{\rho}}{\sqrt{(\rho!)^{N_{d}-1}(\rho+1)!}}|0\rangle, \tag{102}\]

so that the state \(|n_{N+1}\rangle\) has one more boson in the \(n\)-th dimer of a ferromagnetic state with \(\sigma\)-order. The unperturbed energy of these states is

\[E^{(0)}_{N+1}(\{|n_{N+1}\rangle\})=U_{\xi}N(\rho-1)/2+U_{\xi}\rho-\mu(N+1). \tag{103}\]
The ground-state energy can then be approximately found via a canonical transformation procedure [190], in which an effective Hamiltonian \(\hat{H}^{\prime}\) that takes into account up to second-order processes in the tunneling amplitudes is defined within the manifold of the original set of \(\{|n_{N+1}\rangle\}\) states. The matrix elements of \(\hat{H}^{\prime}\) are determined as

\[\langle n_{N+1}|\hat{H}^{\prime}|n_{N+1}^{\prime}\rangle=E^{(0)}_{N+1}+\langle n_{N+1}|\hat{H}_{T}|n_{N+1}^{\prime}\rangle+\sum_{l}\frac{\langle n_{N+1}|\hat{H}_{T}|l\rangle\langle l|\hat{H}_{T}|n_{N+1}^{\prime}\rangle}{E^{(0)}_{N+1}-E^{(0)}_{N+1}(|l\rangle)},\]

with \(|l\rangle\) the excited states and \(E^{(0)}_{N+1}(|l\rangle)\) their corresponding unperturbed energy. The \(N_{d}\times N_{d}\) matrix has a tridiagonal form and can be analytically diagonalized, revealing the splitting of the degenerate states of Eq. (102) into a band described by

\[E^{\prime}_{N+1}(k)=E^{(0)}_{N+1}+\Sigma_{N+1}-2|t_{\sigma}|(\rho+1)\cos(k-\Phi_{\sigma})-\frac{2(\rho+1)|t_{\sigma\overline{\sigma}}|^{2}}{(W_{\xi}-U_{\xi})\rho}\cos(2k)-\frac{2\rho(\rho+1)|t_{\sigma}|^{2}}{U_{\xi}}\cos(2(k-\Phi_{\sigma})), \tag{104}\]

with

\[\Sigma_{N+1}=-\frac{|t_{\sigma}|^{2}\rho(\rho+2)}{U_{\xi}}-\frac{2\rho|t_{\sigma\overline{\sigma}}|^{2}}{\rho(W_{\xi}-U_{\xi})+W_{\xi}+U_{\xi}}-\frac{2(\rho+1)|t_{\sigma\overline{\sigma}}|^{2}}{\rho(W_{\xi}-U_{\xi})}-\frac{2(N_{d}-2)\rho(\rho+1)|t_{\sigma}|^{2}}{U_{\xi}}-\frac{2(N_{d}-2)\rho|t_{\sigma\overline{\sigma}}|^{2}}{\rho(W_{\xi}-U_{\xi})+U_{\xi}}. \tag{105}\]

In Eq. (104), we introduced the flux \(\Phi_{\sigma}\!=\!\sigma\arctan(\Omega_{12}/2\Omega)\) and the quasi-momentum \(k=2\pi j/N_{d}\), with \(j=0,\dots,N_{d}-1\). Interestingly, at this order, the extra boson feels the presence of the flux \(\Phi_{\sigma}\) in the lattice via effective hopping processes at first and second nearest neighbors. In the thermodynamic limit, the minimum energy of this band will be precisely at \(k=\Phi_{\sigma}\), so that the ground-state energy with one extra boson is obtained as

\[E_{\text{GS}}(N+1)=E^{\prime}_{N+1}(\Phi_{\sigma}). \tag{106}\]

We follow a similar procedure to consider the effect of the hopping terms in the low-energy manifold with one boson less. When removing one particle from the \(\sigma\)-ordered \(N\)-particle ground state, the relevant manifold of states where the perturbation theory should be applied will be given by

\[|n_{N-1}\rangle=\frac{\left(\hat{b}^{\dagger}_{1,\sigma}\right)^{\rho}\dots\left(\hat{b}^{\dagger}_{n,\sigma}\right)^{\rho-1}\dots\left(\hat{b}^{\dagger}_{N_{d},\sigma}\right)^{\rho}}{\sqrt{(\rho!)^{N_{d}-1}(\rho-1)!}}|0\rangle, \tag{107}\]

which have a zeroth-order energy of

\[E^{(0)}_{N-1}(\{|n_{N-1}\rangle\})=U_{\xi}N(\rho-1)/2-U_{\xi}(\rho-1)-\mu(N-1). \tag{108}\]

By diagonalizing the corresponding canonically transformed Hamiltonian, we obtain the broadening of these states into a band described by

\[E^{\prime}_{N-1}(k)=E^{(0)}_{N-1}+\Sigma_{N-1}-2|t_{\sigma}|\rho\cos(k+\Phi_{\sigma})-\frac{2\rho|t_{\sigma\overline{\sigma}}|^{2}}{(W_{\xi}-U_{\xi})\rho+U_{\xi}}\cos(2k)-\frac{2\rho(\rho+1)|t_{\sigma}|^{2}}{U_{\xi}}\cos(2(k+\Phi_{\sigma})), \tag{109}\]
with

\[\Sigma_{N-1}=-\frac{|t_{\sigma}|^{2}(\rho^{2}-1)}{U_{\xi}}-\frac{2\rho|t_{\sigma\overline{\sigma}}|^{2}}{\rho(W_{\xi}-U_{\xi})+U_{\xi}-W_{\xi}}-\frac{2(\rho-1)|t_{\sigma\overline{\sigma}}|^{2}}{\rho(W_{\xi}-U_{\xi})+2U_{\xi}}-\frac{2(N_{d}-2)\rho(\rho+1)|t_{\sigma}|^{2}}{U_{\xi}}-\frac{2(N_{d}-2)\rho|t_{\sigma\overline{\sigma}}|^{2}}{\rho(W_{\xi}-U_{\xi})+U_{\xi}}. \tag{107}\]

Note that the hole-like excitation feels the opposite flux (\(-\Phi_{\sigma}\)). We then find that, in the thermodynamic limit, the ground-state energy with one boson less is given by

\[E_{\text{GS}}(N-1)=E^{\prime}_{N-1}(-\Phi_{\sigma}). \tag{108}\]

The phase boundary between the (chiral) Mott insulator and superfluid phases is determined by the conditions

\[E_{\text{GS}}(N+1)=E_{\text{GS}}(N),\qquad E_{\text{GS}}(N)=E_{\text{GS}}(N-1).\]

Solving these equations separately for \(\mu\), we find the boundaries for the particle sector (\(\mu_{+}\)) and hole sector (\(\mu_{-}\)), which are displayed in Eq. (91). The difference \(\mu_{+}-\mu_{-}\) determines the charge gap in the Mott phase. The resulting Mott-SF phase diagram is depicted in Fig. 17, in terms of the chemical potential \(\mu/U_{\xi}\) and ratio \(\Omega/U_{\xi}\).

## Appendix D Strong-coupling expansion and the effective Hamiltonian for filling \(\rho=1\)

At unit filling, the spin-states in Eq. (87) are coupled to each other via spin-flip processes, which also scale as the square of the tunneling amplitudes. In order to describe the corresponding low-energy manifold with an effective theory, we must perform a canonical transformation procedure and project the resulting effective Hamiltonian onto the subspace of unit filling. Since the Hamiltonian only couples nearest neighbors, we can focus on the effective theory for just two dimers (\(n\) and \(n+1\)) and then sum over all the lattice links connecting them. For simplicity, we will work in the basis of the original orbitals \(a_{n,s}\) in each dimer (with \(s=1,2\)). In the absence of hopping terms, there are four possible degenerate states with two particles in the nearest-neighbor dimer configuration that satisfy the unit-filling condition. Their corresponding energy is given by \(E_{2}^{(0)}=-2\mu\) and they can be expressed as

\[|s_{n}s^{\prime}_{n+1}\rangle=\hat{a}^{\dagger}_{n,s}\hat{a}^{\dagger}_{n+1,s^{\prime}}|0\rangle, \tag{109}\]

where we introduced the notation \(|s_{n}\rangle=\hat{a}^{\dagger}_{n,s}|0\rangle\) with \(s=1,2\). The matrix elements of the canonically transformed Hamiltonian \(\hat{H}^{\prime}\) within this subspace, \(\langle s_{n}s^{\prime}_{n+1}|\hat{H}^{\prime}|s^{\prime\prime}_{n}s^{\prime\prime\prime}_{n+1}\rangle\), are evaluated at second order in the tunneling amplitudes, in the same way as the matrix elements \(\langle n_{N+1}|\hat{H}^{\prime}|n^{\prime}_{N+1}\rangle\) above; expressed in terms of the resulting second-order coefficients \(\gamma_{ss^{\prime}}\) and \(\Gamma_{i}\), the effective Hamiltonian
at each link can be expressed as

\[\hat{H}^{\prime}_{n,n+1}=\sum_{\nu=x,y,z}K_{\nu\nu}\hat{J}^{(n)}_{\nu}\hat{J}^{(n+1)}_{\nu}+\left(\frac{\gamma_{22}-\gamma_{11}}{2}\right)\left(\hat{J}^{(n)}_{z}+\hat{J}^{(n+1)}_{z}\right)+\left(\frac{\gamma_{21}-\gamma_{12}}{2}\right)\left(\hat{J}^{(n)}_{z}-\hat{J}^{(n+1)}_{z}\right)-D\left(\hat{J}^{(n)}_{z}\hat{J}^{(n+1)}_{x}-\hat{J}^{(n)}_{x}\hat{J}^{(n+1)}_{z}\right)+(\Gamma_{3}+\Gamma_{4})\hat{J}^{(n)}_{x}+(\Gamma_{5}+\Gamma_{6})\hat{J}^{(n+1)}_{x}, \tag{111}\]

with \(K_{xx}=2(\Gamma_{1}+\Gamma_{2})\), \(K_{yy}=2(\Gamma_{2}-\Gamma_{1})\), \(K_{zz}=\gamma_{11}+\gamma_{22}-\gamma_{12}-\gamma_{21}\), and \(D=2(\Gamma_{4}-\Gamma_{3})\). The magnitudes of these couplings, expressed in terms of the original hopping and interaction parameters of the model, are provided in Eq. (93) in the main text. The effective spin-\(1/2\) Hamiltonian in the one-dimensional lattice is finally written as

\[\hat{H}^{\text{eff}}_{1/2}=\sum_{n}\hat{H}^{\prime}_{n,n+1}=\sum_{n}\sum_{\nu=x,y,z}K_{\nu\nu}\hat{J}^{(n)}_{\nu}\hat{J}^{(n+1)}_{\nu}+h_{x}\sum_{n}\hat{J}^{(n)}_{x}-\mathbf{D}\cdot\sum_{n}\left(\hat{\mathbf{J}}^{(n)}\times\hat{\mathbf{J}}^{(n+1)}\right), \tag{112}\]

where \(\mathbf{D}=(0,D,0)\) and \(h_{x}=\Gamma_{3}+\Gamma_{4}+\Gamma_{5}+\Gamma_{6}\).
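For readers who want to explore the structure of Eq. (112) directly, the following minimal Python sketch assembles the effective Hamiltonian for a short open chain and checks the ferromagnetic \(\langle\hat{J}_{y}\hat{J}_{y}\rangle\) correlation in the ground state; the coupling values are stand-ins in place of the magnitudes of Eq. (93), which are not reproduced here.

```python
import numpy as np
from functools import reduce

# Spin-1/2 operators
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

def site_op(op, n, L):
    """Embed a single-site operator at site n of an L-site chain."""
    ops = [I2] * L
    ops[n] = op
    return reduce(np.kron, ops)

def H_eff(L, Kxx, Kyy, Kzz, hx, D):
    """Effective spin-1/2 Hamiltonian of Eq. (112), open boundary conditions."""
    H = np.zeros((2**L, 2**L), complex)
    for n in range(L - 1):
        for K, s in ((Kxx, sx), (Kyy, sy), (Kzz, sz)):
            H += K * site_op(s, n, L) @ site_op(s, n + 1, L)
        # DM term with D || y: -D (J_z^n J_x^{n+1} - J_x^n J_z^{n+1})
        H -= D * (site_op(sz, n, L) @ site_op(sx, n + 1, L)
                  - site_op(sx, n, L) @ site_op(sz, n + 1, L))
    for n in range(L):
        H += hx * site_op(sx, n, L)
    return H

L = 6                                   # stand-in couplings below
H = H_eff(L, Kxx=-1.0, Kyy=-1.2, Kzz=-1.0, hx=0.05, D=0.1)
evals, evecs = np.linalg.eigh(H)
gs = evecs[:, 0]
corr = (gs.conj() @ site_op(sy, 0, L) @ site_op(sy, 1, L) @ gs).real
print(f"E0 = {evals[0]:.4f}, <J_y^(0) J_y^(1)> = {corr:+.4f}")
```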
2308.02207
Ultrafast nonadiabatic phonon renormalization in photoexcited single-layer MoS$_2$
Comprehending nonequilibrium electron-phonon dynamics at the microscopic level and at short time scales is one of the main goals in condensed matter physics. Effective temperature models and time-dependent Boltzmann equations are standard techniques for exploring and understanding the nonequilibrium state and the corresponding scattering channels. However, these methods consider only the time evolution of the carrier occupation function, while the self-consistent phonon dressing at each time instant coming from the nonequilibrium population is ignored, which makes them less suitable for studying ultrafast phenomena where softening of the phonon modes plays an active role. Here, we combine ab-initio time-dependent Boltzmann equations and many-body phonon self-energy calculations to investigate the full momentum- and mode-resolved nonadiabatic phonon renormalization picture in the MoS$_2$ monolayer under nonequilibrium conditions. Our results show that the nonequilibrium state of photoexcited MoS$_2$ is governed by the multi-valley topology of the valence and conduction bands, which brings about characteristic anisotropic electron-phonon thermalization paths and the corresponding phonon renormalization of strongly-coupled modes around high-symmetry points of the Brillouin zone. As the carrier population is thermalized towards its equilibrium state, we track in time the evolution of the remarkable phonon anomalies induced by the nonequilibrium population and the overall enhancement of the phonon relaxation rates. This work provides potential guidelines to tailor the electron-phonon relaxation channels and control the phonon dynamics under extreme photoexcited conditions.
Nina Girotto, Fabio Caruso, Dino Novko
2023-08-04T08:48:59Z
http://arxiv.org/abs/2308.02207v1
# Ultrafast nonadiabatic phonon renormalization in photoexcited single-layer MoS\({}_{2}\)

###### Abstract

Comprehending nonequilibrium electron-phonon dynamics at the microscopic level and at short time scales is one of the main goals in condensed matter physics. Effective temperature models and time-dependent Boltzmann equations are standard techniques for exploring and understanding the nonequilibrium state and the corresponding scattering channels. However, these methods consider only the time evolution of the carrier occupation function, while the self-consistent phonon dressing at each time instant coming from the nonequilibrium population is ignored, which makes them less suitable for studying ultrafast phenomena where softening of the phonon modes plays an active role. Here, we combine _ab-initio_ time-dependent Boltzmann equations and many-body phonon self-energy calculations to investigate the full momentum- and mode-resolved nonadiabatic phonon renormalization picture in the MoS\({}_{2}\) monolayer under nonequilibrium conditions. Our results show that the nonequilibrium state of photoexcited MoS\({}_{2}\) is governed by the multi-valley topology of the valence and conduction bands, which brings about characteristic anisotropic electron-phonon thermalization paths and the corresponding phonon renormalization of strongly-coupled modes around high-symmetry points of the Brillouin zone. As the carrier population is thermalized towards its equilibrium state, we track in time the evolution of the remarkable phonon anomalies induced by the nonequilibrium population and the overall enhancement of the phonon relaxation rates. This work provides potential guidelines to tailor the electron-phonon relaxation channels and control the phonon dynamics under extreme photoexcited conditions.

ultrafast dynamics, phonon renormalization, electron-phonon coupling, transition metal dichalcogenides, density functional theory

## 1 Introduction

Recent advancements of ultrafast spectroscopy techniques have opened many avenues for controlling and understanding fundamental interactions in quantum materials [1, 2]. Due to their ultrashort duration, usually below the characteristic thermalization timescale, the corresponding laser sources are not only able to probe and disentangle the relaxation pathways of electrons, phonons, spin, and other degrees of freedom [3, 4], but can reveal new physical phenomena and phases of matter beyond thermodynamical equilibrium [1]. Namely, the photo-induced nonequilibrium carrier distribution and the accompanying modifications of the potential energy landscape were shown to promote nonlinear lattice control [5, 6, 7], elevate or quench an existing superconducting phase [8, 9, 10, 11, 12], alter the transition temperature [13, 14] or dimensionality [15, 16] of the known charge-density-wave (CDW) order or even induce new ordered phases of matter [17, 18, 19, 20, 21, 22, 23], as well as switch ferroelectric [24, 25, 26, 27, 28, 29, 30] and ferromagnetic [31] properties. Electron-phonon coupling (EPC) plays a crucial role in the aforesaid ultrafast phenomena [1, 32, 33, 34] and thus it is of utmost importance to master and comprehend the microscopic channels ruling phonon dynamics in extreme nonequilibrium conditions.
Complementary to the time-resolved photoemission methods that provide important access to the electron-hole thermalization process [3, 4, 35, 36, 37, 38] and electronic structure changes [39, 40], there are several ultrafast techniques, such as ultrafast electron diffraction scattering [41, 42, 43, 44, 45, 46, 47, 48], coherent phonon spectroscopy [49, 50, 51, 52, 53], and time-resolved Raman spectroscopy [54, 55, 56, 57], that can precisely track the phonon relaxation channels following the photoexcitation and the corresponding EPC strength [50, 51]. For instance, ultrafast electron diffraction has uncovered highly anisotropic non-thermal phonon relaxation in black phosphorus [46], and mapped momentum-resolved electron-phonon scattering channels and strengths in various transition metal dichalcogenides (TMDs) [41, 45, 47, 48]. Intriguingly, these methods are able to analyze photo-induced phonon frequency modifications and uncover the relevant microscopic processes, as was done, for example, for the zone-center strongly-coupled \(E_{2g}\) optical mode in graphite with coherent phonon [49] and time-resolved Raman spectroscopies [54], as well as for the amplitude CDW mode in TiSe\({}_{2}\) by means of ultrafast electron diffraction [45]. In combination with other time-resolved spectroscopy approaches, the latter technique made it possible to pinpoint the phonon modes that play an active role in unconventional superconductivity of FeSe thin films on SrTiO\({}_{3}\), and to extract the correlation-induced EPC constants [50]. Further, a recent study on graphite reported the dynamics of coherent vibrations of both zone-center and zone-edge phonon modes with the unprecedented energy and time resolution allowed by attosecond core-level spectroscopy [52]. In order to obtain a complete insight into the ultrafast phonon dynamics and unveil the corresponding electron-phonon scattering paths, the above studies need to be complemented with microscopic theoretical methods, preferably based on a quantitative description of the nonequilibrium state beyond simple phenomenological approaches. Probably the most widespread theoretical description of electron-lattice energy flow is the two-temperature model (TTM) and its extensions [58, 59, 60, 61, 62, 63, 64], where it is assumed that the electron and lattice subsystems are already thermalized. It is commonly used in its phenomenological form to supplement the experimental observations of the heat transfer [59, 60, 61, 65], and was further useful to address the hot phonon dynamics and phonon bottleneck in materials with highly anisotropic EPC, such as graphene [61, 64, 66], graphite [67], MgB\({}_{2}\)[68, 69], MoS\({}_{2}\)[70], and lead halide perovskites [71]. It was also used as a basis to study laser-induced energy renormalization of hot phonons in MgB\({}_{2}\) as a function of time [68]. However, the TTM is likely to fail in describing femtosecond lattice dynamics below the electron thermalization time, where full information on the energy-momentum phase space of the nonequilibrium state is required [64]. A considerable improvement in describing nonequilibrium scattering events is met with time-dependent Boltzmann equations (TDBE), especially with their first-principles implementations [46, 48, 72, 73, 74, 75]. When both electron-phonon and phonon-phonon scattering channels are included, the TDBE can accurately describe energy-, momentum-, and time-dependent modifications to the electron and phonon populations in both sub-picosecond and picosecond regimes [76].
Despite these strengths, common TDBE studies do not account for the self-consistent renormalization of phonon energies at each time instant coming from the updated carrier populations [64], and are, therefore, not fully suited for exploration of photo-induced soft phonon dynamics, structural phase transitions, ferroelectricity, and charge density waves. An alternative _ab-initio_ method to track carrier dynamics upon laser excitation is based on real-time time-dependent density functional theory (rt-TDDFT) and Ehrenfest dynamics, and, alongside the EPC, it allows many-body electron-electron and electron-hole interactions to be included [77, 78, 79, 80, 81, 82, 83]. In combination with molecular dynamics and real-space lattice distortions, the latter method can account for time-resolved self-consistent renormalization of phonon energies and EPC strengths [81, 82], providing microscopic information on laser-induced structural transitions [83] and enhanced superconducting properties [81, 82]. Since it relies on real-space distortions and supercell approaches to account for electron-lattice interactions, it is numerically challenging for rt-TDDFT to provide a full momentum-resolved analysis of phonon dynamics; in practice, such studies usually consider only a very few coherent optical phonons, such as the zone-center \(E_{2g}\) and zone-edge \(A_{1}^{\prime}\) modes in graphene [81], and the zone-center \(A_{1g}\) mode in MoS\({}_{2}\)[82]. On the other hand, the non-thermal renormalization of phonons in the whole Brillouin zone can be acquired from the adiabatic density functional and density functional perturbation theories by constraining the occupation of electronic states to a nonequilibrium distribution (i.e., from cDFT and cDFPT) [84, 85, 86], thus simulating laser-excited nonequilibrium states fixed in time, such as population inversion in semiconductors and semimetals. For instance, these methods were used to study photo-induced phonon frequency modifications in tellurium [84] and bismuth [85, 86], phase transitions in ferroelectrics [87] and MoTe\({}_{2}\)[88], as well as structural rippling of the hBN monolayer [89]. However, the cDFT and cDFPT approaches are time independent and lack information on carrier population dynamics and its temporal evolution towards thermal equilibrium caused by various scattering events. Here we combine the TDBE and many-body phonon self-energy calculations in order to obtain first-principles information on the time-dependent phonon renormalization process and electron-phonon scattering channels following laser excitation with full momentum and frequency resolution. With this methodology we investigate a photoexcited MoS\({}_{2}\) monolayer, a prototypical 2D semiconductor with exceptional electronic [90] and optoelectronic [91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107] properties for which vibrational, electronic, valley, spin, and other degrees of freedom play an active and important role. Thermalization of the carrier distribution function with band and momentum resolution, \(f_{n\mathbf{k}}\), is obtained as a function of time by means of the TDBE, where electron-phonon and phonon-phonon scatterings are included [48, 64, 74].
The distribution functions \(f_{n\mathbf{k}}\) acquired in this way are then utilized to construct the time-resolved phonon self-energy \(\pi_{\mathbf{q}\nu}(\omega;t)\) and the full nonadiabatic phonon spectral functions \(B_{\mathbf{q}\nu}(\omega;t)\), which enables analysis of phonon frequency and linewidth (i.e., relaxation rate) modifications. The characteristic multi-valley landscape of MoS\({}_{2}\) electronic states in momentum space permits only selective population dynamics and anisotropic electron-phonon scatterings. Namely, photo-holes in the \(\mathbf{k}=\Gamma\) and \(\mathbf{k}=\mathrm{K}\) valence valleys promote specifically \(\mathbf{q}=\Gamma\) and \(\mathbf{q}=\mathrm{K}\), while photo-electrons in the \(\mathbf{k}=\mathrm{K}\) and \(\mathbf{k}=\mathrm{Q}\) conduction valleys promote dominantly \(\mathbf{q}=\Gamma\) and \(\mathbf{q}=\mathrm{M}\) electron-phonon scatterings of optical and acoustic phonons. This in turn considerably influences the phonon frequencies and linewidths, and results in remarkable anisotropic nonequilibrium phonon softening and dynamical Kohn anomalies at the aforesaid phonon momenta. For instance, a large dynamical Kohn anomaly of the \(A_{1g}\) optical mode appears close to \(\mathbf{q}=\Gamma\), surpassing the strength of the corresponding phonon anomaly in the equilibrium state of doped MoS\({}_{2}\) samples [95, 96, 97]. Also, sizeable phonon softening is induced for the longitudinal acoustic (LA) phonon at \(\mathbf{q}=\mathrm{M}\), which is considered relevant for the appearance of superconductivity [98] and the CDW [99] in doped MoS\({}_{2}\). Importantly, we show that the overall phonon scattering rate is significantly increased in nonequilibrium, opening a possibility for enhancing the total EPC strength. These findings demonstrate that the photo-induced nonequilibrium state is a promising route for tailoring the vibrational properties of quantum matter, especially for MoS\({}_{2}\), where phonons play a primary role in the emergence of novel quantum phenomena, such as in exciton dynamics [100, 101, 102, 103] as well as in the formation of the Holstein polaron [104], CDW [99] and superconductivity [105, 106, 107, 98].

## Theoretical methods

The dynamics of the non-thermal electron-lattice system in the TDBE is described by modifications of the electron and phonon occupation functions \(f_{n\mathbf{k}}(t)\) and \(n_{\mathbf{q}\nu}(t)\), while electron and phonon energies, as well as the corresponding coupling functions, are unaltered and fixed to their equilibrium values. The time evolution of \(f_{n\mathbf{k}}(t)\) and \(n_{\mathbf{q}\nu}(t)\) is dictated by electron-phonon and phonon-phonon scattering processes and can be described with the following coupled integro-differential equations [64, 74]:

\[\partial_{t}f_{n\mathbf{k}}(t)=\Gamma_{n\mathbf{k}}^{\mathrm{ep}}[f_{n\mathbf{k}}(t),n_{\mathbf{q}\nu}(t)], \tag{1}\]
\[\partial_{t}n_{\mathbf{q}\nu}(t)=\Gamma_{\mathbf{q}\nu}^{\mathrm{pe}}[f_{n\mathbf{k}}(t),n_{\mathbf{q}\nu}(t)]+\Gamma_{\mathbf{q}\nu}^{\mathrm{pp}}[n_{\mathbf{q}\nu}(t)], \tag{2}\]

where \(\partial_{t}=\partial/\partial t\), while \(\Gamma_{n\mathbf{k}}\) and \(\Gamma_{\mathbf{q}\nu}\) denote the collision integrals for electrons and phonons, for which electron-phonon, phonon-electron, and anharmonic phonon-phonon scatterings are accounted for.
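To convey the structure of this explicit propagation (the occupations are stepped forward in time while the collision terms are re-evaluated from the instantaneous occupations, as detailed below), here is a deliberately minimal two-level caricature in Python. The golden-rule-like rates, the relaxation-time phonon decay and all parameter values are schematic stand-ins for the full momentum-resolved collision integrals of Eqs. (1)-(2).

```python
# Minimal caricature of the coupled TDBE (1)-(2): the occupations of an upper
# and a lower electronic level exchange quanta with one phonon mode, and the
# collision term is re-evaluated from the *current* occupations at each
# explicit Euler step (arbitrary units; the paper uses a 1 fs step).
g2, tau_pp, n_eq = 0.05, 200.0, 0.01   # stand-in coupling / anharmonic lifetime
f_hi, f_lo, n_ph = 0.6, 0.4, 0.01      # photoexcited initial occupations
dt, steps = 1.0, 4000

for step in range(steps):
    emit = f_hi * (1.0 - f_lo) * (n_ph + 1.0)    # phonon emission
    absb = f_lo * (1.0 - f_hi) * n_ph            # phonon absorption
    coll = g2 * (emit - absb)
    f_hi -= dt * coll                            # Eq. (1), schematic
    f_lo += dt * coll
    n_ph += dt * (coll - (n_ph - n_eq) / tau_pp) # Eq. (2): ph-e + anharmonic decay
    if step % 1000 == 0:
        print(f"t={step*dt:6.0f}: f_hi={f_hi:.3f}, n_ph={n_ph:.3f}")
```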
The corresponding expressions for the collision rates are derived from the standard first-order Fermi's golden rule, and their explicit forms can be found in Refs. [48, 64, 74]. The time dependence of the collision integrals is explicitly accounted for in the calculations by reevaluating \(\Gamma_{n\mathbf{k}}^{\text{ep}}\) and \(\Gamma_{\mathbf{q}\nu}^{\text{pe}}\) at each time step of the time propagation with new occupation functions. The phonon-phonon collision integral is treated with the relaxation-time approximation. The electron-electron scattering channel governs the thermalization of photoexcited carriers when \(f_{n\mathbf{k}}(t)\) deviates significantly from the equilibrium distribution. Here we consider exclusively an electronic excited state characterized by a weak deviation from a Fermi-Dirac function. In this regime, electron-electron scattering plays only a minor role in the carrier dynamics and it is therefore neglected. At thermal equilibrium, \(f_{n\mathbf{k}}\) and \(n_{\mathbf{q}\nu}\) are time independent and they coincide with the Fermi-Dirac and the Bose-Einstein occupations: \(f_{n\mathbf{k}}^{\text{FD}}=\left[e^{(\varepsilon_{n\mathbf{k}}-\varepsilon_{\text{F}})/k_{\text{B}}T}+1\right]^{-1}\), \(n_{\mathbf{q}\nu}^{\text{BE}}=\left[e^{\hbar\omega_{\mathbf{q}\nu}/k_{\text{B}}T}-1\right]^{-1}\). Here, \(\varepsilon_{\text{F}}\) is the Fermi energy, \(\varepsilon_{n\mathbf{k}}\) is the single-particle energy of a Bloch electron, and \(\hbar\omega_{\mathbf{q}\nu}\) the phonon energy. The initial photoexcited concentration of electrons is taken to be \(n=10^{14}\,\text{cm}^{-2}\). The corresponding initial photo-holes and photoelectrons are defined with two separate chemical potentials and a high carrier temperature, while keeping in mind the conservation of the carrier number. Namely, we define \(f_{n\mathbf{k}}(t=0)=f_{n\mathbf{k}}^{\text{FD}}(\mu_{\text{e/h}},T_{e}^{0})\), with \(\mu_{\text{e}}\) (\(\mu_{\text{h}}\)) being the electron (hole) chemical potential, \(T_{\text{e}}^{0}=2000\,\text{K}\), while the phonon temperature is set to \(T_{\text{p}}^{0}=100\,\text{K}\). Equations (1) and (2) are solved by time-stepping the derivatives with a small time step of \(1\,\text{fs}\) up to \(40\,\text{ps}\). In order to have full time-dependent information on the electron-phonon dynamics, an important step forward is to update the phonon frequencies coming from the modified electron occupation functions \(f_{n\mathbf{k}}(t)\). This can be calculated by means of the time-resolved phonon spectral function defined as

\[B_{\mathbf{q}\nu}(\omega;t)=-\frac{1}{\pi}\text{Im}\left[\frac{2\omega_{\mathbf{q}\nu}}{\omega^{2}-\omega_{\mathbf{q}\nu}^{2}-2\omega_{\mathbf{q}\nu}\overline{\pi}_{\mathbf{q}\nu}(\omega;t)}\right]. \tag{3}\]

The crucial ingredient to the above expression is the time-resolved NA phonon self-energy [108] defined as \(\overline{\pi}_{\mathbf{q}\nu}(\omega;t)=\pi_{\mathbf{q}\nu}(\omega;t)-\pi_{\mathbf{q}\nu}(0)\), where the adiabatic part at \(t\rightarrow-\infty\), i.e., \(\pi_{\mathbf{q}\nu}(0)\), is subtracted, and where

\[\pi_{\mathbf{q}\nu}(\omega;t)=\sum_{\mathbf{k}nm}\left|g_{\nu}^{nm}(\mathbf{k},\mathbf{q})\right|^{2}\frac{f_{n\mathbf{k}}(t)-f_{m\mathbf{k}+\mathbf{q}}(t)}{\omega+\varepsilon_{n\mathbf{k}}-\varepsilon_{m\mathbf{k}+\mathbf{q}}+i\eta}. \tag{4}\]

The electron-phonon matrix elements are denoted by \(g_{\nu}^{nm}(\mathbf{k},\mathbf{q})\) and \(\eta\) is an infinitesimal parameter.
In our approach, the electron occupation functions \(f_{n\mathbf{k}}(t)\) entering the phonon spectral function (3) and phonon self-energy (4) are no longer Fermi-Dirac distributions as in the standard thermal case [108], but are extracted from the TDBE, Eq. (1), and, therefore, represent nonequilibrium occupations and redistribution of charge carriers at each time instant after the laser excitation. The corresponding photo-induced renormalization of the phonon frequency and modifications to the phonon linewidth (relaxation rate) can be tracked as a function of time by means of the following expressions [108]

\[\Omega_{\mathbf{q}\nu}^{2}(t)=\omega_{\mathbf{q}\nu}^{2}+2\omega_{\mathbf{q}\nu}\text{Re}\,\overline{\pi}_{\mathbf{q}\nu}(\Omega_{\mathbf{q}\nu}(t);t), \tag{5}\]
\[\gamma_{\mathbf{q}\nu}(t)=-\text{Im}\,\overline{\pi}_{\mathbf{q}\nu}(\Omega_{\mathbf{q}\nu}(t);t). \tag{6}\]

A similar idea was adopted recently in Refs. [46, 48], where occupation functions as obtained from the TDBE were introduced into dynamic structure factors to complement ultrafast electron diffraction experiments. We also define the spectral representation of the phonon scattering rate, \(\gamma F(\omega)\), in order to quantify the modifications to the electron-phonon scattering channels relevant for phonon dynamics and to discuss possible enhancements of the total EPC strength upon the laser excitation,

\[\gamma F(\omega;t)=\sum_{\mathbf{q}\nu}\gamma_{\mathbf{q}\nu}(t)\delta\left(\omega-\Omega_{\mathbf{q}\nu}(t)\right), \tag{7}\]

where \(\delta(x)\) is the Dirac delta function. Note that the above spectral function is defined in a similar manner to the Eliashberg function or electron-phonon spectral function \(\alpha^{2}F(\omega)\)[109]. The cumulative scattering rate of phonons can then be written as

\[\gamma(\omega;t)=\int_{0}^{\omega}d\omega^{\prime}\gamma F(\omega^{\prime};t), \tag{8}\]

while the total phonon scattering rate is \(\gamma(\omega\rightarrow\infty;t)\). All of the above equations and the corresponding input parameters are calculated in this work by means of DFPT [110] and Wannier interpolation [111] of the EPC matrix elements \(g_{\nu}^{nm}\)[112]. We use Quantum ESPRESSO [113, 114, 115] for the DFT calculations, and for the EPC we use the EPW code [116, 117, 118]. All calculations are performed with the norm-conserving Perdew-Burke-Ernzerhof pseudopotential with a kinetic energy cutoff of 120 Ry. The lattice constant is set to the value of 3.188 Å, while the neighboring MoS\({}_{2}\) sheets are separated by 12.7 Å. The self-consistent electron density calculation is done on a \(20\times 20\times 1\) k-point grid and the phonon calculation on a \(6\times 6\times 1\) q-point grid. Both are done with the equilibrium electron occupation functions for pristine MoS\({}_{2}\), with the valence band occupied and the conduction band unoccupied. To interpolate electron-phonon quantities, we use 11 maximally localized Wannier functions [119] with the initial projections of d-orbitals on the Mo sites and p-orbitals on the S atom sites. All electronic structure parameters for the self-consistent cycle and the Wannierization match the ones used to solve the TDBE. The fine sampling of the Brillouin zone for the electron-phonon interpolation is done on a \(200\times 200\times 1\) grid. The fine q-point grid is always extracted among these 40000 points, whether it is a path in q-space or a full Brillouin zone calculation. Smearing in the EPW calculation is set to 40 meV.
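The essential numerics behind Eqs. (3)-(4) can be illustrated with a schematic Python sketch that evaluates the self-energy for a single phonon mode coupled to a toy 1D band, where `f` stands for the TDBE occupations at one time instant. All numbers (band, coupling \(g\), \(\eta\), \(\omega_{\mathbf{q}\nu}\)) are illustrative stand-ins, not the MoS\({}_{2}\) parameters of this work.

```python
import numpy as np

nk = 400
k = np.linspace(-np.pi, np.pi, nk, endpoint=False)
eps = -2.0 * np.cos(k)                    # toy band (arbitrary units)
mu, T = 0.0, 0.05
f = 1.0 / (np.exp((eps - mu) / T) + 1.0)  # replace with f_nk(t) from Eq. (1)

g2, eta, w_ph, q_idx = 0.05, 0.02, 0.5, nk // 4   # phonon at q = pi/2

def pi_q(w):
    """Eq. (4) for one branch: sum_k |g|^2 (f_k - f_{k+q}) /
    (w + e_k - e_{k+q} + i eta), normalized by the k-grid size."""
    eps_kq = np.roll(eps, -q_idx)         # e_{k+q} on the periodic grid
    f_kq = np.roll(f, -q_idx)
    return np.sum(g2 * (f - f_kq) / (w + eps - eps_kq + 1j * eta)) / nk

w = np.linspace(0.01, 1.2, 600)
pi_w = np.array([pi_q(x) for x in w])
pi_na = pi_w - pi_q(0.0).real             # subtract the adiabatic (static) part
B = -np.imag(2 * w_ph / (w**2 - w_ph**2 - 2 * w_ph * pi_na)) / np.pi  # Eq. (3)
print(f"peak shifted to w = {w[np.argmax(B)]:.3f} (bare w_ph = {w_ph})")
```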
## 3 Results and discussion

Figure 1 depicts the crystal structure of single-layer MoS\({}_{2}\), the corresponding Brillouin zone, and the electronic band structure along high-symmetry points. MoS\({}_{2}\) is a semiconducting TMD with a direct band gap at the K point of the Brillouin zone and an interesting multi-valley topology of the valence and conduction bands. The latter is considered to be instrumental for various physical phenomena in MoS\({}_{2}\), such as enhanced EPC of the \(A_{1g}\) phonon mode as observed with Raman spectroscopy [95], exciton-phonon coupling [102, 103], anisotropic electron-phonon scattering following laser excitation [48], and multi-valley superconductivity [105, 106, 107, 98]. As will be shown below, the rich phase space for electron-phonon scattering comes from the \(\Gamma\) and K valleys in the valence band and the K and Q valleys in the conduction band. Additionally, in Fig. 1(c) we show occupation functions \(f_{n\mathbf{k}}(t)\) for several time instants up to \(t=2\) ps as obtained from the solution of the TDBE (1)-(2). The initial hot distributions of photo-holes and photo-electrons (grey lines) thermalize to smaller effective electron temperatures (about 180 K) on different time scales. Namely, holes are almost thermalized at \(t=0.5\) ps, while it takes around 2 ps for excited electrons to equilibrate. The reason for this is the difference in the phase space of the valence and conduction bands, where the Q valley acts as a sort of bottleneck for electron-phonon scattering in the conduction band and slows down the process (see the accumulated charge for \(t=0.5\) ps that forms a non-Fermi-Dirac distribution). On the other hand, the obtained thermalization time of a nonequilibrium phonon distribution is somewhere between 5 and 10 ps [74]. Note that while at the initial step and after the thermalization time the populations of electrons and phonons are well described by quasi-equilibrium distribution functions with corresponding temperatures, the TDBE allows for large deviations from equilibrium in between, as is for instance the case for the time period between 0.1 and 1 ps for electrons, and between 0.1 and 5 ps for phonons. The impact of the nonequilibrium electron and hole distributions on phonons is shown in Fig. 2, where we report the full phonon spectral functions for several time delays. Considerable and time-varying nonequilibrium phonon renormalizations are observed for both acoustic and optical branches, especially around the high-symmetry points of the BZ. In addition, these renormalizations are accompanied by a remarkable enhancement of the phonon broadenings. A particularly strong modification of the phonon frequency is observed for the \(A_{1g}\) optical phonon at \(\mathbf{q}=\Gamma\), where a much larger dynamical Kohn anomaly is formed by excited carriers compared to the case of doped MoS\({}_{2}\) in the equilibrium state [96].

Figure 1: (a) Crystal structure of MoS\({}_{2}\) single layer. (b) Corresponding Brillouin zone and high-symmetry points. (c) Electronic band structure of MoS\({}_{2}\) with time evolution of the photoexcited occupation functions as obtained from the time-dependent Boltzmann equations.
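The effective electron temperatures quoted above can be estimated by fitting a Fermi-Dirac function to the band-resolved TDBE occupations. A minimal sketch of such a fit follows, using synthetic data as a stand-in for \(f_{n\mathbf{k}}(t)\); the helper names and the noise model are illustrative assumptions only.

```python
import numpy as np
from scipy.optimize import curve_fit

kB = 8.617333e-5  # eV/K

def fermi_dirac(eps, mu, T):
    return 1.0 / (np.exp((eps - mu) / (kB * T)) + 1.0)

# Synthetic stand-in for a (nearly) thermalized conduction-band distribution.
eps = np.linspace(-0.10, 0.30, 400)              # energies near the CBM (eV)
f_sim = fermi_dirac(eps, 0.02, 180.0) + 1e-3 * np.random.randn(eps.size)

(mu_fit, T_fit), _ = curve_fit(fermi_dirac, eps, f_sim, p0=(0.0, 300.0))
print(f"effective T_e ~ {T_fit:.0f} K")          # ~180 K once thermalized
```

A large residual of such a fit (as for the electrons between 0.1 and 1 ps) signals a genuinely non-Fermi-Dirac distribution, which is precisely where the full TDBE treatment matters.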
Kohn anomalies are distinct softenings of the phonon dispersion coming from singularities in the static phonon self-energy \(\pi_{\mathbf{q}\nu}(0)\), which in turn come from highly anisotropic electron-phonon matrix elements \(g_{\nu}^{nm}(\mathbf{k},\mathbf{q})\) as well as from the intense and anisotropic density of states of electron-hole pair excitations [108]. If the Kohn anomaly calculated in the nonadiabatic regime [i.e., with \(\pi_{\mathbf{q}\nu}(\omega)\)] is different from the static one, it is usually dubbed dynamical. The dynamical Kohn anomaly of optical phonons at \(\mathbf{q}=\Gamma\) was reported for various single-layer and bulk materials (such as graphene, hole-doped diamond, MgB\({}_{2}\), and doped TMDs) in thermal equilibrium [120, 121, 122, 76, 123]. Here we illustrate to what extent a photoexcited carrier distribution can provide a route to trigger and control the emergence of Kohn anomalies over transient timescales. The \(A_{1g}\) mode was shown to be significantly coupled to electrons once multiple valleys are occupied [95], as is the case here for photoexcited MoS\({}_{2}\). The same phonon mode is likewise strongly affected at the M point, while the energetically lower \(E_{2g}\) optical mode is softened and broadened at the K point. Also, both longitudinal and transverse acoustic modes are modified, especially around the M and K points. The softening of the LA mode at the M point particularly stands out: at time \(t=2\) ps its frequency is decreased by about 10 meV compared to the equilibrium value. Since the LA mode at \(\mathbf{q}=\) M is instrumental for phonon-mediated superconductivity [98] as well as for the CDW formation [99], these results suggest the possibility of tuning these ordered states by laser excitation. In the following we study the phase space arguments and analyze the electron-phonon scattering events out of equilibrium that lead to these remarkable phonon renormalizations and phonon linewidth enhancements. In the top panels of Figs. 3(a) and 3(b) we show the momentum-resolved occupation functions \(f_{n\mathbf{k}}(t)\) of the valence and conduction bands for \(t=0\) and 2 ps, respectively. The corresponding bottom panels show the contributions to the phonon dispersions and linewidths coming only from the electron-phonon scatterings in the valence and conduction bands. Both time frames are characterized by a similar phase space for the valence and conduction bands, i.e., depopulated \(\mathbf{k}=\Gamma\) and \(\mathbf{k}=\mathrm{K}\) valleys in the valence band and populated \(\mathbf{k}=\mathrm{K}\) and \(\mathbf{k}=\mathrm{Q}\) valleys in the conduction band. Such occupations promote \(\mathbf{q}=\Gamma\) and \(\mathbf{q}=\mathrm{K}\) electron-phonon scatterings in the former, and \(\mathbf{q}=\Gamma\), \(\mathbf{q}=\mathrm{K}\), and \(\mathbf{q}=\mathrm{M}\) electron-phonon scatterings in the latter band.

Figure 2: Phonon spectral functions \(B_{\nu}(\mathbf{q},\omega)\) of photoexcited MoS\({}_{2}\) shown along high-symmetry points in the first Brillouin zone and for several time frames after excitation: (a) \(t=0\) ps, (b) \(t=0.5\) ps, (c) \(t=1\) ps, and (d) \(t=2\) ps. The dashed lines are phonon dispersions for pristine equilibrium MoS\({}_{2}\) as obtained from the adiabatic DFPT. Significant photo-induced phonon broadening enhancements and dynamical Kohn anomalies can be observed around the high-symmetry points both for optical and acoustic branches.
Consequently, these intra- and inter-valley scatterings in the valence band lead to nonequilibrium renormalization of the optical \(A_{1g}\) and \(E_{2g}\) phonons at the \(\Gamma\) and \(\mathrm{K}\) points, respectively, as well as of the LA mode at the \(\mathrm{K}\) point. Similarly, the scattering channels in the conduction band produce phonon softenings of the optical \(A_{1g}\) phonon at the \(\Gamma\) and \(\mathrm{M}\) points, and of the LA mode at the \(\mathrm{K}\) and \(\mathrm{M}\) points. As the initially hot distribution of electrons and holes at \(t=0\,\mathrm{ps}\) is redistributed into sharper occupations at the top and bottom of the valence and conduction bands at \(t=2\,\mathrm{ps}\), some alterations of the phonon bands and broadenings become more pronounced while others are reduced. For instance, as the \(\mathrm{K}\) valley becomes more populated while the occupation in the \(\mathrm{Q}\) valley is reduced in the conduction band, the corresponding modifications of the \(\mathbf{q}=\mathrm{K}\) (\(\mathbf{q}=\mathrm{M}\)) phonons coming only from conduction-band scatterings are reduced (enhanced). A femtosecond electron diffraction experiment revealed that scatterings in multi-layer MoTe\({}_{2}\) are dominated by the zone-center \(A_{1g}\) and \(E_{2g}\) optical phonons, as well as by the LA phonons at the \(\mathrm{M}\) point of the BZ [47]. A momentum-resolved picture of the energy transfer between excited electrons and phonons in thin bulk-like films of WSe\({}_{2}\) reveals the importance of inter-valley scattering between two \(\mathrm{Q}\) points followed by emission of the acoustic M-point phonons [41, 93]. On the other hand, coherent phonon dynamics extracted from femtosecond pump-probe spectroscopy has shown that ultrafast intervalley scattering in monolayer MoSe\({}_{2}\) is dictated dominantly by the LA phonons, but at the \(\mathrm{K}\) point of the BZ [53]. Multi-layer films of MoTe\({}_{2}\) and WSe\({}_{2}\) are indirect-band-gap semiconductors, and are therefore characterized by a different scattering phase space for excited carriers compared to monolayer MoSe\({}_{2}\), which has a direct band gap at the K point.

Figure 3: Nonequilibrium electron-phonon scattering channels coming from valence- (VB) and conduction-band (CB) distributions. The VB- and CB-resolved contributions are presented for two time instants (a) \(t=0\,\mathrm{ps}\) and (b) \(t=2\,\mathrm{ps}\). Upper panels depict the momentum-resolved occupation functions \(f_{n\mathbf{k}}(t)\) for the CB and VB within the first Brillouin zone. White arrows represent the dominating inter-valley scattering channels. The lower panels show the VB- and CB-resolved contributions to the phonon dispersion and linewidth \(\gamma_{\mathbf{q}\nu}\). In addition, the overall result of the dynamical phonon renormalization, coming from both VB and CB nonequilibrium channels, is shown. The dashed lines again show the phonon dispersions for pristine equilibrium MoS\({}_{2}\) as obtained from the adiabatic DFPT.
For the corresponding single layers the situation is different: the K conduction valley has the lowest energy, and more dominant are the K \(\leftrightarrow\) K\({}^{\prime}\) scatterings (in both the conduction and valence bands) and the concomitant **q** = K phonon emission. This opens many possibilities to tailor ultrafast phonon scatterings, e.g., by strain, pressure, doping, and other techniques that can significantly alter the energy positions of the valleys [124]. To further demonstrate the implications of the nonequilibrium carrier distribution for phonon dynamics, we show in Fig. 4 the total phonon scattering rate (summed over all branches) as a function of time along the high-symmetry points, together with the spectral representation of the phonon scattering rate \(\gamma F(\omega)\) and the corresponding cumulative scattering rate \(\gamma(\omega)\) [see Eqs. (7) and (8)]. The momentum-resolved scattering rate \(\gamma_{\textbf{q}}\) reveals the important role of the **q** = \(\Gamma\) and **q** = K phonons and their dominance in the overall phonon relaxation dynamics. The importance of these specific phonon modes is in line with the ultrafast coherent phonon dynamics in single-layer MoSe\({}_{2}\) [53], while the anisotropic phonon response is in accordance with recent results obtained with ultrafast electron diffraction spectroscopy in the MoS\({}_{2}\) monolayer [48]. Certain discrepancies in timescales between the experiments and the theoretical results obtained here could be attributed to the screening of the matrix element induced by the substrate [48]. Note that a recent theoretical study based on nonequilibrium Green's functions also found that the \(\Gamma\)- and K-point phonons are dominantly involved in nonequilibrium carrier dynamics in monolayer MoS\({}_{2}\) [125]. Interestingly, it also shows that the optical phonons participate more in the relaxation dynamics compared to the acoustic modes. The obtained relaxation rates around the \(\Gamma\), K, and M points also increase significantly in time, with values at \(t\) = 2 ps being \(4-5\) times larger than the corresponding values at the initial time. The frequency-resolved scattering rate is presented in Fig. 4(b) via \(\gamma F(\omega)\), where it is shown that the nonequilibrium phonon scattering rate is larger for optical modes above 40 meV. The results for the cumulative rate \(\gamma(\omega)\) confirm the gradual increase of the total rate as a function of time. This result seems surprising at first, since one expects that the vibrational energy exchange rate decreases towards the thermalization instant, i.e., when the effective electronic temperature is decreased [41]. Nevertheless, in some cases (like nickel and platinum, as well as photo-doped MoS\({}_{2}\) as shown here), due to the specific Fermi surface and density of states, the opposite is possible [126]. In addition, we want to explore the difference in phonon dynamics between the photo-doped (i.e., photoexcited) scenario investigated here and the standard case of the electron-doped material via field-effect techniques or atom adsorption (i.e., via dopants).

Figure 4: (a) Total phonon scattering rate (summed over all branches) along the high-symmetry points and as a function of time. (b) Spectral representation of phonon scattering rates \(\gamma F(\omega)\) (phonon density of states weighted with phonon linewidth contributions) as a function of frequency \(\omega\), shown for several time delays. The right axis shows the cumulative scattering rate \(\gamma(\omega)\).
Figure 5 compares the total and mode-resolved phonon scattering rates along the high-symmetry points for photoexcited MoS\({}_{2}\) at \(t=2\) ps [panel (a)] and for MoS\({}_{2}\) doped with an electron carrier concentration of \(n_{\mathrm{eff}}=2\times 10^{14}\) cm\({}^{-2}\), which corresponds to the effective photo-induced carrier (both electron and hole) density at \(t=2\) ps [panel (b)]. For the electron-doped case, the active phase space for electron-phonon scatterings consists only of the K and Q valleys in the conduction band. This in turn predominantly promotes intra-valley **q** = \(\Gamma\) and inter-valley **q** = M scatterings, and consequently phonon renormalizations and broadenings around these symmetry points. Furthermore, in Fig. 5(c) we show the spectral representation of the phonon scattering rates \(\gamma F(\omega)\) and the cumulative scattering rate \(\gamma(\omega)\) for these two cases, where it is clear that photo-doping induces larger and richer phonon-electron scatterings, since the corresponding scattering phase space includes both photo-holes and photo-electrons, i.e., both valence and conduction valleys. Overall, the present results clearly demonstrate a notable increase of the phonon-electron scattering rate with photoexcitation, which points to the potential of enhancing the total EPC strength out of equilibrium [82] as well as of inducing and modifying the concomitant superconducting properties. Note that in order to have a more conclusive answer to the intriguing physical problem of photo-induced superconductivity, one would need to go beyond the present considerations and adopt a more rigorous time-dependent methodology of superconductivity, such as nonequilibrium Green's function techniques [17, 33, 127, 128]. As a final remark, we want to note that, besides the phonon dynamics studied here, the laser-induced nonequilibrium carrier and phonon distributions could potentially have a significant impact on the electron dynamics, such as ultrafast band renormalizations. Within the present theoretical framework, these ultrafast features could be captured by updating the electronic energies and linewidths with the electron self-energy due to EPC, which is in turn based on the TDBE results. This future direction might provide some interesting microscopic insights into the observed ultrafast band gap renormalizations in TMDs [39, 40] and other layered materials [129], as well as into the intriguing Floquet physics, such as phonon-driven Floquet matter [130] and ultrafast quasiparticle dressing by light [131].

## 4 Conclusions

We have explored the phonon relaxation pathways and the ensuing nonequilibrium phonon renormalization in the photoexcited MoS\({}_{2}\) monolayer by combining the _ab-initio_ time-dependent Boltzmann equations and phonon self-energy calculations. Our findings show how the population and depopulation of the conduction and valence valleys promote anisotropic electron-phonon scatterings and trigger strong Kohn anomalies, softenings, and increased relaxation rates for strongly-coupled optical and acoustic phonon modes. The nonequilibrium occupation of the electronic energy levels induces a strong Kohn anomaly of the \(E_{2g}\) mode close to the center and edge of the Brillouin zone, and strongly softens the longitudinal acoustic phonon at the M point.
In accordance with recent ultrafast experiments, our momentum-resolved analysis demonstrates that the **q** = \(\Gamma\) and **q** = K phonon modes play a key role in intra- and inter-valley scattering channels and are thus characterized by large relaxation rates. It is also shown that as the effective electron temperature decreases, i.e., as the photo-holes and photo-electrons are scattered towards the top and bottom of the valence and conduction valleys, the overall phonon relaxation rate is significantly enhanced. The richness of the phase space for the photo-carriers and the corresponding impact on phonon dynamics were further demonstrated in comparison with electron-doped MoS\({}_{2}\) in equilibrium, where, instead of K-point modes, M-point modes rule the phonon relaxation and renormalization, and where the phonon relaxation rates are less intense. In general, we believe that the present results and methodology might be instrumental for gaining crucial microscopic insights into photoexcited states in multi-valley systems, such as transition metal dichalcogenides, as well as for discovering new photo-induced ordered phases (e.g., superconductivity and charge density waves) and structural transformations in condensed matter.

Figure 5: (a) Total and mode-resolved nonequilibrium phonon scattering rates along the high-symmetry points for \(t=2\) ps. (b) Same as (a) but for the case of electron-doped equilibrium MoS\({}_{2}\) with the effective carrier concentration \(n_{\mathrm{eff}}=2\times 10^{14}\) cm\({}^{-2}\) that matches the photoexcited carrier (i.e., both photo-hole and photo-electron) density at \(t=2\) ps. (c) Spectral representation of phonon scattering rates \(\gamma F(\omega)\) as a function of frequency \(\omega\) and the cumulative scattering rate \(\gamma(\omega)\) for the photo-doped and doped cases presented in panels (a) and (b).

**Acknowledgement** Useful discussions with Jan Berges, Samuel Poncé, and Yiming Pan are gratefully acknowledged. We acknowledge financial support from the Croatian Science Foundation (Grant no. UIP-2019-04-6869) and from the European Regional Development Fund for the "Center of Excellence for Advanced Materials and Sensing Devices" (Grant No. KK.01.1.1.01.0001). F.C. acknowledges funding from the Deutsche Forschungsgemeinschaft Grants No. 443988403 and No. 499426961.
2307.03944
Enhanced Strong Coupling between Spin Ensemble and non-Hermitian Topological Edge States
Light-matter interaction is crucial to both understanding fundamental phenomena and developing versatile applications. Strong coupling, robustness, and controllability are the three most important aspects in realizing light-matter interactions. Topological and non-Hermitian photonics have provided frameworks for robustness and extensive control freedom, respectively. How to engineer the properties of the edge states, such as the photonic density of states and scattering parameters, by non-Hermitian engineering while ensuring topological protection has not been fully studied. Here we construct a parity-time-symmetric dimerized photonic lattice and generate complex-valued edge states via spontaneous PT-symmetry breaking. The enhanced strong coupling between the topological photonic edge mode and the magnon mode in a ferromagnetic spin ensemble is demonstrated. Our research reveals the subtle non-Hermitian topological edge states and provides strategies for realizing and engineering topological light-matter interactions.
Jie Qian, Jie Li, Shi-Yao Zhu, J. Q. You, Yi-Pu Wang
2023-07-08T09:51:04Z
http://arxiv.org/abs/2307.03944v1
# Enhanced Strong Coupling between Spin Ensemble and non-Hermitian Topological Edge States ###### Abstract Light-matter interaction is crucial to both understanding fundamental phenomena and developing versatile applications. Strong coupling, robustness, and controllability are the three most important aspects in realizing light-matter interactions. Topological and non-Hermitian photonics have provided frameworks for robustness and extensive control freedom, respectively. How to engineer the properties of the edge states, such as the photonic density of states and scattering parameters, by non-Hermitian engineering while ensuring topological protection has not been fully studied. Here we construct a parity-time-symmetric dimerized photonic lattice and generate complex-valued edge states via spontaneous PT-symmetry breaking. The enhanced strong coupling between the topological photonic edge mode and the magnon mode in a ferromagnetic spin ensemble is demonstrated. Our research reveals the subtle non-Hermitian topological edge states and provides strategies for realizing and engineering topological light-matter interactions. _Introduction.--_Topology has evolved as a powerful governing principle for predicting and harnessing the robust propagation of currents in various systems, including condensed matter systems [1; 2], acoustics [3; 4; 5], mechanics [6] and photonics [7; 8; 9; 10]. In topological photonics, a topological invariant ensures robust localization or propagation of electromagnetic waves [11; 12; 13]. On the other hand, non-Hermitian photonics [14; 15; 16] has also flourished in recent years, not only due to the ubiquitous non-Hermiticity in nature [17], but also because non-Hermiticity provides additional degrees of freedom to manipulate wave behaviors. In pursuit of simultaneous robustness and greater control flexibility, as well as out of interest in fundamental research, non-Hermitian topological physics [18; 19; 20] has received considerable attention and substantial development. Scientists investigate new paradigms [21; 22; 23; 24; 25] and explore potential applications in this interdisciplinary territory [26; 27; 28; 29]. A coupled system can have two forms of non-Hermiticity. One kind is generated when there is an asymmetric interaction between the sites, which leads to the non-Hermitian skin effect [21; 30]. The other type, which is caused by on-site loss, can lead to intriguing phenomena associated with parity-time (PT) symmetry. The PT-symmetric systems have received special attention because they were proved to have real spectra [32]. A series of studies has examined the topologically protected bound (defect) states in PT-symmetric topological systems [33; 34; 35; 36], where the defect states are real in the PT-symmetry unbroken phase. Moreover, a number of studies have investigated whether topological edge states exist in PT-symmetric systems [37; 38; 39; 40], concluding that since the edge state is not an eigenstate of the PT operator, an imaginary eigenvalue is obtained along with the spontaneous PT-symmetry breaking. In this case, a non-Hermitian edge state is obtained. We find that these imaginary edge states in the PT-symmetric system are actually topologically protected by the particle-hole symmetry [41].
In the one-dimensional (1D) non-Hermitian PT-symmetric Su-Schrieffer-Heeger (SSH) model [42], the chiral symmetry of the system is broken, losing its topological \(\mathbb{Z}\) invariant, but the particle-hole symmetry of the system is preserved and the system possesses a topological \(\mathbb{Z}_{2}\) invariant. In the presence of perturbations that do not violate the particle-hole symmetry, the real parts of the eigenvalues of the edge modes remain 0, reflecting the topologically protected characteristics. Under this situation, the topological photonic mode can be further manipulated by non-Hermiticity, which is highly desirable for investigating light-matter interactions [43, 44, 45]. To investigate the interaction between topological photonic modes and matter [46], we employ the photon-magnon coupling system [47, 48, 49, 50, 51, 52, 53, 54, 55], which has benefits including flexible tunability and experimental demonstration at room temperature. In this Letter, we use a set of lossy microwave resonators to build 1D non-Hermitian SSH photonic lattices. By coupling a ferromagnetic spin ensemble (FSE) to Hermitian and non-Hermitian SSH chains and monitoring the strength of the coupling between the photonic modes and the magnon mode in the FSE, we verify the topological edge states and bulk states. Non-Hermiticity introduced by the on-site alternating losses breaks the passive PT-symmetry of the zero-energy modes and results in two complex-valued edge states, which localize exponentially at the opposite ends of the chain [Fig. 1(b)]. Further, the photonic density of states (PDOS) at the boundaries is larger than that in the Hermitian case [Fig. 1(a)], which strengthens the coupling between the topological photonic mode and the magnon mode. Our experiment demonstrates the potential of manipulating the interaction between topological photonic states and matter by exploiting non-Hermiticity.

Figure 1: (a)(b) Schematic diagram of the Hermitian and non-Hermitian SSH chains. (c) Eigenmodes of the Hermitian SSH chain are plotted in the complex energy plane. The zero-energy modes exist in the band gap. (d)(f) Transmission spectra of the Hermitian and non-Hermitian SSH chains. (e) Eigenmodes of the non-Hermitian SSH chain are plotted in the complex energy plane. The alternated on-site losses result in spontaneous PT-symmetry breaking of the edge modes.

_System and model.--_The SSH chain consists of six unit cells [Figs. 1(a) and 1(b)], in which each unit contains two split-ring resonators (SRRs) fabricated on the F4B substrate [Fig. 2(a)]. In the experiment, the SRR exhibits a resonance at \(\omega_{0}/2\pi\)=5.62 GHz with an intrinsic loss of \(\gamma_{0}/2\pi\)=24.42 MHz, and the topological property is unaltered by the uniform losses along the chain [14]. Therefore, SRRs with the same loss can be used to build the Hermitian SSH model. Two neighboring SRRs are separated by staggered spacings to realize the intracell and intercell coupling rates, \(v\) and \(w\). Edge states appear in the finite chain when the bulk winding number of the Hermitian Hamiltonian is \(\mathcal{W}_{\text{h}}\)=1 [35].
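The winding-number criterion and the fate of the zero-energy modes under alternating losses can be illustrated with a minimal numerical sketch (not the authors' analysis code); the parameter values follow the text, frequencies are quoted in units of \(2\pi\times\)MHz, and \(\omega_{0}\) is set to zero so that edge modes sit at \(\mathrm{Re}(\widetilde{\omega})=0\).

```python
import numpy as np

v, w, N = 216.5, 341.0, 6           # intracell/intercell rates, unit cells

def winding(v, w, nk=4001):
    """Bulk winding number of h(k) = v + w e^{ik}; equals 1 for w > v."""
    k = np.linspace(-np.pi, np.pi, nk)
    phase = np.unwrap(np.angle(v + w * np.exp(1j * k)))
    return round((phase[-1] - phase[0]) / (2 * np.pi))

def chain(v, w, gA=0.0, gB=0.0, N=6):
    """2N-site SSH chain with on-site losses: -i*gA on odd sites (site 1 is
    Python index 0) and -i*gB on even sites, cf. Eqs. (1)-(2)."""
    onsite = np.array([-1j * (gA if s % 2 == 0 else gB) for s in range(2 * N)])
    hop = np.array([v if s % 2 == 0 else w for s in range(2 * N - 1)])
    return np.diag(onsite) + np.diag(hop, 1) + np.diag(hop, -1)

print(winding(v, w))                                  # -> 1: topological phase
print(np.round(sorted(np.linalg.eigvals(chain(v, w)).real, key=abs)[:2], 3))
# two (near-)zero edge modes of the Hermitian chain
ev = np.linalg.eigvals(chain(v, w, gA=36.0, gB=73.0))
edge = sorted(ev, key=lambda z: abs(z.real))[:2]
print(np.round(edge, 2))  # Re ~ 0 (particle-hole protected); Im parts split
```

Subtracting the mean loss \(\bar{\gamma}=(\gamma_{\text{A}}+\gamma_{\text{B}})/2\) from the imaginary parts of the eigenvalues recovers the passive-PT picture used later for the normalized eigenvalues in Fig. 4.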
The effective Hermitian SSH chain is designed in the topological non-trivial phase (\(v/2\pi\)=216.5 MHz, \(w/2\pi\)=341 MHz) and the Hamiltonian is written as [41]: \[\mathcal{H}_{\text{h}}/\hbar=\sum_{s=1}^{2N}(\omega_{0}-i\gamma_{0})\hat{a}_{s}^{\dagger}\hat{a}_{s}+\sum_{s=1}^{2N-2}(v\hat{a}_{s}\hat{a}_{s+1}^{\dagger}+w\hat{a}_{s+1}\hat{a}_{s+2}^{\dagger}), \tag{1}\] where \(\hat{a}_{s}^{\dagger}\) (\(\hat{a}_{s}\)) is the photon creation (annihilation) operator of the \(s\)-th SRR. The uniform losses of the units only cause all eigenvalues of the chain to share the same imaginary component \(i\gamma_{0}\). The eigenvalues of the coupled SRRs are plotted in the complex plane, as shown in Fig. 1(c). A pair of zero-energy modes (Re(\(\widetilde{\omega}_{m=6;7}\)) - \(\omega_{0}\)=0, green dots) appears in the band gap (gray area); these are the edge modes. The measured transmission spectrum of the chain is shown in Fig. 1(d), where the peaks correspond to the resonances of the eigenmodes. By simulating the field distribution at the edge mode frequency of \(\omega_{0}/2\pi\)=5.62 GHz, we find that the electromagnetic field tends to localize at both edges of the chain, as predicted by the wave function distribution [41]. In the low-frequency region, the measured spectrum [Fig. 1(d), solid line] displays an amplitude deviation from that in the high-frequency region. This is due to the residual dissipative coupling between SRRs [35, 41]. Then, on-site non-Hermiticity is added to the SSH chain. As depicted in Fig. 3(a), resistors \(R_{\text{A}}=0.1~{}\Omega\) and \(R_{\text{B}}=2.7~{}\Omega\) are integrated into the odd and even sites of the chain, respectively, which induce alternated losses of \(\gamma_{\text{A}}/2\pi\)=36 MHz and \(\gamma_{\rm B}/2\pi\)=73 MHz.

Figure 2: (a) Photograph of the Hermitian SSH chain. The chain contains six unit cells and twelve SRRs. The SRRs are labeled by a site index \(s\). The YIG sphere is placed on the top of the device. (b)(e) The mappings of the transmission spectra are plotted versus the electromagnet current and probe frequency when a YIG sphere is placed at site-1 and site-12, respectively. Strong coupling between the edge (bulk) mode and the magnon mode is indicated by the large (small) level repulsion. (c) The squares of the coupling strengths \(g_{m,s}^{2}\) (\(m\)=8) (blue dots) are extracted when the YIG sphere is positioned at the \(s\)-th site, where \(m\) is the eigenmode index. The gray bars represent the intensity distributions of the bulk state wave function \(|\varphi_{m,s}|^{2}\) (\(m\)=8). (d) The squares of the coupling strengths \(g_{m,s}^{2}\) (\(m\)=6,7) (red dots) are plotted versus site index \(s\). The intensity distributions of the edge state wave functions \(|\varphi_{m,s}|^{2}\) (m=6,7) are depicted by the gray bar.
The Hamiltonian becomes [41]: \[\begin{split}\mathcal{H}_{\rm{nh}}/\hbar&=\sum_{s\in X}(\omega_{0}-i\gamma_{\rm A})\hat{a}_{s}^{\dagger}\hat{a}_{s}+\sum_{s\in Y}(\omega_{0}-i\gamma_{\rm B})\hat{a}_{s}^{\dagger}\hat{a}_{s}\\ &\quad+\sum_{s=1}^{2N-2}(v\hat{a}_{s}\hat{a}_{s+1}^{\dagger}+w\hat{a}_{s+1}\hat{a}_{s+2}^{\dagger}),\end{split} \tag{2}\] where \(X\) (\(Y\)) denotes the set of odd (even) sites and the coupling terms are the same as in Eq. (1). By positioning the YIG sphere at individual sites of the Hermitian chain, the coupling strengths between the magnon mode and the bulk mode (\(m\)=8) at site-1 and site-12 are obtained as \(g_{\text{bulk},1}/2\pi=g_{\text{bulk},12}/2\pi=37\) MHz. \(g_{m,s}^{2}\) as a function of the site index \(s\) is illustrated in Figs. 2(c) and 2(d), denoted by blue (\(m\)=8) and red dots (\(m\)=6,7), respectively. The observed \(g_{m,s}^{2}\) are in good agreement with the intensity distributions of the wave functions \(|\varphi_{m,s}|^{2}\) (gray bar diagram). Then, we couple the spin ensemble to the non-Hermitian SSH chain, as shown in Fig. 3(a). Figures 3(b) and 3(e) display the mappings when the YIG sphere is placed at site-1 and site-12, respectively. The mappings show a similar amount of level repulsion, but reflect very different linewidths of the edge modes. Using Eq. (3), the loss of the edge mode at site-1 is fitted to be \(\gamma_{\text{edge},1}/2\pi=41.1\) MHz, which is the combined contribution of the two edge modes (\(m\)=6,7). The relation is \(\gamma_{\text{edge},s}=[\text{Im}(\widetilde{\omega}_{m=6})\cdot|\varphi_{6,s}|^{2}+\text{Im}(\widetilde{\omega}_{m=7})\cdot|\varphi_{7,s}|^{2}]/(|\varphi_{6,s}|^{2}+|\varphi_{7,s}|^{2})\), and the wave functions of the edge modes \(|\varphi_{m,s}|^{2}\) are displayed as the bar diagram in Fig. 3(d). Similarly, we get \(\gamma_{\text{edge},12}/2\pi\)=67.9 MHz. More interestingly, the coupling strengths between the magnon mode and the edge modes at site-1 and site-12 are observed to be \(g_{\text{edge},1}/2\pi=g_{\text{edge},12}/2\pi\)=112 MHz, which is larger than that in the Hermitian case (80 MHz). We plot \(g_{m,s}^{2}\) versus site index \(s\) for \(m\)=8 and \(m\)=6, 7 in Figs. 3(c) and 3(d), respectively.
It can be found that the bulk mode remains extended, similar to the Hermitian bulk mode. However, as shown in Fig. 3(d), the low-loss edge state (Edge\({}_{1}\)) accumulates at the left boundary, while the high-loss edge state (Edge\({}_{2}\)) accumulates at the right edge. The introduction of on-site loss thus does contribute to the increase of the PDOS at the boundaries. The mechanism can be interpreted as follows: when the PT-symmetry of the edge states is broken, the energy flow between adjacent resonators is partly blocked [57]. The low-loss (high-loss) edge state becomes more localized at the low-loss (high-loss) site, which, as shown in Figs. 1(b) and 3(a), corresponds to the left (right) boundary of the chain. It is also intriguing to detect the properties of the non-Hermitian topological edge states from spectroscopic measurements. In the PT-symmetry unbroken phase, the two topological edge states cannot be distinguished via spectroscopic measurement, as shown in Fig. 4(a). The absorptivity spectrum \(A_{1}\), measured when loading microwave power into port 1, coincides exactly with \(A_{2}\), measured when loading microwave power into port 2. In the symmetry-broken phase, the two topological edge states can be distinguished in the spectra, as shown in Fig. 4(b). The spectrum \(A_{1}\) exhibits the low-loss state with a relatively narrow bandwidth, while the spectrum \(A_{2}\) reveals the high-loss state. Finally, we discuss some additional characteristics of the exceptional points (EPs) in the non-Hermitian chain. The dimensionless eigenvalues are defined as \(\beta_{\text{real}}+i\beta_{\text{imag}}\), where \(\beta_{\text{real}}=\left[\text{Re}(\widetilde{\omega})-\omega_{\text{0}}\right]/(v+w)\), \(\beta_{\text{imag}}=\left[\left|\text{Im}(\widetilde{\omega})\right|-\bar{\gamma}\right]/(v+w)\), and \(\bar{\gamma}=\left(\gamma_{\text{A}}+\gamma_{\text{B}}\right)/2\). In a finite SSH chain, when increasing the non-Hermitian parameter \(\delta\gamma/2(v+w)\), a series of exceptional points is gradually reached [Figs. 4(c) and 4(d)]. It can be found that the EP of the edge modes is distinctly away from the EPs of the bulk modes. The edge modes experience spontaneous PT-symmetry breaking (SPTB) at EP\({}_{1}\), where \(\delta\gamma/2(v+w)\) is only about 0.02. With increasing chain length, the non-Hermiticity needed for SPTB of the edge modes decreases exponentially. In the case of \(N\gg 1\), any finite \(\delta\gamma\) will lead to SPTB of the edge modes [41]. However, SPTB of the bulk modes requires at least \(\delta\gamma/2\)\(>\)\(|w-v|\), corresponding to a much larger value of \(\delta\gamma/2(v+w)\) than 0.02. Additional analysis is provided in the supplementary materials. _Conclusion._--We have implemented the PT-symmetric non-Hermitian topological SSH model with microwave resonators and achieved control of the topological edge states using on-site non-Hermiticity. Through spontaneous PT-symmetry breaking, we obtain the non-Hermitian edge modes, whose photonic mode densities are enhanced at both ends of the chain. We realize strong coupling between the edge modes and the magnon mode in both the Hermitian and non-Hermitian cases. We experimentally verify that the coupling strength between the non-Hermitian edge states and the spin ensemble is larger than that in the Hermitian situation. Our research illustrates non-Hermiticity-engineered topological edge states and paves the way for studying strong coherent interaction between topological photonic modes and matter.
This work is supported by the National Key Research and Development Program of China (No. 2022YFA1405200), National Natural Science Foundation of China (No. 92265202, No. 11934010, No. U1801661, and No. 12174329), and the Fundamental Research Funds for the Central Universities (No. 2021FZZX001-02).

Figure 4: (a)(b) The measured absorptivity spectra for both Hermitian and non-Hermitian SSH chains, where \(\delta\gamma/2(w+v)=0\) and 0.034, respectively. A\({}_{1}\) (A\({}_{2}\)) is measured when loading signal to port 1 (2), as shown by the blue (red) line. (c)(d) The real and imaginary parts of normalized eigenvalues, which are plotted versus the non-Hermiticity \(\delta\gamma/2(w+v)\). The left inset figure between (a) and (b) shows the PT-symmetry spontaneous breaking point (EP\({}_{1}\)) of the edge states.
2304.03743
Love Numbers for Rotating Black Holes in Higher Dimensions
We compute the tidal Love numbers and static response coefficients associated to several rotating black holes in higher dimensions, including Myers-Perry black holes, black rings, and black strings. These coefficients exhibit a rich and complex structure as a function of the black hole parameters and multipoles. Our results agree in limiting cases with known and new expressions for various lower-dimensional black holes. In particular, we provide an alternative approach to the computation of the static response of Kerr black holes as a limiting case of the boosted black string.
Maria J. Rodriguez, Luca Santoni, Adam R. Solomon, Luis Fernando Temoche
2023-04-07T17:27:12Z
http://arxiv.org/abs/2304.03743v2
# Love Numbers for Rotating Black Holes in Higher Dimensions ###### Abstract We compute the tidal Love numbers and static response coefficients associated to several rotating black holes in higher dimensions, including Myers-Perry black holes, black rings, and black strings. These coefficients exhibit a rich and complex structure as a function of the black hole parameters and multipoles. Our results agree in limiting cases with known and new expressions for various lower-dimensional black holes. In particular, we provide an alternative approach to the computation of the static response of Kerr black holes as a limiting case of the boosted black string. ###### Contents * 1 Introduction * 2 Methods * 3 \(5D\) Myers-Perry Black Hole * 3.1 Background * 3.2 Klein-Gordon equation * 3.3 Static responses * 4 Black Ring * 4.1 Background * 4.2 Klein-Gordon equation * 4.3 Static responses * 5 Boosted Black Strings * 5.1 Non-rotating boosted black string in \(D\) dimensions * 5.2 Boosted Myers-Perry black string in 6 dimensions * 5.3 Boosted Myers-Perry black string in 5 dimensions * 6 Discussion * 7 Acknowledgements * A Some useful relations involving hypergeometric functions

## 1 Introduction

The tidal Love numbers are a set of quantities that characterize the conservative static response of a gravitating object under the influence of an external tidal field. As an intrinsic property of black holes and other compact objects, the Love numbers have been studied extensively in recent years due to the role they play in gravitational-wave astronomy: during the inspiral phase of a binary merger, the tidal coupling between the two compact objects can leave observable imprints on the waveform. How an object deforms tidally is related to what it is made of, and indeed measurements of neutron star Love numbers are expected to provide new constraints on the equation of state [1, 2, 3]. For black holes in four-dimensional general relativity, however, the Love numbers do not appear to say very much about internal structure, because they vanish regardless of the black hole's mass and spin, for both gravitational-wave polarizations [4, 5, 6, 7, 8, 9, 10]. The static response coefficients turn out in fact to be purely imaginary, corresponding to dissipative effects induced by the rotation of the black hole [11, 12, 13, 14]. The vanishing of the real part, i.e., of the Love numbers, in four dimensions instead reveals underlying hidden symmetries of general relativity [15, 16, 17, 18], indicating their potential as a tool to better understand gravitational dynamics. In dimensions greater than four, the structure of the Love numbers becomes more intricate, vanishing for specific multipoles, while other multipoles can display features such as running [7, 10]. This complexity reflects the rich geometry of higher-dimensional spacetimes and highlights the need for a deeper understanding of black hole physics in these contexts. Ultimately, the study of Love numbers in higher-dimensional black holes can shed light on the fundamental nature of gravity and the behavior of gravitational waves. In this paper, we aim to further map out the behavior of the induced static response and Love numbers in the zoo of higher-dimensional rotating black holes.1 We consider three classes of solutions, distinguished by the topology of their horizons: rotating Myers-Perry black holes in \(D=5\), black rings, and boosted black strings.
In these cases, in suitable regimes the relevant equation of motion can be put into hypergeometric form, allowing for explicit solutions from which the Love numbers can be read off. Footnote 1: See also Refs. [19, 20]. Strictly speaking, to determine the Love numbers we should solve the Einstein equations linearized about a black hole background, i.e., the equations of a massless spin-2 field on said background. It turns out that the qualitative features of the Love numbers, such as their multipolar and dimensional dependence, are often largely independent of the field's spin, and so we can simplify matters significantly by working instead with the Klein-Gordon equation for a massless scalar [7, 10, 14, 15].2 Therefore we should emphasize that in this paper we are calculating _scalar_ Love numbers. Footnote 2: There are however interesting exceptions, see, e.g., Refs. [10, 21]. This paper provides an in-depth analysis of the intersection of tidal deformations and higher-dimensional black holes in vacuum general relativity (GR). Section 2 gives an overview of the methods we use in each of the spacetime backgrounds we consider. Section 3 discusses these coefficients for Myers-Perry black holes in five spacetime dimensions. Section 4 discusses the static Love numbers in a black ring background, including separability of the Klein-Gordon equation, new problem-solving approaches, and limiting cases. The static Love numbers for black strings in the near-zone approximation are explored in section 5. Finally, the discussion in section 6 provides a summary of the key takeaways for the static Love numbers in higher-dimensional GR and speculations on the point-particle effective field theory approach. _Notation and conventions:_ We work in natural units \(c=\hbar=1\) (though we retain \(G\)) and use the mostly positive signature for the metric. In the text, we use the same symbol \(R\) to denote two different quantities: the radial component of the scalar field in sections 3 and 5 (see, e.g., eq. (3.12)), and one of the parameters of the black ring metric in section 4 (see eq. (4.1)). In addition, the same symbol \(\Delta\) is used in the expressions of the various metrics that we consider in this paper: the reader should refer to the definition that is given within each section for \(\Delta\).

## 2 Methods

As a proxy for linear tidal responses in gravity, this paper considers static solutions to the Klein-Gordon (KG) equation for a massless scalar \(\Psi(x)\) living on a \(D\)-dimensional stationary spacetime background \(g_{\mu\nu}\), \[\Box\Psi(x)=\frac{1}{\sqrt{-g}}\partial_{\mu}\left[\sqrt{-g}g^{\mu\nu}\partial_{\nu}\Psi(x)\right]=0. \tag{2.1}\] We will mostly be interested in solutions that have Cauchy horizons, such as black holes and black rings. To begin with, let us assume that the background has \(D-2\) Killing vectors, one timelike and the rest spacelike. We can therefore choose a coordinate basis \(x^{\mu}=(t,r,\theta,\phi_{k})\), where the index \(k\) runs over \(1,\cdots,D-3\), so the Killing vectors correspond to \(\partial_{t},\partial_{\phi_{k}}\). The invariance of eq. (2.1) under the isometries generated by these Killing vectors allows us to decompose the field as \[\Psi(x)=\exp\left(-i\omega t+i\sum_{k=1}^{D-3}m_{k}\phi_{k}\right)\Phi(r,\theta)\,. \tag{2.2}\] We will be interested in cases where the solution fully separates, \[\Phi(r,\theta)=R(r)Y(\theta). \tag{2.3}\]
This separability property famously holds for Kerr black holes in \(D=4\) and persists to higher-dimensional (Myers-Perry) black holes [22]. In both cases this is due to a "hidden" symmetry generated by one or more Killing tensors [23]. The situation is more subtle for backgrounds with more general horizon topologies, such as black rings and black strings. However, it turns out that for the static field configurations (\(\omega=0\)) that we are interested in, the KG equation does separate in these backgrounds [24]. The object of our study will therefore be the radial equation of motion for \(R(r)\) in a variety of higher-dimensional black hole backgrounds. The radial equation is a second-order ordinary differential equation, so has two independent solutions. The physical situation we have in mind is a black hole immersed in an external, static tidal field. At infinity, if the metric is asymptotically flat, the solutions to the radial equation can either grow as \(r^{\ell}\) or decay as \(r^{-\ell-n}\), where we have assumed that the solutions \(Y(\theta)=Y_{\ell}(\theta)\) are spherical harmonics on an \(n+1\)-sphere.3 The growing behavior would not be physical if infinity were truly infinity, but here we are taking it to be a proxy for the location of the tidal source. Decomposing the external tidal field in terms of a superposition of \(r^{\ell}\) modes sets one boundary condition for each mode. Footnote 3: We have left \(n\) general here since its relationship to \(D\) will depend on the horizon topology. For instance, \(n=D-3\) for a black hole, but \(n=D-4\) for a black string in the near region (see section 5). Meanwhile, at the (outer) horizon, one solution for \(R(r)\) typically diverges logarithmically, while another can be chosen to approach a constant. For black holes, where the horizon is a physical location, we must discard the former solution on physical grounds.4 This fixes the other boundary condition. At infinity, this physical solution will be an admixture of growing and decaying modes, Footnote 4: To be more precise, one should require any diffeomorphism-invariant physical quantity built from the solution for \(\Psi\) to be well-defined at the event horizon. \[R_{\ell}(r)\to R_{\ell,\infty}\left(r^{\ell}+\lambda_{\ell}r^{-\ell-n}\right), \tag{2.4}\] where \(R_{\ell,\infty}\) is a constant. The coefficient \(\lambda_{\ell}\) is interpreted as the static tidal response coefficient.5 The Love number is typically defined to be the conservative part of the response, which is obtained by taking the real part of \(\lambda_{\ell}\), while the imaginary piece corresponds to dissipative effects; a numerical illustration of this matching procedure is sketched below. Footnote 5: Both the growing and decaying terms are the leading pieces in an expansion in \(1/r\). This leads to an ambiguity in the definition of \(\lambda_{\ell}\) when \(\ell\) is an integer, as the \(\mathcal{O}(r^{-2\ell-n})\) correction to the growing mode has the same \(r^{-\ell-n}\) scaling as the leading static response. As we discuss in some more detail in section 4, this can be resolved by analytically continuing \(\ell\in\mathbb{R}\), where the source-response split is unambiguous, extracting \(\lambda_{\ell}\), and then taking \(\ell\to\mathbb{N}\).

## 3 \(5d\) Myers-Perry Black Hole

We now turn to the Klein-Gordon wave equation (2.1) for the five-dimensional Myers-Perry black hole [25].
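Before specializing, here is the promised numerical sketch of the source/response matching. It uses the simplest static, axisymmetric case treated below, where the regular-at-horizon radial solution in the horizon-adapted variable \(x\) is a Legendre function, \(P_{\hat{\ell}}(x)=F(-\hat{\ell},\hat{\ell}+1;1;(1-x)/2)\), and extracts the response by a least-squares split of the large-\(x\) behaviors with \(\hat{\ell}\) continued off the half-integers, in the spirit of footnote 5. The script, window of fit, and sample values of \(\hat{\ell}\) are purely illustrative assumptions.

```python
import numpy as np
from scipy.special import hyp2f1

def regular_solution(x, lhat):
    """Static m = 0 radial solution regular at the outer horizon x = 1:
    R(x) = P_lhat(x) = 2F1(-lhat, lhat+1; 1; (1-x)/2)."""
    return hyp2f1(-lhat, lhat + 1.0, 1.0, (1.0 - x) / 2.0)

def extract_response(lhat, x_fit=np.linspace(40.0, 80.0, 200)):
    """Least-squares split R ~ c_g x^l (1 + a x^-2 + ...) + c_d x^(-l-1);
    unambiguous for generic real lhat, where the falloffs are distinct."""
    R = regular_solution(x_fit, lhat)
    basis = np.stack([x_fit**lhat,              # growing (source) mode
                      x_fit**(lhat - 2.0),      # its first subleading term
                      x_fit**(-lhat - 1.0)],    # decaying (response) mode
                     axis=1)
    c_g, _, c_d = np.linalg.lstsq(basis, R, rcond=None)[0]
    return c_d / c_g

for lhat in (0.3, 1.0, 1.3, 2.0):
    print(lhat, extract_response(lhat))
# The fitted ratio vanishes at integer lhat (the solution is a polynomial),
# while near half-integer lhat the two falloffs collide and the split fails,
# signalling the logarithmic running discussed in section 3.3.
```

Note that the ratio above is quoted in the variable \(x\) rather than \(r\), so only its qualitative behavior should be compared with the closed-form results derived next.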
The equations of motion in this geometry (in arbitrary dimensions) are separable [26, 27, 28] due to "hidden symmetries" generated by a tower of Killing tensors [22, 23, 29, 30].6 Since the aim is to compute Love numbers, which are static responses, we will consider static scalar field configurations. Footnote 6: These symmetries are “hidden” in the sense that they act on the full phase space of the dynamics, rather than the configuration space (spacetime). This results in conserved quantities which are non-linear in the momenta, in contrast to the “explicit” symmetries generated by Killing vectors [23]. ### Background In many respects, the Myers-Perry solution describing spinning black holes in \(D=5\) possesses the same remarkable properties as the standard Kerr black hole in four dimensions. They are the unique asymptotically-flat vacuum solutions with spherical topology, parametrized by their mass and two angular momenta. The metric in Boyer-Lindquist coordinates is \[\mathrm{d}s^{2} =-\mathrm{d}t^{2}+\frac{\mu}{\Sigma}\left(\mathrm{d}t-a\sin^{2}\theta\,\mathrm{d}\phi-b\cos^{2}\theta\,\mathrm{d}\psi\right)^{2}+\frac{r^{2}\Sigma}{\Delta}\mathrm{d}r^{2}+\Sigma\mathrm{d}\theta^{2}\] \[\quad+(r^{2}+a^{2})\sin^{2}\theta\mathrm{d}\phi^{2}+(r^{2}+b^{2})\cos^{2}\theta\mathrm{d}\psi^{2}, \tag{3.1}\] where \[\Sigma =r^{2}+a^{2}\cos^{2}\theta+b^{2}\sin^{2}\theta, \tag{3.2}\] \[\Delta =(r^{2}+a^{2})(r^{2}+b^{2})-\mu r^{2}. \tag{3.3}\] The coordinate ranges are \(0<r\leq\infty\), \(0\leq\theta\leq\pi/2\) and \(0\leq\psi,\phi\leq 2\pi\). These coordinates generalize the Boyer-Lindquist coordinates \((t,r,\theta,\phi)\) in \(D=4\) by the addition of a second angular Killing direction \(\psi\). There are three free parameters: \(\mu\) is a mass parameter and \(a\) and \(b\) are rotation parameters, related to the physical mass \(M\) and angular momenta \(J_{\phi}\) and \(J_{\psi}\) by \[\mu=\frac{8GM}{3\pi},\qquad J_{\phi}=\frac{2M}{3}a,\qquad J_{\psi}=\frac{2M}{3}b. \tag{3.4}\] There are two horizons, located at the roots of \(\Delta=(r^{2}-r_{+}^{2})(r^{2}-r_{-}^{2})\), \[2r_{\pm}^{2}=\mu-a^{2}-b^{2}\pm\sqrt{(\mu-a^{2}-b^{2})^{2}-4a^{2}b^{2}}. \tag{3.5}\] The existence of the horizons requires \[\mu \geq a^{2}+b^{2}+2|ab| \tag{3.6}\] \[\implies M^{3} \geq\frac{27\pi}{32G}\left(J_{\phi}^{2}+J_{\psi}^{2}+2\left|J_{\phi}J_{\psi}\right|\right). \tag{3.7}\] ### Klein-Gordon equation To calculate the Klein-Gordon equation (2.1) for a static field \(\Psi(r,\theta,\phi,\psi)\) in this geometry, it is helpful to retain manifest covariance on the 3-sphere, since we are ultimately most interested in the radial dynamics. To this end, let us package the coordinates on \(S^{3}\) into \(\theta^{i}=(\theta,\phi,\psi)\) with \(i\in(1,2,3)\), and define the unit-sphere metric \(\gamma_{ij}\) by \(\mathrm{d}\Omega^{2}=\gamma_{ij}\mathrm{d}\theta^{i}\mathrm{d}\theta^{j}=\mathrm{d}\theta^{2}+\sin^{2}\theta\mathrm{d}\phi^{2}+\cos^{2}\theta\mathrm{d}\psi^{2}\). The KG equation requires the \(5D\) metric determinant and the metric inverse in the angular directions, \[\sqrt{-g}=r\Sigma\sqrt{\gamma},\qquad\Sigma g^{ij}=\gamma^{ij}+M^{ij}(r), \tag{3.8}\] where \(\gamma^{ij}\) is the inverse of \(\gamma_{ij}\), and the matrix \(M^{ij}(r)\) is only non-zero along the Killing directions \((\phi,\psi)\): \[M^{ij}\partial_{i}\partial_{j}=\frac{(b^{2}-a^{2})(b^{2}+r^{2})-b^{2}\mu}{\Delta}\partial_{\phi}^{2}+\frac{(a^{2}-b^{2})(a^{2}+r^{2})-a^{2}\mu}{\Delta}\partial_{\psi}^{2}-2\frac{ab\mu}{\Delta}\partial_{\phi}\partial_{\psi}. \tag{3.9}\]
With these simplifications it is straightforward to check that the Klein-Gordon equation reduces (after multiplying by \(\Sigma\)) to \[\frac{1}{r}\partial_{r}\left(\frac{\Delta}{r}\partial_{r}\right)\Psi+M^{ij}\partial_{i}\partial_{j}\Psi+\nabla_{S^{3}}^{2}\Psi=0, \tag{3.10}\] where \(\nabla^{2}_{S^{3}}\) is the Laplacian on the 3-sphere. This equation is fully separable, admitting solutions of the form \[\Psi(x) =\Theta(\theta^{i})R(r) \tag{3.11}\] \[=\mathrm{e}^{i(m_{\phi}\phi+m_{\psi}\psi)}Y(\theta)R(r). \tag{3.12}\] After dividing out by \(\Psi\), the first two terms in eq. (3.10) depend only on \(r\) and the last only on \(\theta\). This implies that \(\Theta(\theta^{i})\) is an eigenfunction of \(\nabla^{2}_{S^{3}}\), i.e., a hyperspherical harmonic.7 The separation constant is well-known to be \(-\ell(\ell+2)\) for integer \(\ell\geq 0\),8 Footnote 7: This is the higher-dimensional analogue of the fact that static perturbations of Kerr are expanded in (spin-weighted) spherical rather than spheroidal harmonics. Footnote 8: See, e.g., App. A.1 of Ref. [10] and references therein. \[\nabla^{2}_{S^{3}}\Theta(\theta^{i})=-\ell(\ell+2)\Theta(\theta^{i}), \tag{3.13}\] or equivalently, \[\sec\theta\csc\theta\partial_{\theta}\left(\sin\theta\cos\theta\partial_{\theta}Y\right)-(m_{\phi}^{2}\csc^{2}\theta+m_{\psi}^{2}\sec^{2}\theta)Y=-\ell(\ell+2)Y. \tag{3.14}\] With these simplifications we are only left with radial derivatives in eq. (3.10), so we can divide out \(\Theta(\theta^{i})\) to obtain the radial equation, \[\boxed{\frac{1}{r}\partial_{r}\left(\frac{\Delta}{r}\partial_{r}\right)R-\left(\ell(\ell+2)+M^{ij}m_{i}m_{j}\right)R=0,} \tag{3.15}\] where \(m_{i}\mathrm{d}\theta^{i}=m_{\phi}\mathrm{d}\phi+m_{\psi}\mathrm{d}\psi\). We can write this more explicitly as \[\frac{\Delta}{r}\partial_{r}\left(\frac{\Delta}{r}\partial_{r}R\right)+\left[(am_{\psi}+bm_{\phi})^{2}\mu+(a^{2}-b^{2})(m_{\phi}^{2}(b^{2}+r^{2})-m_{\psi}^{2}(a^{2}+r^{2}))\right]R=\ell(\ell+2)\Delta R. \tag{3.16}\] Herein we will replace \(\ell\) with \[\hat{\ell}\equiv\frac{\ell}{D-3}=\frac{\ell}{2}. \tag{3.17}\] This will turn out to be convenient because \(\hat{\ell}\) plays a role analogous to \(\ell\) in \(D=4\)[7, 10]. The radial equation (3.15) has five regular singular points, but two pairs of these (at the inner and outer horizons) are degenerate. By changing variables from \(r\) to \(r^{2}\) we can reduce the number of regular singular points to three, which guarantees that eq. (3.15) can be solved in terms of hypergeometric functions. Concretely, let us define the radial variable \(x\) by \[r^{2}=\frac{(r_{+}^{2}-r_{-}^{2})x+r_{+}^{2}+r_{-}^{2}}{2}. \tag{3.18}\] The inner and outer horizons are located at \(x=-1\) and \(x=+1\), respectively. In terms of \(x\) we have \[\Delta=\frac{1}{4}\left(r_{+}^{2}-r_{-}^{2}\right)^{2}(x^{2}-1),\quad\frac{1}{r}\partial_{r}=\frac{4}{r_{+}^{2}-r_{-}^{2}}\partial_{x}, \tag{3.19}\] so that the radial equation becomes \[\partial_{x}\left[(x^{2}-1)\partial_{x}R\right]-\left[\hat{\ell}(\hat{\ell}+1)+\frac{1}{4}M^{ij}m_{i}m_{j}\right]R=0.
\tag{3.20}\] In order to solve the radial equation and read off the Love numbers, it is convenient to change the basis of \(m_{i}\) to \((m_{L},m_{R})\) defined by \[m_{\phi}=m_{R}+m_{L},\quad m_{\psi}=m_{R}-m_{L}, \tag{3.21}\] and then to rescale each of these as9 Footnote 9: These prefactors can be expressed in terms of thermodynamic quantities: the angular velocities at the horizon \(\Omega_{L,R}\) and the surface gravity at the outer horizon \(\kappa_{+}\), \[\tilde{m}_{L}=\frac{a-b}{r_{+}+r_{-}}\frac{m_{L}}{2},\quad\tilde{m}_{R}=\frac{ a+b}{r_{+}-r_{-}}\frac{m_{R}}{2}. \tag{3.22}\] The term in eq. (3.20) involving \(m_{i}\) factorizes into poles at the horizons \(x=\pm 1\), \[-\frac{1}{4}M^{ij}m_{i}m_{j}=2\left(\frac{(\tilde{m}_{R}+\tilde{m}_{L})^{2}}{x -1}-\frac{(\tilde{m}_{L}-\tilde{m}_{R})^{2}}{x+1}\right), \tag{3.23}\] so the Klein-Gordon equation takes the form \[\boxed{\partial_{x}\left[(x^{2}-1)\partial_{x}R\right]+2\left(\frac{\left( \tilde{m}_{R}+\tilde{m}_{L}\right)^{2}}{x-1}-\frac{\left(\tilde{m}_{L}-\tilde {m}_{R}\right)^{2}}{x+1}\right)R=\hat{\ell}(\hat{\ell}+1)R.} \tag{3.24}\] ### Static responses The static Klein-Gordon equation (3.24) has three regular singular points--the inner and outer horizons and infinity--and so admits hypergeometric solutions. The simplest solutions, which do not depend on the Killing directions, \(m_{\phi}=m_{\psi}=0\), are in fact Legendre polynomials, \[R=c_{1}P_{\hat{\ell}}(x)+c_{2}Q_{\hat{\ell}}(x). \tag{3.25}\] To rewrite eq. (3.24) in hypergeometric form we transform the radial variable, \[z\equiv\frac{2}{1+x}=\frac{r_{+}^{2}-r_{-}^{2}}{r^{2}-r_{-}^{2}}, \tag{3.26}\] and perform a field redefinition, \[R(z)=z^{\hat{\ell}+1}(1-z)^{i(\tilde{m}_{L}+\tilde{m}_{R})}u(z), \tag{3.27}\] so that we obtain the standard hypergeometric equation, \[z(1-z)u^{\prime\prime}(z)+\left[\mathfrak{c}-\left(\mathfrak{a}+\mathfrak{b} +1\right)z\right]u^{\prime}(z)-\mathfrak{a}\,\mathfrak{b}\,u(z)=0 \tag{3.28}\] with \[\mathfrak{a}=1+\hat{\ell}+2i\tilde{m}_{L},\quad\mathfrak{b}=1+\hat{\ell}+2i \tilde{m}_{R},\quad\mathfrak{c}=\ell+2. \tag{3.29}\] These satisfy \[\mathfrak{a}+\mathfrak{b}-\mathfrak{c}=2i\left(\tilde{m}_{L}+\tilde{m}_{R}\right). \tag{3.30}\] We summarize some of the salient features of the hypergeometric equation and its solutions in appendix A. Let us assume that \(\tilde{m}_{L}\) and \(\tilde{m}_{R}\) are both non-vanishing.10 Then \(\mathfrak{c}\) is an integer but none of \(\mathfrak{a}\), \(\mathfrak{b}\), and \(\mathfrak{a}+\mathfrak{b}-\mathfrak{c}\) are, and we can choose a basis of solutions to be [31] Footnote 10: If \(a=\pm b\) then one of these vanishes, and in the Schwarzschild–Tangherlini case (\(a=b=0\)) both vanish. In either of these cases, a different basis of hypergeometric solutions needs to be selected, as discussed in detail in appendix A. However it turns out that the solution which is regular at the horizon in each of these cases can be obtained as a limit of the general solution. \[u_{1}(z) =F(\mathfrak{a},\mathfrak{b};\mathfrak{c};z), \tag{3.31}\] \[u_{2}(z) =F(\mathfrak{a},\mathfrak{b};1+\mathfrak{a}+\mathfrak{b}- \mathfrak{c};1-z). \tag{3.32}\] At the outer horizon \(z=1\), \(u_{1}(z)\) blows up while \(u_{2}(z)\) is regular, so we will focus on \(u_{2}(z)\) as the physical solution. Now we want to expand \(u_{2}(z)\) around infinity (\(z=0\)). For the parameters we have chosen, the following identity holds:11 Footnote 11: See eqs. (A.6) and (A.7). 
\[u_{2}(z) =F(\mathfrak{a},\mathfrak{b};1+\mathfrak{a}+\mathfrak{b}-\mathfrak{c};1-z)\] \[=F(\mathfrak{a},\mathfrak{b};\mathfrak{c};z)\ln z-\sum_{n=1}^{\mathfrak{c}-1}\frac{(\mathfrak{c}-1)!(n-1)!}{(\mathfrak{c}-n-1)!(\mathfrak{a}-n)_{n}(\mathfrak{b}-n)_{n}}(-z)^{-n}\] \[\quad+\sum_{n=0}^{\infty}\frac{(\mathfrak{a})_{n}(\mathfrak{b})_{n}}{n!(\mathfrak{c})_{n}}\left[\psi(\mathfrak{a}+n)+\psi(\mathfrak{b}+n)-\psi(1+n)-\psi(\mathfrak{c}+n)\right]z^{n}. \tag{3.33}\] Here \((a)_{k}=\Gamma(a+k)/\Gamma(a)\) is the Pochhammer symbol, and \(\psi(z)=\partial_{z}\ln\Gamma(z)\) is the digamma function. As \(z\to 0\) this is dominated by the terms in the top line, in particular the log and the term in the sum with \(n=\mathfrak{c}-1=\ell+1\): \[u_{2}(z)\to\ln z-\frac{\ell!(\ell+1)!}{(\mathfrak{a}-\ell-1)_{\ell+1}(\mathfrak{b}-\ell-1)_{\ell+1}}(-z)^{-(\ell+1)}. \tag{3.34}\] The first term corresponds to the decaying \(r^{-\ell-2}\) falloff in \(R(r)\), and the second to the growing \(r^{\ell}\) falloff, so the static response is given by the ratio of the first to the second coefficient: \[\boxed{\lambda_{\ell}=2(-1)^{\ell}\frac{(-\hat{\ell}+2i\tilde{m}_{L})_{\ell+1}(-\hat{\ell}+2i\tilde{m}_{R})_{\ell+1}}{\ell!(\ell+1)!}\ln\left(\frac{r_{0}}{r}\right).} \tag{3.35}\] As a check, we can compare this to the non-spinning Schwarzschild-Tangherlini metric, which is the limit \(a=b=0\) (implying \(\tilde{m}_{L}=\tilde{m}_{R}=0\)) of the Myers-Perry solution. The induced response in this spacetime, for general \(D\), is known to be zero for integer \(\hat{\ell}=\ell/(D-3)\) and to run logarithmically for half-integer \(\hat{\ell}\)[7, 10]. In \(D=5\) these are the only two options. We can see this behavior from the expression (3.35) by inspecting the Pochhammer symbols in this limit, \[(-\hat{\ell})_{\ell+1}=\frac{\Gamma(\hat{\ell}+1)}{\Gamma(-\hat{\ell})}=\begin{cases}0,&\hat{\ell}\text{ integer},\\ (-1)^{(\ell+1)/2}\frac{(\ell!!)^{2}}{2^{\ell+1}},&\hat{\ell}\text{ half-integer}.\end{cases} \tag{3.36}\] For integer \(\hat{\ell}\), \(1/\Gamma(-\hat{\ell})=0\), so that we recover the vanishing of the Love numbers. For half-integer \(\hat{\ell}\), eq. (3.35) agrees with eq. (4.21) of Ref. [10].
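Both limits are easy to check numerically. The following sketch (our illustration, not part of the original derivation; it relies only on mpmath's conventions for the gamma function and the rising factorial) evaluates the coefficient of the logarithm in eq. (3.35) in the Schwarzschild-Tangherlini limit \(\tilde{m}_{L}=\tilde{m}_{R}=0\) and compares it against the expected behavior from eq. (3.36):

```python
import mpmath as mp

def mp_response_coeff(ell, mtL, mtR):
    """Coefficient of ln(r0/r) in eq. (3.35); lhat = ell/2 in D = 5."""
    lhat = mp.mpf(ell) / 2
    poch = mp.rf(-lhat + 2j*mtL, ell + 1) * mp.rf(-lhat + 2j*mtR, ell + 1)
    return 2 * (-1)**ell * poch / (mp.factorial(ell) * mp.factorial(ell + 1))

for ell in range(1, 6):
    val = mp_response_coeff(ell, 0, 0)
    # eq. (3.36): zero for integer lhat (even ell), (ell!!)^2 / 2^(ell+1) otherwise
    ref = 0 if ell % 2 == 0 else \
        2 * (-1)**ell * (mp.fac2(ell)**2 / 2**(ell + 1))**2 \
        / (mp.factorial(ell) * mp.factorial(ell + 1))
    print(ell, val, ref)   # vanishing for even ell, logarithmic running for odd ell
```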
## 4 Black Ring ### Background In this section, we compute the Love numbers for spinning black rings in \(D=5\). To this end we first review some of the properties of black rings relevant to this paper. The black ring is a solution of vacuum Einstein's equations in five spacetime dimensions [32]. In contrast with the \(5D\) Myers-Perry black hole, whose horizon is topologically a 3-sphere, the black ring represents a spinning, ring-shaped object with an event horizon that is topologically an \(S^{1}\times S^{2}\). The black ring solution has several interesting properties, such as the existence of an ergosphere outside the horizon where objects can be dragged along with the black ring rotation. The current literature on the black ring spacetime is reviewed in Ref. [33], and the corresponding geometry has been given in related forms in Ref. [32]. In this paper we shall work primarily with the metric in the \((r,\theta)\) coordinates introduced in Ref. [33] and parameters \((r_{0},\sigma)\), which will correspond respectively to the mass and spin parameters. The solution is given by \[\begin{split}\mathrm{d}s^{2}=&-\frac{\hat{f}}{\hat{g}}\left(\mathrm{d}t-r_{0}\sinh\sigma\cosh\sigma\sqrt{\frac{R+r_{0}\cosh^{2}\sigma}{R-r_{0}\cosh^{2}\sigma}}\,\frac{\frac{r}{R}-1}{r\hat{f}}\,R\;\mathrm{d}\psi\right)^{2}\\ &+\frac{\hat{g}}{\left(1+\frac{r\cos\theta}{R}\right)^{2}}\left[\frac{f}{\hat{f}}\left(1-\frac{r^{2}}{R^{2}}\right)R^{2}\mathrm{d}\psi^{2}+\frac{\mathrm{d}r^{2}}{(1-\frac{r^{2}}{R^{2}})f}+\frac{r^{2}}{g}\,\mathrm{d}\theta^{2}+\frac{g}{\hat{g}}\,r^{2}\sin^{2}\theta\,\mathrm{d}\phi^{2}\right],\end{split} \tag{4.1}\] where \[f=1-\frac{r_{0}}{r}\,,\qquad\hat{f}=1-\frac{r_{0}\cosh^{2}\sigma}{r}\,, \tag{4.2}\] and \[g=1+\frac{r_{0}}{R}\cos\theta\,,\qquad\hat{g}=1+\frac{r_{0}\cosh^{2}\sigma}{R}\cos\theta\,. \tag{4.3}\] The coordinates vary within the ranges \(0\leq r\leq R\), \(0<\theta<\pi\) and \(0\leq\psi,\phi\leq 2\pi\), while the parameters are restricted to the range \[0<r_{0}\leq r_{0}\cosh^{2}\sigma<R. \tag{4.4}\] In these coordinates, \(R\) has dimensions of length, and for thin large rings it corresponds roughly to the radius of the ring \(S^{1}\) circle. In order to avoid conical singularities the parameters \((r_{0},\sigma)\) have to be related by \(\cosh^{2}\sigma=2/(1+(r_{0}/R)^{2})\). Fixing these values leaves only two independent parameters in the solution, \(R\) and \(r_{0}\). This is to be expected on physical grounds: given the mass and radius of a ring, the angular momentum must be tuned so that the centrifugal force balances the tension and self-attraction of the ring. This results in only two remaining free parameters. It is easy to see that the solution has a regular outer horizon at \(r=r_{0}\). In addition, there is an inner horizon at \(r=0\) and a ring-shaped ergosurface present at \(r=r_{0}\cosh^{2}\sigma\). One advantage of choosing the specific \((r,\theta)\) coordinates in the study of black rings is that the limit of a black string becomes straightforward. Consider the limit \[r,\,r_{0},\,r_{0}\cosh^{2}\sigma\ll R \tag{4.5}\] in which \(g\), \(\hat{g}\approx 1\), and redefine \(\psi=z/R\). Then the metric in equation (4.1) becomes exactly the metric of a boosted black string that extends along the \(z\) direction with a boost parameter \(\sigma\), and the horizon is located at \(r=r_{0}\). In order to avoid conical singularities, \(\psi\) must be identified with a period of \(2\pi\), which results in the periodic identification \(z\sim z+2\pi R\) of the string coordinate. Consequently, the limit in equation (4.5) corresponds to the scenario where the ring's radius \(R\) is significantly larger than its thickness \(r_{0}\), with a focus on the region near the ring where \(r\sim r_{0}\). This precise definition clarifies the heuristic construction of a black ring as a boosted black string that has been bent into a circular shape. It also enables an approximate interpretation of \(r_{0},R\) and \(\sigma\). The parameter \(r_{0}\) is a measure of the radius of the horizon \(S^{2}\), while \(R\) measures the ring's \(S^{1}\) radius. Hence, smaller values of \(r_{0}/R\) correspond to thinner rings. Additionally, the boost parameter \(\sigma\) provides a measure of the ring's rotational speed, which can be approximately identified with the local boost velocity \(v=\tanh\sigma\). ### Klein-Gordon equation Often, when a spacetime possesses a Killing tensor, it is possible to find multiplicatively separable solutions of the KG equation. In the case of black rings, the equation seems not to be separable [34].
Only two specific scenarios allow for separability. The first, considered in this section, is that of a static, time-independent perturbation, \(\omega=0\) [24]. The other is the infinite-radius limit \(R\to\infty\) of the black ring. This limit results in a boosted black string, which we will analyze in the following section. To analyze the KG equation for a massless scalar on the black ring background it is convenient to adopt the \((r,\theta)\) coordinates, defined in terms of the more commonly employed \((x,y)\) coordinates. As we will see, the \((r,\theta)\) coordinates employed here are more convenient for exhibiting the separability of the wave equation and for taking the straight-string limit \(R\rightarrow\infty\). Let us try the following ansatz: \[\Psi(t,r,\theta,\phi,z)=\mathrm{e}^{-i\omega t+im\phi+i\nu z}\,\left(1+\frac{r}{R}\cos\theta\right)\,\Phi(r,\theta). \tag{4.6}\] It is worth noting that \(\nu\) need not be an integer, since \(z\) has periodicity \(2\pi R\) rather than \(2\pi\); below we will account for this. As found in Ref. [35], the classical wave equation for \(\Phi(r,\theta)\) becomes \[\begin{split}&\partial_{r}\left[r\left(r-r_{0}\right)\left(1-\frac{r^{2}}{R^{2}}\right)\partial_{r}\,\Phi\right]+\frac{1}{\sin\theta}\,\partial_{\theta}\left[\left(1+\frac{r_{0}}{R}\cos\theta\right)\sin\theta\,\partial_{\theta}\,\Phi\right]\\ &\qquad+\frac{r^{2}}{(r_{0}-r)\left(1-\frac{r^{2}}{R^{2}}\right)\left(r-r_{0}c_{\sigma}^{2}\right)}\left[\omega\,r_{0}c_{\sigma}s_{\sigma}\left(1-\frac{r}{R}\right)\sqrt{\frac{1+\frac{r_{0}}{R}c_{\sigma}^{2}}{1-\frac{r_{0}}{R}c_{\sigma}^{2}}}-\nu(r-r_{0}c_{\sigma}^{2})\right]^{2}\Phi\\ &\qquad+\omega^{2}\frac{(1+\frac{r_{0}}{R}c_{\sigma}^{2}\cos\theta)^{2}r^{3}}{(1+\frac{r}{R}\cos\theta)^{2}(r-r_{0}\,c_{\sigma}^{2})}\,\Phi-m^{2}\frac{(1+\frac{r_{0}}{R}c_{\sigma}^{2}\cos\theta)}{(1+\frac{r_{0}}{R}\cos\theta)\sin^{2}\theta}\,\Phi+(f_{r}+f_{\theta})\,\Phi=0,\end{split} \tag{4.7}\] where \(f_{r}=-(2r-r_{0})\,r/R^{2}\) and \(f_{\theta}=-r_{0}\,\cos\theta/R\), and where, for simplicity, we defined \(c_{X}=\cosh X\) and \(s_{X}=\sinh X\). Unfortunately, the \(\omega^{2}\) term appears to hinder separation. To compute the static Love numbers, however, we consider the above equation with \(\omega=0\), which becomes fully separable. The remaining equation exhibits only regular singular points, which suggests that the problem can be solved locally around these points. This is the subject of the next section. ### Static responses In this section we will compute the static Love numbers for black rings. In the static limit (\(\omega=0\)), the KG equation (4.7) separates upon writing \(\Phi(r,\theta)=\chi(\theta)\,\Phi_{r}(r)\), reducing to the system of equations \[\frac{1}{\sin\theta}\,\partial_{\theta}\left[\left(1+\frac{r_{0}}{R}\cos\theta\right)\sin\theta\,\partial_{\theta}\,\chi\right]-m^{2}\frac{(1+\frac{r_{0}}{R}c_{\sigma}^{2}\cos\theta)}{(1+\frac{r_{0}}{R}\cos\theta)\sin^{2}\theta}\,\chi-\frac{r_{0}}{R}\,\cos\theta\,\chi=-K\chi, \tag{4.8}\] \[\partial_{r}\left[r\left(r-r_{0}\right)\left(1-\frac{r^{2}}{R^{2}}\right)\partial_{r}\,\Phi_{r}\right]+\nu^{2}\frac{r^{2}(r-r_{0}c_{\sigma}^{2})}{(r_{0}-r)\left(1-\frac{r^{2}}{R^{2}}\right)}\Phi_{r}-(2r-r_{0})\,\frac{r}{R^{2}}\Phi_{r}=K\Phi_{r}, \tag{4.9}\] coupled only through the separation constant. The solution to both equations involves the use of generalized Heun functions, with the separation constants \(K\) serving as the eigenvalues on a sphere.
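Closed-form expressions for \(K\) are not available, but a numerical estimate is straightforward. The sketch below (our illustration, not from the original analysis) discretizes the angular equation (4.8) for \(m=0\) as a generalized symmetric eigenvalue problem on a staggered grid in \(\theta\); for \(r_{0}/R\to 0\) the lowest eigenvalues approach \(\ell(\ell+1)\), and acquire \(O(r_{0}/R)\) shifts otherwise:

```python
import numpy as np
from scipy.linalg import eigh

def angular_eigs(eps, N=400, nmodes=4):
    """Lowest separation constants K of eq. (4.8) for m = 0, with eps = r0/R."""
    h = np.pi / N
    th = (np.arange(N) + 0.5) * h                  # cell centers (avoids the poles)
    thf = np.arange(N + 1) * h                     # cell faces
    w = np.sin(th)                                 # weight sin(theta)
    p = (1 + eps*np.cos(thf)) * np.sin(thf)        # flux coefficient; vanishes at 0, pi
    q = eps * np.cos(th)
    # Sturm-Liouville form of eq. (4.8):  -(p chi')' + q w chi = K w chi
    main = (p[:-1] + p[1:]) / h**2 + q*w
    off = -p[1:-1] / h**2
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    K, _ = eigh(A, np.diag(w))                     # generalized symmetric problem
    return K[:nmodes]

print(angular_eigs(0.0))    # ~ [0, 2, 6, 12] = ell(ell + 1)
print(angular_eigs(0.05))   # O(r0/R) corrections
```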
Unlike in the case of the more familiar hypergeometric equation (see appendix A), the \(K\) values in (4.8) and (4.9) are not known in simple closed form, and can be computed only numerically or with perturbative methods. Alternatively, one can start by asking whether it is possible to find different near and far regions where the wave equations can be solved in terms of simple special functions, and then obtain a full solution by matching the solutions in an intermediate overlap region. In the case of (4.8) and (4.9), we see that this occurs when the horizon radius \(r_{0}\) of the black ring is small compared to the \(S^{1}\) radius \(R\) of the ring: \[\frac{r_{0}}{R}\ll 1. \tag{4.10}\] In this case, each of these equations will be solvable analytically in two regions. This is the regime we will focus on in the rest of the section. #### Spheroidal equation Let us first focus on the angular equation. In the limit (4.10), the angular equation (4.8) for the black ring reduces to \[\frac{1}{\sin\theta}\,\partial_{\theta}\left(\sin\theta\,\partial_{\theta}\,\chi\right)-\left(\frac{m^{2}}{\sin^{2}\theta}-K\right)\,\chi=0\,. \tag{4.11}\] The solution regular at \(\cos\theta=\pm 1\) is the associated Legendre function, which reduces to a Legendre polynomial exactly in the case \(m=0\). We can therefore consider, with full generality, the \(m=0\) case. The corresponding eigenvalues are \[K=\ell(\ell+1)+O(r_{0}/R)\,. \tag{4.12}\] #### Radial equation We will now focus on the radial equation (4.9). To solve the differential equation we can perform an explicit matching, dividing the spacetime outside the horizon, \(r_{0}\leq r<R\), into two overlapping regions defined by the near region (\(r_{0}\leq r\ll R\)) and the far region (\(r_{0}\ll r<R\)). However, for the computation of the Love numbers, the complete matching procedure is unnecessary. As we will see, solving the wave equation in the near-horizon region (with ingoing boundary conditions at the horizon) and expanding the resulting solution towards the outer edge of the near region will suffice to deduce the static response coefficients. The radial wave equation (4.9) in the near region, where the coordinate distance \(r-r_{0}\) is small compared to \(R\), takes the form \[\Delta\partial_{r}\left(\Delta_{0}\,\partial_{r}\,\Phi\right)+(V(r)-K\,\Delta)\,\Phi=0, \tag{4.13}\] where \(\Delta\equiv r(r-r_{0})\) and \(\Delta_{0}\equiv r(r-r_{0})(1-r_{0}^{2}/R^{2})\). We can further replace \[V(r)-K\,\Delta\sim V(r_{0})-K\,\Delta_{0}\,, \tag{4.14}\] and eq. (4.9) then becomes approximately \[\partial_{r}\left[r(r-r_{0})\partial_{r}\Phi_{\rm near}\right]+\left[\frac{r_{0}^{2}\mathcal{W}^{2}}{r(r-r_{0})}-\ell(\ell+1)\right]\Phi_{\rm near}=0, \tag{4.15}\] with \[\mathcal{W}=\frac{\nu\,r_{0}\sinh\sigma}{1-\frac{r_{0}^{2}}{R^{2}}}\,. \tag{4.16}\] The above equation contains three regular singular points and is solved by hypergeometric functions. To bring it into the standard form of the hypergeometric equation we define a new variable and perform a field redefinition: \[x=\frac{r_{0}}{r}\,,\qquad\Phi(r(x))=(1-x)^{-\mathrm{i}\mathcal{W}}x^{-\ell}u(x)\,. \tag{4.17}\] Then, eq. (4.15) takes the standard hypergeometric form (A.1) with parameters \[\mathfrak{a}=-\ell\,,\qquad\mathfrak{b}=-\ell-2i\mathcal{W}\,,\qquad\mathfrak{c}=-2\ell\,. \tag{4.18}\] Note that \(\mathfrak{a}+\mathfrak{b}-\mathfrak{c}=-2i\mathcal{W}\) is not an integer, but \(\mathfrak{a}\) and \(\mathfrak{c}\) are.
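This reduction can be verified symbolically. In the sketch below (our own consistency check; the helper `d_dr` and the coefficient extraction are ours) we substitute (4.17) into the left-hand side of (4.15) and confirm that the coefficients of \(u''\), \(u'\) and \(u\) are, up to a common overall factor, exactly those of the hypergeometric operator (A.1) with the parameters (4.18):

```python
import sympy as sp

x, W, ell, r0 = sp.symbols('x W ell r_0', positive=True)
u = sp.Function('u')

r = r0/x                                          # eq. (4.17): x = r0/r
Phi = (1 - x)**(-sp.I*W) * x**(-ell) * u(x)

def d_dr(expr):                                   # chain rule: d/dr = -(x**2/r0) d/dx
    return -(x**2/r0) * sp.diff(expr, x)

# left-hand side of the near-region equation (4.15)
E = sp.expand(d_dr(r*(r - r0)*d_dr(Phi))
              + (r0**2*W**2/(r*(r - r0)) - ell*(ell + 1))*Phi)

a, b, c = -ell, -ell - 2*sp.I*W, -2*ell           # eq. (4.18)
pref = (1 - x)**(-sp.I*W) * x**(-ell)
c2 = sp.cancel(sp.expand(E.coeff(sp.Derivative(u(x), x, 2)) / pref))
c1 = sp.cancel(sp.expand(E.coeff(sp.Derivative(u(x), x)) / pref))
c0 = sp.cancel(sp.expand(E.coeff(u(x)) / pref))
g = sp.cancel(c2 / (x*(1 - x)))                   # common overall factor
print(sp.simplify(c1 - g*(c - (a + b + 1)*x)))    # -> 0
print(sp.simplify(c0 - g*(-a*b)))                 # -> 0
```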
To avoid ambiguities in the definition of the Love numbers due to a possible uncertainty in the identification of the response coefficients (see below), we first analytically continue \(\ell\) to real values [12, 14]. The two linearly independent solutions are therefore given by eq. (A.2). Since now none of the numbers \(\mathfrak{a}\), \(\mathfrak{b}\), \(\mathfrak{c}-\mathfrak{a}\), \(\mathfrak{c}-\mathfrak{b}\) and \(\mathfrak{c}\) is an integer, we can use the connection formula (A.3). We then impose the boundary condition \(\Phi\approx(1-x)^{-i\mathcal{W}}\) at the horizon \(r_{0}\), which amounts to requiring that \(u\) be regular at \(x=1\). This fixes \(C_{2}\) in terms of \(C_{1}\) as (A.4). Plugging back into eqs. (4.17) and (A.2), \[\Phi(r(x))=C_{1}(1-x)^{-i\mathcal{W}}\bigg{[}x^{-\ell}\,_{2}\mathsf{F}_{1}\left(-\ell,-\ell-2i\mathcal{W},-2\ell;x\right)\\ -\frac{\Gamma(-2\ell)\Gamma(\ell+1)\Gamma(\ell-2i\mathcal{W}+1)}{\Gamma(-\ell)\Gamma(-\ell-2i\mathcal{W})\Gamma(2\ell+2)}\,x^{\ell+1}\,_{2}\mathsf{F}_{1}\left(\ell+1,\ell-2i\mathcal{W}+1,2\ell+2;x\right)\bigg{]}\,. \tag{4.19}\] Note that in the analytic continuation sense there is no uncertainty in how to split the external tidal field (i.e., the term \(\propto x^{-\ell}\)) from the response (i.e., the one \(\propto x^{\ell+1}\)): they both have subleading terms, resulting from expanding the hypergeometric functions in powers of \(x\) in (4.19), but for real values of \(\ell\) the two series never overlap. On the other hand, if \(\ell\) is integer, the source contains, at subleading order, a piece that is degenerate with the response falloff, introducing in principle an ambiguity in the definition of the Love numbers. To avoid this ambiguity, we work with the analytic continuation \(\ell\in\mathbb{R}\), read off the response coefficients from (4.19), and only later take the physical value of \(\ell\) to be an integer. Taking the limit \(x\to 0\), the response coefficients are12 Footnote 12: This seems to be consistent with eq. (30) of Ref. [24]. \[\lambda_{\ell\in\mathbb{R}}=-\frac{\Gamma(-2\ell)\Gamma(\ell+1)\Gamma(\ell-2i\mathcal{W}+1)}{\Gamma(-\ell)\Gamma(-\ell-2i\mathcal{W})\Gamma(2\ell+2)}\,. \tag{4.20}\] We can now take the limit \(\ell\to\mathbb{N}\). Using \[\Gamma(-n+\varepsilon)=\frac{(-1)^{n}}{n!\,\varepsilon}+O(\varepsilon^{0})\,,\qquad\text{for $n\in\mathbb{N}$}\,, \tag{4.21}\] we find \[\boxed{\lambda_{\ell\in\mathbb{N}}^{\text{BR}}=(-1)^{\ell+1}\frac{\Gamma(\ell+1)^{2}\Gamma(\ell-2i\mathcal{W}+1)}{2\,\Gamma(2\ell+1)\Gamma(2\ell+2)\Gamma(-\ell-2i\mathcal{W})}\,.} \tag{4.22}\] As in the four-dimensional black hole cases, the response coefficients are purely imaginary, which means that the static Love numbers vanish for the black ring (for all values of \(\ell\) and \(\nu\)). It is instructive to take the limit \(\mathcal{W}\to 0\) and compare our resulting expression with the induced response of a Schwarzschild black hole. The static response coefficients (4.22) vanish in this limit. This is consistent with the fact that, when \(\mathcal{W}=0\), eq. (4.15) formally coincides with the equation of a massless scalar field on a four-dimensional Schwarzschild spacetime (see, e.g., eq. (2.2) of Ref. [15] with \(r_{s}\mapsto r_{0}\)) and inherits all the symmetry structure discussed in Ref. [15]. We illustrate the behavior of the dissipative coefficients (4.22) for the black ring in fig. 1, where the dissipative part of the coefficients, \(\mathrm{Im}[\lambda_{\ell}^{\mathrm{BR}}]\), is plotted for six different values of the boost parameter \(\sigma\) (equivalently, of the angular momentum). Figure 1: Visualization of the response coefficients \(\lambda_{\ell}^{\mathrm{BR}}\) for black rings (4.22) as a function of the multipole moments \(\ell\) for various values of the boost parameter \(\sigma\). The real part of the coefficients vanishes, leading to vanishing static Love numbers. The non-trivial dissipation coefficients, the imaginary part of the coefficients, are represented here. As the black ring spin increases, for increasing boost parameter \(\sigma\) (from _blue_ to _yellow_ curves), the dissipation parameters become larger. The dissipation parameters are suppressed for increasing \(\ell\), and the coefficients vanish as \(\ell\to\infty\).
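The purely imaginary nature of (4.22) is simple to confirm numerically; in this sketch (ours, assuming mpmath's conventions) the real part vanishes to working precision for integer \(\ell\), while the imaginary part carries the dissipative response shown in fig. 1:

```python
import mpmath as mp

def lam_BR(ell, Wc):
    """Black ring static response coefficients, eq. (4.22)."""
    return ((-1)**(ell + 1) * mp.gamma(ell + 1)**2 * mp.gamma(ell - 2j*Wc + 1)
            / (2*mp.gamma(2*ell + 1) * mp.gamma(2*ell + 2) * mp.gamma(-ell - 2j*Wc)))

for ell in (1, 2, 3, 4):
    lam = lam_BR(ell, 0.3)
    print(ell, mp.chop(lam.real), lam.imag)   # Re -> 0: static Love numbers vanish
```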
## 5 Boosted Black Strings The focus of this section is on boosted black string geometries, i.e., higher-dimensional stationary black string solutions carrying momentum along their length. We will derive in particular the static response of a test scalar field in two distinct cases: first, we will consider the Klein-Gordon equation on a \(D\)-dimensional non-rotating boosted black string spacetime; then, we will focus on boosted Myers-Perry black strings in \(D=6\) and \(D=5\) dimensions. ### Non-rotating boosted black string in \(D\) dimensions Non-rotating boosted black string solutions in \(D\) dimensions can be easily constructed by boosting the static black string metrics along the \(z\) direction. The geometry of such solutions in generic \(D=n+4\) dimensions is given by the following line element [36, 37]: \[\begin{split}\mathrm{d}s^{2}=&\,-\left(1-\frac{r_{0}^{n}}{r^{n}}\cosh^{2}\sigma\right)\mathrm{d}t^{2}-2\frac{r_{0}^{n}}{r^{n}}\cosh\sigma\sinh\sigma\,\mathrm{d}t\,\mathrm{d}z+\left(1+\frac{r_{0}^{n}}{r^{n}}\sinh^{2}\sigma\right)\mathrm{d}z^{2}\\ &+\left(1-\frac{r_{0}^{n}}{r^{n}}\right)^{-1}\mathrm{d}r^{2}+r^{2}\mathrm{d}\Omega_{S^{n+1}}^{2}\,,\end{split} \tag{5.1}\] where we assume that the \(z\) direction is periodically identified [37]. Here \(\sigma\) is the boost parameter and \(\mathrm{d}\Omega^{2}_{S^{n+1}}\) the line element of the \((n+1)\)-sphere \(S^{n+1}\) defined recursively as \(\mathrm{d}\Omega^{2}_{S^{n+1}}=\mathrm{d}\theta^{2}_{n+1}+\sin^{2}\theta_{n+1}\mathrm{d}\Omega^{2}_{S^{n}}\). This solution has an event horizon located at \(r=r_{0}\) and an ergosurface at \(r=r_{0}\cosh^{2/n}\sigma\). The boost velocity is given by \(v=\tanh\sigma\). The total energy and momentum of the string are, respectively, \[M_{bs}=\frac{\Omega_{n+1}R}{8G}r_{0}^{n}(n\cosh^{2}\sigma+1)\,,\qquad P_{bs}=\frac{\Omega_{n+1}R}{8G}r_{0}^{n}n\cosh\sigma\sinh\sigma\,, \tag{5.2}\] where \(\Omega_{n+1}\) is the area of a unit \((n+1)\)-sphere. We can also define the horizon entropy \[S_{bs}=\frac{\pi\,\Omega_{n+1}R}{2G}r_{0}^{n+1}\cosh\sigma\,. \tag{5.3}\] Let us consider a massless Klein-Gordon field \(\Psi\) solving eq. (2.1) on the geometry (5.1). The symmetry structure of the metric allows us to decompose \(\Psi\) in separation of variables as \[\Psi(t,z,r,\theta)=\mathrm{e}^{-i\omega t+i\nu z}R(r)Y_{L}(\theta)\,, \tag{5.4}\] where we Fourier transformed in time and in the coordinate \(z\).
\(Y_{L}(\theta)\) are the hyperspherical harmonics with \(\theta=\{\theta_{1},\cdots,\theta_{n+1}\}\) the coordinates on \(S^{n+1}\).13 Using the following expressions for the Christoffel symbols, Footnote 13: The functions \(Y_{L}(\theta)\) provide a representation of the rotation group \(\mathrm{SO}(n+2)\). The dimension \(N_{L}\) of the representation is given by \(N_{L}=\binom{n+L+1}{L}-\binom{n+L-1}{L-2}=\frac{(L+n-1)!(2L+n)}{n!L!}\), as can be easily found by noting that a symmetric \(L\)-index tensor in \((n+2)\) dimensions has \(\binom{n+L+1}{L}\) independent components and that the tracelessness condition imposes \(\binom{n+L-1}{L-2}\) conditions [10]. \[\Gamma^{i}_{ab}=\Gamma^{a}_{bi}=0\,,\qquad\Gamma^{a}_{ij}=-\frac{1}{2}g^{ab}\partial_{b}g_{ij}\,, \tag{5.5}\] where the indices \(i,j,\ldots\) and \(a,b,\ldots\) run over the coordinates \(\{\theta_{1},\cdots,\theta_{n+1}\}\) and \(\{t,z,r\}\), respectively, we can write the Klein-Gordon equation (2.1) as \[\Box\Psi=\nabla_{a}\nabla^{a}\Psi+\frac{1}{r^{2}}\nabla^{2}_{S^{n+1}}\Psi+\frac{1}{2}g^{ij}g^{ab}\partial_{b}g_{ij}\partial_{a}\Psi=0\,, \tag{5.6}\] where \(\nabla^{2}_{S^{n+1}}\) is the spherical Laplacian on the \(S^{n+1}\) sphere. Using that \(Y_{L}(\theta)\) are eigenfunctions of \(\nabla^{2}_{S^{n+1}}\) with eigenvalues \[\nabla^{2}_{S^{n+1}}Y_{L}(\theta)=-L(L+n)Y_{L}(\theta)\,, \tag{5.7}\] the equation for the radial field component \(R(r)\) can be cast in the form \[r\left(r^{n}-r_{0}^{n}\right)\partial_{r}\left[r\left(r^{n}-r_{0}^{n}\right)R^{\prime}(r)\right]-\left[L(L+n)r^{n}\left(r^{n}-r_{0}^{n}\right)+\nu^{2}r^{n+2}\left(r^{n}-r_{0}^{n}\cosh^{2}\sigma\right)\right]R(r)=0, \tag{5.8}\] in agreement with, e.g., Ref. [38] in the non-rotating limit. The equation (5.8) does not admit a simple closed-form solution. However, we can introduce a near-zone approximation, defined by \(r_{0}\leq r\ll|1/\nu|\), where (5.8) is exactly solvable in terms of hypergeometric functions, and define the induced response at values of \(r\) in the range \(r_{0}\ll r\ll|1/\nu|\). In practice, we will replace \(r\mapsto r_{0}\) in the potential as follows, in such a way as to preserve the form of the singularity at the horizon: \[r\left(r^{n}-r_{0}^{n}\right)\partial_{r}\left[r\left(r^{n}-r_{0}^{n}\right)R^{\prime}_{\text{near}}(r)\right]-\left[L(L+n)r^{n}\left(r^{n}-r_{0}^{n}\right)-n^{2}r_{0}^{2n}\mathcal{W}^{2}\right]R_{\text{near}}(r)=0\,, \tag{5.9}\] where we defined \[\mathcal{W}\equiv\frac{\nu r_{0}}{n}\sinh\sigma\,. \tag{5.10}\] We should stress that there is no unique way of defining the near-zone approximation (5.9).14 One can in fact define different schemes, corresponding to different truncations of the differential equation, that all become exact in the limit \(r\to r_{0}\) but differ in the region \(r>r_{0}\). In this sense, the response coefficients will be strictly speaking exact only in the limit \(\mathcal{W}\to 0\). Footnote 14: In the context of Schwarzschild or Kerr black holes in \(D=4\), see e.g. [16, 17, 39, 40, 41, 42, 43] for some examples of near-zone approximations. After the following change of coordinate and field redefinition, \[x=\left(\frac{r_{0}}{r}\right)^{n}\,,\qquad R_{\text{near}}(r(x))=(1-x)^{-i\mathcal{W}}x^{-\hat{L}}u(x)\,, \tag{5.11}\] where we defined \[\hat{L}\equiv\frac{L}{n}\,, \tag{5.12}\] the near-zone equation (5.9) takes the standard hypergeometric form (A.1) with parameters \[\mathfrak{a}=-\hat{L}\,,\qquad\mathfrak{b}=-\hat{L}-2i\mathcal{W}\,,\qquad\mathfrak{c}=-2\hat{L}\,.
\tag{5.13}\] In particular, assuming that \(\hat{L}\) is neither an integer nor a half-integer, a basis of two linearly independent solutions to the near-zone equation (5.9) for \(u(x)\) can be read off from eq. (A.2). Then, imposing the correct 'infalling' boundary condition at the event horizon \(r=r_{0}\), i.e. (see, e.g., Refs. [24, 38]), \[R_{\text{near}}(r)\sim(r-r_{0})^{-i\mathcal{W}}\,,\qquad\text{as }r\to r_{0}\,, \tag{5.14}\] is equivalent to requiring that \(u(x)\) is finite at the singular point \(x=1\). Such a solution is written explicitly in eq. (A.5). The response coefficients \(\lambda_{L}\), defined for each \(L\) as the ratio between the coefficients of the two falloffs in \(R_{\text{near}}\), \[R_{\text{near}}(r)\propto\left(r^{L}+\lambda_{L}\frac{r_{0}^{2L+n}}{r^{L+n}}\right)\,, \tag{5.15}\] in the intermediate region \(r_{0}\ll r\ll|1/\nu|\), across the near zone and the far zone, are thus \[\boxed{\lambda_{L}=-\frac{\Gamma(-2\hat{L})\Gamma(\hat{L}+1)\Gamma(\hat{L}-2i\mathcal{W}+1)}{\Gamma(-\hat{L})\Gamma(-\hat{L}-2i\mathcal{W})\Gamma(2\hat{L}+2)}\,,} \tag{5.16}\] for real \(\hat{L}=L/n\). Two comments are in order here. First, note that the static response coefficients (5.16) of a scalar field on a boosted black string geometry in \(D=n+4\) dimensions reproduce exactly the Love numbers of a Tangherlini black hole in \(D-1\) dimensions [7, 10]. Eq. (5.16) thus provides an independent check of the results of Refs. [7, 10] for the scalar field case. We will extend this check to rotating spacetimes in the sections below. Second, taking the limit \(D\to 5\) (\(n\to 1\)) in the final result (5.16) recovers the static response coefficients of a black ring in five dimensions, to leading order in (4.10), computed in eq. (4.22) (with the replacement \(\ell\mapsto L\)). Note that performing the calculation in generic \(D\) allowed us to avoid possible ambiguities in the source/response splitting that may arise in degenerate cases and obtain an independent check of the result (4.22).
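The \(n\to 1\) statement can be checked numerically by regulating the integer limit with a small offset in \(\hat{L}\); the sketch below (ours, with mpmath assumed) confirms that eq. (5.16) then reproduces the black ring coefficients (4.22):

```python
import mpmath as mp
mp.mp.dps = 30

def lam_string(Lhat, Wc):
    """Boosted black string response, eq. (5.16), for real Lhat."""
    return -(mp.gamma(-2*Lhat) * mp.gamma(Lhat + 1) * mp.gamma(Lhat - 2j*Wc + 1)
             / (mp.gamma(-Lhat) * mp.gamma(-Lhat - 2j*Wc) * mp.gamma(2*Lhat + 2)))

def lam_BR(ell, Wc):
    """Black ring response at integer ell, eq. (4.22)."""
    return ((-1)**(ell + 1) * mp.gamma(ell + 1)**2 * mp.gamma(ell - 2j*Wc + 1)
            / (2*mp.gamma(2*ell + 1) * mp.gamma(2*ell + 2) * mp.gamma(-ell - 2j*Wc)))

eps = mp.mpf('1e-15')
for ell in (1, 2, 3):
    diff = lam_string(ell + eps, mp.mpf('0.4')) - lam_BR(ell, mp.mpf('0.4'))
    print(ell, mp.nstr(abs(diff), 3))   # -> 0 as eps -> 0  (n = 1, so Lhat = L)
```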
### Boosted Myers-Perry black string in 6 dimensions The result of the previous section can be easily extended to boosted Myers-Perry black strings. In this section, we will mainly focus on 6-dimensional spacetimes and compute the static response of a scalar perturbation. In particular, we will show that the calculation provides an alternative way of rederiving the response coefficients of a Myers-Perry black hole in \(D=5\). The line element describing the geometry of a boosted Myers-Perry black string in generic \(D=n+5\) dimensions15--obtained by adding a flat direction \(z\) to a Myers-Perry black hole (with single plane of rotation) and then applying a Lorentz boost to it with parameter \(\sigma\)--is [38] Footnote 15: Note that, here and in the next section, the relation between \(D\) and \(n\) differs from the one used in section 5.1. \[\begin{split}\mathrm{d}s^{2}=&-\left(1-\frac{\mu\,r^{1-n}\cosh^{2}\sigma}{\Sigma}\right)\mathrm{d}t^{2}+\frac{\mu\,r^{1-n}\sinh(2\sigma)}{\Sigma}\mathrm{d}t\mathrm{d}z+\left(1+\frac{\mu\,r^{1-n}\sinh^{2}\sigma}{\Sigma}\right)\mathrm{d}z^{2}\\ &+\frac{r^{2}\Sigma}{\Delta}\mathrm{d}r^{2}+\Sigma\,\mathrm{d}\theta^{2}+\frac{r^{2}(r^{2}+a^{2})^{2}-\Delta a^{2}\sin^{2}\theta}{r^{2}\Sigma}\sin^{2}\theta\,\mathrm{d}\phi^{2}\\ &-\frac{2\mu\,r^{1-n}\cosh\sigma}{\Sigma}a\sin^{2}\theta\,\mathrm{d}t\mathrm{d}\phi-\frac{2\mu\,r^{1-n}\sinh\sigma}{\Sigma-\mu\,r^{1-n}}a\sin^{2}\theta\,\mathrm{d}z\mathrm{d}\phi+r^{2}\cos^{2}\theta\,\mathrm{d}\Omega_{S^{n}}^{2}\,,\end{split} \tag{5.17}\] where \(\mathrm{d}\Omega_{S^{n}}^{2}\) again describes the line element of a unit \(n\)-sphere, and \[\Delta=r^{2}(r^{2}+a^{2}-\mu\,r^{1-n})\,,\qquad\Sigma=r^{2}+a^{2}\cos^{2}\theta\,. \tag{5.18}\] The Klein-Gordon equation on the geometry (5.17) admits separation of variables. We shall thus decompose the (static) scalar field \(\Psi\) as \[\Psi=\mathrm{e}^{im\phi+i\nu z}R(r)S_{\ell}^{m}(\theta)Y_{L}(\Omega)\,, \tag{5.19}\] where \(S_{\ell}^{m}(\theta)\) are 2-dimensional spheroidal harmonics, while \(Y_{L}\) are hyperspherical harmonics. After straightforward manipulations, the (static) radial equation for \(R(r)\) takes the form [38] \[\frac{\Delta}{r^{n+2}}\partial_{r}\left(r^{n-2}\Delta\partial_{r}R\right)+VR=0\,, \tag{5.20}\] with potential \[\begin{split} V&=-\frac{\Delta}{r^{2}}\left[\nu^{2}r^{2}+A_{\ell m}+L(L+n-1)\frac{a^{2}}{r^{2}}\right]+\left[m^{2}a^{2}\cosh^{2}\sigma\right.\\ &\qquad\qquad+\mu\,r^{1-n}(r^{2}+a^{2})\nu^{2}\sinh^{2}\sigma-m^{2}a^{2}\sinh^{2}\sigma-2\nu ma\mu\,r^{1-n}\sinh\sigma\right],\end{split} \tag{5.21}\] where \(A_{\ell m}\) are the separation constants in the spheroidal harmonic equation, \(A_{\ell m}=\ell(\ell+n+1)+O(a^{2}\nu^{2})\), while \(L(L+n-1)\) are the eigenvalues of the hyperspherical harmonics on the \(n\)-sphere. Following the logic of the previous section, we define the near zone in the range \(r_{+}\leq r\ll|1/\nu|\) as \[\begin{split}\frac{\Delta}{r^{3}}\partial_{r}\left(r^{-1}\,\Delta\partial_{r}R\right)+\bigg{[}&-\frac{\Delta}{r^{2}}\left(\nu^{2}r_{+}^{2}+A_{\ell m}+\frac{a^{2}L^{2}}{r^{2}}\right)+m^{2}a^{2}\cosh^{2}\sigma\\ &+\mu\left(r_{+}^{2}+a^{2}\right)\nu^{2}\sinh^{2}\sigma-m^{2}a^{2}\sinh^{2}\sigma-2\nu ma\mu\sinh\sigma\bigg{]}R=0,\end{split} \tag{5.22}\] where we set \(n=1\) and where \(r_{+}\equiv\sqrt{\mu-a^{2}}\) is the event horizon. After the field redefinition and change of variable \[R(r(x))\equiv r^{-\frac{3}{2}}x^{\frac{2\ell+1}{4}+\alpha}(1-x)^{\beta}u(x)\,,\qquad\qquad x\equiv\frac{r_{+}^{2}}{r^{2}}\,, \tag{5.23}\] where \[\alpha\equiv\frac{1}{2}\left(\sqrt{\nu^{2}r_{+}^{2}+A_{\ell m}+1}-\ell-1\right)\,,\qquad\beta\equiv\frac{i(am-\nu\mu\sinh\sigma)}{2r_{+}}\,, \tag{5.24}\] the radial equation takes the standard hypergeometric form (A.1). The \(\mathfrak{c}\) parameter is given by \[\mathfrak{c}=1+\sqrt{\nu^{2}r_{+}^{2}+A_{\ell m}+1}\,, \tag{5.25}\] while the expressions for \(\mathfrak{a}\) and \(\mathfrak{b}\) are more cumbersome and we do not report them explicitly here. We just point out, however, that they are non-integer--like \(\mathfrak{c}\) in (5.25)--and therefore the equation belongs to the non-degenerate case with independent solutions given by (A.2). To compute the response coefficients we will work perturbatively in \(\nu a\), as we will eventually take the limit \(\nu\to 0\).
We shall thus formally write: \[A_{\ell m}=\ell(\ell+2)+A_{\ell m}^{(2)}\nu^{2}+\mathcal{O}(\nu^{4})\,. \tag{5.26}\] The coefficients \(A_{\ell m}^{(2)}\) can in principle be computed, e.g., in perturbation theory or numerically, however, as we shall see below, we will not need their explicit expressions. Since the equation is non-degenerate for \(\nu\neq 0\), we can take (A.2) with (A.4), which corresponds to the solution that is regular at the horizon \(r=r_{+}\) (\(x=1\)), and expand at large distances \(r\to\infty\) (\(x\to 0\)) at the first nontrivial order in \(\nu\): \[u(x\approx 0)\propto\left[1-\frac{\Gamma(\mathfrak{a})\Gamma(\mathfrak{b}) \Gamma(2-\mathfrak{c})}{\Gamma(\mathfrak{c})\Gamma(\mathfrak{a}-\mathfrak{c}+ 1)\Gamma(\mathfrak{b}-\mathfrak{c}+1)}\left(\frac{r_{+}}{r}\right)^{2\ell+2+ \nu^{2}\frac{A_{\ell m}^{(2)}+r_{+}^{2}}{\ell+1}}\right]\,. \tag{5.27}\] Note that the response coefficients in (5.27) are formally divergent when \(\nu=0\) because the argument of \(\Gamma(2-\mathfrak{c})\) approaches a negative integer. After using the formula (4.21) and regularizing the result by subtracting the \(1/\varepsilon\) pole term [7, 10], we find the following expression for the logarithmic dependence of the response coefficients (in units of \(r_{+}\)):16 Footnote 16: Here we are keeping only the coefficient of the logarithmic term in the ratio between the two falloffs in eq. (5.27). This is because only this term is unambiguous. In eq. (5.28), \(r\) should be thought of as the distance at which the response of the system is measured, with the logarithmic dependence being an example of classical renormalization group running [7, 10]. In this sense, the length scale \(r_{0}\) plays the role of a renormalization scale to be fixed by experiments. \[\boxed{\lambda_{\ell}=\left[(-1)^{\ell}\frac{2\Gamma(\mathfrak{a})\Gamma( \mathfrak{b})}{\ell!\,\Gamma(\ell+2)\Gamma(\mathfrak{a}-\ell-1)\Gamma( \mathfrak{b}-\ell-1)}\ln\left(\frac{r_{0}}{r}\right)\right]_{\nu=0}\,,} \tag{5.28}\] where the parameters are computed at \(\nu=0\). Note that, at \(\nu=0\), \(\mathfrak{a}\), \(\mathfrak{b}\) and \(\mathfrak{c}\) coincide with (3.29), if in (3.29) one sets to zero one of the two spins, e.g. \(b=0\), and identifies \(m_{\phi}\mapsto m\) and \(m_{\psi}\mapsto L\). In other words, \(\lambda_{\ell}\) in (5.28) reproduces exactly the scalar response coefficients of five-dimensional single-spin Myers-Perry black holes from eq. (3.35), providing a nontrivial consistency check of our results. ### Boosted Myers-Perry black string in 5 dimensions Following the same logic of the previous section, we now compute the static response coefficients of a boosted Myers-Perry black string in \(D=5\) dimensions (i.e., \(n=0\)). Taking the limit \(\sigma,\nu\to 0\) in the final result will allow us to rederive the dissipative response of a Kerr black hole in four dimensions [11, 12, 14, 44, 45]. 
In analogy with (5.22), we first define the following near zone starting from the scalar equation (5.20) (note that we need to set \(L=0\) along with \(n=0\)): \[\begin{split}\frac{\Delta}{r^{3}}\partial_{r}\left[r^{-1}\,\Delta\partial_{r}R\right]+\bigg{[}&-\frac{\Delta}{r^{2}}\left(\nu^{2}r_{+}^{2}+A_{\ell m}\right)+a^{2}m^{2}\cosh^{2}\sigma+\mu r_{+}\left(r_{+}^{2}+a^{2}\right)\nu^{2}\sinh^{2}\sigma\\ &-m^{2}a^{2}\sinh^{2}\sigma-2\nu ma\mu r_{+}\sinh\sigma\bigg{]}R=0\,,\end{split} \tag{5.29}\] where \(r_{+}\) (\(r_{-}\)) denotes the outer (inner) horizon, obtained from solving \(\Delta=0\) with \(n=0\): \[r_{\pm}\equiv\frac{\mu}{2}\pm\sqrt{\frac{\mu^{2}}{4}-a^{2}}\,. \tag{5.30}\] We shall then introduce \[R(r(x))\equiv x^{\frac{1}{2}\left(1-\sqrt{1+4A_{\ell m}+4\nu^{2}r_{+}^{2}}\right)}(1-x)^{\frac{i\left(ma+\mu\nu r_{+}\sinh\sigma\right)}{r_{+}-r_{-}}}\,u(x)\,,\qquad x\equiv\frac{r_{+}-r_{-}}{r-r_{-}}\,. \tag{5.31}\] Then, the near-zone equation (5.29) takes the standard hypergeometric form (A.1) with parameters \[\mathfrak{a}=\frac{1}{2}\left(1-\sqrt{1+4A_{\ell m}+4\nu^{2}r_{+}^{2}}\right)\,, \tag{5.32a}\] \[\mathfrak{b}=\mathfrak{a}+\frac{2iam}{r_{+}-r_{-}}+\frac{2i\mu a^{2}\nu}{r_{-}(r_{+}-r_{-})}\sinh\sigma\,, \tag{5.32b}\] \[\mathfrak{c}=2\mathfrak{a}\,. \tag{5.32c}\] Again, for \(\nu\neq 0\), none of the parameters above is an integer number. This implies that a basis of two linearly independent solutions is (A.2) and the connection formula (A.3) holds. Note that imposing the correct infalling boundary condition at the horizon is equivalent to requiring that \(u(x)\) is regular at \(x=1\).17 This fixes the integration constants as in (A.4). Plugging back into the solution for \(R\) yields Footnote 17: This can be easily seen for instance by recalling that, for \(\sigma,\nu=0\), \(R(r)\) must oscillate as \((r-r_{+})^{\frac{ima}{r_{+}-r_{-}}}\) as \(r\to r_{+}\)[14, 41, 46]. \[R(r(x))=C_{1}(1-x)^{\frac{i\left(ma+\mu\nu r_{+}\sinh\sigma\right)}{r_{+}-r_{-}}}\left[x^{\frac{\mathfrak{c}}{2}}\;_{2}\mathsf{F}_{1}\left(\mathfrak{a},\mathfrak{b},\mathfrak{c};x\right)\right.\\ \left.-\;\frac{\Gamma(\mathfrak{c})\Gamma(\mathfrak{a}-\mathfrak{c}+1)\Gamma(\mathfrak{b}-\mathfrak{c}+1)}{\Gamma(\mathfrak{a})\Gamma(\mathfrak{b})\Gamma(2-\mathfrak{c})}\,x^{1-\frac{\mathfrak{c}}{2}}\;_{2}\mathsf{F}_{1}\left(\mathfrak{a}-\mathfrak{c}+1,\mathfrak{b}-\mathfrak{c}+1,2-\mathfrak{c};x\right)\right]. \tag{5.33}\] Note that \(\frac{\mathfrak{c}}{2}\approx-\ell+\mathcal{O}(\nu^{2})\) as \(\nu\to 0\). We can thus define the response coefficients as \[\boxed{\lambda_{\ell m}=-\frac{\Gamma(\mathfrak{c})\Gamma(\mathfrak{a}-\mathfrak{c}+1)\Gamma(\mathfrak{b}-\mathfrak{c}+1)}{\Gamma(\mathfrak{a})\Gamma(\mathfrak{b})\Gamma(2-\mathfrak{c})}}\,, \tag{5.34}\] with \(\mathfrak{a}\), \(\mathfrak{b}\) and \(\mathfrak{c}\) given in eq. (5.32). The static response of a scalar perturbation on Kerr spacetime can then be obtained by setting \(\sigma,\nu\to 0\) in (5.34). The limit is smooth and we find: \[\lambda_{\ell m}^{\rm Kerr}=-\frac{\Gamma(-2\ell)\Gamma(\ell+1)\Gamma(1+\ell+\frac{2iam}{r_{+}-r_{-}})}{\Gamma(2\ell+2)\Gamma(-\ell)\Gamma(-\ell+\frac{2iam}{r_{+}-r_{-}})}\,, \tag{5.35}\] correctly reproducing, e.g., eq. (3.54) of Ref. [14].18 Footnote 18: Up to the factor \([(r_{+}-r_{-})/r_{s}]^{2\ell+1}\) because of the slightly different definition of \(\lambda_{\ell m}\).
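The vanishing of the Kerr Love numbers encoded in (5.35) can again be confirmed numerically. The sketch below (our own; it assumes the standard Kerr horizon radii \(r_{\pm}=M\pm\sqrt{M^{2}-a^{2}}\) in \(G=c=1\) units and regulates the integer-\(\ell\) limit with a small offset) shows that the real part vanishes while the response is purely dissipative:

```python
import mpmath as mp
mp.mp.dps = 30

def lam_kerr(ell, m, a, M=1):
    """Kerr static scalar response, eq. (5.35), with ell regulated by a small eps."""
    rp = M + mp.sqrt(M**2 - a**2)      # assumed: standard Kerr horizons, G = c = 1
    rm = M - mp.sqrt(M**2 - a**2)
    k = 2j*a*m/(rp - rm)
    l = ell + mp.mpf('1e-15')
    return -(mp.gamma(-2*l) * mp.gamma(l + 1) * mp.gamma(1 + l + k)
             / (mp.gamma(2*l + 2) * mp.gamma(-l) * mp.gamma(-l + k)))

for ell, m in [(2, 1), (2, 2), (3, 3)]:
    lam = lam_kerr(ell, m, mp.mpf('0.7'))
    print(ell, m, mp.chop(lam.real, tol=mp.mpf('1e-10')), lam.imag)   # Re ~ 0
```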
It is well known that an ambiguity arises in the calculation, in advanced Kerr coordinates, of the static response of a Kerr black hole in \(D=4\)[11, 12, 14]. This happens because, similarly to what we discussed in section 4, in the physical case \(\ell\in\mathbb{N}\) the subleading corrections in the falloff of the source have the same power exponent as the leading tidal response contribution. A possible way to address such an ambiguity in the source/response split is to perform an analytic continuation in \(\ell\): the calculation of the response coefficients is performed by first assuming \(\ell\) real, in which case the degeneracy between source and response falloffs does not occur, and by then taking in the final expression the limit of integer \(\ell\)[11, 12, 14]. In this section we provided an alternative, although similar in spirit, derivation and check of the result (5.35). Doing the calculation for a boosted black string in one higher dimension provides an alternative way of breaking the degeneracy and defining the static response of Kerr black holes in \(D=4\)[7]. ## 6 Discussion The tidal Love numbers for rotating higher-dimensional black holes capture intricate features of the dynamics of massless fields on black hole backgrounds. Our main results for Love numbers of higher-dimensional rotating black holes can be summarized as follows: **Conjecture:**_The static response coefficients \(\lambda_{\ell m}\) for rotating black holes in higher-dimensional (\(D\geq 5\)) spacetimes display the following relations_ \[\lambda_{\ell m}^{\text{(D-1)-BH}}=\lambda_{\ell m}^{\text{D-BR}}=\lambda_{\ell m}^{\text{D-BS}}\,, \tag{6.1}\] _among \((D-1)\)-dimensional black holes (BH) (including Kerr and Myers-Perry black holes), \(D\)-dimensional thin black rings (BR) and \(D\)-dimensional black strings (BS)._ From our calculation of the static Love numbers, we see that those of Myers-Perry black holes are finite, unlike their four-dimensional Kerr counterparts, which vanish. The variation of the signs of the various Love coefficients for \(5D\) black holes is of paramount importance in determining the nature of the horizon and stability of the solution. In the presence of even/odd gravitational multipole moments, the black hole undergoes distortions of opposite (positive/negative) sign. Interestingly, for, e.g., the single spinning Myers-Perry black holes (\(b=0\)) there seems to be a critical region around \((J/M^{2})_{\text{crit}}\sim 0.286\), where the behavior of the response coefficients varies as a function of the multipole moment values \(\ell\). For increasing multipole values of \(\ell\), the Love numbers decrease for the slowly rotating Myers-Perry black holes with \(J/M^{2}<(J/M^{2})_{\text{crit}}\), while an increasing tidal response is found for black holes with spins \(J/M^{2}>(J/M^{2})_{\text{crit}}\) (see fig. 2). This suggests that tidal deformations for the faster spinning Myers-Perry black holes may play an important role in elucidating the stability of these objects. Our results also complement the analysis of Refs. [7, 10], which have calculated the tidal response of non-spinning Schwarzschild-Tangherlini black holes. In the limit of vanishing spin parameters, \(a=b=0\), the Love numbers reduce to the Schwarzschild-Tangherlini coefficients found therein. While the KG equation is generically not separable for black rings, exploiting the fact that for \(\omega=0\) the wave equation is actually separable we were able to calculate the Love numbers in these backgrounds.
The static response coefficients computed via a matching procedure imply the vanishing of the Love numbers for black rings, with a purely dissipative response. Importantly, we have found that the static response of _thin_ black rings (i.e., black rings with \(r_{0}/R\ll 1\)) matches exactly the one of Kerr black holes. Indeed, by identifying \[\nu\,R\to m\,,\qquad\mathcal{W}\to-\frac{ma}{r_{+}-r_{-}}, \tag{6.2}\] the dissipative coefficients for black rings (4.22) become exactly the coefficients (5.35) for Kerr black holes: \[\lambda_{\ell m}^{\text{BR}}=\lambda_{\ell m}^{\text{Kerr}}\,, \tag{6.3}\] parametrized by mass \(M\), spin parameter \(a\), and azimuthal eigenvalue \(m\). This agreement between the tidal deformation coefficients for Kerr black holes and black rings suggests that black rings resemble the \(4D\) black holes much more closely than their \(5D\) counterparts, the Myers-Perry black holes. Kerr black holes have been shown to be stable [47] and to have a 2D CFT dual interpretation [48]. For black rings, stability was considered in Ref. [37] and a 2D CFT interpretation was also proposed [35]. Our findings on the Love numbers add further evidence to the similarity between these \(4D/5D\) black hole solutions. It will be interesting in the future to understand the connection between the tidal coefficients beyond the thin black ring regime.
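The identification (6.2)-(6.3) is straightforward to verify numerically; in the sketch below (ours, reusing the expressions (4.22) and (5.35) with the integer-\(\ell\) limit regulated by a small offset) the two sides agree to working precision:

```python
import mpmath as mp
mp.mp.dps = 30

def lam_BR(ell, Wc):                   # eq. (4.22)
    return ((-1)**(ell + 1) * mp.gamma(ell + 1)**2 * mp.gamma(ell - 2j*Wc + 1)
            / (2*mp.gamma(2*ell + 1) * mp.gamma(2*ell + 2) * mp.gamma(-ell - 2j*Wc)))

def lam_kerr(ell, m, a, M=1):          # eq. (5.35), regulated
    rp, rm = M + mp.sqrt(M**2 - a**2), M - mp.sqrt(M**2 - a**2)
    k, l = 2j*a*m/(rp - rm), ell + mp.mpf('1e-15')
    return -(mp.gamma(-2*l) * mp.gamma(l + 1) * mp.gamma(1 + l + k)
             / (mp.gamma(2*l + 2) * mp.gamma(-l) * mp.gamma(-l + k)))

ell, m, a = 2, 2, mp.mpf('0.6')
rp, rm = 1 + mp.sqrt(1 - a**2), 1 - mp.sqrt(1 - a**2)
Wc = -m*a/(rp - rm)                    # identification (6.2)
print(mp.nstr(abs(lam_BR(ell, Wc) - lam_kerr(ell, m, a)), 3))   # -> ~0
```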
Finally, note that the expression for the black ring dissipation coefficients (4.22) vanishes in the limit of vanishing boost parameter \(\sigma\to 0\), reproducing the well-established result that the scalar response coefficients of non-spinning Schwarzschild black holes are identically zero. We have also demonstrated several connections between the Love numbers for boosted black strings and \(D\)-dimensional black holes. Here are two points worth noting. Firstly, our analysis for the static response coefficients (5.16) of a scalar field in a boosted black string geometry in \(D=n+4\) dimensions perfectly reproduces the Love numbers of a Tangherlini black hole in \(D-1\) dimensions [7, 10] (see eq. (4.17) of Ref. [10]). Equation (5.16) acts as an independent verification of the outcomes of Refs. [7, 10] for the scalar field scenario. We also extended these results to rotating spacetimes. Secondly, by taking the limit \(D\to 5\) (\(n\to 1\)), our expressions for the dissipation coefficients (5.16) match the static response coefficients of a black ring in five dimensions (4.22). The calculation in a general number of dimensions \(D\) helped us avoid potential ambiguities in the source/response splitting that arise in certain degenerate situations, and we were able to obtain the coefficients without any analytic continuation in \(\ell\). Our analysis can be extended in multiple ways. In four dimensions, a matching between the point-particle effective field theory (EFT) [49, 50] and full general relativity calculations can be defined by employing the gauge-invariant definition of Love numbers as Wilson coefficients in the EFT. Higher-dimensional gravity poses an interesting puzzle. Black hole solutions are no longer unique in vacuum, hence a complete point-particle EFT interpretation should reflect this fact. Our analysis here established connections between the tidal responses of the different black holes, which will certainly play a role in the definition of these coefficients as Wilson coefficients in the EFT for \(5D\) gravity. In addition, we calculated response coefficients for a massless spin-0 field. The calculation of the tidal responses to spin-1 and spin-2 fields on these backgrounds has not yet been addressed. Figure 2: Visualization of the response coefficients \(\lambda_{\ell m}^{\rm MP}\) for the single spinning \(5D\) Myers-Perry black holes (3.35) as a function of the multipole moments \(\ell\). The imaginary part of the coefficients vanishes, leading to vanishing dissipative response coefficients. The Love numbers, defined as the real part of the \(\lambda_{\ell m}^{\rm MP}\), are represented in the plot for fixed mass \(M=1\) and angular momenta \(J/M^{2}=0.26,0.29\) (from _gray_ to _black_ curves), respectively below and above the critical value \((J/M^{2})_{\rm crit}\sim 0.286\). As the multipole moments increase, the Myers-Perry black holes with \(J/M^{2}>(J/M^{2})_{\rm crit}\) exhibit increasing values of the Love numbers. This behavior is reversed for slowly rotating Myers-Perry black holes, where the tidal distortion tends to zero as \(\ell\to\infty\). Finally, it will be interesting to explore the (hidden) symmetries of these fields on black hole geometries in higher dimensions, and study how they constrain the tidal response of the objects [15, 16]. We leave these research directions for future work. _Note added._ While this paper was being prepared, Ref. [20] appeared. The paper has some overlap with our work in the interpretation of Love numbers for Myers-Perry black holes in five spacetime dimensions. ## 7 Acknowledgements We would like to thank Lam Hui, Austin Joyce, Riccardo Penco, and Malcolm Perry for useful discussions and collaboration on related topics. We also thank the Centro de Ciencias de Benasque Pedro Pascual for its hospitality while some of this research was carried out. The work of MJR is partially supported through the NSF grant PHY-2012036, RYC-2016-21159, CEX2020-001007-S and PGC2018-095976-B-C21, funded by MCIN/AEI/10.13039/501100011033. LS is supported by the Centre National de la Recherche Scientifique (CNRS). ARS's research was partially supported by funds from the Natural Sciences and Engineering Research Council (NSERC) of Canada. Research at the Perimeter Institute is supported in part by the Government of Canada through NSERC and by the Province of Ontario through MRI. LFT acknowledges support from the USU PDRF fellowship and the USU Howard L. Blood Fellowship. MJR would also like to thank the Mitchell Family Foundation for hospitality in 2023 at the Cook's Branch workshop. ## Appendix A Some useful relations involving hypergeometric functions The hypergeometric equation is a second-order differential equation of the Fuchsian type, possessing three regular singular points. In the standard form it is written as \[x(1-x)u^{\prime\prime}(x)+[\mathfrak{c}-(\mathfrak{a}+\mathfrak{b}+1)x]u^{\prime}(x)-\mathfrak{a}\,\mathfrak{b}\,u(x)=0\,,\] (A.1) where \(\mathfrak{a}\), \(\mathfrak{b}\) and \(\mathfrak{c}\) are constant parameters. In this appendix, we will provide a summary of the relevant properties of the solutions and the connection coefficients in the two main cases that we encountered in the main text: _(i)_ none of \(\mathfrak{a}\), \(\mathfrak{b}\), \(\mathfrak{c}-\mathfrak{a}\), \(\mathfrak{c}-\mathfrak{b}\), \(\mathfrak{c}\) is an integer and the equation is non-degenerate; _(ii)_ \(\mathfrak{a}\), \(\mathfrak{b}\), \(\mathfrak{c}-\mathfrak{a}-\mathfrak{b}\) are non-integer, while \(\mathfrak{c}\) is integer. For a more complete discussion, see Ref. [31].
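Before summarizing the two cases, a quick numerical sanity check of the conventions (our own sketch, relying on mpmath's \({}_{2}\mathsf{F}_{1}\) implementation and its numerical differentiation) confirms that \({}_{2}\mathsf{F}_{1}(\mathfrak{a},\mathfrak{b};\mathfrak{c};x)\) indeed solves eq. (A.1):

```python
import mpmath as mp
mp.mp.dps = 30

a, b, c = mp.mpf('0.3'), mp.mpf('1.7'), mp.mpf('0.9')
f = lambda x: mp.hyp2f1(a, b, c, x)

x0 = mp.mpf('0.35')
residual = (x0*(1 - x0)*mp.diff(f, x0, 2)
            + (c - (a + b + 1)*x0)*mp.diff(f, x0)
            - a*b*f(x0))
print(mp.nstr(abs(residual), 3))   # negligibly small: eq. (A.1) is satisfied
```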
**Non-degenerate hypergeometric equation.** Let us assume that none of the numbers \(\mathfrak{a}\), \(\mathfrak{b}\), \(\mathfrak{c}-\mathfrak{a}\), \(\mathfrak{c}-\mathfrak{b}\), \(\mathfrak{c}\) is an integer. In this case, the equation is non-degenerate and the two linearly independent solutions, scaling as \(\sim 1\) and \(\sim x^{1-\mathfrak{c}}\) near the singularity \(x=0\), are (see, e.g., Refs. [31, 51]) \[u(x)=C_{1}\,{}_{2}\mathsf{F}_{1}\left(\mathfrak{a},\mathfrak{b},\mathfrak{c};x\right)+C_{2}\,x^{1-\mathfrak{c}}\,{}_{2}\mathsf{F}_{1}\left(\mathfrak{a}-\mathfrak{c}+1,\mathfrak{b}-\mathfrak{c}+1,2-\mathfrak{c};x\right).\] (A.2) Using hypergeometric connection formulas, it is possible to re-express the linear combination (A.2) in terms of the fundamental solutions in the neighborhood of any of the other two singular points. For instance, if we are interested in \(x=1\), we can write \[u(x)=C_{1}\bigg{[}\frac{\Gamma(\mathfrak{c})\Gamma(\mathfrak{c}-\mathfrak{a}-\mathfrak{b})}{\Gamma(\mathfrak{c}-\mathfrak{a})\Gamma(\mathfrak{c}-\mathfrak{b})}\,_{2}\mathsf{F}_{1}\left(\mathfrak{a},\mathfrak{b},\mathfrak{a}+\mathfrak{b}-\mathfrak{c}+1;1-x\right)\\ +\frac{\Gamma(\mathfrak{c})\Gamma(\mathfrak{a}+\mathfrak{b}-\mathfrak{c})}{\Gamma(\mathfrak{a})\Gamma(\mathfrak{b})}(1-x)^{\mathfrak{c}-\mathfrak{a}-\mathfrak{b}}\,_{2}\mathsf{F}_{1}\left(\mathfrak{c}-\mathfrak{a},\mathfrak{c}-\mathfrak{b},1+\mathfrak{c}-\mathfrak{a}-\mathfrak{b};1-x\right)\bigg{]}\\ +C_{2}\,x^{1-\mathfrak{c}}\bigg{[}\frac{\Gamma(2-\mathfrak{c})\Gamma(\mathfrak{c}-\mathfrak{a}-\mathfrak{b})}{\Gamma(1-\mathfrak{a})\Gamma(1-\mathfrak{b})}\,_{2}\mathsf{F}_{1}\left(\mathfrak{a}-\mathfrak{c}+1,\mathfrak{b}-\mathfrak{c}+1,\mathfrak{a}+\mathfrak{b}-\mathfrak{c}+1;1-x\right)\\ +\frac{\Gamma(2-\mathfrak{c})\Gamma(\mathfrak{a}+\mathfrak{b}-\mathfrak{c})}{\Gamma(\mathfrak{a}-\mathfrak{c}+1)\Gamma(\mathfrak{b}-\mathfrak{c}+1)}(1-x)^{\mathfrak{c}-\mathfrak{a}-\mathfrak{b}}\,_{2}\mathsf{F}_{1}\left(1-\mathfrak{a},1-\mathfrak{b},1+\mathfrak{c}-\mathfrak{a}-\mathfrak{b};1-x\right)\bigg{]}\,,\] (A.3) which holds identically. In several cases in the main text, we will require that \(u(x)\) is regular at \(x=1\). This fixes \(C_{2}\) in terms of \(C_{1}\) as \[C_{2}=-C_{1}\frac{\Gamma(\mathfrak{c})\Gamma(\mathfrak{a}-\mathfrak{c}+1)\Gamma(\mathfrak{b}-\mathfrak{c}+1)}{\Gamma(\mathfrak{a})\Gamma(\mathfrak{b})\Gamma(2-\mathfrak{c})}\,.\] (A.4) Plugging it back into (A.3) yields \[u(x)=C_{1}\frac{\Gamma(\mathfrak{c})\Gamma(\mathfrak{c}-\mathfrak{a}-\mathfrak{b})}{\Gamma(\mathfrak{c}-\mathfrak{a})\Gamma(\mathfrak{c}-\mathfrak{b})}\bigg{[}\,_{2}\mathsf{F}_{1}\left(\mathfrak{a},\mathfrak{b},\mathfrak{a}+\mathfrak{b}-\mathfrak{c}+1;1-x\right)\\ -\frac{\Gamma(\mathfrak{c}-\mathfrak{a})\Gamma(\mathfrak{c}-\mathfrak{b})\Gamma(\mathfrak{a}-\mathfrak{c}+1)\Gamma(\mathfrak{b}-\mathfrak{c}+1)}{\Gamma(\mathfrak{a})\Gamma(\mathfrak{b})\Gamma(1-\mathfrak{a})\Gamma(1-\mathfrak{b})}\,x^{1-\mathfrak{c}}\,_{2}\mathsf{F}_{1}\left(\mathfrak{a}-\mathfrak{c}+1,\mathfrak{b}-\mathfrak{c}+1,\mathfrak{a}+\mathfrak{b}-\mathfrak{c}+1;1-x\right)\bigg{]}\,.\] (A.5) **Degenerate case: integer \(\mathfrak{c}\).** Let us assume that in the hypergeometric equation (A.1) the parameters \(\mathfrak{a}\), \(\mathfrak{b}\), \(\mathfrak{c}-\mathfrak{a}-\mathfrak{b}\) are non-integer, while \(\mathfrak{c}\) is an integer number. In such a case, the fundamental solutions in (A.2) are no longer independent.
In fact, a degeneracy occurs and a basis of linearly independent hypergeometric solutions is given by \[u_{1}(x)={}_{2}\mathsf{F}_{1}(\mathfrak{a},\mathfrak{b},\mathfrak{c};x)\,,\qquad\qquad u_{2}(x)=\ln(x)\frac{{}_{2}\mathsf{F}_{1}(\mathfrak{a},\mathfrak{b},\mathfrak{c};x)}{\Gamma(\mathfrak{c})}+\mathsf{D}_{\mathfrak{a},\mathfrak{b},\mathfrak{c}}(x)\,,\] (A.6) where \[\begin{split}\mathsf{D}_{\mathfrak{a},\mathfrak{b},\mathfrak{c}}(x)&=\sum_{k=0}^{\infty}\left[\psi(\mathfrak{a}+k)+\psi(\mathfrak{b}+k)-\psi(k+1)-\psi(\mathfrak{c}+k)\right]\frac{(\mathfrak{a})_{k}(\mathfrak{b})_{k}}{(\mathfrak{c}-1+k)!k!}x^{k}\\ &\quad+\sum_{k=1}^{\mathfrak{c}-1}(-1)^{k-1}\frac{(k-1)!(\mathfrak{a})_{-k}(\mathfrak{b})_{-k}}{(\mathfrak{c}-1-k)!}x^{-k}\,,\end{split}\] (A.7) Here \((\cdot)_{k}\) is the Pochhammer symbol defined by \((c)_{k}\equiv\Gamma(c+k)/\Gamma(c)\) and \(\psi\) denotes the digamma function, \(\psi(z)\equiv\Gamma^{\prime}(z)/\Gamma(z)\). Now, the solution that is regular at the singular point \(x=1\) is \(u_{2}(x)\), while \(u_{1}(x)\) diverges. Expanding \(u_{2}(x)\) around \(x=0\) yields \[u_{2}(x\to 0)\approx\frac{\ln(x)}{\Gamma(\mathfrak{c})}+(-1)^{\mathfrak{c}}(\mathfrak{c}-2)!\frac{\Gamma(\mathfrak{a}-\mathfrak{c}+1)}{\Gamma(\mathfrak{a})}\frac{\Gamma(\mathfrak{b}-\mathfrak{c}+1)}{\Gamma(\mathfrak{b})}x^{1-\mathfrak{c}}+\ldots\] (A.8) For our purposes in the main text, it is useful to define from (A.8) the \(\ln(x)\)-dependent ratio between the first term and the coefficient of the piece that goes as \(x^{1-\mathfrak{c}}\): \[\lambda\equiv(-1)^{\mathfrak{c}}\frac{\Gamma(\mathfrak{a})\Gamma(\mathfrak{b})}{(\mathfrak{c}-2)!\,\Gamma(\mathfrak{c})\Gamma(\mathfrak{a}-\mathfrak{c}+1)\Gamma(\mathfrak{b}-\mathfrak{c}+1)}\ln\left(x\right)\,.\] (A.9)
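Both cases lend themselves to quick numerical checks (our sketch, assuming mpmath). In the non-degenerate case, choosing parameters with \(\mathrm{Re}(\mathfrak{c}-\mathfrak{a}-\mathfrak{b})<0\), so that each basis solution in (A.2) separately diverges at \(x=1\), the combination fixed by (A.4) remains finite:

```python
import mpmath as mp
mp.mp.dps = 25

a, b, c = mp.mpf('1.2'), mp.mpc('0.8', '0.5'), mp.mpf('0.4')   # Re(c - a - b) < 0
C2 = -(mp.gamma(c)*mp.gamma(a - c + 1)*mp.gamma(b - c + 1)
       / (mp.gamma(a)*mp.gamma(b)*mp.gamma(2 - c)))            # eq. (A.4), C1 = 1

def u(x):                                                      # eq. (A.2)
    return (mp.hyp2f1(a, b, c, x)
            + C2 * x**(1 - c) * mp.hyp2f1(a - c + 1, b - c + 1, 2 - c, x))

for s in ('0.9', '0.99', '0.999', '0.9999'):
    print(s, mp.nstr(u(mp.mpf(s)), 8))   # approaches a finite limit as x -> 1
```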
2306.02224
Auto-GPT for Online Decision Making: Benchmarks and Additional Opinions
Auto-GPT is an autonomous agent that leverages recent advancements in adapting Large Language Models (LLMs) for decision-making tasks. While there has been a growing interest in Auto-GPT styled agents, questions remain regarding the effectiveness and flexibility of Auto-GPT in solving real-world decision-making tasks. Its limited capability for real-world engagement and the absence of benchmarks contribute to these uncertainties. In this paper, we present a comprehensive benchmark study of Auto-GPT styled agents in decision-making tasks that simulate real-world scenarios. Our aim is to gain deeper insights into this problem and understand the adaptability of GPT-based agents. We compare the performance of popular LLMs such as GPT-4, GPT-3.5, Claude, and Vicuna in Auto-GPT styled decision-making tasks. Furthermore, we introduce the Additional Opinions algorithm, an easy and effective method that incorporates supervised/imitation-based learners into the Auto-GPT scheme. This approach enables lightweight supervised learning without requiring fine-tuning of the foundational LLMs. We demonstrate through careful baseline comparisons and ablation studies that the Additional Opinions algorithm significantly enhances performance in online decision-making benchmarks, including WebShop and ALFWorld.
Hui Yang, Sifu Yue, Yunzhong He
2023-06-04T01:07:20Z
http://arxiv.org/abs/2306.02224v1
# Auto-GPT for Online Decision Making: Benchmarks and Additional Opinions ###### Abstract. Auto-GPT is an autonomous agent that leverages recent advancements in adapting Large Language Models (LLMs) for decision-making tasks. While there has been a growing interest in Auto-GPT styled agents, questions remain regarding the effectiveness and flexibility of Auto-GPT in solving real-world decision-making tasks. Its limited capability for real-world engagement and the absence of benchmarks contribute to these uncertainties. In this paper, we present a comprehensive benchmark study of Auto-GPT styled agents in decision-making tasks that simulate real-world scenarios. Our aim is to gain deeper insights into this problem and understand the adaptability of GPT-based agents. We compare the performance of popular LLMs such as GPT-4, GPT-3.5, Claude, and Vicuna in Auto-GPT styled decision-making tasks. Furthermore, we introduce the Additional Opinions algorithm, an easy and effective method that incorporates supervised/imitation-based learners into the Auto-GPT scheme. This approach enables lightweight supervised learning without requiring fine-tuning of the foundational LLMs. We demonstrate through careful baseline comparisons and ablation studies that the Additional Opinions algorithm significantly enhances performance in online decision-making benchmarks, including WebShop and ALFWorld.
#### 2.1.1. Webshop
WebShop (Zhu et al., 2017) is a simulated e-commerce environment in which an agent must find and purchase a product based on its description, where a success requires all matches on the product itself, attributes, options and price together. We use the IL (Imitation Learning) method with a fine-tuned action policy component as the baseline model, and compare it with popular generative LLMs with Auto-GPT styled adaptation towards this web shopping task.
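To make that success criterion concrete, here is a minimal Python sketch of the episode-level check, assuming each purchase and goal is logged as a dictionary; the field names and the price-threshold reading are illustrative assumptions, not the official WebShop API.

```
# Hypothetical sketch of WebShop's all-match success criterion: the purchase
# only counts as a success if product, attributes, options and price all match.
def episode_success(purchased, goal):
    return (
        purchased["product_id"] == goal["product_id"]
        and set(goal["attributes"]) <= set(purchased["attributes"])
        and all(purchased["options"].get(k) == v
                for k, v in goal["options"].items())
        and purchased["price"] <= goal["price_upper_bound"]
    )
```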
#### 2.1.2. ALFWorld
ALFWorld (Liu et al., 2017) is a ground-breaking research environment that harmonizes the sophisticated, task-oriented language understanding of the ALFRED (Liu et al., 2017) dataset with the immersive interactive fiction of TextWorld (Chen et al., 2017). The ALFRED (Action Learning From Realistic Environments and Directives) benchmark offers a robust testing ground for models to learn to parse and carry out intricate tasks from language directives within a detailed, interactive 3D environment. Meanwhile, TextWorld serves as a dynamic learning playground for training and evaluating reinforcement learning agents on text-based games. By interweaving these two platforms, ALFWorld brings together the linguistic comprehension and decision-making challenges of text-based games with the physical interactions in a 3D environment, embodying a critical step towards melding natural language instructions with real-world physical interactions. The environment contains over 25,000 unique, procedurally-generated tasks across photorealistic settings in various areas such as kitchens, living rooms, and bedrooms. These tasks require complex problem-solving skills and a thorough understanding of both language and environment, creating an elevated benchmark for AI performance. As much as ALFWorld presents a challenging yet fertile testbed for reinforcement learning, natural language understanding, and interactive decision-making research, we also start the evaluation process with the DAgger (Liu et al., 2017) IL (Imitation Learning) agent against the unseen dataset as the baseline. Then we benchmark it against prevailing generative language learning models that utilize an Auto-GPT style approach, with these models only being adjusted for the ALFWorld task with tool demonstrations.

### Prompt design
We adapt Auto-GPT for both tasks without extensive tuning, simply by providing the task requirements or questions directly as Auto-GPT's goal. For instance, we input sentences such as _'I want to purchase a folding storage box that is easy to install, made of faux leather, and has dimensions of 60x40x40cm'_. To facilitate Auto-GPT's understanding of available actions, we represent each action as a tool. It is worth noting that we observed poor performance when using tool instructions without examples in a sermon-style manner. However, with just a few examples, the performance improved significantly. Therefore, we include one to three few-shot examples for tool demonstrations, to harness the in-context learning abilities of LLMs.

### Considering additional opinions
We further engineer changes to the Auto-GPT workflow to take additional opinions from external expert models into consideration. Specifically, we sample top k opinions from an expert model in Auto-GPT's decision phase, and present those opinions in the context section of the prompt towards more informed decisions. Details of the modified Auto-GPT workflow are outlined in Algorithm 1. In this work we simply use readily available IL models for both tasks as the external expert. The prompt to suggest additional opinions to the LLM follows the template _'Here's one(a few) suggestion(s) for the command: -action with parameters- Please use this suggestion as a reference and make your own judgement.'_

```
Require: o_i: additional opinion sampled from expert model.
         P_o(o_i): a prompt template wrapping top k o_i as suggestions to LLM.
         P_h: the regular prompt as a human to trigger LLM response.
         Add(x): Add x into Auto-GPT context.
 1: Initialize Auto-GPT
 2: for each Auto-GPT step do
        Add(Initial Goal and Instruction Prompt)
 3:     if sampled o_i from expert models exists then
 4:         Add(P_o(o_i)) for i < k
 5:     else
 6:         Add(P_h)
 7:     end if
 8:     Auto-GPT runs with the prompt added.
 9: end for
10: return result
```
**Algorithm 1** Additional Opinion Auto-GPT Algorithm.
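As a concrete reading of Algorithm 1, the following is a minimal Python sketch of one decision step with Additional Opinions. The names `expert.sample_topk` and `llm_decide` are stand-ins for an IL model and an LLM call, and the prompt wording around the paper's template is an illustrative assumption, not the authors' implementation.

```
# Hypothetical sketch of one Auto-GPT step with Additional Opinions (Algorithm 1).
def opinion_prompt(opinions):
    """Wrap top-k expert suggestions, following the paper's P_o template."""
    lines = "\n".join(f"- {action} {params}" for action, params in opinions)
    return ("Here's a few suggestions for the command:\n" + lines +
            "\nPlease use this suggestion as a reference and make your own judgement.")

def auto_gpt_step(context, expert, observation, llm_decide, k=5):
    context.append("Initial Goal and Instruction Prompt")
    opinions = expert.sample_topk(observation, k)       # top-k o_i, possibly empty
    context.append(opinion_prompt(opinions) if opinions
                   else "Determine the next command.")  # regular human prompt P_h
    return llm_decide(context)                          # next tool call chosen by the LLM
```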
## 3. Experiments
### Experimental setup
#### 3.1.1. Webshop
We utilized the original WebShop server setup from the GitHub Repository provided in the original paper (Zhu et al., 2017). For testing, we adhered to a fixed order of iteration, selecting the first 50 instructions. This limited test set is a trade-off due to cost and computational efficiency concerns, especially running GPT4. Imitation Learning (IL) models, both with and without image embeddings, were used to ensure a fair comparison with large language models, given that the latter lack image access. The 'additional opinions' provided to Auto-GPT consistently used the superior IL model with image access. The temperature was set to 0.01 across all models to reduce randomness. Evaluation of IL models and Auto-GPT + IL variants followed a rigorous protocol to mitigate sampling randomness and assess small observed variations, respectively. In the case of Auto-GPT alone, we performed a single run due to the minimal variations observed as a result of the low temperature setting. Nonetheless, small variations were noticed in the Auto-GPT + IL variants, prompting us to conduct two runs and use their average for analysis.

Figure 1. One step of Auto-GPT with Additional Opinions. Here Additional Opinions are generated by other expert models - IL models here, but extensible to any other models like Rule or other LLMs.

In the original research paper, a supervised Imitation Learning (IL) model was trained to aid the agent in making optimal decisions at each stage, with the ultimate goal of executing the correct purchase based on a given human instruction. The system was structured around four principal tasks: (1) generating a quality search query based on the original instruction, (2) selecting an item to browse from a variety of items and their titles, (3) deciding whether to check 'Description', 'Features' or 'Reviews' in the product detail page while also making the correct product option choices such as size, color etc., and (4) finalizing the purchase. Two IL models were utilized: a BART model for generating the most effective search query (task type 1) and a BERT model to make the correct choices for task types 2 to 4. The Rule based model always directs to purchase the first item after search.

#### 3.1.2. ALFWorld
Adopting a similar approach to the Webshop experiment, we leverage the existing ALFWorld task simulation environment, including the unseen set of 134 games that was utilized for benchmarking in the original research paper. From the original study, we specifically incorporate the task agent (BUTLER::BRAIN) from the Imitation Learning (IL) model, excluding the visual input agent (BUTLER::VISION). The training of the text agent is performed using the DAgger (Dataset Aggregation) approach within an imitation learning context using expert demonstrations. To manage task execution failures, Beam Search is deployed to generate alternate action sentences, typically opting for the best sequence of words greedily for efficiency. The application of Beam Search primarily aims at enhancing the action sentence generation during failures rather than optimizing over embodied interactions. Echoing the Webshop experiment, to ensure a fair comparison with large language models (LLMs), we furnish the 'additional opinions' from the IL model to Auto-GPT.
To control randomness, we maintain a temperature setting of 0.01 for all LLMs to minimize noise and strictly adhere to the original evaluation protocol to further mitigate randomness.

### Baseline comparison
#### 3.2.1. Webshop
Table 1 illustrates the results of running the first 50 test cases across the original IL models and the Auto-GPT agent using different large language models (LLMs).

Table 1. Updated Webshop Model Performance Metrics

| Model | Success Rate | Reward | Precision | Purchase Rate |
| --- | --- | --- | --- | --- |
| **Base Models** | | | | |
| Rule | 0.060 | 44.589 | 0.060 | 1.000 |
| IL w/o. Image | 0.213 | 56.056 | 0.213 | 1.000 |
| IL | 0.227 | 57.689 | 0.227 | 1.000 |
| **Auto-GPT(Claude) Variants** | | | | |
| Auto-GPT(Claude) | 0.140 | 47.617 | 0.146 | 0.960 |
| Auto-GPT(Claude) + IL | 0.240 | 48.600 | 0.270 | 0.890 |
| Auto-GPT(Claude) + IL(top5) | 0.220 | 52.010 | 0.229 | 0.960 |
| **Auto-GPT(GPT3.5) Variants** | | | | |
| Auto-GPT(GPT3.5) | 0.120 | 43.833 | 0.140 | 0.860 |
| Auto-GPT(GPT3.5) + IL | 0.200 | 47.717 | 0.241 | 0.830 |
| Auto-GPT(GPT3.5) + IL(top5) | 0.230 | 52.827 | 0.279 | 0.820 |
| AutoGPT(GPT3.5) + Random | 0.060 | 22.333 | 0.136 | 0.440 |
| **Auto-GPT(GPT4) Variants** | | | | |
| Auto-GPT(GPT4) | 0.240 | 46.133 | 0.353 | 0.680 |
| Auto-GPT(GPT4) + IL | 0.300 | 56.233 | 0.361 | 0.830 |
| Auto-GPT(GPT4) + IL(top5) | **0.320** | **61.550** | **0.372** | 0.860 |
| **Auto-GPT(Vicuna)** | | | | |
| Auto-GPT(Vicuna) | 0.000 | 0.000 | 0.000 | 0.000 |

The original IL model, lacking image input, only achieved a modest success rate, illustrating the complexity of the task. IL models incorporating image input as embeddings performed more favorably. Auto-GPT agents utilizing GPT3.5 or Claude alone performed worse than the original IL models, with or without images. However, GPT4 by itself exhibited superior performance compared to both IL models. A noteworthy point is that IL models display better rewards than Auto-GPT baselines without IL due to a higher purchase rate, and an allowance for more steps (100 vs. 20). However, the reward metric may not necessarily serve as the best end measurement, particularly considering real-world shopping scenarios where an agent refraining from making a purchase could be preferable to it making a purchase that doesn't entirely meet the requirements. In cases where a purchase is made, if we calculate precision solely based on these instances, Auto-GPT (GPT4), with or without IL, demonstrates significantly higher precision compared to any other variants. Further details on GPT4 considering the additional opinion can be found in Appendix 1.
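The purchase-conditional precision just described can be computed along these lines; this is a small sketch under the assumption that each episode records whether a purchase happened and whether it fully met the instruction (field names are illustrative).

```
# Hypothetical sketch: precision computed only over episodes that ended in a
# purchase, as opposed to success rate over all episodes.
def purchase_conditional_precision(episodes):
    purchased = [e for e in episodes if e["purchased"]]
    if not purchased:
        return 0.0
    return sum(e["success"] for e in purchased) / len(purchased)

# Example: 50 episodes, 34 purchases, 12 fully correct -> precision ~ 0.353,
# matching the pattern where Auto-GPT(GPT4) purchases less but more precisely.
```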
#### 3.2.2. ALFWorld
Table 2 presents the results of our ALFWorld experiment involving the Imitation Learning (IL) model and Large Language Models (LLMs) in an AutoGPT configuration, evaluated across the unseen set of 134 data points from ALFWorld.

Table 2. Updated ALFWorld Model Performance Metrics

| Model | Success Rate | Reward | Precision | Completion Rate |
| --- | --- | --- | --- | --- |
| **Base Models, average of three** | | | | |
| IL w/o. Beam Search | 0.179 | 24 | 1.000 | 0.179 |
| IL | 0.306 | 41 | 1.000 | 0.441 |
| **Auto-GPT(Claude) Variants** | | | | |
| Auto-GPT(Claude) | 0.082 | 11 | 0.104 | 0.791 |
| Auto-GPT(Claude) + IL | 0.090 | 12 | 0.130 | 0.687 |
| **Auto-GPT(GPT3.5) Variants** | | | | |
| Auto-GPT(GPT3.5) | 0.075 | 10 | 0.078 | 0.866 |
| Auto-GPT(GPT3.5) + IL | 0.030 | 4 | 0.048 | 0.470 |
| **Auto-GPT(GPT4) Variants** | | | | |
| Auto-GPT(GPT4) | 0.485 | 65 | 0.628 | 0.582 |
| Auto-GPT(GPT4) + IL | **0.515** | **69** | **0.789** | 0.530 |
| **Auto-GPT(Vicuna)** | | | | |
| Auto-GPT(Vicuna) | 0.000 | 0.000 | 0.000 | 0.000 |

Notably, the IL model with Beam Search significantly outperformed the version without Beam Search, as indicated by a considerable decrease in the success rate from 0.306 to 0.179 when Beam Search was omitted. While Claude and GPT3.5 operating in the AutoGPT setting fell short of surpassing the IL model, GPT4 markedly exceeded the IL model's performance, irrespective of the use of Beam Search, despite the disadvantage of steps allowed (35 vs. 50).

We hypothesize that the comparatively lower performance of Claude and GPT3.5 can be attributed to their lack of full episode demonstrations, and we posit that the introduction of more examples might enhance their performance. This is particularly pertinent in tasks where the description carries implications, such as "heat the mug and put the mug on the countertop," which presupposes the agent's understanding of how to heat the mug. Both Claude and GPT3.5 struggled with deriving such implicit knowledge. However, we anticipate that supplementing the context with more examples would likely enhance their performance in such complex tasks.

### LLM comparison
#### 3.3.1. Webshop
Across all LLMs, Claude and GPT3.5 alone (0.140 vs. 0.120 success rate) perform similarly to each other in the AutoGPT setting. GPT4 alone performs the best (0.24 success rate), and Vicuna was unable to generate formatted responses and is thus reported as 0 here. Another perspective to consider is the speed of calling these LLM APIs: Claude is faster than GPT3.5 and much faster than GPT4. We suggest that Claude could be a great solution considering the performance-latency tradeoff for real-world problems.

#### 3.3.2. ALFWorld
In line with our observations from the Webshop experiment, the success rates for Claude and GPT3.5, when applied in the AutoGPT setting, were relatively low, at 0.082 and 0.075, respectively. Among the models, GPT4 demonstrated superior performance, achieving the highest success rate of 0.485 and a precision as high as 0.628, surpassing all other models including the Imitation Learning model. Despite Claude's advantage in speed over GPT3.5 and GPT4, its performance was markedly inferior to GPT4. Taking into account these observations, we recommend the use of GPT4 given its performance supremacy over the other models under consideration.

### Additional opinions
#### 3.4.1. Webshop
A novel paradigm emerged from our study, amalgamating Large Language Models (LLMs) with Expert models. Rather than solely relying on Expert models and their generated results, we propose an integrated approach. Firstly, the top k Additional Opinions are sampled from Expert models. Subsequently, these opinions are presented to the LLMs, prompting them to consider these views and make the final decision. This methodology particularly resonates with GPT4. Even though the underlying mechanism is still elusive and it is not yet clear if it mimics human decision-making processes (Shi et al., 2017), the effectiveness of this approach is tangible in the experimental outcomes. Our working hypothesis postulates that GPT4 exhibits inherent biases when making autonomous decisions.
However, by introducing opinions from various weak learners, GPT4 can enhance its performance. Considering these diverse viewpoints may allow GPT4 to mitigate its own biases and overcome inherent limitations. Interestingly, the inclusion of a single IL choice as an additional opinion in the context resulted in improved performance for all LLMs. This performance boost is particularly noteworthy for GPT4, because GPT4 by itself outperforms the additional opinions provided by IL models, while still benefiting from this method. To explore the impact of single vs. multiple additional opinions, we also tested sampled top five additional opinions vs. one (see Table 1), and observed GPT4 with top 5 additional opinions reached the best Success Rate, Rewards and Precision across all groups (Table 1). From an intelligent agent perspective, awareness of an additional opinion, or even multiple distinct opinions, can be beneficial, as argued in (Zhou et al., 2018). This suggests that providing LLMs with one or a few additional opinions of reasonable quality can serve as a reference, resulting in a more informed decision. Out of curiosity, we also conducted one ablation study by providing one random additional opinion to GPT3.5. We observed the worst performance - the lowest reward (22.333) and an equivalently low success rate (0.060) as the Rule based model.

In Figure 2, we observe that Language Learning Models (LLMs) predominantly take in the additional opinion suggested by expert models, with GPT4 exhibiting the highest standard and the greatest proportion of disagreements. We consider any match among the top 5 additional opinions as being taken into account by the LLMs. Intriguingly, for GPT4, the ratio of considered opinions escalates from 0.549 to 0.602 as the number of opinions increases from 1 to 5. This trend could partially elucidate the disparity in the final outcomes of success rates.

Figure 2. For Webshop, the ratios of LLMs considering or disagreeing with additional opinion provided by expert models. For the top 5 scenario, we consider it as an agreement if any additional opinion matches.
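The agreement statistics above can be reproduced with a computation along these lines; a minimal sketch, assuming each step logs the LLM's chosen action and the expert's top-k suggestions (the log format is an assumption).

```
# Hypothetical sketch: a step counts as "considered" if the LLM's chosen action
# matches any of the expert's top-k suggestions (the top-5 rule from Figure 2).
def agreement_ratio(steps, k=5):
    considered = sum(
        1 for step in steps
        if step["llm_action"] in step["expert_topk"][:k]
    )
    return considered / len(steps) if steps else 0.0
```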
#### 3.4.2. ALFWorld
Taking inspiration from our previous Webshop experiment, we employed a similar paradigm to combine Large Language Models (LLMs) with Expert models in the context of the ALFWorld dataset. The fundamental premise remained analogous, but due to resource constraints and the extensive size of ALFWorld (134 unseen games), we confined our initial trial to a top 1 Imitation Learning (IL) opinion. Our hypothesis received empirical confirmation from the data observed for GPT4 and Claude, as detailed in Table 2. The success rate showed slight improvements, although the original DAgger agent provided by ALFWorld was less effective for general tasks compared to specific ones, as demonstrated by the task-specific performance rates listed in Table 4 of the ALFWorld paper (Li et al., 2018): across a disparate performance range, the DAgger agent excelled in the 'Cool and Place' task, achieving a remarkable 1.00 success rate, while for the 'Pick Two and Place' task its proficiency was considerably lower, with a success rate of only 0.24.

Our investigation further revealed a substantial variance in efficiency concerning the number of steps taken by the ALFWorld IL model to complete tasks. This observation highlights an additional layer of complexity, suggesting that the IL's performance is variable not only in terms of task success rates, but also in the practical efficiency of task execution. This large variability in step numbers indicates that while the IL model may excel in some tasks, it can be markedly inefficient in others, further reinforcing the inherent limitations and variability of IL models in complex task simulations like ALFWorld. In terms of task completion, the IL's outcomes are governed by real-time rewards. As such, all completed tasks have to be executed correctly, as evidenced by the 1.00 precision for both IL with and without Beam Search. Conversely, the tasks left incomplete were typically ones where the agent had exhausted the available steps, with most of these marked by repeated and meaningless actions as presented in Appendix 2.

One of the standout observations from our study was GPT4's ability to integrate past memories, task descriptions, and environmental data to discern the pertinence of suggestions from the IL model. Even amidst noise, GPT4 demonstrated a robust capacity to differentiate beneficial from irrelevant advice, often confidently disregarding suggestions that were not beneficial, as illustrated in Appendix 3. Moreover, GPT4 was even able to extract value from the initial part of a repetitive action pattern suggested by the IL, underscoring its exceptional ability to distill useful information. Contrastingly, GPT3.5 was easily misled by irrelevant suggestions and frequently became entangled in the repetitive advice offered by the IL. Indeed, such confusion even compromised GPT3.5's capability to perform tasks that it could otherwise successfully accomplish independently, as detailed in Appendix 4. This highlights a stark divergence from the patterns observed with GPT4 in the ALFWorld context.

In a compelling revelation, this study demonstrated a marked difference between the contexts of the Webshop and ALFWorld experiments. The beneficial guidance provided by the Webshop's IL model effectively condensed the choice spaces for LLMs, contrasted with the repetitive and misleading advice offered by the IL in the ALFWorld context. Interesting discrepancies were also observed in how the LLMs disagreed with the IL's recommendations: Claude registered a disagreement rate of 0.814, GPT3.5 of 0.769, and GPT4, leading the pack, registered a rate of 0.854. This suggests an inherent capability within LLMs to filter out misleading suggestions.
However, the extent to which this disagreement improved or impeded performance appeared to be context-dependent, highlighting the importance of discernment in processing the IL's advice. Claude and especially GPT4 showed remarkable adeptness at avoiding the pitfalls of misleading and repetitive advice. By contrast, GPT3.5 exhibited a clear shortfall in this respect, a performance echoed by its pairing with a random action in our Webshop ablation study. This underscores the importance of context when integrating IL models with LLMs and signals the need for careful evaluation when dealing with potentially misleading input, especially on an LLM like GPT3.5 which can get easily confused.

Figure 3. For ALFWorld, the ratios of LLMs considering or disagreeing with additional opinion provided by expert models. For the top 1 case, we only deem it an agreement if the exact opinion matches.

### Discussions
Initially, Auto-GPT was conceptualized as an experimental idea rather than a robust workflow suitable for real-world applications. However, our research demonstrates otherwise. Auto-GPT not only proves its potential for practical use but also outperforms supervised state-of-the-art IL models with GPT4, signifying a shift in perspective towards this innovative approach. In the current discourse, we posit that this additional opinion approach can readily find widespread adoption across diverse industries, given the existing prevalence of expert models such as recommendation systems and traditional natural language processing (NLP) services, inclusive of text classification models and the like. An immediate application for this methodology can be envisaged in leveraging LLMs for making definitive determinations and giving explanations regarding the prioritization of items, such as movies or songs, to be displayed to the user. This is achievable by providing the selected top-k outputs derived from a supervised recommendation model to LLMs as Additional Opinions, as sketched below. It is crucial to note, however, that the two tasks we have chosen to benchmark in this research do not fully encapsulate the vast array of potential real-world scenarios. They serve merely as a starting point for the exploration of this idea. This is the inaugural instance where the concept of adapting Auto-GPT to handle complex tasks by introducing Additional Opinions has been proposed. This innovative approach opens new avenues for further research and development, potentially expanding the realm of practical applications for AI models and significantly impacting our understanding of complex decision-making mechanisms.
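As an illustration of that recommendation-system application, here is a minimal sketch of wrapping a recommender's top-k outputs as Additional Opinions in a re-ranking prompt; the function name and prompt wording are illustrative assumptions, not a prescribed interface.

```
# Hypothetical sketch: feed a recommender's top-k items to an LLM as
# Additional Opinions, asking it to make (and explain) the final ranking.
def rerank_prompt(user_profile, topk_items):
    suggestions = "\n".join(f"- {item}" for item in topk_items)
    return (
        f"User profile: {user_profile}\n"
        "Here are a few suggestions from a recommendation model:\n"
        f"{suggestions}\n"
        "Please use these suggestions as a reference, decide the final order "
        "of items to display, and briefly explain your choice."
    )
```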
## 4. Related Work
Foundation models trained on self-supervision tasks have shown significant success in downstream tasks, particularly in few-shot and zero-shot settings (Dosovitskiy et al., 2017; Chen et al., 2018; Chen et al., 2018). More recently, generative pre-trained foundation models have demonstrated impressive in-context learning abilities, allowing them to tackle decision-making tasks that require logic reasoning (Chen et al., 2018) and/or interactions with external APIs (Chen et al., 2018; Chen et al., 2018; Li et al., 2019). However, adapting LLMs for decision-making tasks often involves non-trivial prompt design, memory retrieval mechanisms to dynamically construct the agent's context (Li et al., 2019; Li et al., 2019; Li et al., 2019), and sometimes model fine-tuning (Chen et al., 2018) to enhance its decision-making abilities. Several techniques have been proposed to adapt Large Language Models (LLMs) for improved planning and reasoning. These include methods for enabling explicit Chain of Thought (CoT) thinking processes (Li et al., 2019), as well as prompting and decoding techniques aimed at enhancing the self-consistency of LLMs (Chen et al., 2018; Li et al., 2019). However, most of these techniques primarily focus on offline reasoning tasks that can be planned ahead, while their implications in online decision-making scenarios are rarely discussed.

## 5. Conclusion
Our experimental results highlight the successful adaptation of the Auto-GPT styled agent to complex online decision-making tasks through straightforward prompt design, surpassing IL-based baseline models specifically designed for these tasks. Among the foundational LLMs powering Auto-GPT, GPT-4 demonstrates superior performance. Additionally, we introduce an innovative strategy of incorporating additional opinions from external expert models, further enhancing the decision-making capabilities of Auto-GPT styled agents, particularly benefiting GPT-4. Our Additional Opinions algorithm provides a lightweight supervised training approach for Auto-GPT styled agents, enabling improved performance without requiring extensive fine-tuning of the LLMs. We demonstrate the effectiveness and adaptability of this approach, especially for tasks with easily collectible training data for action policy. The code of this work is shared in: [https://github.com/younghuman/LLMAgent](https://github.com/younghuman/LLMAgent)
2304.13550
Turning block-sequential automata networks into smaller parallel networks with isomorphic limit dynamics
We state an algorithm that, given an automata network and a block-sequential update schedule, produces an automata network of the same size or smaller with the same limit dynamics under the parallel update schedule. Then, we focus on the family of automata cycles which share a unique path of automata, called tangential cycles, and show that a restriction of our algorithm allows us to reduce any instance of these networks under a block-sequential update schedule into a smaller parallel network of the family and to characterize the number of reductions operated while conserving their limit dynamics. We also show that any tangential cycles reduced by our main algorithm are transformed into a network whose size is that of the largest cycle of the initial network. We end by showing that the restricted algorithm allows the direct characterization of block-sequential double cycles as parallel ones.
Pacôme Perrotin, Sylvain Sené
2023-04-26T13:19:06Z
http://arxiv.org/abs/2304.13550v1
# Turning block-sequential automata networks into smaller parallel networks with isomorphic limit dynamics

###### Abstract
We state an algorithm that, given an automata network and a block-sequential update schedule, produces an automata network of the same size or smaller with the same limit dynamics under the parallel update schedule. Then, we focus on the family of automata cycles which share a unique path of automata, called tangential cycles, and show that a restriction of our algorithm allows us to reduce any instance of these networks under a block-sequential update schedule into a smaller parallel network of the family and to characterize the number of reductions operated while conserving their limit dynamics. We also show that any tangential cycles reduced by our main algorithm are transformed into a network whose size is that of the largest cycle of the initial network. We end by showing that the restricted algorithm allows the direct characterization of block-sequential double cycles as parallel ones.

## 1 Introduction
Automata networks are classically used to model gene regulatory networks [9, 16][10, 2, 4]. In these applications the dynamics of automata networks help to understand how the biological systems might evolve. As such, there is motivation in improving our computation and characterization of automata network dynamics. This problem is a difficult one to approach considering the vast diversity of network structures, local functions and update schedules that are studied. Rather than considering the problem in general, we look for families or properties which allow for simpler dynamics that we might be able to characterize [7, 8]. We are interested in studying the limit dynamics of automata networks, that is, the limit cycles and fixed points that they adopt over time, notably since these asymptotic behaviors of the underlying dynamical systems may correspond to real biological phenomenologies such as the genetic expression patterns of cellular types, tissues, or paces. More precisely, we are less interested in the possible configurations themselves than in the information that is being transferred and computed in networks over time. As such, given families of networks, one of our objectives is to count the fixed points and limit cycles they possess. In this paper, we provide an algorithm that, given an automata network and a block-sequential update schedule, produces an automata network of the same size or smaller with isomorphic limit dynamics under the parallel update schedule. After definitions in Section 2, this algorithm is detailed in Section 3. In Section 4, the feasibility of the algorithm on _Tangential Cycles_ (TC) is studied, a TC being a set of cycles that intersect on a shared path of automata. _Why focus on TCs?_ Cycles are fundamental retroactive patterns that are necessary to observe complex dynamics [14]. They are present in many biological regulation networks [17] and are perfectly understood in isolation [6, 12]. In theory, cycles generate an exponential number of limit cycles, which is inconsistent with the observed behavior of biological systems [9]. The only way to reduce the number of limit cycles is to constrain the degrees of freedom induced by isolated cycles, which can only be done by intersecting cycles from the purely structural standpoint. This leads us naturally to TCs, as a simple intersection case.
Double cycles (intersections of two isolated cycles) in particular are the largest family of intersecting cycles for which a complete characterization exists [12, 5]; the present paper generalizes this result to block-sequential update schedules. Moreover, from the biological standpoint, double cycles are also observed in biological regulation networks, in which they seem to serve as inhibitors of their limit behavior [3].

## 2 Definitions
Let \(\Sigma\) be a finite alphabet. We denote by \(\Sigma^{n}\) the set of all words of size \(n\) over the alphabet \(\Sigma\), such that for all \(1\leq i\leq n\) and \(x\in\Sigma^{n}\), \(x_{i}\) is the \(i\)th letter of that word. An _automata network (AN)_ is a function \(F:\Sigma^{n}\to\Sigma^{n}\), where \(n\) is the size of the network. A configuration of \(F\) is a word over \(\Sigma^{n}\). The global function \(F\) can be divided into functions that are local to each automaton: \(\forall k,f_{k}:\Sigma^{n}\to\Sigma\), and the global function can be redefined as the parallel application of every local function: \(\forall 1\leq i\leq n,F(x)_{i}=f_{i}(x)\). For convenience, the set of automata \(\{1,\ldots,n\}\) is denoted by \(S\), and will sometimes be considered as a set of letters rather than numbers. For questions of complexity, we consider that _local functions are always encoded as circuits_. For \((i,j)\) any pair of automata, \(i\) is said to _influence_ \(j\) if and only if there exists a configuration \(x\in\Sigma^{n}\) in which a state change of \(i\) changes the state of \(f_{j}(x)\). More formally, \(i\) influences \(j\) if and only if there exist \(x,x^{\prime}\in\Sigma^{n}\) such that \(\forall k,x_{k}=x^{\prime}_{k}\Leftrightarrow k\neq i\) and \(f_{j}(x)\neq f_{j}(x^{\prime})\). It is common to represent an automata network \(F\) as the digraph with its automata as nodes so that \((i,j)\) is an edge if and only if \(i\) influences \(j\). This digraph is called the _interaction digraph_ and is denoted by \(G_{I}(F)=(S,E)\), with \(E\) the set of edges. The automata network described in Example 1 is illustrated as an interaction digraph in Figure 1.

Example 1: Let \(F:\mathbb{B}^{3}\to\mathbb{B}^{3}\) be an AN with local functions
\[f_{a}(x)=\neg x_{b}\lor x_{c},\qquad f_{b}(x)=x_{a},\qquad f_{c}(x)=\neg x_{b}.\]
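To make these definitions concrete, here is a minimal Python sketch, assuming Boolean states and local functions given as plain callables, that encodes the network of Example 1 and brute-forces the influence relation to recover the edges of its interaction digraph (all names are illustrative).

```
from itertools import product

n = 3
A, B, C = 0, 1, 2  # automata a, b, c
f = {
    A: lambda x: (not x[B]) or x[C],   # f_a(x) = not x_b or x_c
    B: lambda x: x[A],                 # f_b(x) = x_a
    C: lambda x: not x[B],             # f_c(x) = not x_b
}

def influences(i, j):
    # i influences j iff flipping x_i can change f_j(x) for some configuration x
    for x in product([False, True], repeat=n):
        y = list(x); y[i] = not y[i]
        if f[j](x) != f[j](tuple(y)):
            return True
    return False

edges = [(i, j) for i in range(n) for j in range(n) if influences(i, j)]
print(edges)  # [(0, 1), (1, 0), (1, 2), (2, 0)]: the edges of G_I(F)
```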
An _update schedule_ is an infinite sequence of non-empty subsets of \(S\), called blocks. Such a sequence describes in which order the local functions are to be applied to update the network, and there are uncountably infinitely many of them. A _periodic update schedule_ is an infinite periodic sequence of non-empty subsets of \(S\), which we directly define by its period. The application of an update schedule on a configuration of a network is the parallel application of the local functions of the subsets in the sequence, each subset being applied one after the other. For example, the sequence \(\pi=(S)\) is the parallel update schedule. It is periodic, and its application on a configuration is indistinguishable from the application of \(F\). The sequence \((\{1\},\ldots,\{n\})\) is also a periodic update schedule, and implies the application of every local function in order, one at a time. Formally, the application of a periodic update schedule \(\Delta\) to a configuration \(x\in\Sigma^{n}\) is denoted by the function \(F_{\Delta}\), and is defined as the composition of the applications of the local functions in the order specified by \(\Delta\). For any subset \(X\subseteq S\), updating \(X\) into \(x\) is denoted by \(F_{X}(x)\) and is defined as
\[\forall i\in S,\ F_{X}(x)_{i}=\left\{\begin{array}{ll}f_{i}(x)&\mbox{if }i\in X\\ x_{i}&\mbox{otherwise}\end{array}\right..\]
Example 2 provides an example of the execution of the network detailed in Example 1 under some non-trivial update schedule.

Example 2: Let \(\Delta=(\{b,c\},\{a\},\{a,b\})\) be a periodic update schedule, and let \(x=000\) be an initial configuration. For \(F\) the AN detailed in Example 1, we have that:
\[F_{\Delta}(000)=(F_{\{a,b\}}\circ F_{\{a\}}\circ F_{\{b,c\}})(000)=(F_{\{a,b\}}\circ F_{\{a\}})(001)=F_{\{a,b\}}(101)=111.\]

A _block-sequential update schedule_ is a periodic update schedule where all the subsets in a period form a partition of \(S\); that is, every automaton is updated exactly once in the sequence. If every subset in the sequence is of cardinality 1, the update schedule is said to be sequential. For any AN with automata \(S\), both the parallel update schedule and the \(|S|!\) different sequential update schedules are block-sequential. Block-sequential update schedules are _fair_ update schedules, in the sense that applying one updates each automaton the same number of times. The application of a block-sequential update schedule on an AN can be otherwise represented as an update digraph, introduced in [15, 1].

Figure 1: Interaction digraph of the AN detailed in Example 1.

For \(F\) an AN and \(\Delta\) a block-sequential update schedule, the _update digraph_ of \(F_{\Delta}\), denoted by \(G_{U}(F_{\Delta})\), is an annotation of the network's interaction digraph, where any edge \((u,v)\) is annotated with \(<\) if \(u\) is updated strictly before \(v\) in \(\Delta\), and with \(\geqslant\) otherwise. An update digraph of the AN detailed in Example 1 is illustrated in Figure 2. Given an automata network \(F\) and a periodic update schedule \(\Delta\), we define the _dynamics_ of \(F_{\Delta}\) as the digraph with all configurations \(x\in\Sigma^{n}\) as nodes, so that \((x,y)\) is an edge of the dynamics if and only if \(F_{\Delta}(x)=y\). We call _limit cycle of length \(k\)_ any sequence of unique configurations \((x_{1},x_{2},\ldots,x_{k})\) such that \(F_{\Delta}(x_{i})=x_{i+1}\) for all \(1\leq i<k\), and \(F_{\Delta}(x_{k})=x_{1}\). A limit cycle of length one is called a _fixed point_. The _limit dynamics_ of \(F_{\Delta}\) is the subgraph which contains only the limit cycles and the fixed points of the dynamics. The limit dynamics of the network defined in Example 1 are emphasized in Figure 3. Since the dynamics of a network is a graph that is exponential in size relative to its number of automata, naively computing the limit dynamics of a family of networks is a computationally expensive process.
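As an illustration, the following sketch, reusing `f`, `A`, `B`, `C` and `n` from the previous sketch, applies the periodic update schedule of Example 2 and enumerates the limit dynamics by iterating \(F_{\Delta}\) from every configuration; the names are illustrative.

```
# Minimal sketch: apply a periodic update schedule block by block (F_Delta),
# then find the limit dynamics by iterating until configurations repeat.
def update_block(x, block):
    # F_X(x): update the automata in `block` in parallel, freeze the others
    return tuple(f[i](x) if i in block else x[i] for i in range(n))

def F_delta(x, schedule):
    for block in schedule:
        x = update_block(x, block)
    return x

delta = [{B, C}, {A}, {A, B}]
print(F_delta((False,) * 3, delta))  # (True, True, True): F_delta(000) = 111

def limit_configs(schedule):
    # Configurations that reappear when iterating F_delta: the limit dynamics
    recurrent = set()
    for x in product([False, True], repeat=n):
        seen = []
        while x not in seen:
            seen.append(x)
            x = F_delta(x, schedule)
        recurrent.update(seen[seen.index(x):])  # the limit cycle reached
    return recurrent
```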
## 3 The algorithm
In this section, we look at an algorithm that can turn any automata network \(F\) with a block-sequential update schedule \(\Delta\) into another automata network \(F^{\prime}\), such that the limit dynamics of \(F_{\Delta}\) stays isomorphic to the limit dynamics of \(F^{\prime}\) under the parallel update schedule \(\pi\). Furthermore, the size of \(F^{\prime}\) will always be the size of \(F\), _or less_.

Figure 3: Two dynamics of the AN \(F\) detailed in Example 1. On the left, the dynamics of \(F\) under the parallel update schedule. On the right, the dynamics of \(F\) under the update schedule \(\Delta=(\{a\},\{b\},\{c\})\). The limit dynamics are depicted with bold arrows.

This algorithm is built from two parts: first, we parallelize the network thanks to a known algorithm in the folklore of automata networks. Second, we remove automata from the network based on redundancies created in the first step. First, let us state the usual algorithm that, given an automata network \(F\) and a block-sequential update schedule \(\Delta\), provides a new automata network \(F^{\prime}\) defined on the same set of automata, such that \(F_{\Delta}\) and \(F^{\prime}_{\pi}\) have the same exact dynamics.

```
Input:  F, local functions of a network over S, encoded as circuits
        Delta, a block-sequential update schedule over S
Output: F, local functions of a parallel network over S, encoded as circuits

for (u, v) such that u precedes v in Delta do
    apply the substitution x_u -> theta_u in f_v    // theta is a temporary symbol
X <- S
while |X| > 0 do
    let s in X such that f_s contains no theta symbol
    X <- X \ {s}
    for s' in X do
        if f_s' contains theta_s then
            apply the substitution theta_s -> f_s in f_s'
return F
```
**Algorithm 1** Parallelization algorithm of \(F_{\Delta}\)

Algorithm 1 proceeds with two waves of substitutions. First, for every \(<\)-edge \((u,<,v)\), the influencing automaton \(u\) is replaced in the local function of \(v\) by a token symbol \(\theta_{u}\). All of these token symbols are then replaced by the corresponding local functions (in this case, \(f_{u}\)) in the correct order: that is, no function is ever used in a substitution if it contains a token character. This way, even if the network contains a complex tree of \(<\)-edges, the substitutions will be applied in the correct order. It holds that this algorithm always returns, and runs in polynomial time.

Property 1: Algorithm 1 always returns, and does so in polynomial time.

Proof: Let us denote by _\(<\)-graph_ the subgraph of \(G_{U}(F_{\Delta})\) where only the \(<\)-edges have been preserved. The \(<\)-graph of \(F_{\Delta}\) is always a tree (or multiple disconnected trees): if this wasn't the case, there would be a cycle of \(<\)-edges in \(G_{U}(F_{\Delta})\), which would mean a cycle of automata that are all updated strictly before their out-neighbor, which is impossible. Algorithm 1 will place a \(\theta\) symbol for every edge in the \(<\)-graph. In the second loop, the selected \(s\) is always a leaf of one of the trees contained in the \(<\)-graph. The applied substitution removes that leaf from the \(<\)-graph. By the structure of a tree, all the \(<\)-edges will be removed and the algorithm terminates. To see that this algorithm can be performed in polynomial time, consider that all of the local functions are encoded as circuits. As such, it is enough to prepare a copy of each local function into one large circuit, on which every substitution will be performed. Any substitution \(x_{u}\mapsto\theta_{u}\) is performed by renaming the corresponding input gate. Any substitution \(\theta_{s}\mapsto f_{s}(x)\) is performed by replacing the input gate which corresponds to \(\theta_{s}\) by a connection to the output gate of the circuit that computes the local function \(f_{s}\). These substitutions are performed for every \(<\)-edge in the update digraph of \(F_{\Delta}\), which can be done by doing one substitution for every pair in the partial order provided by \(\Delta\), which is never more than \(n^{2}\).
The resulting circuit is then duplicated for every automaton in the output network, which leads to a total size of no more than \(k^{2}\), for \(k\) the size of the input.

Remark 1: This algorithm is not polynomial if the local functions are encoded as formulae, which is a detail often overlooked in the literature, where this parallelization algorithm is always assumed to be polynomial.

Theorem 2.1: _For any \(F_{\Delta}\), Algorithm 1 returns a network \(F^{\prime}\) such that the dynamics of \(F_{\Delta}\) is equal to that of \(F^{\prime}_{\pi}\)._

Proof: Let us consider some configuration \(x\in\Sigma^{n}\), and let us compute its image \(x^{\prime}\) in both systems. Let us consider the initial block \(X_{0}\) in \(\Delta\). For any automaton in \(X_{0}\), its local function is untouched in \(F^{\prime}\), and thus \(F_{\Delta}(x)|_{X_{0}}=F^{\prime}(x)|_{X_{0}}\). Suppose that \(F_{\Delta}(x)|_{X_{0}\cup\ldots\cup X_{k}}=F^{\prime}(x)|_{X_{0}\cup\ldots\cup X_{k}}\) for some \(k\); let us prove that it is still true when including the next block \(X_{k+1}\). Let \(v\in X_{k+1}\). By the nature of updates in \(\Delta\), \(f_{v}\) will be updated using the values in \(F_{\Delta}(x)\) for any \(x_{u}\) such that \(u\in X_{0}\cup\ldots\cup X_{k}\), and in \(x\) otherwise. In \(F^{\prime}\), in the local function \(f^{\prime}_{v}\) and for any \(u\in X_{0}\cup\ldots\cup X_{k}\) that influences \(v\), a substitution has replaced \(x_{u}\) by \(f^{\prime}_{u}(x)\), which implies that the value of \(v\) will be updated using a value of \(u\) in \(F^{\prime}(x)\). Pulling this together, we obtain that \(f_{v}(x)=f^{\prime}_{v}(x)\) and \(F_{\Delta}(x)|_{X_{0}\cup\ldots\cup X_{k+1}}=F^{\prime}(x)|_{X_{0}\cup\ldots\cup X_{k+1}}\), and the recurrence yields \(F_{\Delta}(x)=F^{\prime}(x)\) for any \(x\).
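The substitution process of Algorithm 1 can be mimicked with function composition when local functions are ordinary callables rather than circuits; below is a minimal sketch, assuming the blocks of \(\Delta\) are given in update order, without the circuit-size bookkeeping of the proof (names are illustrative).

```
# Minimal sketch of Algorithm 1 via composition: substitute x_u by the
# already-parallelized f'_u for every automaton u updated strictly before v.
def parallelize(local_fns, schedule):
    rank = {u: i for i, block in enumerate(schedule) for u in block}
    new_fns = dict(local_fns)
    for i, block in enumerate(schedule[1:], start=1):
        # snapshot the finalized functions of all strictly earlier blocks
        earlier = {u: new_fns[u] for u in rank if rank[u] < i}
        for v in block:
            def fv(x, g=local_fns[v], earlier=earlier):
                y = list(x)
                for u, fu in earlier.items():
                    y[u] = fu(x)  # the value automaton u holds when v is updated
                return g(tuple(y))
            new_fns[v] = fv
    return new_fns

# Reusing f, A, B, C and F_delta from the sketches above, with the
# block-sequential schedule Delta = ({a}, {b}, {c}) of Figure 3:
Fp = parallelize(f, [{A}, {B}, {C}])
for x0 in [(False,) * 3, (True, False, True)]:
    assert tuple(Fp[i](x0) for i in range(3)) == F_delta(x0, [{A}, {B}, {C}])
```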
Algorithm 2 is our contribution to this process, and removes automata that are not necessary for the limit dynamics of the network. It proceeds in two steps: first, the algorithm identifies pairs of automata with equivalent local functions, up to some function. In other terms, if one automaton \(u\) can be computed as a function \(g\) of the local function of another automaton \(v\), then \(u\) is not necessary and all references to \(x_{u}\) in the network can be replaced by \(g(x_{v})\) for an identical result. Of course, this only works under the hypothesis that \(u\) and \(v\) are updated synchronously, which is the case after the application of Algorithm 1. Second, the algorithm iteratively removes any automaton that has no influence in the network, that is, that has no accessible neighbor in the interaction graph of the network. These automata are not part of cycles and do not lead to cycles, and as such have no impact on the attractors. Algorithm 2 is non-deterministic, and when the local functions of any pair of automata \((u,v)\) are shown to be equivalent up to some reversible function \(g:\Sigma\to\Sigma\), either automaton could replace the influence of the other without preference. As such, more than one result network is possible, but all are equivalent in their limit dynamics, as will be shown later. While it is clear that Algorithm 2 always terminates, its complexity is out of the deterministic polynomial range, as applying it implies solving the coNP-complete decision problem of testing whether two Boolean formulae are equal, for all possible pairs of automata and for every possible function \(g:\Sigma\to\Sigma\). As such, a polynomial implementation of this algorithm would (at least) imply P = NP. This drastic conclusion is softened when looking at restricted classes of networks where redundancies can be easily pointed out, which is the case for the rest of the paper.

Theorem 2.2: _For any \(F_{\Delta}\), Algorithm 2 returns a network \(F^{\prime}\) such that the limit dynamics of \(F_{\Delta}\) and \(F^{\prime}_{\pi}\) are isomorphic._

Proof: By Theorem 2.1, the network \(F^{\prime}\) returned by the application of Algorithm 1 to \(F_{\Delta}\) has identical dynamics to \(F_{\Delta}\). Algorithm 2 operates two kinds of modifications. The first operation is replacing the influence of any automaton \(u\) by another automaton \(v\) if they are found to have equivalent local functions up to some \(g:\Sigma\to\Sigma\), that is, \(f_{u}=g\circ f_{v}\). For any configuration \(x\), the values of \(f_{u}(x)\) and \(g(f_{v}(x))\) are always equal. Thus, substituting the variable \(x_{u}\) by \(g(x_{v})\) in the local functions of every out-neighbor of \(u\) will lead to an identical limit behavior. After this substitution, the automaton \(u\) does not have any influence over the network. Moreover, all its previous out-neighbors in \(G_{I}(F^{\prime})\) are now the out-neighbors of \(v\). The second operation is iteratively removing automata that do not influence any automaton. Let \(u\) be such a deleted automaton. Consider a limit cycle \((x^{1},x^{2},\dots,x^{k})\) in \(G\). By definition of a limit cycle, \(G(x^{i})=x^{i+1}\) for any \(i<k\), \(G(x^{k})=x^{1}\), and \(x^{i}=x^{j}\Rightarrow i=j\). Consider the component \(x^{i}_{u}\) for some \(i\). Since \(u\) does not influence any automaton, \(x^{i+1}\) is a function of \(x^{i}|_{S\setminus\{u\}}\). As the entire sequence is aperiodic, the sequence of the subconfigurations \(x^{i}|_{S\setminus\{u\}}\) is also aperiodic, and the attractor is preserved in \(F^{\prime}\).
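A minimal, deliberately simplified sketch of Algorithm 2's two reduction steps for a Boolean network follows, reusing `influences` from the earlier sketch; since the alphabet is Boolean, equivalence up to a reversible \(g\) is brute-forced over identity and negation, and the sketch only reports which automata are removable, without rebuilding the reduced local functions (so it sidesteps the circuit encoding, and hence the coNP-hardness, discussed above).

```
from itertools import product

def equal_up_to_g(fu, fv, n):
    """True iff f_u = g o f_v for some reversible g (identity or negation)."""
    confs = list(product([False, True], repeat=n))
    return (all(fu(x) == fv(x) for x in confs)
            or all(fu(x) == (not fv(x)) for x in confs))

def removable_automata(f, n):
    out = {u: {v for v in range(n) if influences(u, v)} for u in range(n)}
    removed = set()
    # Step 1: if f_u = g o f_v, every reference to x_u becomes g(x_v), so v
    # inherits u's out-neighbors and u loses all influence.
    for u in range(n):
        for v in range(n):
            if u != v and not {u, v} & removed and equal_up_to_g(f[u], f[v], n):
                out[v] |= out[u]
                removed.add(u)
                break
    # Step 2: iteratively delete automata with no surviving out-neighbor
    # (they are on no cycle and cannot affect the limit dynamics).
    changed = True
    while changed:
        changed = False
        for u in sorted(set(range(n)) - removed):
            if not (out[u] - removed):
                removed.add(u)
                changed = True
    return removed
```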
To show that the study of TCs under block-sequential update schedules can be directly reduced to the study of TCs under the parallel update schedule, we provide an algorithm that transforms any TC under a block-sequential update schedule into a TC under the parallel update schedule, such that their limit dynamics are isomorphic, and the local functions of their central automaton equivalent. This is done by simply stopping the process of Algorithm 2 earlier to preserve the TC shape of the network. The only difference between Algorithms 2 and 3 is that the latter restricts the reductions it operates. If two local functions are found to be equivalent up to some function \(g\), Algorithm 3 removes a node if and only if these local functions are duplicates of the previous local function of the central automaton of the network. Removing duplications of any function that is part of a cycle would merge two cycles and the network would no longer be tangential cycles, in a way that is harder to count the reductions for. Since Algorithm 3 is a variation of Algorithm 2 that only does less reductions, Theorem 2.2 still applies in its case. An application of Algorithms 2 and 3 is illustrated in Figure 4 and the difference between the algorithms is highlighted. \(f_{a}(x)=x_{a}\lor x_{d}\lor\neg x_{h}\)\(f_{b}(x)=\neg x_{a}\)\(f_{c}(x)=x_{b}\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(f_{d}(x)=x_{c}\)\(f_{e}(x)=x_{a}\)\(f_{h}(x)=x_{e}\)\(f_{a}(x)=x_{a}\lor x_{d}\lor\neg\theta_{h}\)\(f_{b}(x)=\neg\theta_{a}\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\
},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\)\(\Delta=(\{h\},\{a,c,d,e\},\{b\})\(\Delta=(\{h\ Theorem 3.1: _Let \(F\) be a TC and \(\Delta\) a block-sequential update schedule. The amount of reductions in size that Algorithm 3 operates on \(F_{\Delta}\) is the number of \(<\)-edges in the update digraph of \(F_{\Delta}\), and the result is a TC._ Proof: Algorithm 1 operates a substitution for every \(<\)-edge in the update digraph of \(F_{\Delta}\). In this proof, we will show that each of the possible transformations implies the removal of exactly one node from the network. For any such edge \((u,<,v)\), there are two cases. Either \(u\) is the central automaton, or not. In any case, \(u\neq v\) since the contrary would imply that an automaton is updated strictly before itself. If we suppose that \(u\) is the central automaton, this means that \(f_{v}\) is a local function that only depends on \(x_{u}\). It can thus be written \(f_{v}(x)=g(x_{u})\) for some \(g:\Sigma\to\Sigma\). After the application of Algorithm 1, we thus obtain \(f_{v}(x)=g(f_{u}(x))\), which implies the removal of either \(u\) or \(v\) (but at this point, not both) by Algorithm 3. If we suppose that \(u\) is not the central automaton, this means that \(f_{v}\) is an arbitrary formula which contains \(x_{u}\), and \(f_{u}\) is a function of the form \(f_{u}(x)=g(x_{w})\) for some \(g\) and some \(w\in S\). Note that \(w\neq u\) by the hypothesis that \(F\) is a TC, either \(w\) is the previous automaton in the path, or it is the central automaton \(v\). As such, applying Algorithm 1 substitutes any mention of \(x_{u}\) in \(f_{v}\) by \(g(x_{w})\). Previously, \(u\) only had one accessible neighbor, as it was part of a path connecting to the central automaton. This leaves \(x_{u}\) without any accessible neighbors in the interaction digraph of \(F\), which means that it is removed by Algorithm 3. If the removed edge is part of a cycle, this means that this cycle will be reduced in size. If the edge is part of the tangent, this means that the tangent will be reduced in size. We thus obtain that the number of reductions is at least the number of \(<\)-edges in the update digraph of the network. 
Suppose now that some extra automaton \(u\) is removed on top of the \(<\)-edge related reductions. First observe that if \(u\) has no accessible neighbor, it must have had none before the application of Algorithm 1, since in neither of the two cases are external automata disconnected from each other. Now suppose that \(f_{u}\) is equivalent to some \(f_{v}\) up to some \(g\). Neither \(u\) nor \(v\) can be the central automaton, as any duplication of that function is handled in the first case. This proves that the number of reductions is exactly the number of \(<\)-edges in the update digraph of \((F,\Delta)\). Let us now show that the result of Algorithm 3 is a TC. If the initial network had a central automaton, there still exists a unique central automaton at the end of the algorithm, even if the original central automaton was removed in a chosen reduction. Paths that exit the central automaton in the previous network still exit the central automaton in the result, in the same number, and still share some tangent. The paths can be smaller in size, as well as the tangent, but they still end in the central automaton.

While Algorithm 2 cannot be polynomial in the worst case under the hypothesis that \(\mathrm{P}\neq\mathrm{NP}\), Algorithm 3 can be simplified to the following rule: taking a TC with a block-sequential update schedule, we obtain the equivalent parallel TC by reducing each cycle by the number of \(<\)-edges that its update digraph contains. This process is quadratic, since we only need to check the possible \(<\)-edges defined by the partial order induced by \(\Delta\), of which there are no more than \(n^{2}\).

### Reducing parallel Boolean TCs further

Applying Algorithm 2 to its full extent to a Boolean TC (that is, a TC defined over the Boolean alphabet) may result in a larger reduction in size. As any automaton that is not the central one has a unary function as its local function, any pair of non-central local functions is equivalent up to some \(g:\Sigma\to\Sigma\) if they are influenced by the same automaton. For example, if the central automaton influences three other automata that represent the start of three chains, these three automata can be merged into one. Continuing this zipping process yields a final network only as large as the longest cycle of the initial TC. This process is not straightforward for non-Boolean TCs, as the local functions along the chains can be non-reversible using modular arithmetics, for example. Optimizing these networks is still possible, but requires a more complex set of substitutions to do so. It has been proven in general using modules and output functions [13]. The following theorem corresponds to the Boolean case, proven with more classical means. An example of its application is illustrated in the last two steps of Figure 4.

Theorem 4.1: _Let \(F\) be a Boolean TC. Applying Algorithm 2 to \(F_{\pi}\) generates a network \(F^{\prime}\) whose size is that of the largest cycle in \(F\)._

Proof: Starting from the initial TC \(F\), all of the automata directly influenced by the automaton at the end of the tangent \(u\) (but that are not \(u\)) have local functions \(f_{v}(x)=g(x_{u}),f_{w}(x)=h(x_{u}),\ldots\) for \(g,h,\ldots:\Sigma=\{0,1\}\to\Sigma\). None of these functions \(g,h,\ldots\) is constant, since the automata that they represent are influenced by an automaton by hypothesis. Thus, they can only be the identity or the negation of \(x_{u}\).
As a consequence, all but one of these automata will be removed by the algorithm, as they are all equivalent up to some \(g\). The same argument can be repeated by taking all the automata influenced by the only automaton resulting from the previous iteration, excluding the central automaton. At each step, all of the automata at the same distance from the central automaton are merged. Hence, at the end of this process, whatever the choices made for merging automata along the iterative process, the resulting AN will be composed of \(k\) automata, with \(k\) the length of the largest cycle of \(F\).

## 5 An application: disjunctive double cycles

As an application of this algorithm, and as an example of its capacity to reduce the size of the provided network, we turn to the family of disjunctive double cycles. Notice that the result still holds for conjunctive double cycles, since conjunctive and disjunctive cycles have isomorphic dynamics [12, 11]. In disjunctive automata networks, an edge \((u,v)\) is signed positively if \(x_{u}\) appears as a positive variable in \(f_{v}\); an edge \((u,v)\) is signed negatively if \(x_{u}\) appears as a negative variable in \(f_{v}\). A cycle is said to be positive if it contains an even number of negative edges, and negative otherwise. A _disjunctive double cycle_ is an automata network whose interaction digraph is composed of two automata cycles that intersect in one automaton. The local function of this central automaton is a disjunctive clause. This family of networks is very simple to define, and is an intuitive next step after the family of Boolean automata cycles, which are composed of a single cycle. Both families have been characterized under the parallel update schedule [12, 7]; that is to say, given basic parameters concerning the size of the cycles, their sign, and any integer \(k\), an explicit formula (defined as a polynomially computable function) has been given, among other results, to count the number of limit cycles of size \(k\) of such networks under the parallel update schedule. In this section, we extend this characterization to the block-sequential equivalents by showing how applying our algorithm reduces the network to a smaller instance of the same family of networks. Furthermore, as Boolean automata cycles and disjunctive double cycles are TCs, our method can be simplified to the following rule: given a TC \(F\) and a block-sequential update schedule \(\Delta\), count the number of \(<\)-edges in the update digraph \(G_{U}(F_{\Delta})\); for every cycle, subtract from its size the number of such edges it contains, while keeping its sign; the final network under the parallel update schedule and the initial network under \(\Delta\) have isomorphic limit dynamics. This is a simple application of Theorem 4.1 and of the rule of thumb deduced from Algorithm 3; a minimal sketch of this rule is given after the next theorem. We denote by \(DC(s,s^{\prime},a,b)\) the disjunctive double cycle with cycle sizes \(a,b\) and signs \(s,s^{\prime}\).

Theorem 5.1: _Let \(D=DC(s,s^{\prime},a,b)\) be a disjunctive double cycle and \(\Delta\) a block-sequential update schedule. For \(A\) (\(B\) respectively) the number of \(<\)-edges on the cycle of size \(a\) (\(b\) respectively) in \(G_{U}(F_{\Delta})\), the limit dynamics of \(D_{\Delta}\) is isomorphic to that of \(D^{\prime}_{\pi}\), where \(D^{\prime}=DC(s,s^{\prime},a-A,b-B)\)._

Proof: This is a straightforward application of Theorem 4.1.
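Concretely, the reduction rule of Theorem 5.1 can be written in a few lines of Python. The sketch below is not the authors' implementation; the function name and the input encoding (cycle sizes plus per-cycle \(<\)-edge counts, with the counting of \(<\)-edges assumed already done) are illustrative assumptions.

```python
# Minimal sketch of the reduction rule of Theorems 3.1/5.1 (illustrative
# names and encoding; counting the <-edges themselves is assumed done).

def reduce_to_parallel(cycle_sizes, lt_edges_per_cycle):
    """Return the cycle sizes of the equivalent parallel TC.

    cycle_sizes[i] is the length of cycle i; lt_edges_per_cycle[i] is the
    number of <-edges that cycle i carries in the update digraph
    G_U(F_Delta). Cycle signs are preserved, so they are not tracked here.
    """
    return [size - lt for size, lt in zip(cycle_sizes, lt_edges_per_cycle)]

# For a disjunctive double cycle DC(s, s', a, b) with A and B <-edges on
# the two cycles, the parallel equivalent is DC(s, s', a - A, b - B):
a, b, A, B = 7, 5, 2, 1  # illustrative values
print(reduce_to_parallel([a, b], [A, B]))  # -> [5, 4]
```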
## 6 Conclusion

In this paper we provide a novel algorithm that allows reducing the size of automata networks, in particular by passing a network from a block-sequential to a parallel update schedule while keeping isomorphic limit dynamics. While this algorithm is too computationally expensive for the general case, we study the specific family of intersections of automata cycles, to which this algorithm is easily applied. This study leads to the discovery that all block-sequential tangential cycles have limit dynamics isomorphic to those of parallel tangential cycles. Finally, we apply this fact to Boolean automata double cycles to characterize their behavior under block-sequential update schedules. It now seems clear to us that the difference between the parallel update schedule and block-sequential update schedules is that the latter change the timing of the information along sections of the network. In particular, structures such as tangential cycles can be directly translated into an equivalent parallel network with shorter cycles. We are interested in seeing what effects this translation could have on a more general set of families of networks, and whether there exist other families in which block-sequential update schedules lead to equivalent parallel networks that are still part of the family. As a perspective, we would like to characterize more redundancies that can be removed from networks to help with the computation of their dynamics. For example, we are currently interested in more complex compositions of automata cycles, and have already found equivalences showing that many networks have equivalent limit dynamics, in the sense that complex parts of automata networks can be moved along cycles without affecting the network's limit dynamics. Isolated paths are also a strong candidate for size reduction. Isolated paths are paths that lead from cycles to other cycles but can only be crossed once. Our current algorithms conserve such paths, despite it being possible in many cases to reduce them completely without changing the limit dynamics of the network, for example when an isolated path is the only way to go from one part to another. We have to be careful when multiple isolated paths exit from and join onto the same parts, as the synchronicity of the information in the entire network must be preserved.

###### Acknowledgements.

This work has been partially funded by ANR-18-CE40-0002 FANs project (PP & SS), ECOS-Sud CE19E02 SyDySy project (PP & SS), and STIC-AmSud 22-STIC-02 CAMA project (SS).
2307.05662
Charge Transfer and Zhang-Rice Singlet Bands in the Nickelate Superconductor $\mathrm{La_3Ni_2O_7}$ under Pressure
Recently, a bulk nickelate superconductor $\mathrm{La_3Ni_2O_7}$ is discovered at pressures with a remarkable high transition temperature $T_c \sim 80K$. Here, we study a Hubbard model with tight-binding parameters derived from \textit{ab initio} calculations of $\mathrm{La_3Ni_2O_7}$, by employing large scale determinant quantum Monte Carlo and cellular dynamical mean-field theory. Our result suggests that the superexchange couplings in this system are comparable to that of cuprates. The system is a charge transfer insulator as hole concentration becomes four per site at large Hubbard $U$. Upon hole doping, two low-energy spin-singlet bands emerge in the system exhibiting distinct correlation properties: while the one composed of the out-of-plane Ni-$d_{3z^2-r^2}$ and O-$p_z$ orbitals demonstrates strong antiferromagnetic correlations and narrow effective bandwidth, the in-plane singlet band consisting of the Ni-$d_{x^2-y^2}$ and O-$p_x / p_y$ orbitals is in general more itinerant. Over a broad range of hole doping, the doped holes occupy primarily the $d_{x^2-y^2}$ and $p_x / p_y$ orbitals, whereas the $d_{3z^2-r^2}$ and $p_z$ orbitals retain underdoped. We propose an effective $t-J$ model to capture the relevant physics and discuss the implications of our result for comprehending the $\mathrm{La_3Ni_2O_7}$ superconductivity.
Wéi Wú, Zhihui Luo, Dao-Xin Yao, Meng Wang
2023-07-11T17:51:18Z
http://arxiv.org/abs/2307.05662v2
# Charge Transfer and Zhang-Rice Singlet Bands in the Nickelate Superconductor La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) under Pressure

###### Abstract

Recently, high-\(T_{c}\) superconductivity has been reported in the bulk nickelate La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) at pressures above 14 GPa. Here, we study an eleven-band Hubbard model with hopping parameters derived from _ab initio_ calculations of La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\), by employing large scale determinant quantum Monte Carlo and cellular dynamical mean-field theory methods. Our result suggests that the superexchange couplings in this system are comparable to those of cuprates. The system is a charge transfer insulator when the hole concentration becomes four per site at large Hubbard \(U\). Upon hole doping, two low-energy spin-singlet bands emerge in the system, exhibiting distinct correlation properties: while the one composed of the out-of-plane Ni-\(d_{3z^{2}-r^{2}}\) and O-\(p_{z}\) orbitals demonstrates strong antiferromagnetic correlations and a narrow effective bandwidth, the in-plane singlet band consisting of the Ni-\(d_{x^{2}-y^{2}}\) and O-\(p_{x}/p_{y}\) orbitals is in general more itinerant. Over a broad range of hole doping, the doped holes occupy primarily the \(d_{x^{2}-y^{2}}\) and \(p_{x}/p_{y}\) orbitals, while the \(d_{3z^{2}-r^{2}}\) and \(p_{z}\) orbitals remain underdoped. We propose an effective \(t-J\) model to capture the relevant physics and discuss the implications of our result for comprehending the La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) superconductivity.

_Introduction -_ Since the discovery of cuprate superconductors [1], understanding and searching for novel high transition temperature (high-\(T_{c}\)) superconductors [2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15] has been one of the major topics in condensed matter physics. The discovery of the infinite-layer nickelate superconductor [16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30] marks a notable recent advancement, although there superconductivity (SC) has so far been observed only in thin films on substrates [16], not yet in bulk samples [31]. The very recently discovered bulk superconductor La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) under high pressures [2], which exhibits a remarkable \(T_{c}\) of \(\sim 80\) K, therefore represents a significant breakthrough in this field. As revealed by density-functional theory (DFT) calculations [32; 2; 3], a hallmark of the nickelate bi-layer La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) [2; 33] is the activation of both \(3d_{x^{2}-y^{2}}\) and \(3d_{3z^{2}-r^{2}}\) orbitals in the vicinity of the Fermi level [3; 34]. This distinct feature may lead to superconductivity that differs significantly from the cuprates and infinite-layer nickelates. From a theoretical perspective, several crucial questions arise. First, to understand the driving force behind the SC, it is necessary to elucidate the magnetic exchange couplings [22; 35] among the four active \(e_{g}\) orbitals in the NiO\({}_{2}\) bi-layer. Furthermore, the \(e_{g}\) orbitals of La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) under pressure possess a large hole concentration, as the nominal valence of Ni is Ni\({}^{2.5+}\) here, indicating an average of 1.25 holes per \(e_{g}\) orbital. This high hole filling level is on the verge of quenching SC by overdoping in the context of cuprates.
Therefore, resolving the distributions of the holes in the Ni-\(3d_{x^{2}-y^{2}}\), Ni-\(3d_{3z^{2}-r^{2}}\), and the correlated O-\(2p\) orbitals is crucial for understanding the correlation effects in La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) under pressure. To address the above questions, here we study an 11-band Hubbard model that includes the four \(3d_{x^{2}-y^{2}}\) / \(3d_{3z^{2}-r^{2}}\) orbitals of nickel, and the seven most relevant \(2p\) orbitals of oxygen in the NiO\({}_{2}\) bi-layer per site. We carry out determinant quantum Monte Carlo (DQMC) simulations [36; 37] and cellular dynamical mean-field theory (CDMFT) [38; 39] calculations in the normal state of the system. Our result suggests that the superexchange couplings in this system are in general comparable to those in cuprates [40], supporting a magnetic correlation origin of the high-\(T_{c}\) superconductivity in La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) under pressure. We show that at large Hubbard \(U\), the system is a charge-transfer insulator [41] in the Zaanen-Sawatzky-Allen (ZSA) scheme [42] at half-filling (_i.e._, four holes per site). Upon hole doping, the spin-singlet band associated with the vertical Ni-\(d_{3z^{2}-r^{2}}\) - O-\(p_{z}\) orbitals possesses strong antiferromagnetic correlations, as it retains a small hole doping level. In contrast, the singlet band that consists of the in-plane Ni-\(d_{x^{2}-y^{2}}\) and O-\(p_{x}/p_{y}\) orbitals, drawing an analogy to the Zhang-Rice singlet band (ZRSB) in cuprates, exhibits a higher propensity for hole doping and greater itinerancy. We discuss the interplay between the two spin-singlet bands and propose an effective \(t-J\) model that considers the leading-order exchange couplings in La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) under pressure.

_Model and Methods -_ To fully take into account the superexchange couplings in La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) at pressure, here we consider an 11-band Hubbard model [3] that can be written as

\[\begin{split}H&=\sum_{i,j,\alpha,\beta,\sigma}t_{i,j,\alpha,\beta}\left(d^{\dagger}_{i\alpha\sigma}c_{j\beta\sigma}+\mathrm{h.c.}\right)+\sum_{i,j,\alpha,\beta,\sigma}t_{i,j,\alpha,\beta}\,c^{\dagger}_{i\alpha\sigma}c_{j\beta\sigma}\\&+\sum_{i\alpha\sigma}(\epsilon_{\alpha}-\mu)n^{d}_{i\alpha\sigma}+\sum_{i\alpha\sigma}(\epsilon_{\alpha}-\mu)n^{c}_{i\alpha\sigma}\\&+\sum_{i\alpha}U_{dd}\,n^{d}_{i\alpha\uparrow}n^{d}_{i\alpha\downarrow}+\sum_{i,\alpha\neq\beta,\sigma,\sigma^{\prime}}U_{dd^{\prime}}\,n^{d}_{i\alpha\sigma}n^{d}_{i\beta\sigma^{\prime}}-\sum_{i\alpha\sigma}E_{dc}\,n^{d}_{i\alpha\sigma}\end{split} \tag{1}\]

where \(t_{i,j,\alpha,\beta}\) denote hoppings between electrons on sites \((i,j)\) and orbitals \((\alpha,\beta)\) (which can be either Ni-\(d\) or O-\(p\) orbitals). \(d^{\dagger}_{\alpha,i,\sigma}\) (\(c^{\dagger}_{\alpha,i,\sigma}\)) is the creation operator for electrons on an \(\alpha\in 3d\) (\(\in 2p\)) orbital. \(\epsilon_{\alpha}\) is the site energy of the \(\alpha\)-orbital. \(U_{dd}\) is the Hubbard interaction between two electrons on the same \(d\)-orbital (\(d_{x^{2}-y^{2}}\) or \(d_{3z^{2}-r^{2}}\)) and \(U_{dd^{\prime}}\) is that between two different \(d\)-orbitals. \(E_{dc}\) is the double counting (DC) term [43; 44; 45] to be subtracted in the DQMC or CDMFT. Here we use Held's formula [46]: \(E_{dc}=\frac{1}{3}(U_{dd}+2U_{dd^{\prime}})(n_{d}^{0}-0.5)\), with \(n_{d}^{0}\) being the occupation number of the \(d\)-orbitals in the non-interacting limit, \(n_{d}^{0}\approx 2.16\).
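As a quick numeric illustration (not from the paper's code), Held's double-counting formula quoted above can be evaluated directly for the two values of \(U_{dd}\) used in this work, with \(U_{dd^{\prime}}=0.7U_{dd}\) and \(n_{d}^{0}\approx 2.16\):

```python
# Held's double-counting term, E_dc = (1/3)(U_dd + 2 U_dd')(n_d^0 - 0.5).

def held_double_counting(U_dd, ratio=0.7, n_d0=2.16):
    U_ddp = ratio * U_dd          # U_dd' = 0.7 U_dd, as in the text
    return (U_dd + 2.0 * U_ddp) * (n_d0 - 0.5) / 3.0

for U_dd in (7.0, 9.0):           # the two interaction strengths used here (eV)
    print(U_dd, round(held_double_counting(U_dd), 2))  # ~9.30 and ~11.95 eV
```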
We have carefully checked that this DC term does not shift the non-interacting Fermi surface (FS) significantly [44]. We adopt the hopping parameters and site-energies proposed in Ref. [3], which are obtained by downfolding the DFT result onto maximally localized Wannier orbitals. See Fig. 1 for all the hopping parameters and site-energy values. In line with the DFT result [3], we assume that the chemical potential \(\mu=0\) in the above Hamiltonian corresponds to the single crystal La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) material at pressures \(>14\) GPa, without considering other potential doping effects (oxygen deficiency, for example). We will also vary \(\mu\) to explore regimes with different hole concentrations, defined as \(n_{h}=(22-\sum_{\alpha,\sigma}n_{\alpha,\sigma}^{d})/4\), namely, the average number of holes per \(d\)-orbital per site. \(n_{h}=1\) corresponds to half-filling in our study. In this work, we neglect the Hund's coupling and keep \(U_{dd^{\prime}}=0.7U_{dd}\) when \(U_{dd}\) is varied for the 11-band Hubbard model. Here we use two typical values of \(U_{dd}\), \(U_{dd}=7\) eV and \(U_{dd}=9\) eV, and no qualitative difference is found between the two results. Below we use the electron volt (eV) as the energy unit throughout the paper. For the calculations on cuprates, we employ a canonical set of parameters for the three-band Hubbard model [40] (one \(d_{x^{2}-y^{2}}\) orbital and two \(p_{x}\) / \(p_{y}\) orbitals in the CuO\({}_{2}\) plane): \(t_{pd}=1.39,t_{pp}=0.64,t_{pp}^{\prime}=0.103,\Delta_{dp}\equiv\epsilon_{d}-\epsilon_{p}=2.6,U_{dd}=8.5,\mathrm{E_{DC}}=3.12\). This set of parameters is assumed to be most relevant for the LSCO compound, and has been used in different studies of cuprates [47; 48; 49; 40]. For the DQMC simulation, we use a two-dimensional lattice of \(6\times 6\times 11=396\) orbitals with periodic boundary conditions for the 11-band Hubbard model, on which we have verified that the finite size effects are negligible in the parameter regime we study. For the CDMFT study, we carry out computations in the normal state, where a \(2\times 2\times 11=44\)-orbital cluster effective impurity model is used. The Hirsch-Fye quantum Monte Carlo (HFQMC) method is used as impurity solver. The time discretizations are \(\Delta\tau=0.0625\) (DQMC) and \(\Delta\tau=0.078\) (HFQMC).

Figure 1: The seven hopping terms in our 11-band Hubbard model [3]. **(a)-(d):** Four hopping processes between Ni-\(d\) and O-\(p\) orbitals that lead to major superexchanges between Ni-\(d\) orbitals. Here \(t_{1}=-1.56,t_{2}=0.75,t_{3}=-1.63,t_{6}=1.37\). The site energies are \(\epsilon_{d_{x^{2}-y^{2}}}=-1.06,\epsilon_{d_{3z^{2}-r^{2}}}=-1.16,\epsilon_{p_{x}/p_{y}}=-4.94,\epsilon_{p_{z}}=-4.30,\epsilon_{p^{\prime\prime}_{z}}=\epsilon_{p^{\prime}_{z}}=-3.77\). Note that the hopping process (c) results from combining (a) and (b). The superexchange between the on-site intra-layer \(d_{x^{2}-y^{2}}\) and \(d_{3z^{2}-r^{2}}\) orbitals (not shown here) vanishes due to symmetry. **(e)** Hoppings between O-\(p\) orbitals are shown on a cartoon depicting the structure of the bilayer NiO\({}_{2}\) planes. The \(3d\) orbitals of Ni (black dots) are not shown for clarity. Here \(t_{4}=0.58,t_{5}=0.49,t_{7}=0.43\). These seven hopping terms combined with the site-energies define our 11-band Hubbard model in the non-interacting limit.

Figure 2: Magnetic correlations of the 11-band Hubbard model. **(a)**: The spin-spin correlation function \(\langle S_{i,\alpha}\cdot S_{j,\beta}\rangle\) for four neighboring \(d\)-orbitals is shown in numbers to profile the relative strength of the antiferromagnetic exchange couplings in the system. Brown (blue) symbols denote \(d_{x^{2}-y^{2}}\) (\(d_{3z^{2}-r^{2}}\)) orbitals. The result is from DQMC calculations on a \(6\times 6\times 11\) lattice at half-filling (\(n_{h}=1\)), \(T=0.25\). **(b)** Magnetic correlation between a pair of neighboring \(d\) orbitals in the Ni 11-band and Cu 3-band models as a function of temperature \(T\) at half-filling. For Ni, we use \(6\times 6\times 11\) orbitals, and \(6\times 6\times 3\) orbitals for Cu. The black curve shows the result for intra-layer (IR) \(d_{x^{2}-y^{2}}\)-\(d_{x^{2}-y^{2}}\) correlations, while the green curve denotes the correlation between the inter-layer (IT) \(d_{3z^{2}-r^{2}}\)-\(d_{3z^{2}-r^{2}}\) orbitals. **(c)** The spin structure factor \(S(Q)=\frac{1}{N}\sum_{i,j}\langle S_{i\alpha}\cdot S_{j\beta}\rangle e^{-iQ\cdot(R_{i}-R_{j})}\) for Ni (intra-layer, \(\alpha=\beta=d_{x^{2}-y^{2}}\) component) as a function of hole concentration \(n_{h}\), compared with that of the cuprate. Here \(U_{dd}=7\) for the nickelate and \(U_{dd}=8.5\) for the cuprate.

_Superexchanges -_ We first discuss the properties of the magnetic exchange couplings in the system. As shown in Fig. 1, there are a few hopping processes that can give rise to significant superexchanges. In the atomic limit of the charge-transfer picture, the spin singlet state of two Ni-\(d\) electrons acquires an energy gain of \(J=\frac{4t_{pd}^{4}}{(U_{dd}+\Delta_{pd})^{2}}\times\left(\frac{1}{U_{dd}}+\frac{1}{U_{dd}+\Delta_{pd}}\right)\) over the spin triplet states, where \(\Delta_{pd}=\epsilon_{d}-\epsilon_{p}\), and \(t_{pd}\) is the hopping between Ni-\(d\) and O-\(p\) orbitals. In principle, inclusion of the hopping integrals between O-\(p\) orbitals (\(t_{pp}\)) and interaction effects may modify this superexchange coupling [40]. The numbers shown in Fig. 2a are the magnitudes of the spin correlation function \(\langle S_{i,\alpha}\cdot S_{j,\beta}\rangle\) for a few pairs of neighboring \(d\)-orbitals at \(T=0.25\), where \(S_{i,\alpha}\) is the spin operator at site \(i\) and orbital \(\alpha\). This result profiles the fact that there are three types of main exchange couplings in the system: the inter-layer (IT) on-site \(d_{3z^{2}-r^{2}}\)-\(d_{3z^{2}-r^{2}}\) antiferromagnetic exchange [\(\langle S\cdot S\rangle=-0.113(2)\)] dominates the exchange couplings in the system. Next is the intra-layer (IR) nearest-neighboring \(d_{x^{2}-y^{2}}\)-\(d_{x^{2}-y^{2}}\) exchange [\(\langle S\cdot S\rangle=-0.054(1)\)]. The intra-layer \(d_{3z^{2}-r^{2}}\)-\(d_{x^{2}-y^{2}}\) superexchange, in contrast, is significantly weaker than the aforementioned two [\(\langle S\cdot S\rangle=-0.017(1)\)]. Finally, we note that the intra-layer \(d_{3z^{2}-r^{2}}\)-\(d_{3z^{2}-r^{2}}\) exchanges are less than \(1/20\) of the inter-layer \(d_{3z^{2}-r^{2}}\)-\(d_{3z^{2}-r^{2}}\) coupling, and hence can be neglected in further analysis. It is worth noting that in CDMFT at lower temperatures (\(T\sim 0.1\)), we observe similar relative strengths of the magnetic correlations at half-filling (not shown). Fig. 2b and 2c show, respectively, \(\langle S_{i,\alpha}\cdot S_{j,\beta}\rangle\) as a function of temperature at half-filling, and the spin structure factor \(S_{\alpha,\beta}(Q)\) as a function of \(n_{h}\) at \(T=0.3\), with comparison to the result of the 3-band Hubbard model of cuprates.
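For orientation, the atomic-limit expression above can be evaluated numerically. The sketch below is not the authors' code; it uses the cuprate three-band parameters quoted earlier, the prefactor exponent garbled in the source is taken here as \(t_{pd}^{4}\) (the standard fourth-order form), and, as stated above, \(t_{pp}\) and interaction effects modify this crude estimate.

```python
# Atomic-limit charge-transfer superexchange, as written in the text:
# J = 4 t_pd^4 / (U_dd + Delta_pd)^2 * (1/U_dd + 1/(U_dd + Delta_pd)).

def J_atomic_limit(t_pd, U_dd, Delta_pd):
    return 4.0 * t_pd**4 / (U_dd + Delta_pd)**2 * (
        1.0 / U_dd + 1.0 / (U_dd + Delta_pd))

# Cuprate three-band parameters quoted earlier (eV).
print(J_atomic_limit(t_pd=1.39, U_dd=8.5, Delta_pd=2.6))  # ~0.025 eV
```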
As one can see from Fig. 2b and 2c, the intra-layer \(d_{x^{2}-y^{2}}\)-\(d_{x^{2}-y^{2}}\) antiferromagnetic correlations are suggested to be in general comparable to their cuprate counterpart, while the inter-layer \(d_{3z^{2}-r^{2}}\)-\(d_{3z^{2}-r^{2}}\) coupling seems to be essentially stronger than the former. This result implies that the antiferromagnetic correlations between inter-layer \(d_{3z^{2}-r^{2}}\) orbitals could be at the origin of the observed superconductivity in La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) under pressure. Fig. 3a and 3b show \(\langle S_{i,\alpha}\cdot S_{j,\beta}\rangle\), respectively, as a function of \(U_{dd}\) at \(n_{h}=1.25\), and as a function of \(n_{h}\) at \(U_{dd}=7\) in CDMFT, from which one sees that varying the value of \(U_{dd}\) in the range of (\(4\sim 9\)) eV does not change substantially the magnetic correlations between the inter-layer \(d_{3z^{2}-r^{2}}\) orbitals. More importantly, as shown in Fig. 3b, they do not vanish until a huge hole doping \(p=n_{h}-1>0.4\) is approached.

Figure 3: Magnetic correlations between two neighboring \(d\)-orbitals from CDMFT. **(a)**: The spin-spin correlation function \(\langle S_{i,\alpha}\cdot S_{j,\beta}\rangle\) at \(n_{h}=1.25\) (\(\mu\sim 0\)), \(T=0.1\), as a function of \(U_{dd}\). Results are for two inter-layer (IT) \(d_{3z^{2}-r^{2}}\) orbitals and two intra-layer (IR) \(d_{x^{2}-y^{2}}\) orbitals. **(b)** \(\langle S_{i,\alpha}\cdot S_{j,\beta}\rangle\) as a function of hole concentration \(n_{h}\) at \(U_{dd}=7,T=0.1\). The dashed line indicates \(\langle S_{i,\alpha}\cdot S_{j,\beta}\rangle\) between the neighboring \(d_{x^{2}-y^{2}}\) orbitals of the 3-band Hubbard model for the cuprate at fixed \(n_{h}=1.05,T=0.1\).

_Charge transfer insulator -_ We now focus on the metal-insulator transition in the 11-band Hubbard model. Fig. 4 displays the hole filling per Ni-\(d\) orbital \(n_{h}\) as a function of the hole chemical potential \(\mu_{h}\equiv-\mu\). As one can see, the DQMC results (dots) for \(n_{h}\) exhibit an inflection point around \(n_{h}=1\) (\(\mu_{h}\sim-1.6\)) with small values of the compressibility \(\partial n_{h}/\partial\mu_{h}\), implying the forming of a charge gap. CDMFT results are obtained at lower temperatures (diamonds), where one indeed sees a flat plateau in the \(n_{h}\) curve between \(\mu_{h}\in(-1.6\sim-2.4)\), indicating the opening of a charge gap at half-filling. In Fig. 4, DQMC and CDMFT results at the same temperature \(T=0.3\) are shown to be in good agreement. To understand the nature of this insulating behavior, we further study how the hole concentration changes in the different orbitals as a function of \(\mu_{h}\) in the Inset of Fig. 4. In other words, as the chemical potential \(\mu_{h}\) is increased from the half-filling value \(\mu_{h}\sim-1.6\), holes can be added into the different Ni-\(d\) and O-\(p\) orbitals of the system, which is denoted by \(\Delta n_{h}^{\alpha}\) in the Inset of Fig. 4.

Figure 4: The hole filling per \(d\)-orbital \(n_{h}\) as a function of the hole chemical potential \(\mu_{h}\equiv-\mu\). Both DQMC (dots) and CDMFT (diamonds) suggest that a charge gap opens as the hole chemical potential decreases to around \(\mu_{h}\sim-1.6\). Hollow symbols show that our CDMFT results are in excellent agreement with those of DQMC at \(T=0.3\). Green arrows indicate the \(\mu_{h}\) range where the charge gap opens in CDMFT. **Inset:** The change of hole concentration \(\Delta n_{h}^{\alpha}\) for different orbitals \(\alpha\) as a function of the hole chemical potential in CDMFT at \(T=0.1\). \(\Delta n_{h}^{\alpha}\) is defined as \(\Delta n_{h}^{\alpha}\equiv n_{h}^{\alpha}(\mu_{h})-n_{h}^{\alpha}(\mu_{h}=-1.6)\), where \(n_{h}^{\alpha}(\mu_{h}=-1.6)\) is the hole filling of orbital \(\alpha\) at \(\mu_{h}=-1.6\) (where \(n_{h}\approx 1.0\)). Here \(U_{dd}=9\). Similar charge-transfer insulating behaviour is also observed at \(U_{dd}=7\) in CDMFT (not shown).

As one can see, the doped holes go primarily to the oxygen orbitals, which unambiguously points to the charge-transfer nature of the insulating state at half-filling of this system. Similar to cuprates [48], here the \(d_{x^{2}-y^{2}}\) orbital also takes a sizable portion of the doped holes. It is remarkable, however, that the hole content of the \(d_{3z^{2}-r^{2}}\) orbital barely changes with \(\mu_{h}\) for \(\mu_{h}<0\) (or for hole doping \(p\lesssim 22\%\)). We note that in cuprates, a smaller portion of holes residing on cations in general indicates a larger superexchange and a higher superconducting \(T_{c}\) [48, 51].

_Zhang-Rice singlet band -_ In Fig. 5a and Fig. 5b, we plot the local density of states (DOS) for the Ni-\(d_{x^{2}-y^{2}}\) and Ni-\(d_{3z^{2}-r^{2}}\) orbitals, respectively, at \(U_{dd}=9\). The DOS of the in-plane O-\(p_{x}/p_{y}\) orbitals and of the out-of-plane O-\(p_{z}\) orbitals are also plotted alongside. Here \(n_{h}\approx 1.05,T=0.1\). Near the Fermi level, the local DOS are shown to have mixed weights of Ni-\(d\) and O-\(p\) orbitals. To be more specific, as depicted in Fig. 5a, the low-energy DOS of the \(d_{x^{2}-y^{2}}\) and O-\(p_{x}/p_{y}\) orbitals share a similar structure (peak-dip-peak) near the Fermi level, suggesting the presence of a Zhang-Rice singlet band [52] (ZRSB-I) with correlated \(d_{x^{2}-y^{2}}\)-\(p_{x}/p_{y}\) electrons. Likewise, in Fig. 5b, the Ni-\(d_{3z^{2}-r^{2}}\) orbital and O-\(p_{z}\) orbitals also exhibit a similar structure (a single narrow peak) in the low-energy DOS, suggesting the formation of another singlet band (ZRSB-II) along the c-axis. It is remarkable that the vertical ZRSB-II singlet band is much narrower in bandwidth compared to ZRSB-I, which suggests a more localized nature of the \(d_{3z^{2}-r^{2}}\) orbital. There are a few points we would like to emphasize. First, the vertical \(d_{3z^{2}-r^{2}}\) singlet band is quite different from a conventional ZRSB in cuprates. In cuprates, a doped hole at oxygen sites hybridizes with a Cu\({}^{2+}\) hole in terms of the superposition of the four O-\(p\) hole states adjacent to the Cu\({}^{2+}\) ion, forming a spin singlet. The singlet then moves effectively in the antiferromagnetic background of the Cu\({}^{2+}\) lattice with a bandwidth of (2 \(\sim\) 3) eV [48]. Here the vertical singlet states of Ni-\(d_{3z^{2}-r^{2}}\)-O-\(p_{z}\) have a narrow bandwidth, and they barely interact with each other directly. Instead, the doped holes in the in-plane \(p_{x}/p_{y}\) orbitals can hybridize with the vertical singlets via the \(d_{3z^{2}-r^{2}}\)-\(p_{x}/p_{y}\) hopping (\(t_{2}=0.75\)), or via the \(p_{x}\)-\(p_{z}\) hopping (\(t_{5}=0.49,t_{7}=0.43\), see also Fig. 1). As a result, the vertical \(d_{3z^{2}-r^{2}}\) singlet band may behave more like scattering centers with antiferromagnetic characteristics in the system.
Finally, upon heavy hole doping, one can expect that the in-plane ZRSB will be destroyed while, in contrast, the vertical singlet band can remain intact, due to the imbalanced distributions of the doped holes in the two charge-transfer bands. In Table 1 we inspect specifically the hole filling level of the system at \(\mu=0\), which may arguably correspond to that of the real La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) material under pressure. As one can see, the 11-band Hubbard model is about \(21\sim 26\%\) hole doped at \(\mu=0\), with small variations depending on the value of the Hubbard \(U\) and the temperature \(T\). This result roughly coincides with the nominal average doping level (\(=25\%\)) of the \(e_{g}\) orbitals in La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\). We find that the hole doping in the out-of-plane \(d_{3z^{2}-r^{2}}\) and \(p_{z}\) orbitals combined is \(p=(n_{h}^{OP}-1)\approx(5\%\sim 8\%)\), while that of the in-plane \(d_{x^{2}-y^{2}}\) and \(p_{x}/p_{y}\) orbitals, \(p=(n_{h}^{IP}-1)\), is about \(40\%\) at \(\mu=0\). Given this large value of hole doping of \(n_{h}^{IP}\), one may expect that the in-plane \(d_{x^{2}-y^{2}}\) and \(p_{x}/p_{y}\) orbitals can be seen as itinerant orbitals in La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\).

Figure 5: Spin singlet bands and charge-transfer gaps in the local density of states (DOS). **(a):** DOS of the in-plane \(d_{x^{2}-y^{2}}\) and \(p_{x}\) (\(p_{y}\)) orbitals. **(b):** DOS of the out-of-plane \(d_{3z^{2}-r^{2}}\) and \(p_{z}\) (\(p_{z}^{\prime\prime}\)) orbitals. Dashed vertical lines indicate the Zhang-Rice singlet band (ZRSB), while arrows show the charge-transfer gap (CTG). UHB stands for upper Hubbard band, and LHB stands for lower Hubbard band. The DOS are obtained by maximum entropy (MEM) analytic continuation [53] of the Matsubara Green's functions obtained by CDMFT at \(T=0.1,n_{h}=1.05\).

However, we find that in CDMFT at \(\mu=0\), the magnetic correlations between the intra-layer \(d_{x^{2}-y^{2}}\)-\(d_{x^{2}-y^{2}}\) orbitals are weak but not vanishing (see Fig. 3b). It is notable that the strange metal (SM) phase can be very sensitive to weak magnetic correlations [54]. Hence, if the observed strange metal state in La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) at \(P>18\) GPa is also related to the magnetic correlations in the \(d_{x^{2}-y^{2}}\) orbitals, then the latter cannot be simply treated as purely itinerant.

_Effective t-J model -_ Based on our study above, we propose a four-band \(t-J\) model to describe the low-energy physics of La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) under pressure. The proposed Hamiltonian can be written as \(\mathcal{H}=\mathcal{H}_{0}+\mathcal{H}_{J}\) with \(\mathcal{H}_{0}=\sum_{\mathrm{k}\sigma}\Psi_{\mathrm{k}\sigma}^{\dagger}H(\mathrm{k})\Psi_{\mathrm{k}\sigma}\), which reads

\[\begin{split}H(k)_{1,1}&=H(k)_{3,3}=-2t_{1}^{x}[\cos(\mathrm{k_{x}})+\cos(\mathrm{k_{y}})]-4t_{2}^{x}\cos(\mathrm{k_{x}})\cos(\mathrm{k_{y}})+\epsilon_{\mathrm{x}}\\ H(k)_{2,2}&=H(k)_{4,4}=-2t_{1}^{z}[\cos(\mathrm{k_{x}})+\cos(\mathrm{k_{y}})]-4t_{2}^{z}\cos(\mathrm{k_{x}})\cos(\mathrm{k_{y}})+\epsilon_{\mathrm{z}}\\ H(k)_{1,2}&=H(k)_{3,4}=-2t^{xz}[\cos(\mathrm{k_{x}})-\cos(\mathrm{k_{y}})]\\ H(k)_{2,4}&=V_{\perp}\end{split} \tag{2}\]

where \(\Psi_{\sigma}=\left(d_{x_{1}\sigma},d_{z_{1}\sigma},d_{x_{2}\sigma},d_{z_{2}\sigma}\right)^{T}\) denotes the annihilation operators of the \(d_{x^{2}-y^{2}}\) and \(d_{3z^{2}-r^{2}}\) orbitals in the two NiO\({}_{2}\) layers.
The \(\mathcal{H}_{0}\) part is taken from the down-folded tight-binding model of Luo _et al._'s work in Ref. [3], namely, \(t_{1}^{x}\approx 0.5,t_{2}^{x}\approx 0.07,t_{1}^{z}\approx 0.11,t_{2}^{z}\approx 0.02,t^{xz}=-0.24,V_{\perp}=0.64\). Note that the renormalization factors \(g_{\mathrm{t}}\) [55] associated with the corresponding hopping amplitudes in Eq. 2 are not explicitly written down. For the interacting part \(\mathcal{H}_{J}\), we consider three main magnetic exchange terms,

\[\mathcal{H}_{J}=J_{1}\sum_{i}\left(S_{i,z_{1}}S_{i,z_{2}}-\frac{1}{4}n_{i,z_{1}}n_{i,z_{2}}\right)+J_{2}\sum_{\langle i,j\rangle,\alpha=x_{1},x_{2}}\left(S_{i,\alpha}S_{j,\alpha}-\frac{1}{4}n_{i,\alpha}n_{j,\alpha}\right)+J_{3}\sum_{\begin{subarray}{c}\langle i,j\rangle\\ \alpha,\beta=(x_{1},z_{1})/(x_{2},z_{2})\end{subarray}}\left(S_{i,\alpha}S_{j,\beta}-\frac{1}{4}n_{i,\alpha}n_{j,\beta}\right) \tag{3}\]

where \(J_{1}\) captures the exchange coupling between the on-site inter-layer \(d_{3z^{2}-r^{2}}\) orbitals, \(J_{2}\) the exchanges between the intra-layer \(d_{x^{2}-y^{2}}\) orbitals on nearest-neighboring (NN) sites, and finally \(J_{3}\) the intra-layer \(d_{x^{2}-y^{2}}\)-\(d_{3z^{2}-r^{2}}\) exchanges on NN sites. Considering our magnetic correlation results compared with the LSCO cuprate [56; 57], typical values of the antiferromagnetic exchange couplings can be set around \(J_{1}\sim 0.18\) eV, \(J_{2}\sim 0.09\) eV, and \(J_{3}\sim 0.03\) eV. Note that if the derived effects from Hund's coupling are considered [58], then \(J_{3}\) may be enhanced to be comparable to \(J_{2}\).

_Discussion and Conclusion -_ After the discovery of high-\(T_{c}\) superconductivity in La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) under pressure [2], a number of effective interacting models have been proposed to study the pairing symmetry, as inspired by the _ab initio_ calculations.
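Returning to Eq. (2), a minimal numerical sketch of the non-interacting part may be useful. The code below is not the authors' code: the site energies \(\epsilon_{x},\epsilon_{z}\) are not quoted in this excerpt and are set to zero here, the constant inter-layer element \(H(k)_{2,4}=V_{\perp}\) follows the reconstruction adopted in Eq. (2), and the Gutzwiller factors \(g_{\mathrm{t}}\) are omitted, as in the text.

```python
import numpy as np

def H0(kx, ky, t1x=0.5, t2x=0.07, t1z=0.11, t2z=0.02,
       txz=-0.24, Vperp=0.64, eps_x=0.0, eps_z=0.0):
    """4x4 Bloch Hamiltonian of Eq. (2) in the basis
    (d_{x,1}, d_{z,1}, d_{x,2}, d_{z,2}); hopping values as quoted (eV)."""
    cx, cy = np.cos(kx), np.cos(ky)
    ex = -2 * t1x * (cx + cy) - 4 * t2x * cx * cy + eps_x
    ez = -2 * t1z * (cx + cy) - 4 * t2z * cx * cy + eps_z
    hxz = -2 * txz * (cx - cy)
    H = np.zeros((4, 4))
    H[0, 0] = H[2, 2] = ex
    H[1, 1] = H[3, 3] = ez
    H[0, 1] = H[1, 0] = H[2, 3] = H[3, 2] = hxz
    H[1, 3] = H[3, 1] = Vperp
    return H

print(np.linalg.eigvalsh(H0(np.pi, 0.0)))  # band energies at k = (pi, 0)
```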
2303.06844
Why do Tweeters regret sharing? Impacts of Twitter users' perception of sharing risk, perceived problems on Twitter, and the motivation of use on their behavior of regret sharing
This study presents a secondary data analysis of the survey data collected as part of the American Trends Panel series by the Pew Research Center. A logistic regression was performed to ascertain the effects of the perceived risk of sharing, perceived problems on Twitter, and motivation of using Twitter on the likelihood that participants regret sharing on Twitter. The logistic regression model was statistically significant, χ2(15) = 102.5, p < .001. The model correctly classified 78.5 percent of cases. Whether or not Twitter users regret sharing on Twitter depends on different motivations for using Twitter. We observe that "A way to express my opinion" is statistically significant in the model, indicating that the odds of Twitter users regretting sharing for this motivation is 2.1 times higher than that of entertainment. Perceived risks of potential hostility and visibility were negatively associated with an increased likelihood of regret sharing. In contrast, perceived problems on Twitter concerning misinformation were negatively associated with the likelihood of regret sharing.
Kijung Lee
2023-03-13T04:20:37Z
http://arxiv.org/abs/2303.06844v1
**Why do Tweeters regret sharing? Impacts of Twitter users' perception of sharing risk, perceived problems on Twitter, and the motivation of use on their behavior of regret sharing**

###### Abstract

This study presents a secondary data analysis of the survey data collected as part of the American Trends Panel series by the Pew Research Center. A logistic regression was performed to ascertain the effects of the perceived risk of sharing, perceived problems on Twitter, and motivation of using Twitter on the likelihood that participants regret sharing on Twitter. The logistic regression model was statistically significant, \(\chi^{2}(15)=102.5\), p \(<.001\). The model correctly classified 78.5 percent of cases. Whether or not Twitter users regret sharing on Twitter depends on different motivations for using Twitter. We observe that "A way to express my opinion" is statistically significant in the model, indicating that the odds of Twitter users regretting sharing for this motivation are 2.1 times higher than for the motivation of entertainment. Perceived risks of potential hostility and visibility were negatively associated with an increased likelihood of regret sharing. In contrast, perceived problems on Twitter concerning misinformation were negatively associated with the likelihood of regret sharing.

Keywords: Regret sharing on Twitter, Logistic regression, Risk assessment.

## 1 Introduction

The rise of social media has rapidly changed how people communicate, build relationships, and consume information. We are beginning to see how individuals perceive risks due to their interactions on these platforms. Regret posting, the habit of posting online content that a user later regrets, has been identified as a significant risk factor of social media usage. The relationship between regret and risk perception of social media users is an increasingly discussed topic as social media use grows in popularity. The ease with which one can post a message on a social media platform has led to a heightened sense of risk perception among users, concerns about potential regret down the line, and an overall ambivalence towards using these platforms. The idea of regret is one that social media users are familiar with. When a person posts something online, whether it is a photo, a status update, or a message to someone else, there is always a chance that that person will regret it later on. This feeling often leads to intensely negative emotions and a sense that the risks of posting an item are not worth the potential temporary benefits. This feeling of regret is intensified when what was posted is inappropriate or when the post has negative or embarrassing consequences. In addition to the possibility of regretting a post, social media users are also increasingly aware of the potential risks of each post. Users know that anything posted on social media can become public knowledge, whether through malicious intent or a careless mistake. As such, users must now consider the potential consequences of posting something before they post it, making it more likely that they will take longer to consider their options before posting. The risk perception of social media users is further intensified by the increased presence of numerous online predators who use these platforms to exploit unsuspecting victims. As more and more people join these platforms, online predators can easily track and target those less mindful of their online safety.
This realization creates a heightened sense of risk perception among users as they become aware of the potential downside of posting an item online. The relationship between regret and risk perception among social media users is a complex one, and its effects can be seen in the way people use these platforms. While social media use can be beneficial, users must be aware of the potential consequences that come along with it and understand that any post has the potential for either positive or negative repercussions. As a result, the regret and risk perception of social media users continue to be closely intertwined, and it is essential that users understand both aspects of this relationship and make informed decisions when using these platforms. This paper investigates whether and how regret posting is related to the user's risk perception. We argue that discovering the causal and nuanced link between regret posting and risk perception would benefit social media users by providing insights into how they perceive risks online. From a broader perspective, it would also inform public policy and security and welfare standards in the digital age.

## 2 Literature Review

Regret is an emotion that often involves feeling sorrow, guilt, or remorse for a past action or decision. It is a concept that has been studied extensively in both psychology and philosophy. According to philosophical theories, regret involves cognizance of a lost opportunity or a wrong decision. A psychological perspective suggests that regret is a response to a discrepancy between a current state of affairs and a preferred one. Additionally, this discrepancy is typically accompanied by the motivation to undo the current state in order to restore the preferred one (Leary, 2020). This motivation is often driven by a desire to reduce or eliminate the negative emotion of regret (Quigley et al., 2020). A substantial body of research has explored the factors that lead to regret and its effects on decision-making. Specifically, research shows that regret can cause people to make more conservative decisions in the future (Krueger & Kariv, 2020). That is, people may be motivated to avoid making the same mistake twice, and as a result, they choose options with lower expected losses and fewer risks (Breheny & Daly, 2020). Furthermore, regret has been shown to increase rumination, that is, going over the same problem repeatedly in the repetitive thought loop associated with regret (Robinson et al., 2019). In addition to influencing decision-making, regret has also been linked to psychological well-being. Research suggests that regretful feelings can decrease self-esteem and life satisfaction (Aharon-Lansky, 2016). Furthermore, regret has been linked to increased anxiety, depression, and psychological distress (Robinson et al., 2019). Studies of regret have also explored the potential for people to learn from and benefit from their mistakes. Some research has found that people can use regret and rumination constructively, such as by looking back to reflect on what went wrong and how to improve in the future (Robinson et al., 2019; Zhou and Zhuang, 2019). This idea of "productive regret" suggests that people can use regretful emotions to motivate themselves to make better decisions and thus create better outcomes. Regret is a complex emotion that can significantly impact decision-making and psychological well-being. Philosophical theories suggest that regret involves the cognizance of a wrong decision or lost opportunity.
Psychological literature demonstrates that regret is often linked to more conservative decision-making and decreased self-esteem. Additionally, research suggests that people can constructively use regret and rumination to understand the past better, gain insight into their behavior, and motivate themselves to make better decisions in the future.

### Regret posting on social media

The relative anonymity of cyber communication can lead some users to post items they later regret. A literature review of regret and social media revealed a range of psychological outcomes associated with unwanted posts and the strategies employed to cope with regret. The most common psychological reaction to unwanted posts is regret and self-perceived social disapproval (Dinu, Papadopoulos, and Polyzios, 2018; Jeon and Park, 2017; Spruijt and Ophoff, 2017). Self-perceived social disapproval is linked to heightened regret and anxiety because of a fear of social retribution (Spruijt and Ophoff, 2017). The degree of regret and anxiety increases with the visibility of a post, the size of the social network, and the level of group membership (Jeon and Park, 2017; Karahan, Burgess and Elhai, 2018). Several strategies are employed to cope with regret over social media posts. The most common is to delete, modify, or conceal the post (Bohn and Hein, 2015). Self-forgiveness is another strategy employed by users to cope with regret (Niedrich, 2015). Posters may also rationalize the post to lessen regret (Spruijt and Ophoff, 2017). Strategies such as these are informed by an individual's level of self-esteem, perceived social approval, and trust in others (Dinu, Papadopoulos, and Polyzios, 2018). To reduce regret over social media posts, some users have taken proactive actions such as curating their online profiles and limiting the size of their online networks (Karahan, Burgess, and Elhai, 2018). Some users have even proactively created external policies to govern their online behavior (Niedrich, 2015). It has become clear that different users employ various coping strategies informed by their level of self-esteem and their relationship with the online community. Going forward, it is essential to develop interventions to reduce the regret associated with social media posts.

### Regret and assessment of risk

In essence, regret has been defined as an emotional experience involving sadness and a sense of being deeply unsatisfied. Research has demonstrated that it can be a compelling motivator for people when making decisions about risk and risk-taking (Machina, 2009). Regret has also been shown to play a role in anticipating, evaluating, and managing risks in decision-making (Geerlings, Wetzels, & Hoogwegt, 2005). One of the primary ways in which regret has been linked to risk assessment is through the concept of 'expected utility theory,' which suggests that people calculate a subjective utility for each input and outcome of a given situation in order to make the best decision (Bazerman et al., 1986). This theory suggests that when regret is considered during the decision process, people adjust their expectations to account for the possibility of loss or regret. As such, regret can be seen as an essential factor in evaluating risk, as it helps to better quantify potential outcomes and consequences. In addition to the expected utility theory, research has also suggested that regret can be a powerful tool in assessing risk.
For example, van Oosterhout and Hovland (2012) studied the effects of regret on behavior concerning gambling decisions and found that those with higher levels of regret exhibited more risk-averse behavior. This suggests that regret helps people evaluate potential outcomes and may also cause them to adjust their behavior to avoid scenarios with a greater likelihood of regretful outcomes. Finally, research has also demonstrated that regret can lead to 'riskier' decisions in certain contexts. In a study by Coricelli, Van der Luis, and Weber (2005), participants who exhibited higher levels of regret were found to take greater risks when playing a game involving incomplete information. This may suggest that while regret can lead to more conservative decision-making in some scenarios, it can also play a role in riskier behaviors in specific contexts. Overall, the literature demonstrates that regret is an essential factor in risk assessment. Not only does regret help to quantify potential outcomes and consequences, but it may also lead to riskier behavior in certain situations. Therefore, it is crucial to consider the influence of regret when making decisions about risk. To explore the predictive relationship between the behavior of regret sharing on Twitter and the risk perceptions of Twitter users, we propose a research question:

RQ: Is Twitter users' behavior of regretting what they share on Twitter influenced by their perception of sharing risk, their perception of problems on Twitter, and their motivation for using Twitter?

## 3 Methods

This study presents a secondary data analysis of the survey data collected as part of the American Trends Panel series by the Pew Research Center. The data was collected during the panel wave from May 17 to May 31, 2021. The primary sample is drawn from the sampling frame consisting of the panelists who identified as Twitter users, ages 18 and older, living in the U.S.

### Data preparation

The data was cleansed and prepared for the principal analysis of binomial logistic regression to predict the users' regret sharing behavior based on the perceived risk of sharing, perceived problems on Twitter, and motivation for using Twitter. Out of 2,548 participants who completed the survey by the Pew Research Center, our data cleansing resulted in 2,045 participants (n=2,045) after removing irrelevant and erroneous data. We checked a series of assumptions against the criteria of binomial logistic regression: 1) the dependent variable is measured on a dichotomous scale; 2) of the three independent variables, two are on a continuous scale while one is on a categorical scale; 3) the observations are independent, and the dependent variable has mutually exclusive and exhaustive categories; and 4) the linear relationship between the continuous independent variables and the logit transformation of the dependent variable is demonstrated through the Box-Tidwell procedure.

### Variables and measurements

The primary analysis of this study is binomial logistic regression to predict the behavior of Twitter users' regret sharing based on the perceived risk of sharing, perceived problems on Twitter, and motivation for using Twitter. The dependent variable, i.e., regret sharing, is measured on a dichotomous scale with the answer choices "Yes, have done this" or "No, have not done this" to a question stating, "Have you ever posted something on Twitter that you later regretted sharing?"
The first set of independent variables, i.e., perceived risk of sharing, consists of 5 questions about the users' risk perception and assessment when deciding whether to do things on Twitter. Each question measures 1) offending others, 2) potential hostility, 3) potential attack, 4) visibility, and 5) impression management. The questions are asked on a 4-point Likert scale, each point representing "A great deal," "Some," "Not too much," and "Not at all". For example, the respondents were asked to answer about a specific risk context, e.g., "Whether it will offend people who follow you," starting with a general leading question, "How much, if at all, do you consider the following when deciding whether to do things on Twitter that might be visible to other people - such as posting, retweeting, or liking something?" The second set of independent variables, i.e., perceived problems on Twitter, consists of 5 questions about the Twitter users' degree of awareness of problems on Twitter. Each item measures 1) civility of discussions, 2) user banning, 3) content moderation, 4) abuse from other users, and 5) misinformation. The items are measured on a 3-point Likert scale, indicating "A major problem," "A minor problem," and "Not a problem." For example, the respondents were asked to answer about a specific problem, e.g., "Inaccurate or misleading information," starting with a general leading question, "How much of a problem, if at all, do you think each of the following is on Twitter?" The third independent variable, i.e., motivation for using Twitter, is measured with a question, "Which would you say is the MOST important reason you use Twitter?" The respondents were asked to choose one from the choices of "Entertainment," "A way to stay informed," "Keeping me connected to other people," "Lets me see different points of view," "A way to express my opinions," and "It is useful for my job or school."

## 4 Results and Analysis

### Demographic information

The sample (n=2,045) consists of 48.7 percent male and 50.5 percent female, with 0.9 percent claiming other. The age group between 30 and 49 comprises 44.4 percent of the sample, followed by 50-64 (28 percent), 18-29 (16.8 percent), and 65 and older (10.6 percent). 68.1 percent report having a college or postgraduate degree, while only 1.3 percent report their education as less than high school. Concerning their ideology, 34.3 percent identify themselves as moderate, while 17.7 percent are either very conservative or conservative, and 47.4 percent are either very liberal or liberal. 84.1 percent of the respondents were born in the U.S., while 10.6 percent report that they have lived in the U.S. for over ten years.

### Descriptive statistics and preliminary analysis

Perceived risk of sharing is measured with five individual questions, each representing 1) offending others, 2) potential hostility, 3) potential attack, 4) visibility, and 5) impression management. Although the survey manual does not indicate that they are designed as a scale, the reliability analysis for a scale indicates reliable statistics (Cronbach's alpha = .85, mean of inter-item correlations = .53). In addition, respondents answered each question distinctly, as shown in the ANOVA between the items (F(4, 8176) = 68.2, p\(<\).001). Perceived problems on Twitter are measured with five individual questions, each representing 1) civility of discussions, 2) user banning, 3) content moderation, 4) abuse from other users, and 5) misinformation.
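The reliability statistics reported in this subsection (Cronbach's alpha and the mean inter-item correlation) can be reproduced with a short sketch like the following, continuing the hypothetical data frame of the earlier fragment.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a DataFrame whose columns are the scale items."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

def mean_interitem_r(items: pd.DataFrame) -> float:
    """Average of the off-diagonal entries of the item correlation matrix."""
    r = items.corr().to_numpy()
    mask = ~np.eye(r.shape[0], dtype=bool)
    return r[mask].mean()

# Hypothetical item columns for the five perceived-risk questions.
risk_items = df[[f"risk_{i}" for i in range(1, 6)]]
print(cronbach_alpha(risk_items), mean_interitem_r(risk_items))
```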
Unlike the perceived risk of sharing, the reliability analysis for a scale shows weaker internal consistency among the items (Cronbach's alpha = .61, mean of inter-item correlations = .24). ANOVA between the items (F(4, 8176) = 679.72, p\(<\).001) shows a distinctive difference among the items within the respondents. The motivation for using Twitter is measured with a question, "Which would you say is the MOST important reason you use Twitter?" The respondents indicated one from the choices of "Entertainment" (34.2 percent), "A way to stay informed" (31.4 percent), "A way to express my opinions" (6.7 percent), "Keeping me connected to other people" (9.3 percent), "Lets me see different points of view" (10.4 percent), and "It is useful for my job or school" (8 percent). As shown in Figure 1, the entertainment and information motivations stood out as the primary reasons people use Twitter. A crosstabulation between motivation (IV3) and regret behavior (DV) shows \(\chi^{2}(5)=14.4\), p\(<\).05, indicating a statistically significant association between motivation and regret behavior; that is, whether Twitter users regret or do not regret sharing on Twitter depends on their different motivations for using Twitter.

### Inferential statistics

To answer the main research question, logistic regression was performed to ascertain the effects of the perceived risk of sharing, perceived problems on Twitter, and motivation for using Twitter on the likelihood that participants regret sharing on Twitter. The logistic regression model was statistically significant, \(\chi^{2}(15)=102.5\), p\(<\).001. The model explained 7.6 percent (Nagelkerke R squared) of the variance in regret sharing and correctly classified 78.5 percent of cases. Whether Twitter users regret or do not regret sharing on Twitter depends on their different motivations for using Twitter. We observe that "A way to express my opinion" is statistically significant in the model, indicating that the odds of a Twitter user regretting sharing are 2.1 times higher for this motivation than for the entertainment motivation. Perceived risks of potential hostility and visibility were negatively associated with the likelihood of regret sharing, and perceived problems on Twitter concerning misinformation were likewise negatively associated with the likelihood of regret sharing.

Figure 1: A Crosstabulation between motivation and regret behavior

## 5 Discussion and Conclusion

Researching regret posting on social media has a range of implications. First, it raises questions about the impact of social media on our well-being. Many studies have suggested that using social media may have a negative impact on mental health and psychological well-being. Research on regret posting could provide further insights into this issue, considering the psychological consequences of sharing too much personal information online. Second, the research could affect how people use social media more generally. People who regret posting may be more cautious about sharing information in the future. They may also be more likely to take steps to carefully manage their online profiles and think twice before posting anything that could be seen as impulsive or controversial. As such, the research findings may lead to improved online safety and information management behaviors. Third, research on regret posting could help inform privacy policies and guidelines designed to protect users.
Social media platforms could use the research results to develop better ways to protect users' privacy and combat potential cases of regret posting. They could also use the information to better educate users on the potential risks of sharing personal information online. Finally, the research could have implications for how people use social media as a form of communication. People may be more aware of their digital footprints and the potentially damaging impact that regret posting can have. As such, people may opt for more measured, controlled forms of communication that are less likely to cause regret or embarrassment.
2308.03496
Design and Implementation of an Efficient Onboard Computer System for CanSat Atmosphere Monitoring
With advancements in technology, the smaller versions of satellites have gained momentum in the space industry for earth monitoring and communication-based applications. The rise of CanSat technology has significantly impacted the space industry by providing a cost-effective solution for space exploration. CanSat is a simulation model of a real satellite and plays a crucial role in collecting and transmitting atmospheric data. This paper discusses the design of an Onboard Computer System for CanSat, used to study various environmental parameters by monitoring the concentrations of gases in the atmosphere. The Onboard Computer System uses GPS, accelerometer, altitude, temperature, pressure, gyroscope, magnetometer, UV radiation, and air quality sensors for atmospheric sensing. A highly efficient and low-power ESP32 microcontroller and a transceiver module are used to acquire data, facilitate seamless communication, and transmit the collected data to the ground station.
Abhijit Gadekar
2023-08-07T11:43:02Z
http://arxiv.org/abs/2308.03496v1
# Design and Implementation of an Efficient Onboard Computer System for CanSat Atmosphere Monitoring

###### Abstract

With advancements in technology, the smaller versions of satellites have gained momentum in the space industry for earth monitoring and communication-based applications. The rise of CanSat technology has significantly impacted the space industry by providing a cost-effective solution for space exploration. CanSat is a simulation model of a real satellite and plays a crucial role in collecting and transmitting atmospheric data. This paper discusses the design of an Onboard Computer System for CanSat, used to study various environmental parameters by monitoring the concentrations of gases in the atmosphere. The Onboard Computer System uses GPS, accelerometer, altitude, temperature, pressure, gyroscope, magnetometer, UV radiation, and air quality sensors for atmospheric sensing. A highly efficient and low-power ESP32 microcontroller and a transceiver module are used to acquire data, facilitate seamless communication, and transmit the collected data to the ground station.

keywords: CanSat, Telemetry, Atmospheric Sensing, Onboard Computer System, Satellite

## 1 Introduction

Satellites are man-made objects that orbit the Earth and are used for various purposes. The different types of satellites include communication satellites, navigation satellites, weather satellites, remote sensing satellites, and scientific research satellites, each used for different applications. CanSat is one such type of small satellite that is designed to fit inside a can, often used for educational purposes. While CanSats share a similar design to CubeSats, they are significantly smaller and less costly to develop [1]. They are typically launched from rockets and reach altitudes of a few hundred meters before descending back to Earth using parachutes. During their flight, CanSats can collect various types of atmospheric data. The Onboard Computer (OBC) of a CanSat is a small, autonomous computer system that plays a critical role in the success of the CanSat mission [2]. The OBC is responsible for controlling and monitoring the CanSat's various subsystems, including data acquisition, power management, communication, and sensor calibration. The OBC is designed to operate in space, where it must function reliably in the presence of radiation, extreme temperatures, and vibration. It is built using off-the-shelf components such as microcontrollers, sensors, and communication devices, together with software algorithms custom-designed to the specific mission requirements. This paper presents an in-depth investigation into the design and performance of the low-cost OBC of a CanSat suitable for STEM education. The study includes a detailed analysis of the hardware components, power management system, software architecture, and cost-effectiveness of the OBC. It investigates the objectives of the OBC designed for a CanSat, which include: recording the air quality and UV radiation using atmospheric remote sensing sensors; tracking the CanSat's health parameters such as its acceleration, calibration, gyroscope readings, and power consumed; recording atmospheric parameters such as temperature, pressure, humidity, and altitude; obtaining exact location coordinates using GPS; and transmitting the acquired payload data through the downlink to the ground station in real time.
The low-cost OBC also has a modular design architecture, allowing components such as sensors and communication devices to be easily replaced or upgraded, providing flexibility in educational applications. The remaining sections of this paper are organized as follows. Section 2 provides a literature review of CanSats for various applications. Section 3 offers a detailed description of the design of an Onboard Computer System (OBC), highlighting the key components such as the hardware subsystem, ground station, and software subsystem. It provides a thorough overview of the design process, including the development of each subsystem, which is integrated to achieve optimal performance for the CanSat. The operational functioning of the OBC is described in detail in Section 4. Section 5 presents the results of the OBC performance and discussions of the findings. Section 6 summarizes the key findings from the study and a conclusion on the design of an OBC for a CanSat, as well as its potential applications. Section 7 offers recommendations that include suggestions for further improvements.

## 2 Materials and Methods

In recent years, CanSats have gained popularity as an educational tool for teaching students about aerospace engineering, electronics, and other related subjects. They are relatively low-cost, which makes them accessible to a wide range of students and educators. A CanSat OBC with autonomous control and wireless communication capabilities was developed for an educational satellite system [3]. The OBC performs functions such as attitude control, wireless data transmission, and temperature and humidity sensing. It uses an Arduino microcontroller and a Zigbee wireless communication module to establish communication between the OBC and the ground station. An autonomous control algorithm was deployed, which enables the CanSat to perform tasks such as detecting and correcting its orientation during flight. The software framework of a CanSat can be developed with a real-time operating system (RTOS) using the FreeRTOS kernel [4]. The use of an RTOS allowed for reliable and efficient execution of the software on the OBC. An Atmel AVR microcontroller, specifically the ATmega328P, was used as the brain of the OBC. A range of sensors was used for atmospheric monitoring, such as a GPS receiver, a temperature sensor, an accelerometer, and a radio transceiver for wireless communication. The OBC was made to be compact and lightweight, weighing only 60 grams and consuming minimal power. The performance of the OBC was evaluated in terms of its power consumption, data acquisition and processing capabilities, and wireless communication range. A miniature OBC for a CanSat was implemented using commercial-off-the-shelf (COTS) components. The use of the Raspberry Pi as the CPU, combined with other COTS components and open-source software, provided a practical and efficient solution for controlling and communicating with the CanSat [5]. Another low-cost OBC was developed for a CanSat using the AVR microcontroller ATmega2560. The OBC includes a real-time clock (RTC) module for timekeeping and a micro SD card for data storage, and is programmed in the C language using the Atmel Studio Integrated Development Environment (IDE) [6]. A modular OBC based on an ARM Cortex-M3 microcontroller was developed, capable of handling various tasks such as data acquisition, telemetry, and control of the CanSat. It allows for easy customization and expansion [7].
The development of a Raspberry Pi OBC equipped with sensors for measuring temperature, humidity, pressure, and acceleration was accomplished for monitoring the atmosphere. It was capable of transmitting data to a ground station using a radio module [8]. A low-cost OBC based on an Arduino microcontroller was designed for handling tasks such as data acquisition, telemetry, and control of the CanSat [9].

## 3 Theory and Calculations

Atmospheric parameters are variables that describe the state of the Earth's atmosphere. They include physical properties such as temperature, pressure, humidity, and wind speed, as well as chemical properties such as greenhouse gas concentrations and air quality. The rise in air pollution levels can be attributed to multiple factors, including increased road transportation vehicles, industries, and other sources. However, accurate measurement of air pollutants is hindered by the high cost of environmental monitoring processes, leading to a lack of reliable data on the extent of the problem. Measuring atmospheric parameters is essential for understanding weather patterns, climate change, and air quality, and for making predictions. The proposed Onboard Computer System for a CanSat focuses on achieving the objectives of recording air quality and UV radiation, tracking the CanSat's health parameters, recording atmospheric parameters, obtaining the exact location coordinates of the CanSat, and transmitting the acquired payload data in real time to the Ground Station. The Onboard Computer System is divided into three systems: the hardware system of the CanSat, the Ground Station system, and the software system of the CanSat. Figure 1 below shows the system block diagram of the OBC implemented for a CanSat.

### Hardware System of CanSat

The hardware design of the onboard computer system is a critical component of a CanSat. It is used for processing data from the CanSat's sensors, communicating with the ground station, and executing mission-specific commands, and it must adhere to specific requirements and limitations to ensure its successful operation. First, all components of the CanSat must be able to fit within a standard-sized can, except for the parachute. Additionally, the CanSat must be powered by a battery, providing the necessary energy for its functions. Moreover, the CanSat's weight should be kept to a minimum to allow for greater altitude and to ensure accurate measurement of various parameters at different altitude levels. For precise results in CanSat operations, it is crucial to consider the design constraints and requirements. The OBC's hardware system is categorized into the Sensor, Communication, and Electric Power subsystems, which are explained in the following sections. Since the OBC serves as the brain of the CanSat, it is essential to consider factors such as performance, power consumption, communication capabilities, weight, and ease of use when selecting OBC hardware. Microcontrollers such as the Arduino, ESP, and Raspberry Pi are popular choices for their affordability and ease of use. They are also highly customizable, allowing users to add various sensors and peripherals to their CanSat system. Table 1 presents a comparative analysis conducted to select the best onboard computer for a CanSat project. The ESP32 was selected as the OBC due to its low-power applications, built-in Wi-Fi and Bluetooth connectivity, and ease of programming, thus providing a cost-effective solution.
#### 3.1.1 Sensor Subsystem Design

A sensor subsystem is a set of sensors used to collect data about a specific environment or system. In this case, the subsystem consists of sensors that measure different parameters such as UV radiation, altitude, temperature, humidity, pressure, GPS, acceleration, gyroscope, and air quality. In a comparison of various sensors, factors such as accuracy, range, sensitivity, response time, and cost were evaluated.

Figure 1: System Block Diagram of an OBC for a CanSat

Instead of using individual sensors for acceleration, gyroscope, temperature, humidity, pressure, and altitude measurements, an all-in-one sensor module was chosen for the OBC. The GY-87 module includes a barometer, magnetometer, gyroscope, accelerometer, and thermometer, all on one board. This not only reduces the overhead of receiving values from different sensors, but it is also lightweight and takes up less space, making it an ideal choice for the CanSat's payload. The GY-87 module is a single-board solution that includes multiple sensors such as the MPU6050 3-axis accelerometer and 3-axis gyroscope, the HMC5883L triple-axis magnetometer, and the BMP180 barometric pressure sensor. To connect the GY-87 module to the ESP32 MCU, the SDA (Serial Data Pin) and SCL (Serial Clock Pin) of the module need to be connected to the corresponding pins on the MCU. This is done via I2C communication and allows the MCU to receive sensor data from the GY-87 module. The Neo 6M GPS module was chosen as the GPS sensor for the CanSat project due to its ability to provide accurate location information in terms of latitude and longitude. This module utilizes a combination of GPS and GLONASS satellites to determine location and is capable of providing a 10-meter accuracy. The module is also relatively small and lightweight, making it an ideal choice. The Neo 6M GPS module communicates with the MCU via UART communication.

\begin{table} \begin{tabular}{p{56.9pt} p{56.9pt} p{56.9pt} p{56.9pt} p{56.9pt}} \hline \hline Parameter & ESP32 & STM32 & Arduino Nano & Raspberry Pi Pico \\ \hline Processor & Dual-core Tensilica LX6, 32-bit & ARM Cortex-M4, 32-bit & AVR ATmega328P, 8-bit & ARM Cortex-M0+, 32-bit \\ Clock speed & Up to 240 MHz & Up to 180 MHz & 16 MHz & Up to 133 MHz \\ Memory & 520 KB SRAM, 4 MB flash & 128 KB SRAM, 512 KB flash & 2 KB SRAM, 32 KB flash & 264 KB SRAM, 2 MB flash \\ I/O & 34 programmable GPIO pins & 70-114 GPIO pins & 14 digital I/O pins, 6 analog pins & 26 multi-function GPIO pins \\ Connectivity & Wi-Fi, Bluetooth, I2C, SPI, UART & USB, CAN, UART & USB, I2C, SPI, UART & USB, I2C, SPI, UART \\ Operating Voltage & 2.2 V to 3.6 V & 1.7 V to 3.6 V & 5 V & 1.8 V to 5.5 V \\ Price & \$4 to \$8 & \$5 to \$20 & \$5 to \$15 & \$4 \\ \hline \hline \end{tabular} \end{table} Table 1: Comparative Analysis of OBCs for CanSat

Air quality monitoring is an essential aspect of the CanSat project, and the MQ-135 gas sensor was selected as the primary sensor for this purpose. This sensor is capable of detecting a range of gases, including carbon dioxide, ammonia, and benzene, and provides an analog output that can be read by the MCU. The MQ-135 gas sensor is designed to detect a range of polluting gases in the air. It utilizes SnO2, which has a higher resistance in clean air, as a gas-sensing material. When polluting gases are present, the resistance of the sensor decreases in proportion to the concentration of the gas.
The MQ-135 gas sensor is capable of detecting concentrations of up to 2000 ppm. The GUVA-S12SD UV Light Sensor Module was chosen as the primary UV light sensor. This analog sensor is capable of converting UV light intensity into a proportional voltage output that can be measured by the MCU. The sensor communicates with the MCU via an analog input pin. It can detect ultraviolet radiation in the range of 200-400 nm. It is commonly used for detecting sunlight and measuring UV exposure. Figure 2 shows the sensor subsystem's data flow to the ESP32 MCU.

#### 3.1.2 Communication Subsystem Design

The Communication Subsystem design for a CanSat mission is an essential component that enables the transmission of data from the CanSat to the Ground Station. In this design, the HC-12 433 MHz transceiver module is used as the communication device between the CanSat and the Ground Station.

Figure 2: Sensor Subsystem Data

The HC-12 433 MHz transceiver module is a radio frequency (RF) module that operates in the 433 MHz frequency band. This module is chosen for its reliable performance, low power consumption, and ability to operate over a long range. The module is equipped with an onboard antenna, which enables the transmission and reception of data over a distance of up to 1 km in open space. The module operates through the UART serial communication protocol interfaced with the microcontroller. The HC-12 433 MHz transceiver is also equipped with several features that make it easy to use and integrate into the CanSat's Communication Subsystem. These features include a configurable data rate, adjustable transmit power, and an onboard status LED that indicates the module's operational status. The key parameters of the HC-12 module are described in Table 2.

#### 3.1.3 Electric Power Subsystem Design

The OBC includes several sensors, each with its own power requirements. To calculate the power needed, the operating voltage and current of each sensor were taken into consideration. This information was used to design and implement the electric power subsystem for the CanSat, ensuring that it had sufficient power to operate all of its sensors and components. The power (_P_) is the product of the voltage (_V_) and the current (_I_), i.e., _P_ = _V_ \(\times\) _I_ (Equation 1). Table 3 shows the power budget analysis for the OBC.

\begin{table} \begin{tabular}{l l} \hline \hline Parameter & Value \\ \hline Frequency & 433.4 - 473.0 MHz \\ Baud Rate & 1200, 2400, 4800, 9600, 19200, 38400, 57600, 115200 bps \\ Transmit Power & 0.8 - 100 mW (adjustable) \\ Sensitivity & -117 dBm \\ Operating Voltage and Current & 3.2 - 5.5 VDC; 30 mA (transmitting), 19 mA (receiving), 2.5 uA (sleep mode) \\ \hline \hline \end{tabular} \end{table} Table 2: Key Parameters of HC-12 Communication Module

\begin{table} \begin{tabular}{l l l l l} \hline \hline Component & Function & Voltage (Max) & Current & Operational Power \\ \hline GY-87 & Sensor & 3.3 V & 4.034 mA & 13.315 mW \\ Neo 6M & Sensor & 3.3 V & 67 mA & 221.1 mW \\ MQ-135 & Sensor & 5 V & 20 mA & 100 mW \\ GUVA-S12SD & Sensor & 5 V & 20 mA & 100 mW \\ HC-12 & Communication & 5 V & 150 mA & 750 mW \\ ESP32 & Microcontroller & 5 V & 250 mA & 1250 mW \\ **Total** & & & **511 mA** & \\ \hline \hline \end{tabular} \end{table} Table 3: Power Budget Analysis

An Orange ISR 18650 battery with an operating voltage of 3.7 V and a capacity of 2200 mAh was chosen to meet these requirements. Lithium-ion was chosen over lithium-polymer due to its high leakage protection and reliability.
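As a cross-check of Table 3, the short script below reproduces the power budget arithmetic (_P_ = _V_ × _I_) and an idealized battery runtime. The figures are taken directly from Table 3; the ideal runtime is an upper bound that real operation will not reach, because of regulator losses, peak transmit currents, duty cycling, and temperature effects.

```python
# Power budget sketch using the values from Table 3.
loads = {                       # component: (voltage [V], current [mA])
    "GY-87":      (3.3, 4.034),
    "Neo 6M":     (3.3, 67.0),
    "MQ-135":     (5.0, 20.0),
    "GUVA-S12SD": (5.0, 20.0),
    "HC-12":      (5.0, 150.0),
    "ESP32":      (5.0, 250.0),
}

total_mA = sum(i for _, i in loads.values())
for name, (v, i) in loads.items():
    print(f"{name:11s} {v * i:8.3f} mW")     # P = V * I per component
print(f"total current: {total_mA:.0f} mA")   # ~511 mA, as in Table 3

capacity_mAh = 2200                          # Orange ISR 18650 cell
print(f"ideal runtime: {capacity_mAh / total_mA:.1f} h")  # upper bound only
```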
The chosen battery was integrated into the CanSat's electric power subsystem to ensure that it had sufficient power to operate throughout the duration of the mission.

### Ground Station Design

The ground station design plays a significant role in receiving and processing data transmitted by the CanSat. The Arduino Nano, a popular microcontroller with a wide range of I/O options and programming capabilities, was selected as the Ground Station computer. It is compact and affordable, thus making it a suitable choice for building a ground station. The HC-12 module, on the other hand, is a wireless communication module that offers long-range communication capabilities at a low cost. The HC-12 wireless transceiver module was interfaced with the Arduino Nano microcontroller board using a UART interface. The HC-12 module was connected to the Arduino's RX and TX pins, while the baud rate was set to 9600 bits per second. This allowed for the wireless transmission of data between the CanSat and the ground station using the HC-12 module, which had a range of up to 1 km in open-air conditions. Figure 3 shows the interfacing diagram of the HC-12 module and the Arduino Nano for the Ground Station design.

Figure 3: Interfacing Diagram of Ground Station Design

### Software System of CanSat

The software development plan for the CanSat OBC is categorized into three main phases. In the first phase, component-level development and testing were conducted. This phase ensured that each component of the OBC was functioning correctly before being integrated with other subsystems. The second phase, integration testing, ensured that the software was ready for final testing with integrated subsystems. The final phase, final calibration and system testing, was conducted on the OBC and involved the implementation of a web server. This phase served as the final check to ensure that the OBC was functioning correctly and met all requirements. A web server was developed to showcase sensor data retrieved from an HC-12 module. The software subsystem is developed using the Arduino Integrated Development Environment (IDE) and several libraries, such as those for the MPU6050 accelerometer, the BMP180 temperature/pressure sensor, and Adafruit NeoGPS. The sensors are connected to the ESP32 microcontroller using serial communication protocols, such as I2C and UART. Figure 4 shows the flow of data collection from the sensors.

Figure 4: Flowchart of data collection

## 4 Working

The working of the designed OBC is as follows (an illustrative sketch of this loop is given after this list):

1) Data Acquisition: The software subsystem of the CanSat OBC acquires data from various sensors such as the MQ-135, NEO 6M, GUVA-S12SD, GY-87, and BME280, interfaced as shown in Figure 5. The sensors measure environmental parameters such as air quality, temperature, humidity, GPS location, pressure, altitude, and UV radiation.

2) Data Processing and Transmission: The acquired data is processed by the microcontroller (ESP32) of the CanSat OBC. The microcontroller calibrates the data to ensure accuracy. The processed data is then transmitted to the ground station using wireless communication through the HC-12 module. The transmission protocol includes data packaging and error checking to ensure data integrity during transmission.

3) Data Reception and Analysis: The ground station receives the transmitted data and stores it in a database for further analysis. Various statistical and analytical methods are used to extract useful information from the data.
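The flight code in this work is written in the Arduino IDE (C++); purely as an illustration of the acquire/package/transmit loop described above, the following is a MicroPython-style sketch (MicroPython also targets the ESP32). All pin assignments are hypothetical, and the CSV payload with an NMEA-style XOR checksum is an assumed packet format, since the paper states that packaging and error checking are used but does not give the exact format.

```python
# Illustrative MicroPython sketch of the OBC acquire/package/transmit loop.
import time
import struct
from machine import ADC, I2C, UART, Pin

i2c = I2C(0, scl=Pin(22), sda=Pin(21))       # GY-87 (MPU6050 etc.) on I2C
hc12 = UART(2, baudrate=9600, tx=25, rx=26)  # HC-12 433 MHz downlink
mq135 = ADC(Pin(34))                         # MQ-135 air quality (analog)
guva = ADC(Pin(35))                          # GUVA-S12SD UV sensor (analog)
for adc in (mq135, guva):
    adc.atten(ADC.ATTN_11DB)                 # full 0-3.3 V input range

MPU = 0x68
i2c.writeto_mem(MPU, 0x6B, b"\x00")          # wake the MPU6050

def read_mpu6050():
    # 14 registers from 0x3B: accel xyz, temperature, gyro xyz (big-endian int16)
    ax, ay, az, t, gx, gy, gz = struct.unpack(
        ">7h", i2c.readfrom_mem(MPU, 0x3B, 14))
    # +-2 g scale (16384 LSB/g); datasheet temperature conversion
    return ax / 16384, ay / 16384, az / 16384, t / 340 + 36.53

def frame(payload: str) -> str:
    chk = 0
    for ch in payload:                       # simple NMEA-style XOR checksum
        chk ^= ord(ch)
    return "$%s*%02X\n" % (payload, chk)

while True:
    ax, ay, az, temp_c = read_mpu6050()
    payload = "{:.2f},{:.2f},{:.2f},{:.1f},{},{}".format(
        ax, ay, az, temp_c, mq135.read(), guva.read())
    hc12.write(frame(payload))               # downlink to the ground station
    time.sleep(1)
```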
Figure 5: Interfacing Diagram of OBC

A CAD model was created to place the OBC components in a CanSat, taking into account clearance, signal routing, and thermal considerations, and complying with manufacturing and performance requirements, as shown in Figure 6.

Figure 6: CAD Model of an OBC for CanSat

## 5 Results and Discussion

The CanSat OBC demonstrated effective performance during experimentation and testing, aligning with its theoretical calculations. Figure 7 shows the integration testing of the OBC on a breadboard.

Figure 7: Testing of an OBC for a CanSat

The OBC system was tested in outdoor environments at heights ranging from ground level up to 60 meters. Serial communication between the ground station and the CanSat up to 800 m was accomplished by the HC-12 module. The use of a web server for data from the CanSat enabled real-time monitoring of the CanSat's position and surroundings. The sensors took a calibration time of 2 seconds for accurate readings. The GUVA-S12SD sensor detected UV rays in sunlight in the 200-370 nm wavelength range with a response time of less than 0.5 seconds. It was observed that the amount of UV radiation that reaches us is influenced by several factors, including the amount of cloud cover and air pollution; thus, the UV index is higher at high altitudes than at ground level. The GY-87 module measured acceleration in all three axes and the orientation of the OBC with a sensitivity of 2 g and 0.1. The sensor measured temperature at different altitudes with an accuracy of 1 °C. It was observed that temperatures decrease with altitude due to the decrease in air pressure. The BMP180 sensor on the GY-87 measured atmospheric pressure in millibars, which was used to compute the estimated altitude. The GPS sensor was capable of determining the precise location of the OBC with an accuracy of up to 2 meters. The MQ-135 sensor measured the air quality in ppm, which gave vital information regarding its surroundings. The ESP32 microcontroller has an onboard analog-to-digital converter (ADC) that was used to measure its input voltage, thus giving the OBC's power consumption details. The battery provided a run time of 40 minutes. Figure 8 shows the data received from the OBC at the ground station. Figure 9 shows the OBC's data displayed on the web server after reception at the ground station. The changes in atmospheric parameters with respect to different altitude levels are shown in Figure 10.

## 6 Conclusion

This paper presented a practical and cost-efficient solution for gathering and relaying environmental data from high altitudes using the CanSat OBC. It comprises various sensors to collect and transmit data, such as temperature, humidity, pressure, altitude, air quality, UV radiation, and GPS location. By integrating the OBC with a web server, real-time access to data was made possible, enabling more in-depth analysis and informed decision-making. Nevertheless, the OBC can benefit from further improvements to enhance its accuracy, processing speed, and transmission range. Possible enhancements include incorporating machine learning algorithms to optimize data processing and analysis, as well as integrating advanced wireless communication technologies to extend the reach of data transmission. Overall, this study lays the groundwork for future research and development of environmental monitoring from high altitudes.

## 7 Recommendations

As technology continues to advance, future upgrades to the OBC can further enhance its capabilities and expand its potential uses beyond its current functionality.
The low-cost OBC used in a CanSat can serve as a valuable tool for mission control. The OBC of a CanSat can be improved in various ways to enhance its performance, including using low-power and highly efficient boards and FPGAs. In addition to these factors, other considerations, such as the selection of a better microcontroller and sensors, can also play a significant role in determining the optimal Onboard Computer System for a specific application. With the ability to adapt to new technologies, the OBC has the potential to become an even more integral component in the success of CanSat missions.
2303.08869
Probing Cosmological Particle Production and Pairwise Hotspots with Deep Neural Networks
Particles with masses much larger than the inflationary Hubble scale, $H_I$, can be pair-produced non-adiabatically during inflation. Due to their large masses, the produced particles modify the curvature perturbation around their locations. These localized perturbations eventually give rise to localized signatures on the Cosmic Microwave Background (CMB), in particular, pairwise hotspots (PHS). In this work, we show that Convolutional Neural Networks (CNN) provide a powerful tool for identifying PHS on the CMB. While for a given hotspot profile a traditional Matched Filter Analysis is known to be optimal, a Neural Network learns to effectively detect the large variety of shapes that can arise in realistic models of particle production. Considering an idealized situation where the dominant background to the PHS signal comes from the standard CMB fluctuations, we show that a CNN can isolate the PHS with $\mathcal{O}(10)\%$ efficiency even if the hotspot temperature is $\mathcal{O}(10)$ times smaller than the average CMB fluctuations. Overall, the CNN search is sensitive to heavy particle masses $M_0/H_I=\mathcal{O}(200)$, and constitutes one of the unique probes of very high energy particle physics.
Taegyun Kim, Jeong Han Kim, Soubhik Kumar, Adam Martin, Moritz Münchmeyer, Yuhsin Tsai
2023-03-15T18:34:28Z
http://arxiv.org/abs/2303.08869v1
# Probing Cosmological Particle Production and Pairwise Hotspots with Deep Neural Networks

###### Abstract

Particles with masses much larger than the inflationary Hubble scale, \(H_{I}\), can be pair-produced non-adiabatically during inflation. Due to their large masses, the produced particles modify the curvature perturbation around their locations. These localized perturbations eventually give rise to localized signatures on the Cosmic Microwave Background (CMB), in particular, pairwise hotspots (PHS). In this work, we show that Convolutional Neural Networks (CNN) provide a powerful tool for identifying PHS on the CMB. While for a given hotspot profile a traditional Matched Filter Analysis is known to be optimal, a Neural Network learns to effectively detect the large variety of shapes that can arise in realistic models of particle production. Considering an idealized situation where the dominant background to the PHS signal comes from the standard CMB fluctuations, we show that a CNN can isolate the PHS with \(\mathcal{O}(10)\%\) efficiency even if the hotspot temperature is \(\mathcal{O}(10)\) times smaller than the average CMB fluctuations. Overall, the CNN search is sensitive to heavy particle masses \(M_{0}/H_{I}=\mathcal{O}(200)\), and constitutes one of the unique probes of very high energy particle physics.

## 1 Introduction

An era of cosmic inflation [1; 2; 3] in the primordial Universe remains an attractive paradigm to explain the origin of (approximately) scale invariant, Gaussian, and adiabatic primordial perturbations, inferred through cosmic microwave background (CMB) and large scale structure (LSS) observations. This inflationary era can be characterized by a rapid expansion of spacetime, controlled by an approximately constant Hubble scale \(H_{I}\). Excitingly, based on the current constraints, \(H_{I}\) can be as large as \(5\times 10^{13}\) GeV [4]. This fact, coupled with the feature that particles with masses up to order \(H_{I}\) can get quantum mechanically produced during inflation, makes the inflationary era a natural and unique arena to _directly_ probe very high energy particle physics. There are several classes of mechanisms through which heavy particles, which we label as \(\chi\), can be produced during inflation. When their mass \(m_{\chi}\lesssim H_{I}\), quantum fluctuations of the inflationary spacetime itself can efficiently produce the \(\chi\) particles. However, for \(m_{\chi}\gg H_{I}\) this production gets suppressed exponentially as \(e^{-\pi m_{\chi}/H_{I}}\) [5], and other mechanisms are necessary for efficient particle production to occur. To illustrate this, we consider the standard slow-roll inflationary paradigm containing an inflaton field \(\phi\) whose homogeneous component we denote by \(\phi_{0}(t)\). Normalization of the primordial scalar power spectrum requires the 'kinetic energy' of this homogeneous component to be \(|d\phi_{0}/dt|^{1/2}\approx 60H_{I}\) [4]. Therefore, heavy particles, if appropriately coupled to the inflaton kinetic term, can be efficiently produced for \(m_{\chi}\lesssim 60H_{I}\). One class of examples of this involves a coupling of the type \(\partial_{\mu}\phi J^{\mu}\) where \(J^{\mu}\) is a charged current made up of the \(\chi\) field. For some recent work implementing this idea see, e.g., Refs. [6; 7; 8; 9; 10; 11; 12; 13; 14]. In these constructions, heavy particle production happens continuously in time, in a scale-invariant fashion.
In other words, the coupling of the inflaton to \(\chi\) particles does not break the shift symmetry, \(\phi\to\phi+\text{constant}\), of the inflaton. A different class of mechanisms can lead to particle production at specific times during the inflationary evolution. This can happen if the shift symmetry of the inflaton is broken in a controlled manner, e.g., to a discrete shift symmetry. This breaking of shift symmetry translates into a violation of scale invariance, and selects out specific time instant(s) when particle production can occur. Examples of such mechanisms appear in Refs. [15; 16; 17; 18; 19; 20], and see Refs. [21; 22] for reviews. A particularly interesting example of this latter mechanism arises in the context of ultra-heavy particles with time-dependent masses. More specifically, suppose \(m_{\chi}\) varies as a function of \(\phi\) in a way such that, as \(\phi\) passes through a specific point \(\phi_{*}\) on the inflaton potential at time \(t_{*}\), \(m_{\chi}(\phi)\) passes through a local minimum. In this case, non-adiabatic \(\chi\) particle production can occur at time \(t_{*}\). Following their production, \(\chi\) particles can again become heavy, \(m_{\chi}\gg|d\phi_{0}/dt|^{1/2}\), and owing to this large mass they can backreact on the inflationary spacetime, contributing to the curvature perturbation around their locations. We can describe the effects of these additional curvature perturbations qualitatively in the following way, leaving the details for the next section. Following their production, the perturbations exit the horizon when their wavelengths become larger than \(1/H_{I}\) and become frozen in time. After the end of inflation, they eventually reenter the horizon and source additional under- or over-densities in the thermal plasma in the radiation dominated Universe. Overdense regions, for example, would trap more plasma, and therefore would emit more photons at the time of CMB decoupling.1 Therefore, we would observe localized regions on the sky where the CMB would appear hotter than usual. As we will discuss below, the sizes of these localized 'spots' are determined by the size of the comoving horizon, \(\eta_{*}\), at the time of particle production \(t_{*}\). While \(\eta_{*}\) can take any value, for concreteness we will consider \(\eta_{*}\sim 100\) Mpc in this work. This implies that the localized spots would subtend \(\sim 1^{\circ}\) on the CMB sky. Footnote 1: To be more accurate, one also needs to take into account the gravitational redshift of the photons as they climb out of the gravitational potential wells. We will compute this effect in the next section. The next question one may ask is what is an efficient strategy to look for such signatures. Since this scenario is associated with a violation of scale invariance, characterized by \(\eta_{*}\), one would expect to see 'features' on the CMB power spectrum or even higher-point correlation functions. However, in the regime we focus on, the total number of produced \(\chi\) particles is still small to the extent that the CMB power spectrum is minimally affected, as we explicitly check later. On the other hand, the spots can still be individually bright enough such that we can look for them directly in position space. Indeed, this class of signatures in the context of heavy particle production was discussed in Refs. [23; 24], and in Ref. [25] the associated CMB phenomenology was described and a simple 'cut-and-count' search strategy was developed. Using the cut-and-count strategy, Ref.
[25] constrained the parameter space of ultra-heavy scalars and illustrated regions where a position space search is more powerful than power spectrum-based searches. In more detail, Ref. [25] considered a single instance of particle production during the time when CMB-observable modes exit the horizon. Conservation of momentum implies that such heavy particles are produced in pairs. However, owing to their large mass, the particles do not drift significantly following their production, and it was argued that the separation between the two particles forming a pair can be taken to be a uniformly random number between \(0\) and \(\eta_{*}\). Finally, it was shown that the coupling \(g\) of \(\chi\) to the inflaton determines how hot/cold the associated spot on the CMB is with the heavy particle mass \(m_{\chi}\) determining the total number of such spots on the sky. To summarize, the three parameters determining the hot/cold spot phenomenology are \(\{g,m_{\chi},\eta_{*}\}\), as will be reviewed in more detail in the next section. While both cold or hot spots can arise depending on the value of \(\eta_{*}\), for the choices of \(\eta_{*}\) in this work, only hotspots will appear on the CMB. Therefore, we will often be referring to these localized spots as hotspots, in particular as pairwise hotspots (PHS) since the spots appear in pairs. In the present work, we improve upon Ref. [25] in several important ways. First, in Ref. [25] we only considered hotspots that lie within the last scattering surface, with a thickness of \(\Delta\eta\approx 19\) Mpc [26]. In this work we adopt a more realistic setup and include hotspots that are distributed in a larger region around the last scattering surface. We take this region to have a thickness of \(2\eta_{*}\) and we show in Sec. 2 how hotspots lying outside the \(\Delta\eta\) shell can still affect the CMB. The overall signature of PHS then changes non-trivially. For instance, with the improved treatment we can have one spot of a pair lying on the CMB surface, while the other can lie off the CMB surface, leading to an asymmetric signal. Second, we develop a neural network (NN)-based search for the hotspot profiles. In principle, a neural network is not necessary to search for a profile of known shape which is linearly added to the Gaussian background. In this case, the standard method of constructing a so-called matched filter can be shown to be the optimal statistic to detect the profile (see, e.g., [27]). Matched filter-based searches for radially symmetric profiles in the CMB have been previously reported for example in [28; 29; 30], with the physical motivation of searching for inflationary bubble collisions. Various matched filters have also been used in the Planck Anisotropy and Statistics Analysis [31; 32] without finding a significant excess. However, the signal which we are looking for here is more complicated. Profiles come in pairs (breaking radial symmetry of the profile), they can be overlapping, and, depending on their production time and orientation with respect to the surface of last scattering, their appearance on the CMB changes. While it is in principle possible to cover the entire space of profiles with a very large bank of matched filters, this would be a complicated and computationally challenging approach. A neural network, on the other hand, can learn an effective representation of these filters which interpolates well between all profile shapes, including overlapping ones. 
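For orientation, a minimal flat-sky sketch of the matched filter statistic discussed here is given below. It assumes a single known profile and a Gaussian background spectrum, ignores overall DFT normalization conventions, and does not correspond to the implementation of any specific package.

```python
import numpy as np

def matched_filter_snr(data_map, profile, cl2d):
    """Flat-sky matched filter for a single known profile.

    data_map: observed temperature map (2D array)
    profile:  the hotspot template, centered in the image
    cl2d:     background (CMB) power spectrum evaluated on the 2D FFT grid
    Returns a per-pixel significance map of the fitted template amplitude.
    """
    tk = np.fft.fft2(data_map)
    tauk = np.fft.fft2(np.fft.ifftshift(profile))  # template centered at pixel (0, 0)
    # Optimal filter tau*(k)/C(k), applied by convolution in Fourier space.
    amp = np.fft.ifft2(np.conj(tauk) / cl2d * tk).real
    norm = np.sum(np.abs(tauk) ** 2 / cl2d).real   # inverse variance of the estimator
    return (amp / norm) * np.sqrt(norm)            # amplitude estimate / its sigma
```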
We also implement the matched filter method below, and show that in the simplified case with a single profile type, our neural network performs similarly to the optimal matched filter. This work is organized as follows. We first describe a simple model of \(\chi\) particle production in Sec. 2 and summarize how the total number of produced particles depends on the model parameters, along with various properties of the PHS. We improve the calculation of the hotspot profiles by taking into account the line-of-sight distance to the location of the hotspots, which can be off the CMB surface. In Sec. 3, we describe the simulation of the PHS signals and the CMB maps in angular space, assuming that the dominant background to the PHS signal comes from the standard CMB fluctuations. In Sec. 4, we describe the convolutional neural network (CNN) analysis and estimate the sensitivity the CNN can achieve for a PHS search. We then translate this sensitivity to the mass-coupling parameter space of the heavy particles. We also compare the CNN analysis with a matched filter analysis for simplified hotspot configurations. We conclude in Sec. 5.

## 2 Pairwise Hotspot Signals

To model heavy particle production, we consider a scenario where the mass of \(\chi\) is inflaton-dependent, \(m_{\chi}(\phi)\). Therefore, as \(\phi\) moves along its potential, efficient, non-adiabatic particle production can occur if \(m_{\chi}(\phi)\) varies with \(\phi\) rapidly. With a mass term \(m_{\chi}(\phi)^{2}\chi^{2}\), _pairs_ of \(\chi\) particles would be produced, as required by three-momenta conservation. The phenomenology of such heavy particles depends on their mass, coupling to the inflaton, and the horizon size at the time of their production. We now review these properties more quantitatively, referring to Ref. [25] for a more complete discussion.

### Inflationary Particle Production

We parametrize the inflationary spacetime metric as, \[ds^{2}=-dt^{2}+a^{2}(t)d\vec{x}^{2}, \tag{1}\] with the scale factor \(a(t)=e^{H_{I}t}\) and \(H_{I}\) the Hubble scale during inflation that we take to be (approximately) constant. To model particle production in a simple way, we assume \(m_{\chi}(\phi)\) passes through a minimum as \(\phi\) crosses a field value \(\phi_{*}\). Then we can expand \(m_{\chi}(\phi)\) near \(\phi_{*}\) as, \[m_{\chi}(\phi)=m_{\chi}(\phi_{*})+\frac{1}{2}m_{\chi}^{\prime\prime}(\phi_{*})(\phi-\phi_{*})^{2}+\cdots, \tag{2}\] where primes denote derivatives with respect to \(\phi\). Thus the mass term would appear in the potential as, \[m_{\chi}(\phi)^{2}\chi^{2}=m_{\chi}(\phi_{*})^{2}\chi^{2}+m_{\chi}(\phi_{*})m_{\chi}^{\prime\prime}(\phi_{*})(\phi-\phi_{*})^{2}\chi^{2}+\cdots. \tag{3}\] While away from \(\phi_{*}\), \(m_{\chi}(\phi)\) can vary in different ways, most of the important features of particle production are determined by the behavior of \(m_{\chi}(\phi)\) around \(\phi_{*}\). For example, the number density of \(\chi\) particles is determined by \(m_{\chi}(\phi_{*})\), as we will see below. Similarly, the spatial profiles of the hotspots on the CMB are determined by the dependence \((\phi-\phi_{*})^{2}\sim\dot{\phi}_{0}^{2}(t-t_{*})^{2}\sim(\dot{\phi}_{0}/H_{I})^{2}\log(\eta/\eta_{*})^{2}\), where we have used the relation between \(t\) and conformal time \(\eta\), \(\eta=(-1/H_{I})e^{-H_{I}t}\) (an overdot here denotes a derivative with respect to time).
Given the importance of the physics around \(\phi_{*}\), we will denote \(m_{\chi}(\phi_{*})^{2}\equiv M_{0}^{2}\), \(m_{\chi}(\phi_{*})m_{\chi}^{\prime\prime}(\phi_{*})\equiv g^{2}\), and \(\phi_{*}\equiv\mu/g\), to describe particle production. Thus we will write the Lagrangian for \(\chi\) as, \[\mathcal{L}_{\chi}=-\frac{1}{2}(\partial_{\mu}\chi)^{2}-\frac{1}{2}\left((g\phi-\mu)^{2}+M_{0}^{2}\right)\chi^{2}. \tag{4}\] As \(\phi\) nears the field value \(\phi_{*}\), the mass of the \(\chi\) field changes non-adiabatically and particle production can occur. The efficiency of particle production depends on the parameters \(g\), \(M_{0}\), and \(\eta_{*}\), the size of the comoving horizon at the time of particle production. This can be computed using the standard Bogoliubov approach, and the resulting probability of particle production is given by [33; 20], \[|\beta|^{2}=\exp\left(-\frac{\pi(M_{0}^{2}-2H_{I}^{2}+k^{2}\eta_{*}^{2}H_{I}^{2})}{g|\dot{\phi}_{0}|}\right). \tag{5}\] The normalization of the scalar primordial power spectrum, in the context of single-field slow-roll inflation, fixes \(A_{s}=H_{I}^{4}/(4\pi^{2}\dot{\phi}_{0}^{2})\approx 2.1\times 10^{-9}\) [4], which determines \(\dot{\phi}_{0}\approx(58.9H_{I})^{2}\). The above expression (5) characterizes the probability of particle production with physical momentum \(k_{p}=k\eta_{*}H_{I}\). The total number density of particles can then be computed by integrating over all such \(k\)-modes, \[n=\frac{1}{2\pi^{2}}\int_{0}^{\infty}dk_{p}k_{p}^{2}e^{-\pi k_{p}^{2}/(g|\dot{\phi}_{0}|)}e^{-\pi(M_{0}^{2}-2H_{I}^{2})/(g|\dot{\phi}_{0}|)}=\frac{1}{8\pi^{3}}\left(g\dot{\phi}_{0}\right)^{3/2}e^{-\pi(M_{0}^{2}-2H_{I}^{2})/(g|\dot{\phi}_{0}|)}. \tag{6}\] From an observational perspective, it is more convenient to relate \(n\) to the total number of spots that would be visible on the CMB sky. To that end, we need to specify the associated spacetime volume. Considering a shell of thickness \(\Delta\eta_{s}\) around the CMB surface, the total number of spots in that shell is given by [25], \[N_{\rm spots} =n\times\left(\frac{a_{*}}{a_{0}}\right)^{3}\times 4\pi\chi_{\rm rec}^{2}\Delta\eta_{s}\,,\] \[=\frac{1}{2\pi^{2}}\left(\frac{g\dot{\phi}_{0}}{H_{I}^{2}}\right)^{3/2}\frac{\Delta\eta_{s}}{\chi_{\rm rec}}(k_{*}\chi_{\rm rec})^{3}e^{-\pi(M_{0}^{2}-2H_{I}^{2})/(g|\dot{\phi}_{0}|)}\,, \tag{7}\] \[\approx 4\times 10^{8}\times g^{3/2}\left(\frac{\Delta\eta_{s}}{100~{}{\rm Mpc}}\right)\left(\frac{100~{}{\rm Mpc}}{\eta_{*}}\right)^{3}e^{-\pi(M_{0}^{2}-2H_{I}^{2})/(g|\dot{\phi}_{0}|)}\,.\] Here \(a_{*}\) and \(a_{0}=1\) are the scale factors at the time of particle production and today, respectively. The quantity \(\chi_{\rm rec}\) is the distance of the CMB surface from us and approximately equals 13871 Mpc, obtained from Planck's best-fit \(\Lambda\)CDM parameters, and \(k_{*}=a_{*}H_{I}=1/\eta_{*}\) is the mode that exits the horizon at the time of particle production.

### Effect on the CMB

We now discuss the detailed properties of the spots and how they modify the CMB.

Primordial Curvature Perturbation from Heavy Particles. Owing to their large mass, the heavy particles can backreact on the spacetime metric around their locations, and can give rise to non-trivial curvature perturbations.
The profile of such a curvature perturbation can be computed using the in-in formalism and the result is given by [24], \[\langle\zeta_{\rm HS}(r)\rangle=\frac{H_{I}}{8\epsilon\pi M_{\rm pl}^{2}}\begin{cases}M(\eta=-r),&\text{if }r\leq\eta_{*}\\ 0,&\text{if }r>\eta_{*}\end{cases}. \tag{8}\] Here \(\epsilon=|\dot{H}_{I}|/H_{I}^{2}\) is a slow-roll parameter, and we have anticipated that this curvature perturbation would give rise to a hotspot (HS), rather than a coldspot. Importantly, the variation of the mass as a function of conformal time \(\eta\) controls the spatial profile. This variation can be computed from Eq. (4) by noting the slow-roll equation \(\phi-\phi_{*}\approx\dot{\phi}_{0}(t-t_{*})\), which gives \[M(\eta)^{2}=\frac{g^{2}\dot{\phi}_{0}^{2}}{H_{I}^{2}}\ln(\eta/\eta_{*})^{2}+M_{0}^{2}. \tag{9}\] Here we have used the relation between cosmic time \(t\) and the conformal time \(\eta\), which also determines the size of the comoving horizon, \(t-t_{*}=-(1/H_{I})\ln{(\eta/\eta_{*})}\). Using the slow-roll relation \(\dot{\phi}_{0}^{2}=2\epsilon H_{I}^{2}M_{\rm pl}^{2}\) and the fact that \(M_{0}^{2}\sim g|\dot{\phi}_{0}|\) so that \(N_{\rm spots}\) is not significantly exponentially suppressed (see Eq. (7)), we can drop the contribution of the second term in Eq. (9) away from \(\eta_{*}\). The profile can then be simply written as, \[\langle\zeta_{\rm HS}(r)\rangle=\frac{gH^{2}}{4\pi|\dot{\phi}_{0}|}\ln(\eta_{*}/r)\theta(\eta_{*}-r). \tag{10}\] Given the typical size of a standard quantum mechanical fluctuation \(\langle\zeta_{q}^{2}\rangle^{1/2}\sim H^{2}/(2\pi\dot{\phi}_{0})\), we see the curvature perturbation associated with a hotspot differs primarily by a factor of \(g/2\). In this work we will choose \(g\sim\mathcal{O}(1)\), so the two types of perturbations will be of the same order of magnitude.

CMB Anisotropy. After these fluctuation modes reenter the horizon, they source temperature anisotropies and give rise to localized spots on the CMB sky. To compute the resulting anisotropies, we first write metric perturbations, \[ds^{2}=-(1+2\Psi)dt^{2}+a^{2}(t)(1+2\Phi)\delta_{ij}dx^{i}dx^{j}, \tag{11}\] in the Newtonian gauge. The temperature fluctuation of the CMB corresponding to Fourier mode \(\vec{k}\), pointing in the direction \(\hat{n}\) in the sky, is given by, \[\Theta(\vec{k},\hat{n},\eta_{0})=\sum_{l}i^{l}(2l+1)\mathcal{P}_{l}(\hat{k}\cdot\hat{n})\Theta_{l}(k,\eta_{0}). \tag{12}\] Here the multipole \(\Theta_{l}(k,\eta_{0})\) depends on the primordial perturbation \(\zeta(\vec{k})\) and a transfer function \(T_{l}(k)\) as, \[\Theta_{l}(k,\eta_{0})=T_{l}(k)\zeta(\vec{k}), \tag{13}\] with \(\eta_{0}\) denoting the conformal age of the Universe today. Importantly, for our scenario \(T_{l}(k)\) itself can be computed exactly as in the standard \(\Lambda\)CDM cosmology.
It can be computed after taking into account the Sachs-Wolfe (SW), the Integrated Sachs-Wolfe (ISW), and the Doppler (Dopp) effects [34], \[\begin{split}\Theta_{l}(k,\eta_{0})&\simeq\left(\Theta_{0}(k,\eta_{\rm rec})+\Psi(k,\eta_{\rm rec})\right)j_{l}(k(\eta_{0}-\eta_{\rm rec}))\\ &+\int_{0}^{\eta_{0}}d\eta e^{-\tau}\left(\Psi^{\prime}(k,\eta)-\Phi^{\prime}(k,\eta)\right)j_{l}(k(\eta_{0}-\eta))\\ &+3\Theta_{1}(k,\eta_{\rm rec})\left(j_{l-1}(k(\eta_{0}-\eta_{\rm rec}))-(l+1)\frac{j_{l}(k(\eta_{0}-\eta_{\rm rec}))}{k(\eta_{0}-\eta_{\rm rec})}\right)\\ &\equiv\left(f_{\rm SW}(k,l,\eta_{0})+f_{\rm ISW}(k,l,\eta_{0})+f_{\rm Dopp}(k,l,\eta_{0})\right)\zeta(\vec{k})\,,\end{split} \tag{14}\] where \(\tau\) is the optical depth. The above expression relates a primordial perturbation \(\zeta\) to a temperature anisotropy \(\Theta_{l}\).

Temperature Anisotropy due to Heavy Particles. Regardless of what the origin of \(\zeta(\vec{k})\) is, we can compute \(f_{\rm SW}(k,l,\eta_{0})\), \(f_{\rm ISW}(k,l,\eta_{0})\), and \(f_{\rm Dopp}(k,l,\eta_{0})\) as in the standard \(\Lambda\)CDM cosmology. Thus, converting the position space profile in Eq. (10) to momentum space and using Eq. (14), we can get the observed profile of a hotspot on the CMB sky. This Fourier transform of the profile (10) can be written as, \[\langle\zeta_{\rm HS}(\vec{k})\rangle=e^{-i\vec{k}\cdot\vec{x}_{\rm HS}}\frac{f(k\eta_{*})}{k^{3}}, \tag{15}\] with a profile function \[f(x)=\frac{gH^{2}}{\dot{\phi}_{0}}({\rm Si}(x)-\sin(x)),\quad{\rm Si}(x)=\int_{0}^{x}dt\sin(t)/t. \tag{16}\] We parametrize the distance to the hotspot as, \[\vec{x}_{0}-\vec{x}_{\rm HS}=-(\eta_{0}-\eta_{\rm HS})\hat{n}_{\rm HS}. \tag{17}\] Here \(\vec{x}_{0}\) and \(\vec{x}_{\rm HS}\) parametrize our and the hotspot locations, respectively, and \(\hat{n}_{\rm HS}\) points to the direction of the hotspot. The quantity \(\eta_{\rm HS}\) denotes the location of the hotspot in conformal time, with \(\eta_{0}\) being the conformal size of the present epoch. In the earlier paper, we took the hotspot to be on the CMB surface and hence set \(\eta_{\rm HS}=\eta_{\rm rec}\approx 280\) Mpc. In this work, we allow the hotspots to be away from the last scattering surface, with \(\eta_{\rm HS}\) between \(\eta_{\rm rec}-\eta_{*}\) and \(\eta_{\rm rec}+\eta_{*}\), and study their signals on the CMB surface. This setup is summarized in Fig. 1.

Figure 1: Representation of a hotspot on the CMB sky. Our location and the location of a hotspot are denoted as \(\vec{x}_{0}\) and \(\vec{x}_{\rm HS}\), respectively, defined with respect to an arbitrary coordinate system. The black circle denotes the surface of last scattering, located at \(\eta_{\rm rec}\approx 280\) Mpc in conformal coordinates. Due to momentum conservation, heavy particles are produced in pairs, and the distance between the two members of a pair can vary between 0 and \(\eta_{*}\). Therefore, in our analysis we allow the two members to be anywhere within the gray shaded region. We compute the temperature profile of a hotspot as a function of the direction of observation \(\hat{n}\), with the hotspot center in the direction of \(\hat{n}_{\rm HS}\).

As derived earlier, the temperature due to the hotspot is given by (dropping \(\eta_{0}\) from the argument), \[\Theta(\vec{x}_{0},\hat{n})=\int\frac{d^{3}\vec{k}}{(2\pi)^{3}}e^{i\vec{k}\cdot(\vec{x}_{0}-\vec{x}_{\rm HS})}\sum_{l}i^{l}(2l+1)\mathcal{P}_{l}(\hat{k}\cdot\hat{n})\left(f_{\rm SW}(k,l)+f_{\rm ISW}(k,l)+f_{\rm Dopp}(k,l)\right)\frac{f(k\eta_{*})}{k^{3}}. \tag{18}\] Here \(\hat{n}\) denotes the direction of observation. The functions \(f_{\rm SW}(k,l)\) and \(f_{\rm ISW}(k,l)\) are extracted from the transfer function using CLASS [35; 36] as in Ref. [25]. Using the plane wave expansion, \[e^{-i\vec{k}\cdot\vec{r}}=\sum_{\ell=0}^{\infty}(-i)^{l}(2l+1)j_{l}(kr)\mathcal{P}_{l}(\hat{k}\cdot\hat{r}), \tag{19}\] and the relation \[\mathcal{P}_{l}(\hat{k}\cdot\hat{n})=\frac{4\pi}{(2l+1)}\sum_{m=-l}^{l}Y_{lm}(\hat{n})Y_{lm}^{*}(\hat{k}), \tag{20}\] we get:
The black circle denotes the surface of last scattering, located at \(\eta_{\rm rec}\approx 280\) Mpc in conformal coordinates. Due to momentum conservation, heavy particles are produced in pairs, and the distance between the two members of a pair can vary between 0 and \(\eta_{*}\). Therefore, in our analysis we allow the two members to be anywhere within the gray shaded region. We compute the temperature profile of a hotspot as a function of the direction of observation \(\hat{n}\), with the hotspot center in the direction of \(\hat{n}_{\rm HS}\). we get: \[\Theta(\vec{x}_{0},\hat{n},\eta_{\rm HS}) = \frac{1}{2\pi^{2}}\int_{0}^{\infty}\frac{dk}{k}\sum_{l}j_{l}(k(\eta_{0}-\eta_{\rm HS}))(2l+1)\mathcal{P}_{l}(\hat{n}\cdot\hat{n}_{\rm HS})T_{\rm sum}(k,l)f(k\eta_{*}) \tag{21}\] \[T_{\rm sum}(k,l) \equiv f_{\rm SW}(k,l)+f_{\rm ISW}(k,l)+f_{\rm Dopp}(k,l)\,. \tag{22}\] Note that \(\Theta(\vec{x}_{0},\hat{n},\eta_{\rm HS})\) depends on \(\eta_{\rm HS}\), the location of the hotspot, which need not be on the last scattering surface as mentioned above. Given the spherically symmetric profile of the hotspot, the Doppler contribution to \(\Theta(\vec{x}_{0},\hat{n},\eta_{\rm HS})\) is small; from now on we only include the SW and ISW contributions in our analysis. Central Temperature. It is useful to compute the temperature anisotropy at the central part of a hotspot. To that end, we set \(\hat{n}=\hat{n}_{\rm HS}\), implying \(\mathcal{P}_{l}(\hat{n}\cdot\hat{n}_{\rm HS})=1\), and \[\Theta_{\rm central}(\vec{x}_{0},\eta_{\rm HS})=\frac{1}{2\pi^{2}}\int_{0}^{\infty}\frac{dk}{k}\sum_{l}j_{l}(k(\eta_{0}-\eta_{\rm HS}))(2l+1)T_{\rm sum}(k,l)f(k\eta_{*}). \tag{23}\] In Fig. 2 we show the SW and ISW contributions to the central temperature as a function of \(\eta_{\rm HS}\), after multiplying by the average CMB temperature \(T_{0}=2.7\) K, for \(\eta_{*}=160\) Mpc. For completeness, we also show the central temperature in Fig. 3, as obtained in [25], as a function of hotspot size \(\eta_{*}\), assuming the hotspot is located on the surface of last scattering. As we can see, the pair-produced CMB spots are indeed _hot_spots when \(\eta_{*}\lesssim\) Gpc. For \(\eta_{*}>6600\) Mpc, _cold_spots as opposed to _hot_spots arise. This is because the negative SW contribution dominates the positive ISW contribution, with the combination being negative. ## 3 Simulation of the CMB and PHS Signals In order to design a PHS search, we simulate the PHS signal and CMB maps so that we can estimate the signal capture rate ('True Positive Rate') and the background count for a CNN analysis. We note that there are three types of backgrounds to consider for a PHS search: (i) the noise of the CMB detector, (ii) the astrophysical foreground, and (iii) the background from the standard primordial fluctuations. A realistic analysis needs to take into account detector noise and foregrounds. In our analysis, we consider profiles on relatively large angular scales, \(\ell<1000\). For these scales, current CMB temperature data, such as that from Planck, is signal-dominated, and we thus do not need to add instrumental noise to our simulations. The astrophysical foreground comes from compact objects such as galaxies, galaxy clusters, gas, and dust, which can also produce localized signals. Part of these astrophysical foregrounds can be cleaned out due to their frequency dependence (for a review see, e.g., Ref. [37]).
For the signal sizes that we consider, corresponding to \(\ell<1000\), we do not expect significant astrophysical contamination after foreground cleaning and masking of the galactic plane, while for significantly smaller scales a detailed study of residual foregrounds and point sources would be required (see, e.g., Planck's component separation analysis [38]). In the following, we therefore only consider the background from the primordial, almost Gaussian, fluctuations when studying the PHS signal. This last type of background is 'irreducible' in the sense that it will always be present, originating from the fluctuations of the inflaton itself. We will assume the CMB maps are masked to reduce the astrophysical foregrounds and badly-conditioned pixels, retaining only \(60\%\) of the sky for the analysis. This sky fraction is similar to the one used in the Planck analysis [39]. Unlike the analysis in [25], which was based on a HEALPix[40] simulation, in this work we use the QuickLens package2 to simulate the CMB maps. QuickLens allows us to work in the 'flat sky approximation', neglecting sky curvature, which is irrelevant at the size of the PHS profiles we consider, and to draw sample maps with periodic boundary conditions to avoid complications due to masking. QuickLens can take a theoretical temperature power spectrum to produce mock flat sky CMB maps. To provide an initial input, we use the CLASS (v3.2) package [35; 36] to compute a temperature anisotropy spectrum \(C_{\ell}^{\rm TT}\) based on the Planck 2018 [41] best fit \(\Lambda\)CDM parameters, Footnote 2: [https://github.com/dhanson/quicklens](https://github.com/dhanson/quicklens) \[\{\omega_{\rm cdm},\omega_{b},h,10^{9}A_{s},n_{s},\tau_{\rm reio}\}=\{0.120,0.022,0.678,2.10,0.966,0.0543\}\,. \tag{10}\] We will comment on the sensitivity of the CNN analysis to the \(\Lambda\)CDM parameters in Sec. 4.1 and Appendix A. We specify \(\ell_{\rm max}=3500\) in the code for the maximum number of \(\ell\)-modes used for the image generation. Figure 2: Central temperature \(\Theta_{\rm central}\times T_{0}\) of a hotspot as a function of the (radial) location of the hotspot. We choose \(\eta_{*}=160\) Mpc and \(g=1\). The dotted gray line indicates the location of the recombination surface. Larger (smaller) \(\eta_{\rm HS}\) implies the hotspots are closer to (further from) us. We also show the contributions of the Sachs-Wolfe term (orange) and the Integrated Sachs-Wolfe term (purple) in determining the total temperature (olive). The left and right edges of the plot are at \(\eta_{\rm HS}=\eta_{\rm rec}-\eta_{*}\) and \(\eta_{\rm HS}=\eta_{\rm rec}+\eta_{*}\), respectively. As explained above, our signal profiles have support on length scales corresponding to \(\ell<1000\), where instrumental noise is negligible compared to the primary CMB background and can thus be ignored. An application to significantly smaller angular scales would need to take into account the noise properties of the experiment. We choose the image resolution such that \(1\ \mathrm{pixel}=10^{-3}\) radians to match Planck's angular resolution down to \(\approx 5\) arc minutes [42].
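To make this step concrete, the sketch below draws a flat-sky Gaussian realization from a tabulated \(C_{\ell}^{\rm TT}\) array with plain numpy FFTs; it mirrors what QuickLens does internally but is not the QuickLens API, and the mock spectrum and Fourier normalization convention are assumptions of the sketch.

```python
# Sketch: draw a flat-sky Gaussian CMB map from a tabulated C_ell^TT array
# indexed by integer ell. In practice `cl` would come from CLASS; the mock
# power law below is only a placeholder so the snippet runs standalone.
import numpy as np

ell = np.arange(3501)
cl = np.zeros(3501)
cl[2:] = 1.0 / ell[2:] ** 2          # mock spectrum (assumption)

def flat_sky_map(cl, npix=360, pix_rad=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    freq = np.fft.fftfreq(npix, d=pix_rad) * 2 * np.pi     # flat-sky ell grid
    l2d = np.sqrt(freq[None, :] ** 2 + freq[:, None] ** 2)
    idx = np.minimum(l2d.astype(int), len(cl) - 1)
    white = np.fft.fft2(rng.normal(size=(npix, npix)))     # <|white|^2> = npix^2
    colored = white * np.sqrt(cl[idx] / pix_rad**2)        # color by sqrt(C_ell)
    return np.real(np.fft.ifft2(colored))                  # 360^2 pixel map

cmb = flat_sky_map(cl)

# Angle <-> comoving length on the last scattering surface: dtheta = deta/chi_rec.
chi_rec = 13871.0                                          # Mpc (Planck best fit)
pixels = 160.0 / chi_rec / 1e-3                            # ~11.5 -> 12 pixels
```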
We also use the relation \(\Delta\theta=\Delta\eta/\chi_{\mathrm{rec}}\) between an angle and the corresponding comoving length on the last scattering surface.3 For instance, if the separation between two hotspot centers is \(160\ \mathrm{Mpc}\) on the last scattering surface, the two centers are \(12\) pixels away on the image, with \(\chi_{\mathrm{rec}}=13871\ \mathrm{Mpc}\) for Planck's best-fit \(\Lambda\)CDM parameters. Footnote 3: In Ref. [25], the angular size of one pixel was obtained by matching the pixel number to the total degrees of freedom in the \(\ell\)-modes (\(\ell_{\mathrm{max}}^{2}+\ell_{\mathrm{max}}=4\pi/\theta_{\mathrm{pixel}}^{2}\)), together with the approximation \(\ell_{\mathrm{max}}\simeq\eta_{0}/\eta_{\mathrm{pixel}}\). Although the matching reproduces the same angular resolution, the relation between \(\ell_{\mathrm{max}}\) and \(\eta_{\mathrm{pixel}}\) gives \(\Delta\theta=\sqrt{4\pi}\Delta\eta/\chi_{\mathrm{rec}}\). Since \(\ell_{\mathrm{max}}\simeq\eta_{0}/\eta_{\mathrm{pixel}}\) comes from the approximation of the \(k\)-mode integral with \(j_{\ell}(k\,\chi_{\mathrm{rec}})\) and \(k=2\pi/\eta\), the relation between the angle and length is less robust than \(\Delta\theta=\Delta\eta/\chi_{\mathrm{rec}}\). For the CNN analysis, we begin by generating \(360^{2}\) pixel images, corresponding to a \([-10.32^{\circ},10.32^{\circ}]\) region in longitude and latitude (\(n_{x}=360\) in QuickLens). We then cut out a \(90^{2}\) patch from each of the \(360^{2}\)-sized maps. These non-periodic, smaller maps are then used for further analysis. In particular, for our CNN analysis, we generate \(160\)k training images and \(40\)k validation images of \(90^{2}\) pixels, plus an additional \(5\)k test images to quantify the network performance. Figure 3: Central temperature (green) of a hotspot originating from a heavy particle for \(g=1\), based on Eq. (23) with \(\eta_{\mathrm{HS}}=\eta_{\mathrm{rec}}\). The green line illustrates the variation of the observed anisotropy as a function of the “size” of the hotspot, determined by the comoving horizon \(\eta_{*}\) at the time of particle production. The horizontal gray line gives a rough benchmark of the magnitude of the large-scale temperature anisotropy due to only the standard quantum fluctuations of the inflaton \((1/5)\langle\zeta_{q}^{2}\rangle\), without taking into account acoustic oscillations. The dashed vertical gray lines show the benchmark choices for the hotspot size \(\eta_{*}=50\,,100\,,160\ \mathrm{Mpc}\) chosen in the subsequent discussion. We take the plot from Ref. [25]. Training the neural network on smaller patches yields better training convergence and does not lead to loss of information as long as the characteristic size of the signal is smaller than the size of the patch. The profile of each of the PHS is described by Eq. (21), where the function depends on the distance to the hotspots (\(\eta_{0}-\eta_{\rm HS}\)) and the angle \(\cos^{-1}(\hat{n}\cdot\hat{n}_{\rm HS})\), as defined in Fig. 1. The overall magnitude of the signal temperature is proportional to the coupling \(g\). When generating the signal, we require both hotspots to be within a shell \(\pm\eta_{*}\) around the last scattering surface, as shown in Fig. 1. For example, when studying the case with \(\eta_{*}=160\) Mpc, we first divide the \(\pm 160\) Mpc region into 50 concentric annuli, each having equal thickness. We then choose the first hotspot from a pair to lie on any of these 50 annuli with equal probability.
The second member is then chosen anywhere within a sphere of radius \(\eta_{*}\) centered on the first hotspot, again with a uniform random distribution.4 A pair is kept for further analysis only if both spots of the pair fall within the \(\pm\eta_{*}\) shell of the last scattering surface. Figure 4: Radial profile of a single hotspot with the heavy particle position inside (olive), on (orange), and outside (purple) of the last scattering surface. The locations of these hotspots in conformal time are taken to be \(\eta_{\rm rec}+\eta_{*}\), \(\eta_{\rm rec}\), and \(\eta_{\rm rec}-\eta_{*}\), respectively, as denoted by the labels. From upper left to bottom: horizon size for the hotspot production at \(\eta_{*}=50,100,160\) Mpc. The plots assume the inflaton-\(\chi\) coupling \(g=1\). Since the distribution in a 3D volume allows hotspots to orient along the line-of-sight direction, the average separation between the two hotspots projected on the last scattering surface is smaller than the separation assumed in Ref. [25], which only considered PHS on the last scattering surface. Figure 5: Example plots of pure background from QuickLens simulation (left), pure signals (middle), and signals with \(g=4\) on top of the simulated background (right). The scalar particles are produced at comoving horizon sizes \(\eta_{*}=50\) Mpc (top), 100 Mpc (middle), and \(\eta_{*}=160\) Mpc (bottom). The signals at different benchmark \(\eta_{*}\) have roughly the same size, as the \(\eta_{*}\) dependence only enters logarithmically. The two hot spots are clearly separated for \(\eta_{*}=160\) Mpc and \(\eta_{*}=100\) Mpc, while for \(\eta_{*}=50\) Mpc they overlap. Once we generate PHS images with random orientation and separation between two hotspots, we pixelate them and add the PHS image to the simulated CMB maps to produce the signal image. We follow this procedure for all the signal images in our study. In this work, we study benchmark models with horizon sizes \[\eta_{*}=50,\,100,\,160\,\,\mathrm{Mpc}\,, \tag{3.2}\] and couplings from \(g=1\) to \(4\). Specifying \(g\) and \(\eta_{*}\) sets the overall temperature and the profile of the hotspot, à la Eq. (2.21). Within the approximations we've made in Sec. 2, the remaining model parameter, \(M_{0}\), only affects the overall number of hotspots \(N_{\mathrm{PHS}}\) (through Eq. (2.7)). Going forward, we will compute the number of hotspots that can be hidden within the background fluctuations for given benchmark coupling and \(\eta_{*}\). Then, using Eq. (2.7), the upper bounds on \(N_{\mathrm{PHS}}\) can be translated into lower bounds on \(M_{0}\). As an illustration of what a benchmark PHS looks like, in Fig. 5 we show examples of the CMB background (left), PHS signal (middle), and the signal plus background (right) for \(g=4\) with different choices of \(\eta_{*}\). Note that it is difficult to identify the signals by eye in the plots on the right, even with such a large coupling. Compared to Ref. [25], the benchmark \(\eta_{*}\) values are identical, but we choose smaller values of the coupling \(g\). This is because we find the CNN analysis is much more powerful than the 'cut and count' method adopted in Ref. [25], and therefore capable of identifying fainter hotspots. We chose the benchmark \(\eta_{*}\) values to test out a variety of different PHS; \(\eta_{*}=160\,\mathrm{Mpc}\) hotspots have a very high central temperature (Fig. 3), while \(\eta_{*}=50\,\mathrm{Mpc}\) hotspots are significantly cooler and have smaller inter-spot separation. The choice \(\eta_{*}=100\,\mathrm{Mpc}\) sits between these for comparison.
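A minimal sketch of the pair-sampling procedure just described is given below; reducing the 3D geometry to a radial (line-of-sight) coordinate plus an isotropic offset is a simplifying assumption made for illustration.

```python
# Sketch of the pair-sampling procedure (eta in Mpc): the first spot is placed
# on one of 50 equal-thickness annuli within eta_rec ± eta_star, the second
# uniformly inside a ball of radius eta_star around the first, and the pair
# is kept only if both spots lie within the ± eta_star shell.
import numpy as np

rng = np.random.default_rng(1)
eta_rec, eta_star = 280.0, 160.0

def sample_pair():
    annuli = np.linspace(eta_rec - eta_star, eta_rec + eta_star, 50)
    while True:
        eta1 = rng.choice(annuli)                        # radial position of spot 1
        direction = rng.normal(size=3)
        direction /= np.linalg.norm(direction)
        radius = eta_star * rng.random() ** (1.0 / 3.0)  # uniform in a ball
        offset = radius * direction
        eta2 = eta1 + offset[2]                          # line-of-sight component
        if abs(eta2 - eta_rec) <= eta_star:              # shell-acceptance cut
            return eta1, eta2, offset
```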
## 4 Identifying Pairwise Hotspots with CNN In this section we describe the training process for the CNN using \(90^{2}\) pixel images and discuss some qualitative properties of the training result. We then apply the trained network to a larger sky map and present results on the upper bound on the number of PHS for given values of \(\eta_{*}\) and \(g\). We end the section with some comparisons between the CNN and a matched filter analysis. ### Network Training on Small Sky Patches CNNs are one of the most commonly used deep neural networks specialized for image recognition [44; 45]. In this study, we build the network using PyTorch [46] with the structure shown in Fig. 6. The network takes a CMB or CMB+PHS image as an input and outputs a single value between \(0\) and \(1\), which can be interpreted as the probability of the input image containing the PHS. Figure 6: A schematic architecture of the CNN used in this work. We apply two convolutional layers in series: first, 8 kernels of size \(16\times 16\) with stride 2 are applied; then, 8 independent kernels of size \(8\times 8\) yield feature maps of size 40. Next, we apply max-pooling with kernel and stride size \(2\times 2\), which subsequently reduces the image dimension down to \(20\times 20\times 8\). The processed images are further reduced by 2D convolution and max-pooling, bringing the size of the image down to \(5\times 5\times 8\). After 4 sets of convolution followed by average pooling in total, the final feature maps are flattened and fed into a fully connected network, which ends with a single output value between 0 and 1. Throughout the network, we use the rectified linear unit (ReLU) function [43] to introduce non-linearity, except for the output layer, which has a sigmoid activation function suitable for binary classification. Figure 7: Comparison between the true signal and the CNN feature maps, with and without implanted signals. The left plots show the PHS signal and the signal plus the CMB background. The middle and right plots show feature maps after going through three convolutional layers. The enhanced signal locations on the feature maps on the right align with the true locations of the hotspots after rescaling the pixel coordinates with respect to the relative size between the 3rd layer (\(20^{2}\)-pixels) and the original image (\(90^{2}\)-pixels). Here we take \(\eta_{*}=160\) Mpc, \(g=4\), and \(\eta_{\rm HS}=\eta_{\rm rec}\) for both spots. We train the network on 160k images (see Sec. 3), half of which contain a single pairwise hotspot profile on top of the CMB, while the rest are CMB-only images. For optimization, we use a binary cross entropy loss function, commonly used for binary classification, along with the Adam optimizer [47] and a \(10^{-4}\) learning rate. We train the network using PHS signals with \(g=3\) for all three values of \(\eta_{*}\) individually. One may wonder how well a network trained on one \(g\) value will generalize to different values without retraining. As the CNN (unlike the matched filter discussed below) is nonlinear, extrapolation to values of \(g\) other than what was used for training is not guaranteed to be optimal. On the other hand, training a CNN for each possible benchmark input is time- and resource-intensive. Empirically, we find that the network trained at \(g=3\) works well over a wide range of \(g\) values, perhaps because the network learns to analyze the shape rather than the amplitude of the profile. In a fully optimal analysis one would want to retrain the neural network over a grid of \(g\) values.
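A schematic PyTorch version of such a classifier is sketched below; the kernel sizes follow the caption of Fig. 6, but the padding choices and the fully connected head dimensions are assumptions made so that the shapes compose, not the authors' exact configuration.

```python
# Schematic binary classifier for 90x90 temperature patches (a sketch, not
# the authors' exact architecture; paddings and head sizes are assumptions).
import torch
import torch.nn as nn

class SpotCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=16, stride=2, padding=7), nn.ReLU(),
            nn.Conv2d(8, 8, kernel_size=8, padding=4), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(5),              # final 5x5x8 feature maps
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(8 * 5 * 5, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),       # single output in (0, 1)
        )

    def forward(self, x):                          # x: (batch, 1, 90, 90)
        return self.head(self.features(x))

model = SpotCNN()
loss_fn = nn.BCELoss()                             # binary cross entropy
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):                    # labels in {0., 1.}
    optimizer.zero_grad()
    loss = loss_fn(model(images).squeeze(1), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```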
To get some idea of how the CNN discriminates between signal and background images, we show the feature maps from the first three convolutions in Fig. 7 for \(\eta_{*}=160\,\mathrm{Mpc}\) and \(g=4\). As we can see proceeding from left to right, the trained network does amplify the signal region compared to the background-only image, and the convolutional layers can emphasize the correct locations of each spot in the feature map. To quantify the performance of the CNN, we generate a test sample of 5k CMB-only maps and 5k CMB+PHS maps, each having \(90^{2}\) pixels. For a CMB+PHS map, we inject one randomly oriented and located PHS in the CMB map. The PHS signal occupies \(\mathcal{O}(50^{2})\) pixels in the examples that we study, and thus the \(90^{2}\)-pixels image is only slightly larger than the signal. When an image has network output \(>0.5\), we count it as an identified signal map. We define the signal capture rate (True Positive Rate, \(\epsilon_{S,90^{2}}\)) as the fraction of CMB+PHS images correctly identified as signal maps, and the fake rate (False Positive Rate, \(\epsilon_{B,90^{2}}\)) as the fraction of CMB-only images wrongly identified as signal maps,5 Footnote 5: In the actual search, there can be more than one PHS in a \(90^{2}\)-pixels region, and the CNN would still count the region as one signal map. We verify that the signal capture rate would increase if there are more PHS in the image. When we study the sensitivity of the CNN search, having additional PHS around the same location will help the search, and this makes our analysis, based on having one PHS in a \(90^{2}\)-pixels image, conservative. Moreover, given that the CNN search can probe PHS with a small number of signals on the CMB sky, the probability of having additional PHS around the same location is small. Therefore, counting the number of \(90^{2}\)-pixels regions should give a good approximation of the PHS in the analysis. \[\epsilon_{S,90^{2}} = \frac{\text{number of signal-injected images with CNN output }>0.5}{\text{total number of signal-injected images}},\] \[\epsilon_{B,90^{2}} = \frac{\text{number of background-only images with CNN output }>0.5}{\text{total number of background-only images}}. \tag{10}\] In Fig. 8, we show the network output for the 5k images with and without injecting the PHS signal. In the left column we show the result when the PHS are uniformly distributed within a shell of \(\eta_{\text{rec}}\pm\eta_{*}\) around the surface of last scattering, while the right column shows the result when \(\eta_{\text{HS}}=\eta_{\text{rec}}\). The signal capture and background rejection rates in Fig. 8 refer to \(\epsilon_{S,90^{2}}\) and \((1-\epsilon_{B,90^{2}})\). Clearly, for \(g\geq 3\), our CNN setup is highly efficient at separating CMB+PHS images from CMB images alone. Figure 8: Network output for 5k images without (blank histogram) and with (colored histograms) PHS signals. We count an image as an identified signal map when the network output is \(>0.5\). In the plots we show the background rejection rate from the CMB-only analysis and the signal capture rate from the CMB+PHS images, for different inflaton-\(\chi\) couplings \(g\). The fake rate is defined as (\(1-\)background rejection rate). The plots on the left have both hotspots distributed uniformly with separation \(\leq\eta_{*}\) and within \(\eta_{\rm HS}=\eta_{\rm rec}\pm\eta_{*}\), which is how we simulate the signal for the rest of the study. The signal capture rate therefore includes possible suppression due to hotspots moving off the last scattering surface. For comparison, we show the training results in the right plots requiring \(\eta_{\rm HS}=\eta_{\rm rec}\). Comparing results obtained from the same study but with different sets of 5k images, we find the efficiency numbers vary by \(\sim 0.1-1\%\).
For example, for \(g=3\) (the same coupling as in the training sample) and \(\eta_{*}=160\,\)Mpc, \(\epsilon_{S,90^{2}}\) is over 73%, with \(\epsilon_{B,90^{2}}\) less than 0.1%. For \(\eta_{*}=160\) Mpc and 100 Mpc, the signal capture rate falls if the hotspots are off the last scattering surface but within the \(\eta_{\rm rec}\pm\eta_{*}\) window we consider. When applying the same trained network to dimmer PHS signals (\(g<3\)), \(\epsilon_{S,90^{2}}\) drops, but the background rejection rate remains close to unity. Both \(\epsilon_{S,90^{2}}\) and \(\epsilon_{B,90^{2}}\) vary with the horizon size. Comparing results for \(\eta_{*}=160\,\)Mpc to \(\eta_{*}=50\,\)Mpc, the \(\epsilon_{S,90^{2}}\) values are similar for \(g\geq 3\), but the \(\eta_{*}=50\) Mpc case performs much better at weaker coupling (\(\epsilon_{S,90^{2}}=51.2\%\) for \(\eta_{*}=50\,\)Mpc compared to 1.8% for \(\eta_{*}=160\,\)Mpc, both for \(g=1\)). The \(\eta_{*}=50\,\)Mpc case has a larger background fake rate compared to \(\eta_{*}=160\) Mpc. However, even if we incorporate the background and compare \(\epsilon_{S,90^{2}}/\sqrt{\epsilon_{B,90^{2}}}\), the efficiency ratio is \({\cal O}(10)\) times larger for the dimmer, \(\eta_{*}=50\) Mpc case. The ability to catch dimmer signals indicates that the network uses information beyond the overall temperature to identify the PHS. Although it is difficult to know exactly how the CNN identifies the PHS, the network seems to identify PHS with a distinct rim structure more accurately, rather than just utilizing the fact that there are two hotspots (Fig. 5). One indication that the CNN utilizes the rim structure of the \(\eta_{*}=50\) Mpc signal is that the signal capture rate for that benchmark is insensitive to whether or not the PHS lie on the last scattering surface. We perform the same CNN analysis with the signal hotspots centered on the last scattering surface (\(\eta_{\rm HS}=\eta_{\rm rec}\) in Eq. (21)) and summarize the results in the right column of Fig. 8. For hotspots with a temperature profile peaked at the center, as we show in the \(\eta_{*}=160\) and 100 Mpc plots in Fig. 4, the central PHS temperature takes its maximum value when \(\eta_{\rm HS}=\eta_{\rm rec}\) (orange). It is then reasonable to have a larger average signal capture rate when the hotspots are centered on the last scattering surface. However, as we illustrate in the upper left plot in Fig. 4, the "shell" of the \(\eta_{*}=50\) Mpc signal in 3D always projects into a rim with a fixed temperature (at angle \(\approx 0.008\) rad), regardless of the location of the hotspot, \(\eta_{\rm rec},\eta_{\rm rec}+\eta_{*}\), or \(\eta_{\rm rec}-\eta_{*}\). Therefore, if the CNN identifies the \(\eta_{*}=50\) Mpc signal based on the rim structure, \(\epsilon_{S,90^{2}}\) should remain the same even when the PHS are on the last scattering surface. This is indeed what we see in the bottom plots in Fig. 8. Further study of what features the CNN uses to identify the \(\eta_{*}=50\,\)Mpc case can be found in Appendix C.
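For concreteness, the capture and fake rates defined above reduce to simple threshold counts over the stored network outputs; the array names in the sketch below are hypothetical.

```python
# Sketch: signal capture rate and fake rate from stored network outputs,
# using the 0.5 threshold defined above. `out_sig` / `out_bkg` are assumed
# 1D arrays of CNN outputs for the CMB+PHS and CMB-only test images.
import numpy as np

def capture_and_fake_rates(out_sig, out_bkg, threshold=0.5):
    eps_s = np.mean(out_sig > threshold)   # true positive rate
    eps_b = np.mean(out_bkg > threshold)   # false positive rate
    return eps_s, eps_b

# Example with mock outputs standing in for real network scores:
rng = np.random.default_rng(2)
eps_s, eps_b = capture_and_fake_rates(rng.beta(5, 2, 5000), rng.beta(1, 20, 5000))
```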
### Application of the Trained Network to Larger Sky Maps After training the CNN to identify PHS in images with \(90^{2}\) pixels, we look for signals on a larger sky map by applying the same network analysis repeatedly across the larger map. In this way we can analyze, in principle, arbitrarily large maps. A benefit of such a larger map search is that it avoids the loss of sensitivity to signals where a PHS is partially cut out by the boundary of a \(90^{2}\)-pixels region. Such a PHS would be lost had we simply partitioned the sky into non-overlapping \(90^{2}\)-pixels regions. For a concrete application, we study maps with \(720^{2}\) pixels using the following steps: (i) we apply the trained network on the upper left corner of the map, obtaining the network output, (ii) we shift the \(90^{2}\)-pixels "window" to the right by 5 pixels and get the network output again, (iii) repeat the process until we hit the right hand side of the large map, then return to the upper left corner but slide the window down by 5 pixels, (iv) continue with these steps until the entire larger map is covered. The result of steps (i)-(iv) is what we call a "probability map". Starting with an original \(720^{2}\) image and scanning in steps of 5 pixels, the probability map has \(126^{2}\) entries, with each entry showing the probability of having a signal in a \(90^{2}\)-pixels region centered at each pixel. We have tried different step sizes and find that a 5 pixel step size yields nearly identical results to a 1 pixel step size for the following analysis, so we use the 5 pixel step size for improved computational speed. As an example, let us take \(\eta_{*}=50\) Mpc and \(g=1\). From Table 1, we see \(\epsilon_{S,720^{2}}=54.6\%\) while \(\epsilon_{B,720^{2}}=1.4\%\). Assuming that only a fraction \(f_{\rm sky}=60\%\) is used for the search, the total number of signals for this benchmark is \(Sig=\epsilon_{S,720^{2}}\,N_{\rm PHS}\,f_{\rm sky}\), while the number of background events is \(Bg=25\,\epsilon_{B,720^{2}}\,f_{\rm sky}\), where the factor of 25 is the number of \(720^{2}\) patches needed to cover the full sky. From the numbers of signal and background events, we form the log-likelihood ratio [49; 50] and then solve for \(N_{\rm PHS}\) for the desired signal significance. \begin{table} \begin{tabular}{|c|c|c|c|} \hline & \(\eta_{*}=50\) Mpc & \(\eta_{*}=100\) Mpc & \(\eta_{*}=160\) Mpc \\ \hline \(\epsilon_{B,720^{2}}\) & 1.4 \% & 11 \% & 6.6 \% \\ \hline \(\epsilon_{S,720^{2}}\), \(g=1\) & 54.6 \% & 0.8 \% & 0.5 \% \\ \hline \(\epsilon_{S,720^{2}}\), \(g=2\) & 84.0 \% & 34 \% & 34.6 \% \\ \hline \(\epsilon_{S,720^{2}}\), \(g=3\) & 98.6 \% & 76.8 \% & 71.2 \% \\ \hline \end{tabular} \end{table} Table 1: CNN results from scanning 500 randomly generated CMB or CMB+PHS maps using the network trained in Sec. 4.1. The image size is \(720^{2}\) pixels, and we shift the \(90^{2}\)-pixels search window in 5 pixel steps. The fake rate is the average number of fake signals from a \(720^{2}\)-pixels map with CMB only. The signal capture rate is the chance of identifying each input PHS signal. Comparing results obtained from the same study but with different sets of 500 images, we find the efficiency numbers vary by \(\sim 0.1-1\%\).
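The scan in steps (i)-(iv) amounts to a sliding-window loop; the sketch below assumes `model` is the trained classifier from the previous subsection and only illustrates the loop structure.

```python
# Sketch of the sliding-window scan: evaluate the 90x90 classifier across a
# larger map in steps of 5 pixels, producing a "probability map". `model` is
# assumed to map a (1, 1, 90, 90) tensor to a value in (0, 1).
import torch

def probability_map(big_map, model, window=90, step=5):
    n = big_map.shape[0]                   # e.g. 720
    nsteps = (n - window) // step + 1      # window positions per axis; the exact
                                           # count depends on the edge convention
    prob = torch.zeros(nsteps, nsteps)
    model.eval()
    with torch.no_grad():
        for i in range(nsteps):
            for j in range(nsteps):
                patch = big_map[i*step:i*step+window, j*step:j*step+window]
                prob[i, j] = model(patch[None, None]).item()
    return prob
```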
Figure 9: _Left_: PHS signals that are implanted on the CMB map. _Right_: Probability map from scanning the same \(720^{2}\) image plus the CMB with the CNN search, shifting the \(90^{2}\)-pixels region in steps of 1 pixel. The true and fake signals show up as clusters in the processed image. We further suppress the number of fake signals in the following analysis by applying cuts on the network output of each pixel and on the pixel number in each cluster. We find that the analysis from shifting the search window in steps of 5 pixels produces similar results to steps of 1 pixel and therefore use 5 pixel steps for the rest of the analysis. When calculating the \(2\sigma\) exclusion bound, we require \[\sigma_{exc}\equiv\sqrt{-2\,\ln\left(\frac{L(Sig\!+\!Bg|Bg)}{L(Bg|Bg)}\right)}\geq 2,\quad\text{ with }\;\;L(x|n)=\frac{x^{n}}{n!}e^{-x}\,. \tag{4.3}\] Note that this is the expected bound, as we are taking the simulated CMB background as the number of observed events (\(n\) in Eq. (4.3)). The resulting values of \(N_{\text{PHS}}\) are given in the left panel of Table 2. It is also interesting to determine how many PHS would be needed for discovery at each benchmark point. We calculate the expected discovery reach using \[\sigma_{dis}\equiv\sqrt{-2\,\ln\left(\frac{L(Bg|Sig\!+\!Bg)}{L(Sig\!+\!Bg|Sig\!+\!Bg)}\right)}\geq 5\,. \tag{4.4}\] The results are collected in Table 3. We can further obtain the minimum mass \(M_{0}\) of the heavy particle corresponding to \(\sigma_{exc}\) and \(\sigma_{dis}\) using Eq. (2.7) and \(\Delta\eta=2\eta_{*}\).7 In Tables 2 and 3, we show the bounds (or reach) on the number of PHS and on \(M_{0}/H_{I}\). Due to the energy injection from the dynamics of the inflaton, we can probe scalar particles with masses up to \(\approx 260H_{I}\). In the bottom right tables, we show that the mass bounds correspond to up to \(\approx 2.6\) times the mass-changing rate caused by the inflaton rolling (\(\sqrt{g\dot{\phi}_{0}}\)), which dominates the exponential suppression in Eq. (2.7). We also plot the \(2\sigma\) lower bound on \(M_{0}/H_{I}\) in Fig. 10. Since \(N_{\rm PHS}\) depends on \(M_{0}\) exponentially, a scalar mass slightly below the \(2\sigma\) bound leads to a \(5\sigma\) discovery of the PHS. Footnote 7: One subtlety in solving for the mass bound is that when simulating the PHS signals, we require both hot spots to be within \(\pm\eta_{*}\) around the last scattering surface. Hence, the simulation excludes PHS with one of the hot spots outside of the shell region, which would be harder to see by the CNN. However, when solving for the upper bound on the PHS density using Eq. (2.7), we take into account the signals that are partially outside of the shell region, leading to an over-estimate of the signal efficiency and a stronger upper bound on the number density. From checking the hot spot distribution numerically, we find that \(\approx 17\%\) of the PHS in our examples can be partially outside of the \(\pm\eta_{*}\) region. Fortunately, since the size of \(M_{0}\) only depends on the number density bound logarithmically, the error only changes the \(M_{0}\) bound by up to \(1\%\). This is acceptable for the accuracy we want for this concept study. These bounds are significantly improved compared to the previous analysis in Ref. [25]; this is not surprising, given that the analysis in Ref. [25] was very simplistic, utilizing only a single temperature cut to separate signal from background.
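The criterion in Eq. (4.3) has a closed form for Poisson likelihoods, \(\sigma_{exc}=\sqrt{2\,[Sig-Bg\ln(1+Sig/Bg)]}\); the sketch below scans for the smallest \(N_{\rm PHS}\) reaching \(2\sigma\), using the \(\eta_{*}=50\) Mpc, \(g=1\) efficiencies from Table 1.

```python
# Sketch of the expected 2-sigma exclusion in Eq. (4.3). With L(x|n) Poisson,
# -2 ln[L(S+B|B)/L(B|B)] = 2 [ S - B ln(1 + S/B) ].
import numpy as np

eps_s, eps_b, f_sky, n_patches = 0.546, 0.014, 0.60, 25  # Table 1, eta_*=50, g=1

def sigma_exc(n_phs):
    sig = eps_s * n_phs * f_sky
    bkg = n_patches * eps_b * f_sky
    return np.sqrt(2.0 * (sig - bkg * np.log1p(sig / bkg)))

n_bound = next(n for n in range(1, 10_000) if sigma_exc(n) >= 2.0)
# -> 8 pairs for this benchmark, consistent with Table 2.
```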
Using the CNN, we can now obtain meaningful bounds for \(g=1\) and \(2\), cases for which the PHS were essentially invisible before. For hotter signals, e.g. \(g=3\), the CNN analysis beats the past result by \(\Delta M_{0}\approx 60H_{I}\). This is a notable improvement given that the PHS density is exponentially sensitive to the scalar mass (squared). Finally, to show that the CNN search for localized objects gives a better probe of heavy particle production than the measurement of CMB temperature power spectra, we plot the corrections to the \(\Lambda\)CDM \(D_{\ell}^{\text{TT}}\) spectrum in Appendix B, including the same numbers of PHS as in Table 2. For example, for \(g=1,\;\eta_{*}=160\) Mpc, we see from Table 2 that the \(2\sigma\) bound on \(N_{\rm PHS}\) from our CNN analysis is 1162 hotspot pairs. Injecting 1162 hotspots into the sky,8 we find a correction to \({\cal D}_{\ell}^{\rm TT}\) of \(\Delta\chi^{2}=0.3\), well within the \(1\sigma\) band of the Planck 2018 temperature power spectrum. Repeating this exercise with the other benchmarks in Table 2 yields \(\Delta\chi^{2}\) values that are even smaller. Footnote 8: For simplicity, we restrict all hotspots to the last scattering surface. This somewhat overemphasizes the PHS correction to the power spectrum, as scenarios with both particles fixed to the last scattering surface are, on average, brighter than when \(\eta_{\rm HS}\) varies. ### Comparison with a Matched Filter Analysis Matched filter analysis is a standard tool for identifying localized signals on a CMB map. Given a 2D power spectrum of the CMB, \(P(k)\), we can obtain a filtered map \(\psi(\vec{r})\) in position space from a convolution between the original image (signal plus background) \(\zeta(\vec{k})\) and a signal filter \(h(\vec{k})\) (the Fourier transform of a profile \(h(\vec{r})\) in position space), as written below. \begin{table} \begin{tabular}{|c|c|c|c|} \hline Number of PHS & \(\eta_{*}=50\) & \(\eta_{*}=100\) & \(\eta_{*}=160\) \\ \hline \(g=1\) & 8 & 840 & 1162 \\ \hline \(g=2\) & 5 & 20 & 17 \\ \hline \(g=3\) & 4 & 9 & 8 \\ \hline \end{tabular} \begin{tabular}{|c|c|c|c|} \hline \(M_{0}/(g\dot{\phi}_{0})^{1/2}\) & \(\eta_{*}=50\) & \(\eta_{*}=100\) & \(\eta_{*}=160\) \\ \hline \(g=1\) & 2.5 & 2.0 & 2.0 \\ \hline \(g=2\) & 2.6 & 2.4 & 2.4 \\ \hline \(g=3\) & 2.6 & 2.5 & 2.4 \\ \hline \end{tabular} \end{table} Table 2: _Upper: \(2\sigma\) upper bound on the number of PHS in the whole CMB sky, with both hotspot centers located within the \(\eta_{\rm rec}\pm\eta_{*}\) window around the last scattering surface. In the calculation we assume a sky fraction \(f_{\rm sky}=60\%\). Lower left: lower bounds on the bare mass of the heavy scalar field in units of the Hubble scale during inflation. Lower right: lower bounds on the bare mass in units of the mass-changing rate, \((g\dot{\phi}_{0})^{1/2}\), owing to the inflaton coupling._ \begin{table} \begin{tabular}{|c|c|c|c|} \hline Number of PHS & \(\eta_{*}=50\) & \(\eta_{*}=100\) & \(\eta_{*}=160\) \\ \hline \(g=1\) & 16 & 2047 & 2757 \\ \hline \(g=2\) & 10 & 48 & 40 \\ \hline \(g=3\) & 9 & 21 & 19 \\ \hline \end{tabular} \begin{tabular}{|c|c|c|c|} \hline \(M_{0}/(g\dot{\phi}_{0})^{1/2}\) & \(\eta_{*}=50\) & \(\eta_{*}=100\) & \(\eta_{*}=160\) \\ \hline \(g=1\) & 2.4 & 2.0 & 1.9 \\ \hline \(g=2\) & 2.5 & 2.3 & 2.3 \\ \hline \(g=3\) & 2.6 & 2.4 & 2.4 \\ \hline \end{tabular} \end{table} Table 3: Same as Table 2 but for the \(5\sigma\) discovery reach.
\[\psi(\vec{r})=\int\frac{d^{2}\vec{k}}{(2\pi)^{2}}\left(\frac{\zeta(\vec{k})h(\vec{k})}{P(k)}\right)\,e^{i\vec{k}\cdot\vec{r}}. \tag{4.5}\] If the signal is spherically symmetric, the filter simplifies to \(h(\vec{k})=h(k)\). From the filtered map \(\psi(\vec{r})\) one can construct an optimal likelihood ratio test between the Gaussian null hypothesis and the existence of the signal (see e.g. [27]), making the matched filter ideal for picking out single (or, more generally, non-overlapping) localized signals. As we have seen, while the individual hotspots are spherically symmetric, they often overlap (at least for the range of parameters we are interested in), leading to a net signal in the sky that is no longer spherical. Additionally, the random separation between the initial heavy particles means the resulting PHS are not uniform. The unusual shape and variability among signals make the PHS less suitable for a vanilla matched filter analysis. While it may be possible to design a complicated and large bank of matched filters to cover the space of possible signal templates, the CNN analysis can effectively learn a set of flexible filters to enhance the signal over background even with varying and non-spherical signal shapes. Even if the matched filter analysis defined in Eq. (4.5) is not optimal for the full pairwise hotspot signal, it is still instructive to compare a few examples of the matched filter analysis versus the CNN. For this comparison, we consider PHS that lie only on the last scattering surface. The combined signal from the PHS will still be non-spherical, but restricting all PHS to the last scattering surface does take away some of the variability among signals.9 While each hotspot in a pair will "pollute" the other, meaning that it appears as a background that is different from the CMB fluctuations, each of the two hotspots can still be picked up effectively by the single spot template \(h(k)\). Figure 10: Bound on the heavy scalar mass for \(\eta_{*}=50\) Mpc, \(\eta_{*}=100\) Mpc, and \(\eta_{*}=160\) Mpc. In the region above the ‘% Backreaction’ line, the backreaction to the inflationary dynamics due to particle production is smaller than a percent (see Ref. [25] for a more detailed discussion). The light blue lines show various contours of \(N_{\rm PHS}\). We notice that the projected CNN search is able to cover most of the parameter space up to the target \(N_{\rm PHS}=1\) contour. We perform the comparison using \(90^{2}\) pixel images with one PHS injection. We use QuickLens to generate the CMB maps, which follow periodic boundary conditions and thereby ensure the separation between \(k\)-modes in the 2D power spectrum \(P(k)\) of the CMB image. The CNN results for this signal set have already been shown in Sec. 4.1 and can be found in the right hand panels of Fig. 8; the background rejection is above 99% for all benchmark points, while the signal capture rate varies from a few percent to 100% depending on \(\eta_{*}\) and \(g\). For the matched filter analysis, we obtain \(P(k)\) from the average of the discrete Fourier transforms of 500 simulated images. We also apply a discrete Fourier transform to the profile of a single hotspot in the PHS and use it as \(h(k)\) in the convolution. Carrying out the integral in Eq. (4.5), we obtain the processed maps \(\psi(\vec{r})\). An example of the signal processing is shown in Fig. 11.
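An FFT-based sketch of Eq. (4.5) is given below; for the template \(h\) it reuses the single-spot Fourier profile \(f\) of Eq. (16) via scipy's sine integral, which is an illustrative stand-in for the discrete template built in the text, and `p_of_k` is assumed to be the power spectrum estimated from the simulated maps.

```python
# Sketch of the matched filter in Eq. (4.5): psi = IFFT[ zeta(k) h(k) / P(k) ].
# Template: f(x) = Si(x) - sin(x) from Eq. (16), overall amplitude dropped.
# `p_of_k` is a callable returning the 2D CMB power spectrum, e.g. estimated
# as the average |FFT|^2 of simulated maps (an assumption of this sketch).
import numpy as np
from scipy.special import sici

def matched_filter(image, p_of_k, theta_star, pix_rad=1e-3):
    """theta_star ~ eta_*/chi_rec: angular size of a single spot (radians)."""
    npix = image.shape[0]
    freq = np.fft.fftfreq(npix, d=pix_rad) * 2 * np.pi
    k = np.sqrt(freq[None, :] ** 2 + freq[:, None] ** 2)
    x = k * theta_star
    h = sici(x)[0] - np.sin(x)                  # single-spot template f(k theta_*)
    psi_k = np.fft.fft2(image) * h / np.maximum(p_of_k(k), 1e-30)
    return np.real(np.fft.ifft2(psi_k))         # filtered map psi(r)
```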
In Fig. 11, the plot on the left is the PHS signal (\(\eta_{*}=160\) Mpc and \(g=2\)), the middle is the signal plus background, and the right plot is the output image \(\psi(\vec{r})\). We see that the filter can indeed pick up the signal hidden inside the background. As one way to quantify the matched filter results, in Fig. 12 we show the distribution of the largest \(\psi(\vec{r})\) values in each of the 500 maps generated with (blue) and without (red) PHS signals, for \(\{\eta_{*},g\}=\{160\,{\rm Mpc},2\}\) (left) and \(\{100\,{\rm Mpc},2\}\) (right). From this perspective, the matched filter clearly separates the signal and background for the two cases. We also perform the same analysis for the \(\eta_{*}=50\) Mpc signals (which have much lower temperatures). In this case, the overlap between signal and background in the \(\psi\) distribution is large, and a simple \(\psi\) cut is not the optimal way to separate the signal and background. For this reason, we only consider the \(\eta_{*}=100\) and 160 Mpc examples in the following discussion. To provide a rough numerical comparison between the matched filter and the CNN analysis, we apply a \(\psi_{\rm max}\) cut in each of the matched filter histograms in Fig. 12. We choose the \(\psi_{\rm max}\) cut value to match the background rejection rate in the CNN analysis, and then compare signal capture rates in the two analyses. Figure 11: Example images from the matched filter analysis. _Left:_ PHS with \(\eta_{*}=160\) Mpc and \(g=2\). _Middle:_ Signal plus the background. _Right:_ Filtered map from the convolution integral Eq. (4.5). For the \(\eta_{*}=160\) Mpc example, the CNN signal capture rate is about 5% and 74% for \(g=1\) and 2, while the matched filter analysis performs slightly better, with capture rates of 8% and 98%, respectively. For \(\eta_{*}=100\) Mpc, the CNN signal capture rates are \(\sim 10\%\) and \(\sim 69\%\) for \(g=1\) and 2, while the matched filter analysis rates are slightly lower, 4% and 50%. In summary, we find that the CNN performs very close to the matched filter analysis, suggesting that it is near optimal. The advantage of the CNN, as we have discussed, is that it can learn to interpolate between all signal shapes that appear in our model.10 Footnote 10: We believe the small differences between the CNN and matched filter signal rates are due to the simplicity of the analysis, where \(\psi_{\rm max}\) is used as a proxy for the matched filter performance. ## 5 Discussion and Conclusion In this work, we show that Convolutional Neural Networks (CNN) provide a powerful tool to identify pairwise hotspots (PHS) on the CMB sky. These PHS can originate from superheavy particle production during inflation. We improve the previous analysis of Ref. [25] by more accurately modeling the distribution of PHS on the CMB sky and by developing a CNN-based signal search strategy. To accurately model the PHS distribution, we include the possibility that PHS are distributed along the line-of-sight direction, rather than fixed to the last scattering surface. As a result, the average inter-spot separation within a PHS, when projected onto the CMB, is smaller than in Ref. [25]. Figure 12: Maximum pixel distribution in filtered maps, where the value \(\psi\) of the pixels on the filtered map is defined in Eq. (4.5). We use 500 CMB-only and 500 CMB+PHS maps and plot the distribution of the maximum \(\psi\) of each filtered map to show the separation between the CMB and CMB+PHS results. We use feature scaling, also known as min-max normalization, for \(\psi_{\rm max}\), so that the smallest value is zero and the largest value is 1.
For PHS with small values of \(\eta_{*}\), such as \(\eta_{*}=50\) Mpc, the two hotspots in a PHS significantly overlap with each other, and the resulting PHS look like a single object, but with a distinct angular profile (Fig. 5). For the signal search, we construct a CNN to identify PHS from within the CMB, the standard fluctuations of which act as backgrounds for the signal. The network is trained on \(90^{2}\) pixel images with and without PHS injected in them (both with hotspots distributed in 3D, and with hotspots fixed on the last scattering surface). During training we choose a coupling \(g=3\), but the trained CNN can still identify PHS for smaller values of \(g\) with a significant signal capture rate and a small background fake rate. We find that the CNN actually performs better for the smaller \(\eta_{*}\) benchmark, even though the hotspots are dimmer. We believe this is due to the distinctive ring structure the PHS have when \(\eta_{*}=50\,\mathrm{Mpc}\), as evidenced by comparing PHS signals distributed in 2D versus in 3D, and by studies testing the CNN on 'dot' and 'ring' test signals (Appendix C). After developing the CNN for \(90^{2}\) pixel images, we apply it to larger \(720^{2}\) pixel maps, sliding \(90^{2}\) 'templates' in 5 pixel steps across the larger images to generate a probability map. In the probability map, each pixel is evaluated by the network multiple times. As a final step, we filter the probability map, only retaining clusters (groups of positive network outcomes) of a certain size. The benefit of the sliding template search is that it is less sensitive to the exact position of the hotspot within the \(90^{2}\) pixel region. Applied in this manner, we find that the CNN can efficiently discern the presence of hotspots, even if the signal temperature is much smaller than the CMB temperature fluctuations. In particular, the CNN can even identify \(\mathcal{O}(10)\) PHS on the CMB sky for \(g=1\) and \(\eta_{*}=50\) Mpc, a signal that has a temperature \(\approx 20\) times colder than the average CMB temperature fluctuations. Translated into model parameters, for the benchmark models we study using mock CMB maps, we project that a CNN search can set a lower bound on the mass of heavy scalars \(M_{0}/H_{I}\gtrsim 110-260\), with the precise value depending on the time of particle production and the coupling to the inflaton. These numbers are a significant improvement over the simplistic analysis in Ref. [25] that used a single temperature cut to separate signal from the background. Compared to the standard matched filter analysis, the CNN is more versatile in identifying non-rotationally symmetric signals with varying shapes and temperatures that arise in the context of PHS. We performed a simplified comparison between the CNN and matched filter analysis by considering PHS with a fixed profile and located on the last scattering surface to show that the matched filter analysis can provide comparable signal capture and fake rates to the CNN search for PHS with \(\eta_{*}=160\) Mpc and \(100\) Mpc. For dimmer PHS (\(\eta_{*}=50\) Mpc), more analysis is required to separate the signal and background in the filtered map. We leave a more detailed comparison to the matched filter method, with a bank of filters to cover the signal space, to future work.
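The \(\psi_{\rm max}\)-based comparison above can be sketched in a few lines; the arrays of per-map maxima are assumed to be precomputed, and the target rejection rate is an illustrative input.

```python
# Sketch of the psi_max comparison: min-max normalize the per-map maxima and
# choose the cut that matches a target background rejection, then read off
# the corresponding signal capture rate. `psi_sig` / `psi_bkg` are assumed
# arrays of maximum pixel values from filtered CMB+PHS and CMB-only maps.
import numpy as np

def capture_at_fixed_rejection(psi_sig, psi_bkg, rejection=0.998):
    lo = min(psi_sig.min(), psi_bkg.min())
    hi = max(psi_sig.max(), psi_bkg.max())
    sig = (psi_sig - lo) / (hi - lo)        # min-max normalization
    bkg = (psi_bkg - lo) / (hi - lo)
    cut = np.quantile(bkg, rejection)       # background rejection target
    return np.mean(sig > cut)               # signal capture rate at that cut
```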
Several future directions remain to be explored. It would be interesting to apply our methodology to actual Planck CMB maps to search for PHS. In the absence of a detection, we can still set a lower bound on the masses of ultra-heavy particles which are otherwise very difficult to discover or constrain. This, however, requires a subtraction of the astrophysical foregrounds and knowing whether the CNN can distinguish PHS from the compact objects in the foreground. Since the distortion of the curvature perturbation from particle production also modifies structure formation at late times, it would also be interesting to see if current or future Large Scale Structure (LSS) surveys can identify the resulting signals localized in position space. A neural network like the one used here can learn to incorporate the non-linear physics of structure formation if trained on suitable simulations. Related to localized PHS signatures, similar types of cosmological signals from topological defects [51] or bubble collisions [28; 29; 30] can also arise, and these may also be identified by a CNN search. From a more theoretical perspective, it would also be useful to write down a complete inflationary model that incorporates the inflaton coupling to heavy fields and leads to particle production as described here. We leave these directions for future work. We thank Raphael Flauger, Daniel Green, Matthew Johnson, Kin-Wang Ng, Bryan Ostdiek, LianTao Wang, and Yiming Zhong for useful conversations. TK, AM, and YT are supported by the U.S. National Science Foundation (NSF) grant PHY-2112540. JK is supported in part by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2021R1C1C1005076), and in part by the international cooperation program managed by the National Research Foundation of Korea (No. 2022K2A9A2A15000153, FY2022). SK is supported in part by the NSF grant PHY-1915314 and the U.S. Department of Energy (DOE) contract DE-AC02-05CH11231. MM is supported by the U.S. Department of Energy, Office of Science, under Award Number DE-SC0022342. ## Appendix A Sensitivity to the \(\Lambda\)CDM Parameters Our analysis uses the \(\Lambda\)CDM parameters in Eq. (10) to simulate the CMB. As the \(\Lambda\)CDM parameters come with uncertainties, we should check how sensitive the signal capture rate is to variations of the parameters. In Table 4, we show the background rejection and signal capture rates using the same trained network as for Fig. 8 (left) with \(g=3\) and \(\eta_{*}=160\) Mpc, but on CMB maps simulated with variations of the \(\Lambda\)CDM parameters. As we see, when changing \(\{A_{s},\omega_{b},\omega_{\rm cdm},n_{s}\}\) one by one by twice the \(1\sigma\) uncertainty reported in [39], the signal capture rate only changes by \(\mathcal{O}(\text{few}\,\%)\), comparable to the variations in our CNN analysis due to finite sampling. The consistent search results show the robustness of the network's ability to identify PHS against the uncertainty of the \(\Lambda\)CDM parameters. ## Appendix B PHS Corrections to the CMB Power Spectrum Here we show the corrections to the CMB power spectrum when the number of PHS in the full sky saturates the bounds in Table 2.
We show examples with the coupling \(g=1\) and horizon sizes \(\eta_{*}=100\) Mpc (\(N_{\rm PHS}=840\)) and \(160\) Mpc (\(N_{\rm PHS}=1162\)), assuming the centers of all the hotspots are located on the last scattering surface. Notice that the latter assumption of fixing \(\eta_{\rm HS}=\eta_{\rm rec}\) makes the average PHS temperature higher compared to the main analysis that allows \(\eta_{\rm HS}\) to vary. However, the assumption simplifies the power spectrum calculation and gives a more conservative result by exaggerating the PHS correction to the power spectrum. We also check results for different \(g\) and \(\eta_{*}\), but, following Table 2, with much smaller \(N_{\rm PHS}\). The corrections to the power spectrum for the other benchmarks are even smaller. To see how the excesses appear on the power spectrum, we utilize the Hierarchical Equal Area isoLatitude Pixelization, HEALPix[40], based on the \(C_{\ell}^{\rm TT}\) spectrum computed from the CLASS package using the same \(\Lambda\)CDM parameters as in Eq. (10). HEALPix pixelates a sphere into equal-area pixels, with the lowest resolution consisting of 12 baseline pixels. The resolution is increased by dividing each pixel into four partitions, which can be parameterized as \(N_{\rm pixels}=12N_{\rm side}^{2}\), where \(N_{\rm side}\) is a power of 2. We choose the resolution parameter \(N_{\rm side}=2048\). Since the total number of pixels in a sphere characterizes the total number of independent \(\ell\) modes in \(C_{\ell}^{\rm TT}\), which is given by \(\sum_{\ell=0}^{\ell_{\rm max}}(2\ell+1)=(\ell_{\rm max}+1)^{2}\), our benchmark resolution parameter \(N_{\rm side}=2048\) corresponds to the maximum multipole number \(\ell_{\rm max}\simeq 3500\). \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & \(\omega_{b}\) & \(\omega_{\rm cdm}\) & \(10^{9}A_{s}\) & \(n_{s}\) & \(\tau_{re}\) & Bg rejection & Sig capture \\ \hline \hline Planck18 & 0.0224 & 0.120 & 2.10 & 0.966 & 0.0543 & 99.8\% & 74.0\% \\ \hline Case 1 & & +0.004 & & & & 99.8\% & 72.3\% \\ \hline Case 2 & & & +0.07 & & & 99.2\% & 74.1\% \\ \hline Case 3 & & & & +0.01 & & 99.6\% & 69.9\% \\ \hline Case 4 & +0.0003 & & & & & 99.8\% & 73.4\% \\ \hline Case 5 & & & & & +0.014 & 99.2\% & 74.4\% \\ \hline Case 6 & +0.0003 & \(-0.004\) & +0.05 & \(-0.01\) & \(-0.014\) & 99.8\% & 72.4\% \\ \hline \end{tabular} \end{table} Table 4: The response of the signal capture and background rejection rates to varying \(\Lambda\)CDM parameters, labeled with the difference from the Planck 2018 \(\Lambda\)CDM parameters. The variation of the rates is comparable to the fluctuations in our CNN analysis due to finite sampling and therefore is insignificant. For this test, we used \(g=2\) and \(\eta_{*}=160\) Mpc for the PHS signal. Figure 13: CMB temperature power spectrum using the best fit \(\Lambda\)CDM input parameters in Eq. (10), with (red lines) and without (blue lines) PHS signals implemented on the full sky using a resolution parameter \(N_{\rm side}=2048\). Here, we assume that all PHS signals are on the last scattering surface. The differences between the two distributions are shown in green lines, and the gray shaded regions denote the \(1\sigma\) uncertainty, taken from the Planck 2018 data.
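For reference, the synthesis step can be reproduced with healpy in a few lines; the spectrum below is a mock placeholder (in practice one would pass the CLASS-derived \(C_{\ell}^{\rm TT}\)), and only the resolution bookkeeping is the point of the example.

```python
# Sketch: synthesize a full-sky Gaussian map from C_ell^TT with healpy at
# N_side = 2048 (N_pix = 12 * N_side^2 = 50331648), supporting multipoles
# up to lmax ~ 3500 as in the text.
import numpy as np
import healpy as hp

nside, lmax = 2048, 3500
ell = np.arange(lmax + 1)
cl_tt = np.zeros(lmax + 1)
cl_tt[2:] = 1e-10 / (ell[2:] * (ell[2:] + 1.0))   # mock spectrum (assumption)
cmb_map = hp.synfast(cl_tt, nside, lmax=lmax)      # Gaussian full-sky realization
assert hp.nside2npix(nside) == 12 * nside**2
```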
Figure 13 shows \(\mathcal{D}_{\ell}^{\rm TT}\) spectra for the \(\Lambda\)CDM model (blue) and \(\Lambda\)CDM+PHS (red) with \(\eta_{*}=100\) Mpc and \(\eta_{*}=160\) Mpc. The difference between the red and blue spectra is shown in the lower panel (green), with the \(1\sigma\) error bar (gray) taken from the Planck 2018 result [39]. For both scenarios, the excesses are well below the error bar, indicating that the power spectrum analysis will not be able to resolve them. We also show \(\Delta\chi^{2}\) to quantify the deviations with respect to the \(\Lambda\)CDM spectrum, using the same Planck 2018 binning intervals in \(\ell\). The total \(\Delta\chi^{2}\) for both cases is negligible compared to the number of parameters we have. ## Appendix C Shape Analysis for the \(\eta_{*}=50\) Mpc Signal In our earlier results, we found that the CNN's performance for \(\eta_{*}=50\) Mpc PHS exceeds the other benchmarks, despite the fact that the hotspots at \(\eta_{*}=50\) Mpc are much cooler. We surmise that the result is due to the distinct shape of the profile: a rim structure with a central peak. As a simple test of this hypothesis, we formed a signal set of PHS decomposed into two separate features, an inner peak and an outer rim. We then ran each piece through a network trained on the complete shape of the \(\eta_{*}=50\) Mpc spots. Figure 14: In the left panel we show the trimmed inner piece of a hotspot signal, while in the right we show the output after 500 CMB + inner hotspot images are run through a network trained on full (untrimmed) \(\eta_{*}=50\,{\rm Mpc},g=3\) hotspots. We ran 500 CMB + deconstructed PHS test samples through the network, using a variety of \(g\) values but always with both spots located on the last scattering surface. The results, along with sample images of the deconstructed signals, are shown in Figs. 14 and 15. Comparing the right hand panels in Figs. 14 and 15, we see that the network is much more efficient at capturing the ring portion, e.g. 88% capture for \(g=3\) compared to 27% for the central spot. From this test we conclude that the ring shape is crucial to the CNN's performance at low \(\eta_{*}\) (note that the signal capture for the ring nearly matches the capture rate for the full signal (Fig. 8)).
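The 'dot'/'ring' decomposition used for this test can be mimicked with simple radial masks; in the sketch below the cut radius separating the inner peak from the rim is an illustrative assumption.

```python
# Sketch of the signal deconstruction in Appendix C: split a spot image into
# an inner "dot" and an outer "ring" with a radial mask. The cut radius (in
# pixels) is an illustrative assumption, not the value used in the text.
import numpy as np

def split_dot_ring(image, r_cut_pix=4.0):
    n = image.shape[0]
    y, x = np.indices((n, n))
    r = np.hypot(x - (n - 1) / 2.0, y - (n - 1) / 2.0)
    dot = np.where(r <= r_cut_pix, image, 0.0)
    ring = np.where(r > r_cut_pix, image, 0.0)
    return dot, ring   # feed each piece, added to CMB maps, through the CNN
```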
2302.07873
Separating Technological and Clinical Safety Assurance for Medical Devices
The safety and clinical effectiveness of medical devices are closely associated with their specific use in clinical treatments. Assuring safety and the desired clinical effectiveness is challenging. Different people may react differently to the same treatment due to variability in their physiology and genetics. Thus, we need to consider the outputs and behaviour of the device itself as well as the effect of using the device to treat a wide variety of patients. High-intensity focused ultrasound systems and radiation therapy machines are examples of systems in which this is a primary concern. Conventional monolithic assurance cases are complex, and this complexity affects our ability to address these concerns adequately. Based on the principle of separation of concerns, we propose separating the assurance of the use of these types of systems in clinical treatments into two linked assurance cases. The first assurance case demonstrates the safety of the manufacturer's device independent of the clinical treatment. The second demonstrates the safety and clinical effectiveness of the device when it is used in a specific clinical treatment. We introduce the idea of these separate assurance cases, and describe briefly how they are separated and linked.
Spencer Deevy, Tiago de Moraes Machado, Amen Modhafar, Wesley O'Beirne, Richard Paige, Alan Wassyng
2023-02-15T20:34:29Z
http://arxiv.org/abs/2302.07873v1
# Separating Technological and Clinical Safety Assurance for Medical Devices ###### Abstract The safety and clinical effectiveness of medical devices are closely associated with their specific use in clinical treatments. Assuring safety and the desired clinical effectiveness is challenging. Different people may react differently to the same treatment due to variability in their physiology and genetics. Thus, we need to consider the outputs and behaviour of the device itself as well as the effect of using the device to treat a wide variety of patients. High-intensity focused ultrasound systems and radiation therapy machines are examples of systems in which this is a primary concern. Conventional monolithic assurance cases are complex, and this complexity affects our ability to address these concerns adequately. Based on the principle of separation of concerns, we propose separating the assurance of the use of these types of systems in clinical treatments into two linked assurance cases. The first assurance case demonstrates the safety of the manufacturer's device independent of the clinical treatment. The second demonstrates the safety and clinical effectiveness of the device when it is used in a specific clinical treatment. We introduce the idea of these separate assurance cases, and describe briefly how they are separated and linked. _Keywords_: Assurance Case, Separation of Concerns, Medical Devices, Safety-Critical Systems, Software-Intensive Systems, Safety Certification, Focused Ultrasound ## 1 Introduction Modern medical devices are complex due to their intensive use of software. There are many kinds of medical devices. The interplay of treatment, medical indications, and inter-patient physiological variability introduces significant complexity with respect to safety when compared with other types of safety-critical system [1]. There is also the tension between ensuring the safety of the device in terms of it not harming people, and the fact that not using the device may also lead to harm. To achieve the intended safety and the intended clinical effectiveness, engineers must design, develop, manufacture and maintain their systems following the best safety engineering practices at hand. They also need to meet stringent functional safety standards such as IEC 62304 [2] and ISO 14971 [3], as well as satisfy regulatory requirements. Safety cases, a precursor of assurance cases, were introduced more than 50 years ago to help manufacturers document a structured, explicit argument that the system of interest is safe [4]. Modern assurance cases have the same intent, but are used to document that a system possesses properties of concern, including but not limited to safety. Despite the benefits that assurance cases can bring to help develop safe systems as well as assure that they are safe, the adoption of assurance cases in medical device development varies by country and regulator. The McMaster Centre for Software Certification (McSCert) and Arrayus Technologies Inc. have been collaborating on the safety assurance of Arrayus's therapeutic focused ultrasound (FUS) system. The device emits FUS energy waves to deliver precise treatment for several medical conditions, including uterine fibroids and pancreatic cancer. It uses an external magnetic resonance imaging (MRI) system for guidance of the treatment and monitoring of the patient.
The combination of such non-invasive and non-ionizing technologies forms a system of systems commonly known as a Magnetic Resonance-guided Focused Ultrasound (MRgFUS) system [5]. One of the most difficult aspects of assuring safety of such a device is that there are so many variations in how different people react to the same treatment, even when using the same outputs from the medical device. To address this we suggest an approach for assuring safety and effectiveness of such devices. We propose separating the assurance into _Technological Assurance_ and _Clinical Assurance_. Technological Assurance refers to a medical device system viewed solely as a machine that produces deterministic outputs given specific inputs. This is independent of the effect of these outputs on a patient during clinical treatment. Clinical Assurance refers to how those outputs from the machine affect patients within the clinical treatment. This separation seems to be effective in reducing the complexity of the assurance.

## 2 Proposed Assurance Case Separation for Medical Devices

### 2.1 Goal Structuring Notation

In discussion related to assurance cases and in the structure of the assurance case figures, we have used Goal Structuring Notation (GSN) [6] with some minor changes in terminology. For example, we prefer to talk about _claims_ and _evidence_ rather than _goals_ and _solutions_.

### 2.2 The Monolithic Assurance Case

We typically create a single, comprehensive assurance case (AC) for engineered systems in non-medical domains. In these assurance cases, safety is considered in relation to the overall behaviour of the system and the respective effects produced in a given environment. This strategy has been applied to medical devices as well, and has been effective for many of them, but is problematic for those that have to reckon with the fact that people come in many shapes and sizes: their bodies can respond differently to the same clinical treatment protocol. This variability, however, is not limited to patient physiology. It may also extend to the types of treatments a single medical device can perform and the different regions of the body that can be treated with that medical device. Therapeutic MRgFUS and radiation therapy machines are examples of how a complex system may be used to treat a wide variety of medical conditions, over several regions of the body. All of this makes it difficult to document a compelling assurance argument. The argument must demonstrate that the machine works as intended, delivers the correct outputs within a safe range, and if any output happens to be delivered outside of the intended location, it will not cause unacceptable harm to the patient, and will not be harmful to the environment - all while achieving the desired physiological response for a particular patient.

#### Brief Example

We now consider the top level of a monolithic assurance case for a FUS device that provides clinical treatment for uterine fibroids, achieved by thermally ablating problematic tissue. This is shown in Figure 1, in which the top claim is shown along with its top-level GSN decomposition. The GSN components are labelled as follows: C indicates a _Claim_; S indicates a _Strategy_; and X indicates _conteXt_. We have removed Assumptions and Justifications in the interest of saving space. The claims with the tabs on the top left edge are _modules_. The lower levels of the argument are contained within those modules.
The evidence that supports terminal claims in the argument is visible only in the content of those modules, and is not described in this paper.

### 2.3 Separating Technological and Clinical Assurance Cases

Recently we realized an analogy between this situation and the complexity inherent in very sophisticated control systems that are also safety-critical. For years the nuclear industry has relied on separation of concerns to deal with such problems. Many countries have mandated the separation of control and safety. This results in much simpler safety systems that can then be built and certified to be safe, independent of what the control system does. This separation is enforced in the system itself. It occurred to us that for some medical systems, separation of concerns to control complexity could be applied to the assurance case itself. We first need to define what we mean by _Technological effects_ and _Clinical effects_.

#### 2.3.1 Technological effects

When considering the _technological effects_ of the medical device, we consider the device solely as a machine that produces deterministic outputs given specific inputs. It does not include how the output of the medical device affects patients.

#### 2.3.2 Clinical effects

The _clinical effects_ of the device refer to the physiological response of a human patient to the use of the medical device and its operating procedures during a specific clinical treatment. This is meant to cope with the fact that different people can react differently to the exact same treatment.

#### 2.3.3 Splitting the Argument Based on Technological Effects and Clinical Effects

Instead of constructing a monolithic assurance case, we propose splitting the argument of safety and effectiveness for certain medical devices into two linked assurance cases: one based on the _"Technological effects"_ as defined above, and another based on the _"Clinical effects"_, also defined above. The former presents the argument pertaining to the safety and the effectiveness of the medical device and the therapy-agnostic operating procedures in relation to the medical device's ability to deliver its promised behaviour independent of any clinical context. The latter presents the argument pertaining to the safety and the effectiveness of the medical device and therapy-specific operating procedures/treatment plans in relation to achieving the intended biological/physiological response required to treat a particular medical indication.

Overall assurance of the medical device used for a specific therapy is obtained by the combination of the two linked ACs. The final assurance is documented in the AC that is focused on the clinical effects. We call this AC the Clinical Assurance Case (CAC). The CAC is dependent on assurance provided by the AC that focuses on the technological effects. We call this AC the Technological Assurance Case (TAC). We now compare the monolithic AC example with the proposed separation of assurance cases in the following sections.

Figure 1: Top-level of a monolithic assurance case for an MRgFUS system

#### 2.3.4 Technological Assurance Case

The top claim and top-level decomposition of the _TAC_ is shown in Figure 2. This is slightly different from the monolithic version in that it disregards the clinical application and treats the safety of the system according to its capability in delivering the technological effects, or outputs, independent of any clinical effect.
The device must be able to focus and deliver ultrasonic energy to a particular location in space from its ultrasonic transducers within the required specifications. Such performance-based properties of the technological effects are referred to as _technological effectiveness_. The medical device must also handle safety concerns that affect everyone using the system. These safety concerns are identified through hazard analyses and deal with the intended behaviour of the system as well as how the machine interfaces with its environment and interacts with other medical devices. These safety concerns are also independent of any specific clinical effects. The safety-based properties of the technological effects are referred to as _technological safety_.

Our standard practice in structuring the argument is described in the Strategy, S. The context components Xa, Xb and Xc define the technological effects and the effectiveness and safety properties associated with them; all three of these definitions reinforce that the effect of the outputs of the medical device on the patient is not considered within this argument. Those safety and effectiveness concerns are separated out and dealt with in the CAC.

#### 2.3.5 Clinical Assurance Case

The second assurance case is the _CAC_, and its top level is shown in Figure 3. We use the same notation in the CAC as we did in the TAC in Figure 2. As one can see, the top claim in the CAC is different from the top claim in the TAC, since the meaning of safety and effectiveness for the CAC now addresses the intended clinical effects relevant to the clinical application of the medical device.

Figure 2: Top-level of the TAC for an MRgFUS system

We use a modified version of our standard practice in the strategy S, where the first subclaim involves showing that the system requirements result in a safe and effective treatment (clinical effects), and where we build on the work documented in the TAC to show that the system is clinically sufficient for a given treatment. Xa, Xb, and Xc give the corresponding definitions for clinical effects, _clinical effectiveness_, and _clinical safety_. These differ from their corresponding definitions in the TAC in that they are now in the context of a clinical setting, and more importantly, how we use the medical device to achieve the intended physiological response in a patient safely and effectively.

#### 2.3.6 How the TAC and CAC are Linked

We can see in Figure 3 that claims C4 and C5 are similar to claims C2 and C3 in the monolithic assurance case, and are dealt with by reference to the associated TAC. They do not have to be argued in the CAC! Clearly the CAC is dependent on its associated TAC in that the safety of the machine itself in delivering its functional outputs is documented in the TAC. This implies that the outputs of the medical system required for a clinical treatment must be documented explicitly in the CAC, and then verified as provided by the system as documented in the TAC. In general, the CAC may reference any items in the TAC. However, it is crucial that there are no references from the TAC to any dependent CAC. The diamonds below the claims C4 and C5 are GSN symbols to indicate that the claims are not further developed in the CAC. The required information is documented in context nodes that support claims C4 and C5.
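To make the one-way linking rule concrete, the sketch below models the two linked ACs as toy Python data structures and checks that the TAC never references a CAC. The claim identifiers, the fields, and the `references` convention are illustrative assumptions for this sketch, not part of GSN or of the actual assurance cases.

```python
# A minimal sketch of linked assurance cases; a claim may reference claims
# in another AC via `references`. All identifiers here are illustrative.
from dataclasses import dataclass, field

@dataclass
class Claim:
    cid: str
    text: str
    references: list = field(default_factory=list)  # e.g. "TAC:T1"

@dataclass
class AssuranceCase:
    name: str
    claims: dict  # cid -> Claim

tac = AssuranceCase("TAC", {
    "T1": Claim("T1", "Device delivers FUS energy within specified tolerances"),
})
cac = AssuranceCase("CAC", {
    "C4": Claim("C4", "System outputs required by the treatment are provided",
                references=["TAC:T1"]),  # the CAC may reference the TAC
})

def check_separation(tac: AssuranceCase) -> bool:
    """Enforce the rule above: no references from the TAC to any CAC."""
    return not any(ref.startswith("CAC:")
                   for c in tac.claims.values() for ref in c.references)

assert check_separation(tac)
```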
## 3 Conclusion

In practice, we can assure the safety of the medical system independent of its clinical effects in a TAC, and the safety of its clinical application in an associated CAC. The assurance cases are linked, and both are needed to provide full assurance for a particular treatment. (There is always the option of combining multiple clinical treatments in a single CAC, or developing separate CACs for different clinical applications.) Every CAC builds on the assurance documented in its associated TAC. As long as the device documented in the TAC has the capability to perform the clinical application, the TAC does not have to be modified. This presents the basic idea behind the separation of the TAC and CAC.

Figure 3: Top-level of the CAC for an MRgFUS system

We have shown that separation of concerns can be used in assurance cases to reduce the complexity of demonstrating safety and effectiveness for software-intensive medical systems, such as the MRgFUS. By separating the demonstration of system safety in producing the intended deterministic machine output independent of clinical safety, we believe we can significantly reduce the overall complexity of the safety assurance argument. It is common to find that a particular medical device is used for different clinical procedures, as is the case for the MRgFUS. If we assure the technological safety and effectiveness of the device independent of its clinical effects, then it raises the possibility that a single TAC could be linked with multiple CACs to assure the safety and effectiveness of the device used in multiple clinical treatments.
2310.08951
Unsupervised Log Anomaly Detection with Few Unique Tokens
This article introduces a method to detect anomalies in the log data generated by control system nodes at the European XFEL accelerator. The primary aim of this proposed method is to provide operators a comprehensive understanding of the availability, status, and problems specific to each node. This information is vital for ensuring smooth operation. The sequential nature of logs and the absence of a rich text corpus that is specific to our nodes pose significant limitations for traditional and learning-based approaches to anomaly detection. To overcome this limitation, we propose a method that uses word embedding and models individual nodes as a sequence of these vectors that commonly co-occur, using a Hidden Markov Model (HMM). We score individual log entries by computing a probability ratio between the probability of the full log sequence including the new entry and the probability of just the previous log entries, without the new entry. This ratio indicates how probable the sequence becomes when the new entry is added. The proposed approach can detect anomalies by scoring and ranking log entries from European XFEL nodes where entries that receive high scores are potential anomalies that do not fit the routine of the node. This method provides a warning system to alert operators about these irregular log events that may indicate issues.
Antonin Sulc, Annika Eichler, Tim Wilksen
2023-10-13T08:49:25Z
http://arxiv.org/abs/2310.08951v2
# Log Anomaly Detection on EuXFEL Nodes

###### Abstract

This article introduces a method to detect anomalies in the log data generated by control system nodes at the European XFEL accelerator. The primary aim of this proposed method is to provide operators a comprehensive understanding of the availability, status, and problems specific to each node. This information is vital for ensuring smooth operation. The sequential nature of logs and the absence of a rich text corpus that is specific to our nodes pose significant limitations for traditional and learning-based approaches to anomaly detection. To overcome this limitation, we propose a method that uses word embedding and models individual nodes as a sequence of these vectors that commonly co-occur, using a Hidden Markov Model (HMM). We score individual log entries by computing a probability ratio between the probability of the full log sequence including the new entry and the probability of just the previous log entries, without the new entry. This ratio indicates how probable the sequence becomes when the new entry is added. The proposed approach can detect anomalies by scoring and ranking log entries from EuXFEL nodes where entries that receive high scores are potential anomalies that do not fit the routine of the node. This method provides a warning system to alert operators about these irregular log events that may indicate issues.

## 1 Introduction

The stability and reliability of the European XFEL facility are essential for successful operation. To facilitate this, a network of watchdog nodes continuously monitors the health state of the facility's essential components. These nodes, numbering in the hundreds, act as monitoring technology, ensuring the proper functionality of crucial European XFEL accelerator elements. Within their logs lies valuable information about the health state that can signal any potential problems with specific components or parts that could impact the entire facility. Automating the costly task of monitoring these lengthy and often redundant logs becomes especially important in guaranteeing the optimal performance of all nodes.

The logs contain a wealth of information concerning the system's status, encompassing error messages, anomalies, and other factors that could affect the system or its associated components. By exploiting language embedding and anomaly detection techniques on these logs, we can efficiently identify and address issues or errors at the earliest possible stage when they occur in logs. This proactive approach empowers us to pinpoint potential problems before they escalate, enabling prompt measures to be taken to resolve ongoing issues. Furthermore, it facilitates timely intervention and the implementation of preventive measures to mitigate potential problems from arising. Monitoring the watchdog nodes by textual analysis of their logs not only provides an automated means of comprehending the European XFEL accelerator system conditions but also enables early detection and resolution of issues that would otherwise only gain significance in the event of a specific node failure.

The structure of the paper is the following: First, we summarize related work in log anomaly detection. In the next section, we show the four main steps of our approach with important justifications and examples. Lastly, we show several examples and sketch potential future work in this field.
## 2 Related Work

A common approach to detecting anomalies in logs is to manually define rule-based systems. For example, Cinque et al. [1] and Yen et al. [2] have developed rule-based methods that scan logs for predefined patterns indicative of anomalies. However, these approaches rely heavily on expert knowledge to construct effective rules, which can be labor-intensive. To overcome this limitation, more automated techniques have emerged leveraging machine learning to discover anomalies.

With the increasing popularity of machine learning (ML) models, deep learning-based approaches have shown the potential to perform thorough log analysis in the presence of a large log corpus, often accompanied by laboriously made labels. Long short-term memory (LSTM) recurrent neural networks [3, 4, 5] turned out to be popular for log-anomaly detection due to their ability to handle sequential data. Recently, transformers [6] were deployed to detect anomalies in logs [7]; in [7], a BERT [8] model was used for log-anomaly detection. However, the reliance of these models on large training datasets and millions of parameters can limit their applicability in resource-constrained scenarios like ours. For a more comprehensive survey of ML log analysis, see [9].

Bertero et al. [10] propose an approach that treats logs as natural text and leverages Word2Vec word vector representations [11, 12] to perform automated word embedding. This technique maps words to a vector space, enabling the use of off-the-shelf classifiers for anomaly detection. A major drawback is that their approach still relies on manual labeling to train the classifier, which can be prohibitively expensive in our scenario. Additionally, they treat each log entry independently, ignoring the sequential nature of consecutive log message relationships. To mitigate the need for labeled data, other works like [13, 14] have explored unsupervised learning techniques. These methods apply text mining to logs and employ clustering approaches to identify anomalies without relying on manual labels. However, they still consider logs in isolation rather than leveraging contextual information across log sequences.

In this work, we propose an alternative approach to detect anomalies without any labels or extensive user intervention. Our method is designed to adapt to novel log messages while also capturing the sequential nature of log analysis, overcoming the limitations of prior techniques. Specifically, we faced challenges from the limited diversity of log entries, which does not provide sufficient training data for standard ML models. Inspired by [10], we employ word vector embeddings to represent log entries in a high-dimensional space, mitigating data scarcity. However, instead of relying on supervised classifiers, we take a sequential modeling approach by treating logs from each source as temporal event streams. Our key insight is to focus on modeling patterns of occurrences within these streams, rather than just individual log entries. To achieve this, we introduce an unsupervised technique based on Hidden Markov Models operating on the embedded log sequences. By learning sequential regularities, anomalies can be detected from deviations in context rather than content. This probabilistic approach requires estimating only a minimal number of parameters, enabling robust detection even with limited training data.

## 3 Method

In this section, we explain our proposed approach for scoring individual log entries to detect anomalies.
The approach involves four main steps. First, we perform pre-processing on the raw log entries to replace redundant patterns and minimize the effect of unique token sparsity. Pre-processing transforms the text into consistent tokenized forms. Next, we generate embeddings for each log entry using Word2Vec: we calculate the mean of the word vectors for all terms in the entry. This provides a dense numeric representation capturing the contextual meaning of individual words in the log entry. Third, we fit an HMM on sequences of these log entry embeddings from past observed logs. The HMM learns a probability distribution over likely sequences of log entries. Finally, we score new log entries by computing their probability under the trained HMM. Low probability entries deviating from the learned sequential patterns are identified as anomalies. The key advantage of our approach is that it relies solely on sequence modeling of log embeddings, without needing content analysis or keyword matching rules.

### Preprocessing and Tokenization

In this section, we detail the preprocessing steps applied to the raw log text before analysis. First, we separated the log entries by identifying timestamp delimiters and newline characters in the raw messages. This extracted the individual log entries. Next, we tokenized the log entries using the NLTK tokenizer [15], splitting them into individual tokens. The following transformations are then applied to each token:

1. Special characters are removed, except for numeric, alphabetic, and forward slash (/) characters.
2. Tokens potentially containing server or device names are replaced with placeholders, including those starting with xfel or ending in svr or server.
3. Numeric tokens are replaced with placeholders like $nz for non-zero numbers and $zero for zeros.
4. The entire log entry is converted to lowercase.
5. English stop words [15] are removed.

Preprocessing significantly reduced sparseness in the log entries by converting them into consistent tokenized forms. The key steps of entry extraction, tokenization, entity masking, and stop word removal help prevent overfitting to minor textual variations. This enables more robust sequence modeling in later stages.

Figure 1: To demonstrate the anomaly detection capabilities of our HMM model, we examine three cases on a synthetic log sequence containing observable events \(o_{1}\), \(o_{2}\), and an anomalous event \(o_{a}\). The HMM parameters were estimated using the sequence excluding the last entry (the minimum required to avoid overfitting). First (left), a sequence with a repetitive \(o_{1}\), \(o_{2}\) pattern where the HMM likelihood score \(s\) fluctuates around very low values, as expected. Second (center), swapping \(o_{2}\) and \(o_{1}\) in the last position disrupts the pattern, increasing the score \(s\) noticeably. Finally, inserting the anomalous \(o_{a}\) event at the end causes a substantial \(s\) spike, clearly detecting the improbable observation (right). This shows that the model detects anomalies from both unlikely or novel log messages and unexpected sequencing of normal events. Small disruptions in patterns or particularly improbable observations increase the model's likelihood scores.
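As a concrete illustration, below is a minimal sketch of these preprocessing steps in Python. The exact regular expression, the placeholder spellings, and the toy input are assumptions based on the description above, not the production code of the EuXFEL nodes.

```python
# Minimal sketch of preprocessing steps 1-5 described above.
# The regex, placeholder spellings and the toy log entry are assumptions.
import re
import nltk
from nltk.corpus import stopwords

nltk.download("punkt", quiet=True)      # tokenizer models (one-time download)
nltk.download("stopwords", quiet=True)  # English stop word list

STOP = set(stopwords.words("english"))

def preprocess(entry: str) -> list:
    tokens = []
    for tok in nltk.word_tokenize(entry.lower()):       # step 4: lowercase
        tok = re.sub(r"[^0-9a-z/]", "", tok)            # step 1: special chars
        if not tok or tok in STOP:                      # step 5: stop words
            continue
        if tok.startswith("xfel") or tok.endswith(("svr", "server")):
            tok = "$server"                             # step 2: mask names
        elif tok.isdigit():                             # step 3: mask numbers
            tok = "$zero" if int(tok) == 0 else "$nz"
        tokens.append(tok)
    return tokens

print(preprocess("xfelcpudaq12: disk space 0 MB left on /data"))
# -> ['$server', 'disk', 'space', '$zero', 'mb', 'left', '/data']
```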
### Embedding

In our approach, we use Word2Vec [11, 12] to represent log entries numerically in an \(N\)-dimensional space. Word2Vec is based on the idea that words appearing in similar contexts likely have similar meanings. It trains a shallow neural network to reconstruct word contexts, learning embeddings that capture semantic relationships from the surrounding words. We employ the continuous bag-of-words (CBOW) scheme introduced in [12] to train Word2Vec. CBOW uses the context to predict a target word omitted from the input; alternatively, skip-gram training can be used, which does the reverse. The linear Word2Vec mapping learns vectors where similarity in embedding space correlates with semantic similarity. For log analysis, Word2Vec can learn relationships between terms that often co-occur, capturing the context.

An important capability is that arithmetic operations can be performed on the embedded vectors. For example, adding the embeddings for disk and space yields vectors close to related terms like available and lack. Furthermore, combining linux and mac embeddings produces vectors near other operating system terms like windows and os, see Fig. 2. The additive property is important for representing multi-word log entries by taking the mean of the token embeddings. While more complex pooling techniques exist [16, 17], mean pooling proved sufficient for our needs.

### Anomaly Detection with HMM

We borrow the notation from [19]. Consider a set \(\{q_{1},\ldots,q_{N}\}\) of hidden states, and a sequence of observations \((o_{1},\ldots,o_{T})\), each one drawn from a vocabulary \(V\). We make two assumptions: first, that a state \(q_{i}\) depends only on the previous state \(q_{i-1}\), i.e., \(p(q_{i}\,|\,q_{1},\ldots,q_{i-1})=p(q_{i}\,|\,q_{i-1})\) (first-order Markov assumption); second, that the probability of \(o_{i}\) depends only on the state \(q_{i}\) that produced the observation, and not on any other states or observations, i.e., \(p(o_{i}\,|\,q_{1},\ldots,q_{i},\ldots,q_{T},o_{1},\ldots,o_{i},\ldots,o_{T})=p(o_{i}\,|\,q_{i})\). The above-stated assumptions can be represented via hidden Markov models (HMM). In our model, the observations are vector representations of log entries, obtained through preprocessing, tokenization, and embedding into an \(N\)-dimensional space. The hidden states represent the unknown underlying state of the system generating the logs. Given a sequence of observed log vectors \((o_{1},\ldots,o_{i-1})\), our goal is to estimate the probability of a new vector \(o_{i}\) and compare how probable its occurrence is considering the previously observed vectors \((o_{1},\ldots,o_{i-1})\):

\[s_{i}=\log\frac{p_{\theta}(o_{1},\ldots,o_{i-1})}{p_{\theta}(o_{1},\ldots,o_{i})}=\log p_{\theta}(o_{1},\ldots,o_{i-1})-\log p_{\theta}(o_{1},\ldots,o_{i}). \tag{1}\]

The score \(s_{i}\) quantifies the anomaly level of the new entry \(o_{i}\) based on the HMM parameters \(\theta\). The parameters \(\theta\) can be estimated from the inputs before \(o_{i}\), i.e., \((o_{1},\ldots,o_{i-1})\), or from a sub-sequence (e.g., a sliding window). We discuss estimation in the following section.

A key requirement in log anomaly detection is handling novel entries [5]. Although our method cannot fully generalize to completely new logs, it focuses more on sequence modeling than individual semantics. This enables detecting anomalies based on contextual irregularities and variations rather than content. We observed that anomalies manifest more as unusual sequences than specific terms. By scoring based on sequence likelihood rather than keyword rules, even new log messages can be assigned anomaly scores using their contextual deviation.
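The following is a minimal end-to-end sketch of this pipeline using Gensim and hmmlearn (the libraries named in the Implementation section below); the toy entries, the 16-dimensional embedding, and the two-state Gaussian HMM are illustrative assumptions.

```python
# Sketch: embed log entries as mean token vectors (CBOW Word2Vec), fit a
# Gaussian HMM on the sequence, and score the newest entry with Eq. (1).
# Toy entries, embedding size and state count are illustrative assumptions.
import numpy as np
from gensim.models import Word2Vec
from hmmlearn.hmm import GaussianHMM

raw_entries = ["server heartbeat ok", "disk space $nz mb left",
               "server heartbeat ok", "disk space $nz mb left"] * 4
tokenized = [e.split() for e in raw_entries]   # stand-in for the preprocessing

w2v = Word2Vec(sentences=tokenized, vector_size=16, window=5,
               min_count=1, sg=0)              # sg=0 selects CBOW training

def embed(tokens):
    """Mean-pool token vectors into one entry vector (zeros if none known)."""
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

O = np.stack([embed(t) for t in tokenized])    # observation sequence o_1..o_T

hmm = GaussianHMM(n_components=2, covariance_type="diag", n_iter=50)
hmm.fit(O[:-1])                                # estimate theta on the history

# s_i = log p(o_1..o_{i-1}) - log p(o_1..o_i); higher means more anomalous
s = hmm.score(O[:-1]) - hmm.score(O)
print(f"anomaly score of newest entry: {s:.3f}")
```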
This differentiates our technique from [3, 4, 5, 7, 10], which rely on supervised classification. Instead, we take an unsupervised sequential approach to assess entries based on context rather than predefined labels. However, false alarms may still occur if natural fluctuations also deviate from learned patterns, as we show in Fig. 8.

This strategy demonstrated satisfactory efficiency given the hardware, as the Baum-Welch algorithm for HMM training scales linearly with sequence length. However, some stations had tens of thousands of messages, potentially causing unreasonable growth in computational time. We explored two non-overlapping strategies to mitigate this (a minimal sketch of both follows below):

1. Using a sliding window subset of the sequence, as shown in Fig. 1. Since the HMM has few parameters, it remains stable even on shorter training sequences.
2. Initializing with parameters from the previous iteration, rather than full retraining. This avoids deviating far from a previous parameter estimate.

The sliding window approach dynamically focuses on recent local context, while parameter reuse leverages past parameters as context and can increase stability. Both maintain reasonable training times but have trade-offs we intend to evaluate further in future work. Full sequence training sufficed initially, but scaling necessitates more efficient training strategies. The sliding window and reuse techniques present two promising directions for larger-scale logs.
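A minimal sketch of the two strategies with hmmlearn is given below; the window length, state count, and iteration budgets are illustrative assumptions.

```python
# Sketch of the two training strategies; window length, state count and
# iteration budgets are illustrative assumptions.
import numpy as np
from hmmlearn.hmm import GaussianHMM

WINDOW = 500  # sliding-window length (assumption)

def fit_sliding_window(O):
    """Strategy 1: estimate parameters on the most recent WINDOW entries only."""
    hmm = GaussianHMM(n_components=2, covariance_type="diag", n_iter=50)
    hmm.fit(O[-WINDOW:])
    return hmm

def refit_warm_start(prev, O):
    """Strategy 2: continue Baum-Welch from the previous parameter estimate."""
    hmm = GaussianHMM(n_components=prev.n_components, covariance_type="diag",
                      n_iter=5, init_params="")  # "" keeps the given parameters
    hmm.startprob_ = prev.startprob_
    hmm.transmat_ = prev.transmat_
    hmm.means_ = prev.means_
    # the covars_ getter returns full matrices; the "diag" setter wants diagonals
    hmm.covars_ = np.array([np.diag(c) for c in prev.covars_])
    hmm.fit(O)
    return hmm

O = np.random.default_rng(0).normal(size=(1200, 16))  # stand-in embeddings
hmm_prev = fit_sliding_window(O[:1000])
hmm_new = refit_warm_start(hmm_prev, O[-WINDOW:])
```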
Unlike the minimal example in Fig. 1 where the anomaly was excluded from training, Fig. 3 shows the performance when the full log sequence including the anomaly is used for HMM estimation. Despite the anomalous entries being present during estimation, the model still detects the disruption in the learned sequence patterns, as evidenced by the spike in anomaly scores. This highlights an important capability of our approach: the ability to identify anomalies even when trained on logs containing anomalies. By relying on sequential deviations rather than content matching, the presence of irregular entries in the training logs does not prevent detecting more such deviations at test time. This enables post-mortem analysis scenarios where clean training data is not available. Even if the training logs contain some anomalies, new anomalies can still be flagged based on their contextual irregularity. The model detects breaks in sequential patterns irrespective of whether anomalous logs were seen during training.

### An Example - Sequence Anomaly Detection

To demonstrate how an HMM can detect different types of anomalies, consider a simple HMM with two hidden states \(q_{1},q_{2}\). We examine two scenarios with different observable outputs:

* The observable vocabulary contains two common outputs \(o_{1},o_{2}\) along with one anomalous event \(o_{a}\).
* The observable vocabulary only contains the standard outputs \(o_{1},o_{2}\). However, one of these common outputs appears in an unexpected sequence position.

In the first scenario, the anomalous output \(o_{a}\) will decrease the likelihood and thus increase the anomaly score when it appears, allowing it to be flagged as anomalous. In the second scenario, although the observed output is familiar, its occurrence in an unlikely position based on the learned sequence dynamics will also increase the anomaly score. To demonstrate, we created a minimal 8-event example where disrupting the pattern impacts the scores, see Fig. 1. Critically, the HMM parameters are estimated excluding the last (anomalous) event, and this approach succeeds even if the anomalous sequence was included in parameter estimation, as Fig. 3 shows. This example shows that the model detects anomalies by identifying disruptions in expected patterns, even with limited and corrupted input data. We show that the HMM can detect anomalies either due to unlikely or novel log messages themselves, or due to standard messages appearing in surprising positions that break the expected sequencing patterns. The HMM assigns lower likelihood scores when observations diverge from the learned distributions. The examples in Fig. 1 and Fig. 3 also demonstrate that finding the parameters of an HMM requires only a few events, and the inputs can even be corrupted with noise.

### Analysis - Word Embedding

In this section, we perform a more detailed analysis of the embeddings produced by our corpus to demonstrate robustness even though the log entries are not natural language. Processing logs with Word2Vec embedding presents an interesting language task because the corpus contains only a few words (475 unique tokens), and after pre-processing and tokenization there are fewer than 1000 unique log messages, see Fig. 4. The absence of diversity in messages and words also justifies why more parameter-rich approaches are infeasible. Fig. 2 shows an embedding with some points highlighted to demonstrate that semantically similar words are embedded closely. Furthermore, in Fig. 4, we show the embedding of averaged entry vectors to further underline the challenge posed by the lack of diversity of log entries, which form only a few packed clusters.

## 4 Implementation

The code was implemented in Python 3.9. For embedding log messages with Word2Vec, the Gensim library was used [20]. For modeling the sequences, the hmmlearn package was used [21].

## 5 Results

We selected four instances to show. Input logs and their computed scores are shown in Figures 5, 6, 7 and 8. In the first two examples (Fig. 5 and Fig. 6), errors produce a clear escalation in scores in the bottom left chart. This confirms the method's ability to identify potential issues. Fig. 7 showcases more nuance and complexity. Score spikes at rows 9, 11, and 14 reveal multiple potential anomalies according to the method. However, the spike at 14 stands out as most prominent when viewed in the context of the previous 50 entries in the bottom right chart. This example illustrates that while the method can flag multiple possibilities, further verification may be needed to determine the most significant anomaly. Finally, Fig. 8 highlights some limitations and challenges. The high baseline of scores makes it harder to discern anomalies from typical background noise for this log. Also, frequent errors, even if minor, generate many potential false positives. Despite these difficulties, a slight increase in the score at row 17 still suggests the method can detect likely anomalies if we take a closer look at the results. When errors produce clear spikes as in the first two examples in Fig. 5 and Fig. 6, the method reliably flags issues. With more nuanced cases as in Fig. 7, multiple possibilities may need further validation. For completeness, we also pointed out an example where the proposed approach does not work as reliably, shown in Fig. 8, but even with noisy baseline data, salient anomalies can emerge.

## 6 Future Work

While this work demonstrates preliminary anomaly detection capabilities, there are several possibilities for improvement in the future.
More advanced techniques like [22] could provide greater accuracy in identifying anomalies. Furthermore, increasing the log verbosity may generate more data that could allow us to deploy more parameter-rich anomaly detection algorithms, as mentioned in the related work. Incorporating additional node data beyond just log messages holds promise for improving detection performance. Characteristics like CPU, memory, network, and disk usage contain valuable information, but effectively combining such asynchronous numerical time series data with log messages poses modeling challenges. Developing algorithms to jointly analyze these diverse data sources represents the next milestone. Furthermore, cybersecurity factors merit consideration given rising threats. Our knowledge of infrastructure specifics alongside network traffic flow logs could enable modeling and identifying security-related anomalies.

## 7 Conclusion

This work presents a novel unsupervised approach for detecting anomalies in log data. By representing log entries with Word2Vec embeddings and modeling sequences with HMMs, the method identifies anomalies by calculating the likelihood of new log messages given the history. The results on real logs from the European XFEL demonstrate the capability to flag potential issues via salient score spikes corresponding to errors or disruptions of typical patterns. The approach detects anomalies without requiring labeled data or extensive training and relies on modeling the behavior of the node log via an HMM. However, challenges remain in handling noise and minimizing false positives, as evidenced by certain noisy logs.

Figure 4: Distribution of unique log entries (summed token embeddings) embedded in 16 dimensions and then projected to 2 dimensions with UMAP [18]. The absence of a uniform distribution of the embedded log messages shows that there are only very few clusters of messages.

Figure 3: To demonstrate the anomaly detection capabilities of our HMM model and its capacity to detect anomalies in log entries that are also part of the parameter estimation but are not frequent, we show anomaly detection results for sequences identical to those in Fig. 1, but using the entire sequence for training. The results show that the model detects anomalies from both unlikely and novel log messages, even when the anomalous data are part of the parameter estimation of the HMM.

Code is available at [https://github.com/sulcantonin/LOG_ICALEPCS23](https://github.com/sulcantonin/LOG_ICALEPCS23)

We acknowledge DESY (Hamburg, Germany), a member of the Helmholtz Association HGF, for its support in providing resources and infrastructure. Furthermore, we would like to thank all colleagues of the MCS and MSK groups as well as the European XFEL team and management for their contributions to this work and help in preparing this paper.
2305.15208
Generalized Bayesian Inference for Scientific Simulators via Amortized Cost Estimation
Simulation-based inference (SBI) enables amortized Bayesian inference for simulators with implicit likelihoods. But when we are primarily interested in the quality of predictive simulations, or when the model cannot exactly reproduce the observed data (i.e., is misspecified), targeting the Bayesian posterior may be overly restrictive. Generalized Bayesian Inference (GBI) aims to robustify inference for (misspecified) simulator models, replacing the likelihood function with a cost function that evaluates the goodness of parameters relative to data. However, GBI methods generally require running multiple simulations to estimate the cost function at each parameter value during inference, making the approach computationally infeasible for even moderately complex simulators. Here, we propose amortized cost estimation (ACE) for GBI to address this challenge: We train a neural network to approximate the cost function, which we define as the expected distance between simulations produced by a parameter and observed data. The trained network can then be used with MCMC to infer GBI posteriors for any observation without running additional simulations. We show that, on several benchmark tasks, ACE accurately predicts cost and provides predictive simulations that are closer to synthetic observations than other SBI methods, especially for misspecified simulators. Finally, we apply ACE to infer parameters of the Hodgkin-Huxley model given real intracellular recordings from the Allen Cell Types Database. ACE identifies better data-matching parameters while being an order of magnitude more simulation-efficient than a standard SBI method. In summary, ACE combines the strengths of SBI methods and GBI to perform robust and simulation-amortized inference for scientific simulators.
Richard Gao, Michael Deistler, Jakob H. Macke
2023-05-24T14:45:03Z
http://arxiv.org/abs/2305.15208v2
# Generalized Bayesian Inference for Scientific Simulators via Amortized Cost Estimation

###### Abstract

Simulation-based inference (SBI) enables amortized Bayesian inference for simulators with implicit likelihoods. But when we are primarily interested in the quality of predictive simulations, or when the model cannot exactly reproduce the observed data (i.e., is misspecified), targeting the Bayesian posterior may be overly restrictive. Generalized Bayesian Inference (GBI) aims to robustify inference for (misspecified) simulator models, replacing the likelihood function with a cost function that evaluates the goodness of parameters relative to data. However, GBI methods generally require running multiple simulations to estimate the cost function at each parameter value during inference, making the approach computationally infeasible for even moderately complex simulators. Here, we propose amortized cost estimation (ACE) for GBI to address this challenge: We train a neural network to approximate the cost function, which we define as the expected distance between simulations produced by a parameter and observed data. The trained network can then be used with MCMC to infer GBI posteriors for any observation without running additional simulations. We show that, on several benchmark tasks, ACE accurately predicts cost and provides predictive simulations that are closer to synthetic observations than other SBI methods, especially for misspecified simulators. Finally, we apply ACE to infer parameters of the Hodgkin-Huxley model given real intracellular recordings from the Allen Cell Types Database. ACE identifies better data-matching parameters while being an order of magnitude more simulation-efficient than a standard SBI method. In summary, ACE combines the strengths of SBI methods and GBI to perform robust and simulation-amortized inference for scientific simulators.

## 1 Introduction

Mechanistic models expressed as computer simulators are used in a wide range of scientific domains, from astronomy and geophysics to neurobiology. The parameters of the simulator, \(\mathbf{\theta}\), encode mechanisms of interest, and simulating different parameter values produces different outputs, i.e., \(\text{sim}(\mathbf{\theta}_{i})\rightarrow\mathbf{x}_{i}\), where each model-simulation \(\mathbf{x}_{i}\) can be compared to experimentally observed data, \(\mathbf{x}_{o}\). Using such simulators, we can quantitatively reason about the contribution of mechanisms behind experimental measurements. But to do so, a key objective is often to find all those parameter values that can produce simulations consistent with observed data. One fruitful approach towards this goal is simulation-based inference (SBI) (Cranmer et al., 2020), which makes it possible to perform Bayesian inference on such models by interpreting simulator outputs as samples from an implicit likelihood (Diggle and Gratton, 1984), \(\mathbf{x}\sim p(\mathbf{x}|\boldsymbol{\theta})\). Standard Bayesian inference targets the parameter posterior distribution given observed data, i.e., \(p(\boldsymbol{\theta}|\mathbf{x}_{o})=\frac{p(\mathbf{x}_{o}|\boldsymbol{\theta})p(\boldsymbol{\theta})}{p(\mathbf{x}_{o})}\), where \(p(\boldsymbol{\theta})\) captures prior knowledge and constraints over model parameters, and the likelihood function \(p(\mathbf{x}_{o}|\boldsymbol{\theta})\) is evaluated as a function of \(\boldsymbol{\theta}\) for a fixed \(\mathbf{x}_{o}\).
SBI methods can differ in whether they aim to approximate the likelihood (Wood, 2010; Papamakarios et al., 2019; Hermans et al., 2020; Thomas et al., 2022) or the posterior directly (Papamakarios and Murray, 2016; Greenberg et al., 2019; Csillery et al., 2010), and can be _amortized_, i.e., not require new simulations and retraining for new data (Goncalves et al., 2020; Radev et al., 2020). In the end, each method provides samples from the posterior, which are all, in theory, capable of producing simulations that are _identical_ to the observation we condition on. Furthermore, by definition, the posterior probability of drawing a sample scales as the product of its prior probability and, critically, the likelihood that this sample can produce a simulation that is _exactly equal_ to the observation.

However, targeting the exact posterior may be overly restrictive. In many inference scenarios, modelers are primarily interested in obtaining a diverse collection of parameter values that can explain the observed data. This desire is also reflected in the common usage of posterior predictive checks, where seeing predictive simulations that resemble the data closely (in some specific aspects) is used to gauge the success of the inference process. In particular, it is often clear that the scientific model is only a coarse approximation to the data-generating process, and in some cases it even cannot generate data-matching simulations, i.e., is misspecified (Walker, 2013). For example, in the life-sciences, it is not uncommon to use idealized, theoretically motivated models with few parameters, and it would be unrealistic to expect that they _precisely_ capture observations of highly complex biological systems. In such cases, or in cases where the model is fully deterministic, it is nonsensical to use the probability of exactly reproducing the data. In contrast, it would still be useful to find parameter values that produce simulations which are 'close enough', or as close as possible to the data. Therefore, instead of sampling parameters according to _how often_ they produce simulations that match the data _exactly_, many use cases call for sampling parameters according to _how closely_ their corresponding simulations reproduce the observed data.

**Generalized Bayesian Inference (GBI)** (Bissiri et al., 2016) offers a principled way to do so by replacing the (log) likelihood function with a cost function that simply scores a parameter given an observation, such as the expected distance between \(\mathbf{x}_{o}\) and all possible simulations \(\mathbf{x}\) produced by \(\boldsymbol{\theta}_{i}\) (Fig. 1). Several recent works have leveraged this framework to perform inference for models with implicit or intractable likelihoods, especially to tackle model misspecification: Matsubara et al. (2021) use a Stein Discrepancy as the cost function (which requires the evaluation of an unnormalized likelihood and multiple i.i.d. data samples), and Chérief-Abdellatif and Alquier (2020) and Dellaporta et al. (2022) use simulator samples to estimate the maximum mean discrepancy and directly optimize over this cost function via stochastic gradient descent (which requires a differentiable simulator).
More broadly, cost functions such as scoring rule estimators have been used to generalize approximate Bayesian computation (ABC) (Schmon et al., 2020) and synthetic likelihood approaches (Wood, 2010; Pacchiardi and Dutta, 2021), where the Monte Carlo estimate requires (multiple) simulations from \(p(\mathbf{x}|\boldsymbol{\theta})\). Thus, existing GBI approaches for SBI either require many simulations to be run during MCMC sampling of the posterior (similar to classical ABC methods), or are limited to differentiable simulators. Moreover, performing inference for new observations requires re-running simulations, rendering such methods simulation-inefficient and expensive at inference-time, and ultimately impractical for scientific simulators with even moderate computational burden.

Figure 1: **Estimating cost from simulations.** Using the expected distance between simulated and target data as the cost function, GBI assigns high probability to parameter values that, on average, produce simulations that are close—but not necessarily equal—to the observation.

We here propose to perform GBI for scientific simulators with **amortized cost estimation (ACE)**, which inherits the flexibility of GBI but amortizes the overhead of simulations by training a neural network to predict the cost function for any parameter-observation pair. We first outline the GBI formalism in Sec. 2, then introduce ACE in Sec. 3. In Sec. 4, we show that ACE provides GBI posterior predictive simulations that are close to synthetic observations for a variety of benchmark tasks, especially when the simulator is misspecified. We showcase its real-world applicability in Sec. 5: using experimental data from the Allen Cell Types Database, ACE successfully infers parameters of the Hodgkin-Huxley single-neuron simulator with superior predictive performance and an order of magnitude higher simulation efficiency compared to neural posterior estimation (Goncalves et al., 2020). Finally, we discuss benefits and limitations of GBI and ACE, and related work (Sec. 6).

## 2 Background

To construct the GBI posterior, the likelihood, \(p(\mathbf{x}_{o}|\mathbf{\theta})\), is replaced by a 'generalized likelihood function', \(L(\mathbf{\theta};\mathbf{x}_{o})\), which does not need to be a probabilistic model of the data-generating process, as long as it can be evaluated for any pair of \(\mathbf{\theta}\) and \(\mathbf{x}_{o}\). Following the convention in Bissiri et al. (2016), we define \(L(\mathbf{\theta};\mathbf{x}_{o})\equiv e^{-\beta\ell(\mathbf{\theta};\mathbf{x}_{o})}\), where \(\ell(\mathbf{\theta};\mathbf{x}_{o})\) is a cost function that encodes the quality of \(\mathbf{\theta}\) relative to an observation \(\mathbf{x}_{o}\), and \(\beta\) is a scalar inverse temperature hyperparameter that controls how much the posterior weighs the cost relative to the prior. Thus, the GBI posterior can be written as

\[p(\mathbf{\theta}|\mathbf{x}_{o})\propto e^{-\beta\ell(\mathbf{\theta};\mathbf{x}_{o})}p(\mathbf{\theta}). \tag{1}\]

As noted previously (Bissiri et al., 2016), if we define \(\ell(\mathbf{\theta};\mathbf{x}_{o})\equiv-\log p(\mathbf{x}_{o}|\mathbf{\theta})\) (i.e., self-information) and \(\beta=1\), then we recover standard Bayesian inference.
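As a minimal numerical illustration of Eq. 1, the snippet below evaluates an unnormalized generalized log-posterior; the Gaussian prior and the MSE-style cost are toy placeholders, not the paper's benchmark tasks.

```python
# Sketch: unnormalized generalized log-posterior of Eq. (1),
#   log p(theta | x_o) = -beta * cost(theta; x_o) + log p(theta) + const.
# The Gaussian prior and the MSE-style cost are toy placeholders.
import numpy as np
from scipy import stats

def log_generalized_posterior(theta, x_o, cost_fn, beta, log_prior):
    return -beta * cost_fn(theta, x_o) + log_prior(theta)

log_prior = lambda th: stats.norm(0.0, 1.0).logpdf(th).sum()  # N(0, I) prior
cost_fn = lambda th, x_o: float(np.mean((th - x_o) ** 2))     # stand-in cost

print(log_generalized_posterior(np.zeros(2), np.ones(2), cost_fn,
                                beta=25.0, log_prior=log_prior))
```

Plugging in the negative log-likelihood as the cost with \(\beta=1\) recovers the standard log-posterior up to the evidence constant, mirroring the remark above.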
The advantage of GBI is that, instead of adhering strictly to the (implicit) likelihood, the user is allowed to choose arbitrary cost functions to rate the goodness of \(\mathbf{\theta}\) relative to an observation \(\mathbf{x}_{o}\), which is particularly useful when the simulator is misspecified. Previous works have referred to \(\ell(\mathbf{\theta};\mathbf{x}_{o})\) as a risk function (Jiang and Tanner, 2008), a loss function (Bissiri et al., 2016), or a (proper) scoring rule when it satisfies certain properties (Gneiting and Raftery, 2007; Pacchiardi and Dutta, 2021) (further discussed in Section 6.1). Here we adopt 'cost' to avoid overloading the terms 'loss' and 'score' in the deep learning context.

## 3 Amortized Cost Estimation for GBI

### Estimating the cost function with neural networks

In this work, we consider cost functions that can be written as an expectation over the likelihood:

\[\ell(\mathbf{\theta};\mathbf{x}_{o})\equiv\mathbb{E}_{p(\mathbf{x}|\mathbf{\theta})}[d(\mathbf{x},\mathbf{x}_{o})]=\int_{\mathbf{x}}d(\mathbf{x},\mathbf{x}_{o})p(\mathbf{x}|\mathbf{\theta})d\mathbf{x}. \tag{2}\]

Many popular cost functions and scoring rules can be written in this form, including the average mean-squared error (MSE) (Bissiri et al., 2016), the maximum mean discrepancy (MMD\(^{2}\)) (Gretton et al., 2012), and the energy score (ES) (Gneiting and Raftery, 2007) (details in Appendix A2). While \(\ell(\mathbf{\theta};\mathbf{x}_{o})\) can be estimated via Monte Carlo sampling, doing so in an SBI setting is simulation-inefficient and time-intensive, since the inference procedure must be repeated for every observation \(\mathbf{x}_{o}\), and simulations must be run in real-time during MCMC sampling of the posterior. Furthermore, this does not take advantage of the structure in parameter- or data-space around neighboring points that have been simulated. We propose to overcome these limitations by training a regression neural network (NN) to learn \(\ell(\mathbf{\theta};\mathbf{x}_{o})\). Our first insight is that cost functions of the form of Eq. 2 can be estimated from a dataset consisting of pairs of parameters and outputs, in particular from _a single_ simulation run per \(\mathbf{\theta}\) for MSE, and finitely many simulation runs for MMD\(^{2}\) and ES. Specifically, we leverage the well-known property that NN regression converges to the conditional expectation of the data labels given the data:

**Proposition 1**.: _Let \(p(\mathbf{\theta},\mathbf{x})\) be the joint distribution over parameters and data. Let \(\ell(\mathbf{\theta};\mathbf{x}_{o})\) be a cost function that can be written as \(\ell(\mathbf{\theta};\mathbf{x}_{o})=\mathbb{E}_{p(\mathbf{x}|\mathbf{\theta})}[d(\mathbf{x},\mathbf{x}_{o})]=\int_{\mathbf{x}}d(\mathbf{x},\mathbf{x}_{o})p(\mathbf{x}|\mathbf{\theta})d\mathbf{x}\) and let \(f_{\phi}(\cdot)\) be a function parameterized by \(\phi\). Then, the loss function \(\mathcal{L}=\mathbb{E}_{p(\mathbf{\theta},\mathbf{x})}[(f_{\phi}(\mathbf{\theta})-d(\mathbf{x},\mathbf{x}_{o}))^{2}]\) is minimized if and only if, for all \(\mathbf{\theta}\in\text{supp}(p(\mathbf{\theta}))\), \(f_{\phi}(\mathbf{\theta})=\mathbb{E}_{p(\mathbf{x}|\mathbf{\theta})}[d(\mathbf{x},\mathbf{x}_{o})]\)._

Proof in Appendix A.1.
Proposition 1 states that, if we compute the distances \(d(\mathbf{x},\mathbf{x}_{o})\) between a single observation \(\mathbf{x}_{o}\) and every \(\mathbf{x}\) in our dataset, then a neural network \(f_{\phi}(\cdot)\) trained to predict the distances given parameters \(\mathbf{\theta}\) will denoise the noisy distance labels \(d(\mathbf{x},\mathbf{x}_{o})\) and converge onto the desired cost \(f_{\phi}(\mathbf{\theta})\rightarrow\ell(\mathbf{\theta};\mathbf{x}_{o})=\mathbb{E}_{p(\mathbf{x}|\mathbf{\theta})}[d(\mathbf{x},\mathbf{x}_{o})]\), approximating the cost of any \(\mathbf{\theta}\) relative to \(\mathbf{x}_{o}\).

### Amortizing over observations

As outlined above, a regression NN will converge onto the cost function \(\ell(\mathbf{\theta};\mathbf{x}_{o})\) for a particular observation \(\mathbf{x}_{o}\). However, naively applying this procedure would require retraining of the network for any new observation \(\mathbf{x}_{o}\), which prevents application of this method in time-critical or high-throughput scenarios. We propose to _amortize_ cost estimation over a target distribution \(p(\mathbf{x}_{t})\):

**Proposition 2**.: _Let \(p(\mathbf{\theta},\mathbf{x})\) be the joint distribution over parameters and data and let \(p(\mathbf{x}_{t})\) be a distribution of target samples. Let \(\ell(\mathbf{\theta};\mathbf{x}_{o})\) be a cost function and \(f_{\phi}(\cdot)\) a parameterized function as in Proposition 1. Then, the loss function \(\mathcal{L}=\mathbb{E}_{p(\mathbf{\theta},\mathbf{x})p(\mathbf{x}_{t})}[(f_{\phi}(\mathbf{\theta},\mathbf{x}_{t})-d(\mathbf{x},\mathbf{x}_{t}))^{2}]\) is minimized if and only if, for all \(\mathbf{\theta}\in\text{supp}(p(\mathbf{\theta}))\) and all \(\mathbf{x}_{t}\in\text{supp}(p(\mathbf{x}_{t}))\), we have \(f_{\phi}(\mathbf{\theta},\mathbf{x}_{t})=\mathbb{E}_{p(\mathbf{x}|\mathbf{\theta})}[d(\mathbf{x},\mathbf{x}_{t})]\)._

Proof in Appendix A.2.

Proposition 2 states that a NN which receives as input a parameter \(\mathbf{\theta}\) and an independently sampled target datapoint \(\mathbf{x}_{t}\) will converge to \(\ell(\mathbf{\theta};\mathbf{x}_{t})\) for all \(\mathbf{x}_{t}\) on the support of the target distribution (Fig. 2a,b), enabling estimation of the cost function for any pair of \((\mathbf{\theta},\mathbf{x}_{t})\). Naturally, we use the already simulated \(\mathbf{x}\sim p(\mathbf{x})\) as target data during training, and therefore do not require further simulations. In order to have good accuracy on potentially misspecified observations, however, such datapoints should be within the support of the target distribution. Thus, in practice, we augment this target dataset with noisy simulations to broaden the support of \(p(\mathbf{x}_{t})\). Furthermore, if the set of observations (i.e., real data) is known upfront, they can also be appended to the target dataset during training. Lastly, to keep training efficient and avoid quadratic scaling in the number of simulations, we randomly subsample a small number of \(\mathbf{x}_{t}\) per \(\mathbf{\theta}\) in each training epoch (2 in our experiments), thus ensuring linear scaling as a function of simulation budget. Fig. 2a,b summarizes dataset construction and network training for ACE (details in Appendix A.4.1).
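As a rough PyTorch sketch of this amortized regression objective (the network architecture, the per-epoch subsampling of two targets, and the MSE distance are illustrative assumptions, not the paper's exact configuration):

```python
# Sketch of ACE training: f_phi(theta, x_t) regresses noisy distance labels
# d(x, x_t); by Propositions 1-2 it converges to E_{p(x|theta)}[d(x, x_t)].
# Architecture, epochs, and the MSE distance are illustrative assumptions.
import torch
import torch.nn as nn

def make_net(dim_theta, dim_x, width=128):
    return nn.Sequential(nn.Linear(dim_theta + dim_x, width), nn.ReLU(),
                         nn.Linear(width, width), nn.ReLU(),
                         nn.Linear(width, 1))

def train_ace(theta, x, x_target, net, epochs=200, n_target=2, lr=1e-3):
    # theta: (N, d_theta); x: (N, d_x) simulator outputs; x_target: (T, d_x)
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(epochs):
        # subsample n_target targets per theta to keep scaling linear in N
        idx = torch.randint(len(x_target), (len(theta), n_target))
        loss = 0.0
        for j in range(n_target):
            x_t = x_target[idx[:, j]]
            d = ((x - x_t) ** 2).mean(dim=1, keepdim=True)  # noisy MSE label
            loss = loss + ((net(torch.cat([theta, x_t], dim=1)) - d) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return net

# e.g.: net = train_ace(torch.randn(512, 2), torch.randn(512, 3),
#                       torch.randn(600, 3), make_net(2, 3))
```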
### Sampling from the generalized posterior

Given a trained cost estimation network \(f_{\phi}(\cdot,\cdot)\), an observed datapoint \(\mathbf{x}_{o}\), and a user-selected inverse temperature \(\beta\), the generalized posterior probability (Eq. 1) can be computed up to proportionality for any \(\mathbf{\theta}\): \(p(\mathbf{\theta}|\mathbf{x}_{o})\propto\exp(-\beta\cdot f_{\phi}(\mathbf{\theta},\mathbf{x}_{o}))\,p(\mathbf{\theta})\), and thus this term can be sampled with MCMC (Fig. 2c). The entire algorithm is summarized in Algorithm 1.

Figure 2: **Schematic of dataset construction, network training, and inference.** **(a-b)** The neural network is trained to predict the distance between pairs of \(\mathbf{x}\) (red) and \(\mathbf{x}_{t}\) (green), as a noisy sample of the cost function (i.e., expected distance) evaluated on \(\mathbf{\theta}\) (grey) and \(\mathbf{x}_{t}\). **(c)** At inference time, the trained ACE network predicts the cost for any parameter \(\mathbf{\theta}\) given observation \(\mathbf{x}_{o}\) (top row), which is used to evaluate the GBI posterior under different \(\beta\) (bottom row, darker for larger \(\beta\)) for MCMC sampling without running additional simulations. The distance is well-defined and can be approximated even when the simulator is misspecified (dashed lines).

```
Inputs:  prior p(θ), simulator with implicit likelihood p(x|θ), number of
         simulations N, feedforward NN f_φ with parameters φ, NN learning
         rate η, distance function d(·,·), noise level σ, number of
         noise-augmented samples S, inverse temperature β, number of target
         datapoints per θ N_target, K observations x_o^(1,...,K).
Outputs: M samples from generalized posteriors given K observations.

Generate dataset:
    sample prior and simulate:  θ, x ← {θ_i ~ p(θ), x_i ~ p(x|θ_i)}_{i=1...N}
    add noise and concatenate:  x_target = [x, x_{1...S} + ε, x_o],
                                ε ~ N(0, σ²·I)

Training:
    while not converged do
        for (θ, x) in batch do
            x_t^used ← sample N_target datapoints from x_target
            for x_t in x_t^used do
                L ← L + (f_φ(θ, x_t) − d(x, x_t))²
        φ ← φ − η · Adam(∇_φ L)    // and reset L to zero

Sampling:
    for k in [1, ..., K] do
        draw M samples, with MCMC, from:  exp(−β · f_φ(θ, x_o^(k))) p(θ)
```

**Algorithm 1** Generalized Bayesian Inference with Amortized Cost Estimation (ACE)
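For the sampling step, any MCMC scheme over the cheap amortized cost suffices. The sketch below uses a plain random-walk Metropolis sampler; the step size, the flat-prior example, and the sampler choice itself are illustrative assumptions rather than the paper's actual MCMC implementation.

```python
# Sketch: random-walk Metropolis targeting exp(-beta * f_phi(theta, x_o)) p(theta).
# Step size and the plain Metropolis sampler are illustrative assumptions.
import torch

@torch.no_grad()
def sample_gbi(net, x_o, log_prior, beta, n_samples, dim, step=0.1):
    theta = torch.zeros(dim)
    def log_post(th):
        cost = net(torch.cat([th, x_o]).unsqueeze(0)).item()
        return -beta * cost + log_prior(th)
    cur, samples = log_post(theta), []
    for _ in range(n_samples):
        prop = theta + step * torch.randn(dim)
        new = log_post(prop)
        if torch.rand(()).log().item() < new - cur:  # accept/reject
            theta, cur = prop, new
        samples.append(theta.clone())
    return torch.stack(samples)

# e.g. with the net from the training sketch and a flat prior on its support:
# samples = sample_gbi(net, torch.randn(3), lambda th: 0.0, beta=25.0,
#                      n_samples=5000, dim=2)
```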
## 4 Benchmark experiments

### Experiment setup

**Tasks** We first evaluated ACE on four benchmark tasks (modified from Lueckmann et al. (2021)) with a variety of parameter- and data-dimensionality, as well as choice of distance measure: (1) **Uniform 1D**: 1D \(\mathbf{\theta}\) and \(\mathbf{x}\), the simulator implements an even polynomial with uniform noise likelihood, uniform prior (Fig. 2c); (2) **2 Moons**: 2D \(\mathbf{\theta}\) and \(\mathbf{x}\), simulator produces a half-circle with constant mean radius and radially uniform noise of constant width, translated as a function of \(\mathbf{\theta}\), uniform prior; (3) **Linear Gaussian**: 10D \(\mathbf{\theta}\) and \(\mathbf{x}\), Gaussian model with mean \(\mathbf{\theta}\) and fixed covariance, Gaussian prior; (4) **Gaussian Mixture**: 2D \(\mathbf{\theta}\) and \(\mathbf{x}\), simulator returns five i.i.d. samples from a mixture of two Gaussians, both with mean \(\mathbf{\theta}\) and fixed covariances, one with broader covariance than the other, uniform prior. For the first three tasks, we use the mean-squared error between simulation and observation as the distance function. For the Gaussian Mixture task, we use maximum mean discrepancy (MMD\({}^{2}\)) to measure the statistical distance between two sets of five i.i.d. samples. Importantly, for each of the four tasks, we can compute the integral in Eq. 2 either analytically or accurately capture it with quadrature over \(\mathbf{x}\). Hence, we obtain the true distance \(\ell(\mathbf{\theta};\mathbf{x}_{o})\) and use that to draw, for each value of \(\beta\) and \(\mathbf{x}_{o}\), 5000 samples from the 'ground-truth' GBI posterior (GT-GBI, black in Fig. 3). See Appendix A.4.2 for more detailed descriptions of tasks and distance functions.

**Training data** For each task, we simulate 10,000 pairs of \((\mathbf{\theta},\mathbf{x})\) and construct the target dataset as in Fig. 2a, with 100 additional noise-augmented targets and 20 synthetic observations--10 well-specified and 10 misspecified--for a total of 10,120 \(\mathbf{x}_{t}\) datapoints. Well-specified observations are additional prior predictive samples, while misspecified observations are created by moving prior predictive samples outside the boundaries defined by the minimum and maximum of 100,000 prior predictive simulations (e.g., by successively adding Gaussian noise).

**Test data** To evaluate inference performance, we use ACE to sample approximate GBI posteriors conditioned on 40 different synthetic observations, 20 of which were included in the target dataset \(\mathbf{x}_{t}\), and 10 additional well-specified and misspecified observations which were not included in the target dataset. We emphasize that including observations in the target data is not a case of test data leakage, but represents a real use case where some experimental data on which one wants to perform inference are already available, while the network should also be amortized for unseen observations measured after training. Nevertheless, we report in Fig. 3 results for 'unseen' observations, i.e., not in the target dataset. Results are almost identical for those that were in the target dataset (Appendix A1). We drew 5000 posterior samples per observation, for 3 different \(\beta\) values for each task.

**Metrics** We are primarily interested in two aspects of performance: approximate posterior predictive distance and cost estimation accuracy. First, as motivated above, we want to find parameter configurations which produce simulations that are as close as possible to the observation, as measured by the task-specific distance function. Therefore, we simulate using each of the 5000 ACE GBI posterior samples, and compute the average distance between predictive simulations and the observation. Mean and standard deviation are shown for well-specified and misspecified observations separately below (Fig. 3, 1st and 2nd columns).
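Before turning to the second metric, here is a sketch of this predictive-distance computation, assuming `simulator` and `distance` callables (illustrative names):

```python
# Sketch of the posterior predictive distance metric: simulate once per
# posterior sample and average the task-specific distance to the
# observation, reporting mean and standard deviation.
def posterior_predictive_distance(samples, x_o, simulator, distance):
    dists = [distance(simulator(theta), x_o) for theta in samples]
    mean = sum(dists) / len(dists)
    std = (sum((d - mean) ** 2 for d in dists) / len(dists)) ** 0.5
    return mean, std
```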
Second, we want to confirm that ACE accurately approximates \(\ell(\mathbf{\theta};\mathbf{x}_{o})\), which is a prerequisite for correctly inferring the GBI posterior. Therefore, we compare the ACE-predicted and true cost across 5000 samples from each GBI posterior, as well as the classifier 2-sample test (C2ST; Lopez-Paz and Oquab, 2016; Lueckmann et al., 2021) score between the ACE approximate and ground-truth GBI posterior (Fig. 3, 3rd and 4th columns). Note that cost estimation accuracy can be evaluated for parameter values sampled in any way (e.g., from the prior), but here we evaluate accuracy as samples become more concentrated around good parameter values, i.e., from GBI posteriors with increasing \(\beta\). We expect that these tasks become increasingly challenging with higher values of \(\beta\), since these settings require the cost estimation network to be highly accurate in tiny regions of parameter space.

**Other algorithms** As a comparison against SBI methods that target the standard Bayesian posterior (but which nevertheless might produce good predictive samples), we also tested approximate Bayesian computation (ABC), neural posterior estimation (NPE; Papamakarios and Murray, 2016), and neural likelihood estimation (NLE; Papamakarios et al., 2019; Lueckmann et al., 2019) on the same tasks. NPE and NLE were trained on the same 10,000 simulations, and 5000 approximate posterior samples were obtained for each \(\mathbf{x}_{o}\). We used the amortized (single-round) variants of both as a fair comparison against ACE. For ABC, we used the 10,000 training samples as a reference set, from which 50 were drawn as posterior samples with probability scaling inversely with the distance between their corresponding simulation and the observation, i.e., ABC with an acceptance kernel (Sisson and Fan, 2010).

### Benchmark results

Overall, we see that for well-specified \(\mathbf{x}_{o}\) (i.e., observations for which the simulator is well-specified), ACE obtains GBI posterior samples that achieve low average posterior predictive simulation distance across all four tasks, especially at high values of \(\beta\) (Fig. 3, 1st column). In comparison, ABC is worse for the Linear Gaussian task (which has a higher parameter dimensionality than all other tasks), whereas NLE and NPE achieve similarly low posterior predictive distances. On misspecified observations, across all tasks and simulation budgets (with the exception of Gaussian Mixture on 10k simulations) we see that ACE achieves lower or equally low average posterior predictive simulation distance as both neural SBI methods, even at moderate values of \(\beta\) (Fig. 3, 2nd column, Figs. A3, A4). This is in line with our intuition that ACE returns a valid and accurate cost even if the simulator is incapable of producing data anywhere near the observation, while Bayesian likelihood and posterior probabilities estimated by NLE and NPE are in these cases nonsensical (Grunwald and Langford, 2007; Grunwald and Van Ommen, 2017; Cannon et al., 2022; Ward et al., 2022). Therefore, we see that ACE can perform valid inference for a broad range of simulators, obtaining a distribution of posterior samples with predictive simulations close to observations, and is automatically robust against model misspecification. For both well-specified and misspecified observations, ACE-GBI samples achieve posterior predictive distance very close to ground-truth (GT)-GBI samples, at all values of \(\beta\) (Fig.
3, 1st and 2nd column), suggesting that ACE is able to accurately predict the expected distance. Indeed, especially for low to moderate values of \(\beta\), the ACE-predicted cost closely matches the true cost (Fig. 3, 3rd column, light blue for specified \(\mathbf{x}_{o}\), Fig. A2 for misspecified). For higher values of \(\beta\), ACE-predicted cost is still similar to true cost, although the error is, as expected, larger for very large \(\beta\) and, thus, highly concentrated posteriors (Fig. 3, 3rd column, dark blue). This is similarly reflected in the classifier 2-sample score between ACE and GT GBI posteriors (Fig. 3, 4th column): ACE posterior samples are indistinguishable from GT samples at low \(\beta\), even for the 10D Linear Gaussian task, but become less accurate with increasing \(\beta\). Nevertheless, predictive simulation distance dramatically improves with \(\beta\) even when ACE is less accurate, suggesting that sampling to explicitly minimize a cost function which targets parameters with data-similar simulations is a productive goal. Relative performance results across algorithms are qualitatively similar when using a training simulation budget of 200 (Fig. A3) and 1000 (Fig. A4), but ABC required a sufficiently high simulation budget and performed poorly for 1000 training simulations or less.

## 5 Hodgkin-Huxley inference from Allen Cell Types Database recordings

Finally, we applied ACE to a commonly used scientific simulator and real data: we used a single-compartment Hodgkin-Huxley (HH) simulator from neuroscience and aimed to infer eight parameters of the simulator given electrophysiological recordings from the Allen Cell Types Database [Allen Institute for Brain Science, 2016, Teeter et al., 2018, Pospischil et al., 2008]. While this simulator can generate a broad range of voltage traces, it is still a crude approximation to _real_ neurons: it models only a subset of ion channels, it ignores the spatial structure of neurons, and it ignores many intracellular mechanisms [Brette, 2015]. It has been demonstrated that parameters of the HH-model given _synthetic_ recordings can be efficiently estimated with standard NPE [Goncalves et al., 2020], but estimating parameters given _experimental_ recordings has been challenging [Tolley et al., 2023] and has required ad-hoc changes to the inference procedure (e.g., Bernaerts et al. [2023] added noise to the summary statistics, and Goncalves et al. [2020] used a custom multi-round scheme with a particular choice of density estimator). We will demonstrate that ACE can successfully perform simulation-amortized inference given experimental recordings from the Allen Cell Types Database (Fig. 4a).

Figure 3: **Performance on benchmark tasks.** ACE obtains posterior samples with low average distance to observations, and accurately estimates the cost function. **Rows:** results for each task. **Columns:** average predictive distance compared to SBI methods and GT (1st and 2nd), cost estimation accuracy evaluated on ACE posterior samples for different \(\beta\) (lighter blue shades are lower values of \(\beta\)) (3rd), and C2ST accuracy relative to GT GBI posterior (4th, lower is better).

We trained NPE and ACE given 100K prior-sampled simulations (details in Appendix A4.4). After training, ACE accurately predicts the true cost of parameters given experimental observations (Fig. 4b).
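The 'true cost' in Fig. 4b is a Monte-Carlo average over repeated simulations; a minimal sketch (again assuming `simulator` and `distance` callables):

```python
# Sketch of the Monte-Carlo cost estimate used as ground truth in Fig. 4b:
# the cost of theta is approximated by averaging the distance of repeated
# simulations to the observation (10 simulations per theta in the paper).
def mc_cost(theta, x_o, simulator, distance, n_sims=10):
    return sum(distance(simulator(theta), x_o) for _ in range(n_sims)) / n_sims
```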
We then used slice sampling to draw samples from the GBI posterior for three different values of \(\beta=\{25,50,100\}\) and for ten observations from the Allen Cell Types Database. Interestingly, the marginal distributions between NPE and ACE posteriors are very similar, especially for rather low values of \(\beta\) (Fig. 4c, cornerplot in Appendix Fig. A6). The quality of posterior predictive samples, however, strongly differs between NPE and ACE: across the ten observations from the Allen Cell Types Database, only 35.6% of NPE posterior predictives produced more than five spikes (all observations have at least 12 spikes), whereas the ACE posterior predictives closely match the data, even for low values of \(\beta\) (Fig. 4d; samples for all observations and all \(\beta\) values in Figs. A7, A8, A9 and for NPE in Fig. A10; 66% (\(\beta=25\)), 87% (\(\beta=50\)), and 96% (\(\beta=100\)) of samples have more than five spikes). Indeed, across all ten observations, the average posterior predictive distance of ACE was significantly smaller than that of NPE, and for large values of \(\beta\) the distance is even less than half (Fig. 4e). Finally, for rejection-ABC, only the top \(35\) samples (out of the full training budget of 100K simulations) had a distance that is less than the _average_ posterior predictive distance achieved by ACE. To investigate these differences between NPE and ACE, we also evaluated NPE posterior predictive performance on synthetic data (prior predictives) and found that it had an average predictive distance of 0.189, which roughly matches the performance of ACE on the experimental observations (0.174 for \(\beta=50\)). This suggests that, in line with previous results [Bernaerts et al., 2023], NPE indeed struggles with _experimental_ observations. We then trained NPE with 10 times more simulations (1M in total). With this increased simulation budget, NPE performed significantly better than with 100K simulations, but still produced poorer predictive samples than ACE trained with 100K simulations (for \(\beta=\{50,100\}\)), although the marginals were similar between NPE (1M) and ACE (100K) (Fig. A5, samples for all observations in Appendix Fig. A11). Overall, these results demonstrate that ACE can successfully be applied to real-world simulators on which vanilla NPE fails. On the Hodgkin-Huxley simulator, ACE generates samples with improved predictive accuracy despite an order of magnitude fewer simulations and despite the marginal distributions being similar to those of NPE.

Figure 4: **Application of ACE to Allen data.** **(a)** Three observations from the Allen Cell Types Database. **(b)** True cost (evaluated as Monte-Carlo average over 10 simulations) per \(\mathbf{\theta}\) vs ACE-predicted cost. Colors are different observations. **(c)** Marginals of posterior distributions for NPE (orange) and ACE (shades of blue; light blue: \(\beta=25\), medium blue: \(\beta=50\), dark blue: \(\beta=100\)). **(d)** Top: Two GBI predictive samples for each observation. Bottom: Two NPE predictive samples. Additional samples in Appendix Figs. A7-A10. **(e)** Average predictive distance to observation for NPE and ACE with \(\beta=\{25,50,100\}\).

## 6 Discussion

We presented ACE, a method to perform distance-aware inference for scientific simulators within the Generalized Bayesian Inference (GBI) framework. Contrary to 'standard' simulation-based inference (SBI), our method does not target the Bayesian posterior, but replaces the likelihood function with a cost function.
For real-world simulators, doing so can provide practical advantages over standard Bayesian inference: First, the likelihood function quantifies the probability that a parameter generates data which _exactly_ matches the observed data. However, in cases where the model is a rather crude approximation to the real system being studied, scientists might well want to include parameters that can generate data that is sufficiently close (but not necessarily identical) in subsequent analyses. Our method makes this possible, and is advantageous over other GBI-based methods since it is amortized over observations and the inverse temperature \(\beta\). Second, many simulators are formulated as noise-free models, and it can be hard to define appropriate stochastic extensions (e.g., (Goldwyn and Shea-Brown, 2011)). In these cases, the likelihood function is ill-defined and, in practice, this setting would require 'standard' SBI methods, whose density estimators are generally built to model continuous distributions, to model discrete jumps in the posterior density. In contrast, our method can systematically and easily deal with noise-free simulators, and in such situations more closely resembles parameter-fitting algorithms. Lastly, standard Bayesian inference is challenging when the model is misspecified, and the performance of neural network-based SBI methods can suffer drastically in this scenario (Cannon et al., 2022).

### Related work

**GBI for Approximate Bayesian Computation** Several studies have proposed methods that perform GBI on simulators with either an implicit (i.e., simulation-based) likelihood or an unnormalized likelihood. Wilkinson (2013) argued that rejection-ABC performs exact inference for a modified model (namely, one that appends an additive uniform error) instead of approximate inference for the original model. Furthermore, ABC with arbitrary probabilistic acceptance kernels can also be interpreted as having different error models, and Schmon et al. (2020) integrate this view to introduce generalized posteriors for ABC, allowing the user to replace the hard-threshold kernel (i.e., \(\epsilon\)-ball of acceptance) with an arbitrary loss function that measures the discrepancy between \(\mathbf{x}\) and \(\mathbf{x}_{o}\) for MCMC-sampling of the approximate generalized posterior. Other recent GBI methods require a differentiable simulator (Dellaporta et al., 2022; Cherief-Abdellatif and Alquier, 2020) or build tractable cost functions that can be sampled with MCMC (Matsubara et al., 2021; Pacchiardi and Dutta, 2021), but this still requires running simulations _at inference time_ (i.e., during MCMC) and neither amortizes the cost of simulations nor reuses already simulated datapoints. Finally, Bayesian Optimization for Likelihood-free Inference (BOLFI; Gutmann and Corander, 2016) and error-guided LFI-MCMC (Begy and Schikuta, 2021) are not cast as generalized Bayesian inference approaches, but are related to ACE. As in ACE, they train models (for BOLFI, a Gaussian process and, for error-guided LFI-MCMC, a classifier) to estimate the discrepancy between observation and simulation. In BOLFI, the estimator is then used to iteratively select new locations at which to simulate. However, contrary to ACE, neither of these two methods amortizes the cost of simulations over observations.

**Misspecification-aware SBI** Several other methods have been proposed to overcome the problem of misspecification in SBI: For example, Bernaerts et al.
(2023) add noise to the summary statistics in the training data, Ward et al. (2022) use MCMC to make the misspecified data well-specified, and Kelly et al. (2023) introduce auxiliary variables to shift the (misspecified) observation towards being well-specified. All of these methods, however, maintain that the inference result should be an 'as close as possible' version of the posterior distribution. Contrary to that, our method does _not_ aim to obtain the Bayesian posterior distribution (which, for misspecified models, can often be nonsensical or even undefined if the evidence is zero), but is specifically targeted towards parameter regions that are a specified distance from the observation.

### Limitations

While our method amortizes the cost of simulations and of training, it still requires another method to sample from the posterior distribution. We used multi-chain slice-sampling (Neal, 2003), but other methods such as variational inference could also be employed (Wiqvist et al., 2021; Glockler et al., 2022). While sampling incurs an additional cost, this cost is generally small in comparison to potentially expensive simulations. In addition, our method can perform inference for distance functions which can be written as expectations over the likelihood. As we demonstrated, this applies to many popular and widely used distances. Our method can, however, not be applied to arbitrary distance functions (e.g., the minimum distance between all simulator samples and the observation). We note that, while the distances we investigated here are certainly useful to practitioners, they do not necessarily fulfill the criterion of being 'proper' scoring rules (Gneiting and Raftery, 2007; Pacchiardi and Dutta, 2021). Lastly, in comparison to 'standard' SBI, GBI introduces an additional hyperparameter to the inference procedure, the inverse temperature \(\beta\). This hyperparameter has to be set by the user and its choice strongly affects inference behaviour: low values of \(\beta\) will include regions of parameter space whose data do not necessarily match the observation closely, whereas high values of \(\beta\) constrain the parameters to only the best-fitting parameter values. Our method is amortized over \(\beta\), which makes exploration of different \(\beta\) values possible, and which could simplify automated methods for setting \(\beta\), similar to work where \(\beta\) is taken as the exponent of the likelihood function (Wu and Martin, 2023).

## 7 Conclusion

We presented a method that performs generalized Bayesian inference with amortized cost estimation. Our method produces good predictive samples on several benchmark tasks and we showed that it allows amortized parameter estimation of Hodgkin-Huxley models given experimental recordings from the Allen Cell Types Database.

## 8 Acknowledgements

RG is supported by the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No. 101030918 (AutoMIND). MD is supported by the International Max Planck Research School for Intelligent Systems (IMPRS-IS). RG, MD, JHM are members of the Machine Learning Cluster of Excellence, EXC number 2064/1-390727645. This work was supported by the German Federal Ministry of Education and Research (BMBF): Tubingen AI Center, FKZ: 01IS18039A.
We would like to thank Jan Boelts, Janne Lappalainen, and Auguste Schulz for feedback on the manuscript, and Julius Vetter for feedback and discussion on proper scoring rules, as well as Poornima Ramesh and Mackelab members for extensive discussions throughout the project.
2310.09497
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
We propose a novel zero-shot document ranking approach based on Large Language Models (LLMs): the Setwise prompting approach. Our approach complements existing prompting approaches for LLM-based zero-shot ranking: Pointwise, Pairwise, and Listwise. Through the first-of-its-kind comparative evaluation within a consistent experimental framework and considering factors like model size, token consumption, latency, among others, we show that existing approaches are inherently characterised by trade-offs between effectiveness and efficiency. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. Our Setwise approach, instead, reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, compared to previous methods. This significantly improves the efficiency of LLM-based zero-shot ranking, while also retaining high zero-shot ranking effectiveness. We make our code and results publicly available at \url{https://github.com/ielab/llm-rankers}.
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
2023-10-14T05:20:02Z
http://arxiv.org/abs/2310.09497v2
# A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models

###### Abstract.

Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.

Large Language Model for Zero-shot ranking, setwise prompting, sorting algorithm
Our _Setwise_ prompting reduces the number of comparisons required; this leads to substantial savings in computational resources. Furthermore, beyond the adjustment to _Pairwise_ approaches, _Setwise_ prompting allows the utilization of model output logits to estimate the likelihood of ranks of document labels, a capability not feasible in existing _Listwise_ approaches, which solely rely on document label ranking generation -- a process that is slow and less effective.
We evaluate our _Setwise_ approach along with other existing approaches under the same experimental setting. Our results show that the incorporation of our _Setwise_ prompting substantially improves the efficiency of both _Pairwise_ and _Listwise_ approaches. In addition, Setwise sorting enhances _Pairwise_ and _Listwise_ robustness to variations in the quality of the initial rankings: no matter what the initial ordering of the top-k documents to rank is, our method provides consistent and effective results. This is unlike other methods, which are highly susceptible to such initial ordering. To conclude, this paper makes three key contributions to our understanding of LLM-based zero-shot ranking approaches:

1. We conduct a systematic examination of all existing LLM-based zero-shot ranking approaches and our novel _Setwise_ approach under strict and consistent experimental conditions, including efficiency comparisons which have been overlooked in the literature. Our comprehensive empirical evaluation on popular zero-shot document ranking benchmarks offers valuable insights for practitioners.
2. We introduce an innovative _Setwise_ prompting approach that enhances the sorting algorithms employed in the _Pairwise_ method, resulting in highly efficient zero-shot ranking with LLMs.
3. We further adapt how our _Setwise_ prompting approach computes rankings to the _Listwise_ approach, leveraging the model output logits to estimate the likelihood of rankings. This leads to a more effective and efficient _Listwise_ zero-shot ranking.

## 2. Background & Related Work

There are three main prompting approaches for zero-shot document ranking employing LLMs: _Pointwise_, _Listwise_, and _Pairwise_. The _Pairwise_ approach prompts the LLM to judge which of two candidate documents is more relevant to a given query (Kang et al., 2017; Kang et al., 2018). To re-rank all candidate documents, a basic method, called _AllPairs_, involves generating all possible permutations of document pairs from the candidate set. Pairs are then independently fed into the LLM, and the preferred document for each pair is determined. Subsequently, an aggregation function is employed to assign a score to each document based on the inferred pairwise preferences, and the final ranking is established based on the total score assigned to each document (Kang et al., 2017). However, this aggregation-based approach suffers from high query latency: LLM inference on all document pairs can be computationally expensive. To address this efficiency issue in pairwise approaches, prior studies have introduced sampling (Kang et al., 2018; Kang et al., 2018) and sorting (Kang et al., 2018) algorithms. In this paper, we focus on sorting algorithms because, assuming an LLM can provide ideal pairwise preferences, the sorting algorithms offer the theoretical assurance of identifying the top-\(k\) most relevant documents from the candidate pool. In prior work (Kang et al., 2018), two sorting algorithms, _heap sort_ and _bubble sort_, were employed. Unlike _AllPairs_, these algorithms leverage efficient data structures to selectively compare document pairs, which can quickly pull the most relevant documents out of the candidate pool and place them at the top of the final ranking. This is particularly suitable for the top-\(k\) ranking task, where only a ranking of the \(k\) most relevant documents is needed. These sorting algorithms provide a stopping mechanism that prevents the need to rank all candidate documents.
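To make the sorting-based _Pairwise_ procedure concrete, here is a minimal sketch of top-\(k\) heap sort driven by an LLM pairwise comparator. `prefers(query, a, b)` is a hypothetical stand-in for a Pairwise prompt call returning True if the LLM judges `a` more relevant than `b`; the code is illustrative, not the implementation of the cited works.

```python
# Top-k ranking via heap sort with an LLM pairwise comparator (sketch).
def heapify(docs, n, i, query, prefers):
    # Sift node i down a binary max-heap of size n using LLM preferences.
    largest = i
    for child in (2 * i + 1, 2 * i + 2):
        if child < n and prefers(query, docs[child], docs[largest]):
            largest = child
    if largest != i:
        docs[i], docs[largest] = docs[largest], docs[i]
        heapify(docs, n, largest, query, prefers)

def top_k_heapsort(query, docs, k, prefers):
    docs, ranked = list(docs), []
    for i in range(len(docs) // 2 - 1, -1, -1):   # build the max-heap
        heapify(docs, len(docs), i, query, prefers)
    for end in range(len(docs) - 1, -1, -1):      # repeatedly extract the max
        docs[0], docs[end] = docs[end], docs[0]
        ranked.append(docs[end])
        if len(ranked) == k:                      # stopping mechanism: top-k found
            return ranked
        heapify(docs, end, 0, query, prefers)
    return ranked
```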
From a theoretical standpoint, the differences and relative advantages among these three families of zero-shot document ranking approaches that employ LLMs are clear. However, from an empirical standpoint there has been no fair and comprehensive evaluation of these techniques in terms of effectiveness vs. efficiency, and across factors such as sizes of LLMs, benchmarks, and computational resources.

## 3. **Setwise Ranking Prompting**

### Limitations of Current Approaches

The efficiency of LLM-based zero-shot ranking methods hinges on two critical dimensions. First, the number of LLM inferences significantly impacts efficiency. Given that LLMs are large neural networks with billions of parameters, inference is computationally intensive. Hence, an increased number of LLM inferences introduces a considerable computational overhead. This is notably observed in the current _Pairwise_ approach, which is inefficient due to the extensive need for inferring preferences for the many document pairs. While sorting algorithms offer some relief, they do not entirely mitigate the efficiency issue. Second, the number of LLM-generated tokens per inference plays a pivotal role. LLMs employ a transformer decoder for autoregressive token generation, where the generation of the next token depends on the previously generated tokens. Each additional generated token requires an extra LLM inference. This accounts for the inefficiency of the existing _Listwise_ approach, which relies on generating an entire ranking of document labels, often requiring a substantial number of generated tokens.

### Speeding-up Pairwise with Setwise

To solve the inefficiency issue of these approaches, we propose a novel _Setwise_ prompting approach. Our prompt, as illustrated in Figure 1d, instructs the LLM to select the most relevant document for the given query from a set of documents, hence the term _Setwise_ prompting. We specifically treat the collection of documents as an unordered set, and later experiments will show that _Setwise_ prompting is quite robust to document ordering.

Figure 2. Illustration of the impact of Setwise Prompting vs. Pairwise Prompting on Sorting Algorithms. Nodes are documents, numbers in nodes represent the level of relevance assigned by the LLM (higher is more relevant).

With our prompt, sorting-based _Pairwise_ approaches can be considerably accelerated. This is because the original _heap sort_ and _bubble sort_ algorithms used in the _Pairwise_ approach only compare a pair of documents at each step in the sorting process, as illustrated in Figure 2(a) and 2(c). These sorting algorithms can be sped up by comparing more than two documents at each step. For example, in the _heap sort_ algorithm, the "heapify" function needs to be invoked for each subtree, where the parent node must be swapped with the child node with the highest value if it exceeds the parent value. In the case of Figure 2(a), to perform "heapify" with pairwise prompting, a minimum of 6 comparisons (each root node paired with each child node) is required. Conversely, if we increase the number of child nodes in each subtree to 3 and can compare 4 nodes at a time, only 2 comparisons are needed to "heapify" a tree with 9 nodes, as illustrated in Figure 2(b). Similarly, for the _bubble sort_ algorithm, if we can compare more than a pair of documents at a time, each "bubbling" process is accelerated. For instance, in Figure 2(c), there are 4 comparisons in total, but in Figure 2(d), with the ability to compare 3 documents at once, only 2 comparisons are required to bring the node with the largest value to the top. Our _Setwise_ prompting is designed to instruct LLMs to compare the relevance of multiple documents at a time, making it well-suited for this purpose.
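As a sketch, the modification to "heapify" amounts to giving each heap node several children and replacing the per-pair calls with a single set query. `pick_best(query, docs)` is a hypothetical helper that prompts the LLM once to return the index of the most relevant document in a small set; names and structure are illustrative only.

```python
# Setwise "heapify" on a c-ary max-heap (sketch): one LLM call per subtree
# instead of one call per parent-child pair.
def setwise_heapify(docs, n, i, query, pick_best, num_children=3):
    first = num_children * i + 1
    group = [i] + [j for j in range(first, first + num_children) if j < n]
    # One Setwise prompt compares the parent and all its children at once.
    best = group[pick_best(query, [docs[j] for j in group])]
    if best != i:
        docs[i], docs[best] = docs[best], docs[i]
        setwise_heapify(docs, n, best, query, pick_best, num_children)
```

With `num_children=3`, each call compares 4 documents at a time, matching the example above.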
### Listwise Likelihoods with Setwise

Our _Setwise_ prompting can also accelerate the ranking process for the _Listwise_ approach. The original _Listwise_ method relies on the LLM's next token generation to produce the complete ordered list of document labels at each step of the sliding window process, as illustrated in Figure 1(b). As we discussed, generating the document label list is computationally intensive, because the LLM must do one inference for each next token prediction. On the other hand, the LLM may generate results in an unexpected format or even decline to generate the desired document label list (Kal...).

To compare the efficiency of the different approaches, we consider the following metrics:

* The average number of LLM inferences per query. LLMs have limited input length. Thus, to re-rank 100 documents, multiple LLM inferences are often needed. It's important to note that an increased number of LLM inferences translates to higher computational demands. Thus, we regard this as an efficiency metric worth considering.
* The average number of prompt tokens inputted to the LLMs per query. This metric takes into account the actual average quantity of input tokens required in the prompts for each method to re-rank 100 documents per query. Given that self-attention mechanisms in transformer-based LLMs become prohibitively costly for a large number of input tokens (Krizhevsky et al., 2017), an increase in tokens within the prompts also translates to higher computational demands. Notably, numerous LLM web API services, including OpenAI APIs, charge based on the number of input tokens in the API calls. As such, we deem this metric valuable in assessing efficiency.
* The average number of generated tokens outputted by LLMs per query. Much like the assessment of average prompt tokens, this metric provides an evaluation of computational efficiency, but from a token generation perspective. Instead of focusing on the number of tokens in the prompt, it takes into account the number of tokens generated. This is particularly significant because transformer-based generative LLMs produce content token-by-token, with each subsequent token relying on the generation of preceding ones. Consequently, an increase in the number of generated tokens leads to a corresponding increase in the computational cost, as each additional generated token implies another LLM forward inference. In fact, OpenAI applies a pricing structure wherein the cost for the number of generated tokens is twice that of the number of prompt tokens for their LLM APIs 1. This underscores the substantial impact that generated tokens can have on computational expenses. Footnote 1: [https://openai.com/pricing](https://openai.com/pricing), last visited 12 October 2023.
* The average query latency. We evaluate the run-time efficiency of all the methods with average query latency. To conduct this assessment, a single GPU is employed, and queries are issued one at a time. The per-query latency is then averaged across all the queries in the dataset.
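A minimal instrumentation sketch for tallying these four metrics, assuming a hypothetical `rerank(query, docs)` that yields one record per LLM call with its token counts (illustrative, not the paper's harness):

```python
import time

# Tally average inferences, prompt tokens, generated tokens, and latency
# per query, issuing queries one at a time as in the evaluation protocol.
def measure_efficiency(queries, docs, rerank):
    totals = {"inferences": 0, "prompt_tokens": 0, "generated_tokens": 0}
    start = time.perf_counter()
    for q in queries:
        for call in rerank(q, docs[q]):           # one record per LLM call
            totals["inferences"] += 1
            totals["prompt_tokens"] += call["prompt_tokens"]
            totals["generated_tokens"] += call["generated_tokens"]
    latency = (time.perf_counter() - start) / len(queries)
    averages = {k: v / len(queries) for k, v in totals.items()}
    return averages, latency
```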
It's important to highlight that for methods that support batching we always employ the maximum batch size to optimize GPU memory usage and parallel computation, thus maximizing efficiency for these particular methods. This approach ensures that the evaluation is conducted under conditions most favourable for efficiency gains. It is important to acknowledge that while other methods may not be able to use the batching strategy for individual queries, they do have the capability to utilize batching and parallel computing across various user queries in real-world scenarios. However, this lies more in engineering efforts and falls outside the scope of this paper; as such, we do not investigate this perspective.

### Implementation details

To establish the initial BM25 first-stage ranking for all datasets, we employed the Pyserini Python library (Pyserini, 2018) with default settings. For LLM-based zero-shot re-rankers, we followed the prompts recommended in existing literature to guide Flan-t5 models of varying sizes (Flan-t5-large with 780M parameters, Flan-t5-xl with 3B parameters, and Flan-t5-xxl with 11B parameters) in executing the zero-shot ranking task. Specifically, for the _pointwise.qlm_ method, we adopted the prompt suggested by Sachan et al. (Sachan et al., 2018). For _pointwise.yes_no_, we use the prompt provided by Qin et al. (Qin et al., 2018). For _listwise.generate_, we utilized the prompt designed by Sun et al. (Qin et al., 2018). As for _pairwise.allpair_, _pairwise.heapsort_, and _pairwise.bubblesort_, we relied on the prompts from the original paper by Qin et al. (Qin et al., 2018). For methods leveraging our _Setwise_ prompting (i.e., _listwise.likelihood_, _setwise.heapsort_, and _setwise.bubblesort_), we employed the prompts detailed in Section 3. In the case of _Listwise_ approaches, we configure the window size (\(w\)) to contain 4 documents, each capped at a maximum of 100 tokens. The step size (\(s\)) is set to 2, and the number of repetitions (\(r\)) is set to 5. These settings take into account the token limitations imposed by Flan-t5 models, which have an input token cap of 512. A window size of 4 documents appears reasonable as it aligns well with the prompt capacity. Additionally, a step size of 2, combined with 5 repetitions, has theoretical guarantees of bringing the 10 most relevant documents to the top. For our _Setwise_ approaches, we set the number of compared documents \(c\) in each step to 3 for the main results. We further investigate the impact of \(c\) in Section 5.4. For all other methods, we truncate the documents to a maximum of 128 tokens. We note that, among all the methods capable of utilizing both model output logits and generation outputs, we exclusively employ the latter. This choice is made in favor of a more general approach that allows for leveraging generation APIs across a wider range of closed-source LLMs. Nevertheless, we investigate the difference between using model output logits and generation outputs for our _Setwise_ approaches in Section 5.1. We carried out the efficiency evaluations on a local GPU workstation equipped with an AMD Ryzen Threadripper PRO 3955WX 16-Core CPU, an NVIDIA RTX A6000 GPU with 49GB of memory, and 128GB of DDR4 RAM.
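For illustration, here is a hypothetical rendering of a Setwise prompt for a small set of documents; the exact wording used in our experiments follows the prompts referenced above, and this sketch only conveys the structure.

```python
# Build a Setwise prompt asking the LLM to pick the most relevant passage.
def setwise_prompt(query, docs):
    labels = "ABCDEFGHI"
    passages = "\n".join(
        f'Passage {labels[i]}: "{doc}"' for i, doc in enumerate(docs)
    )
    return (
        f'Given a query "{query}", which of the following passages is '
        f"the most relevant to the query?\n{passages}\n"
        f"Output only the label of the most relevant passage."
    )
```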
## 5. Results and Analysis

### Effectiveness Results

Table 2 presents results for both ranking effectiveness and efficiency on the TREC DL datasets. With regard to ranking effectiveness, it is notable that all LLM-based zero-shot ranking approaches demonstrate a significant improvement over the initial BM25 ranking. The only exception to this trend is the _pointwise.qlm_ approach on DL2019 across all models and DL2020 with the Flan-t5-xxl model. Interestingly, as the LLM size increases, the effectiveness of _pointwise.qlm_ decreases. This finding is particularly unexpected, given the common assumption that larger LLMs tend to be more effective. On the other hand, the _pointwise.yes_no_ method achieved a decent NDCG@10 score with Flan-t5-large when compared to other methods. However, effectiveness also did not increase as model size increased. These unexpected results for both _Pointwise_ methods might be attributed to the requirement of a more refined model output calibration process, ensuring their suitability for comparison and sorting across different documents (Pyserini, 2018). The _Listwise_ approaches (_listwise.generation_) are far less effective when tested with Flan-t5-large and Flan-t5-xl. However, _listwise.generation_ shows some improvement with Flan-t5-xxl. These results may be attributed to the fact that generating a ranking list requires fine-grained relevance preferences across multiple documents, a task that may exceed the capabilities of smaller models. In contrast, the _listwise.likelihood_ approach, empowered by our _Setwise_ prompt, markedly enhances the ranking effectiveness of the _Listwise_ approach, even when utilizing smaller models. We acknowledge however that _listwise.likelihood_ requires access to the model output logits, whereas _listwise.generation_ does not. In the case of _Pairwise_ and _Setwise_ approaches, they consistently exhibit good ranking effectiveness across various model sizes and datasets.

In Table 3, we present the zero-shot ranking effectiveness of all methods (with the exception of _pairwise.allpair_, due to its computationally intensive nature) across 9 widely-used BEIR datasets. Notably, we identify several trends that deviate from observations made on the TREC DL datasets. Firstly, _pointwise.qlm_ exhibits a slightly higher average NDCG@10 score compared to _pointwise.yes_no_. Moreover, the effectiveness of _pointwise.qlm_ remains stable even as the model size increases. Secondly, _listwise.generation_ demonstrates comparable effectiveness to _listwise.likelihood_, with the majority of gains obtained in the Touche dataset, where other methods perform worse. Lastly, both _Pairwise_ and _Setwise_ methods that leverage the bubble sort algorithm consistently demonstrate higher average NDCG@10 compared to when they utilize the heap sort algorithm, regardless of the model size. Overall, the variety of results we observe across different experimental settings shows the importance of not drawing conclusions about effectiveness from single datasets or model sizes.

### Efficiency Results

Regarding computational and runtime efficiency, the results presented in Table 2 indicate that both _Pointwise_ methods exhibit the fewest inferences and prompt tokens, and no generated tokens. Furthermore, their computational efficiency and query latency are optimized due to efficient GPU-based batched inference. It is worth noting, however, that these methods do come with certain limitations. Specifically, they require access to the model output logits (thus currently limiting their use to just open-source LLMs) and are less effective when used with larger models.
In contrast, _pairwise.allpair_ appears to be the most expensive method, consuming the largest number of prompt tokens and generated tokens due to the large number of document pair preferences that need to be inferred. Hence, even with GPU batching, _pairwise.allpair_ still has the worst query latency. By comparison, approaches utilizing our _Setwise_ prompting--namely, _listwise.likelihood_, _setwise.heapsort_, and _setwise.bubblesort_--are far more efficient than their counterparts _listwise.generate_, _pairwise.heapsort_, and _pairwise.bubblesort_, respectively. Notably, these improvements are achieved without compromising effectiveness.

Table 2: Ranking effectiveness (NDCG@10) and efficiency (number of LLM inferences, prompt tokens, generated tokens, and query latency in seconds) on TREC DL 2019 and TREC DL 2020 for all methods with Flan-t5-large, Flan-t5-xl, and Flan-t5-xxl.
Section 5.4 will discuss further approaches for improving efficiency. Table 5 shows calculations for the estimated cost of API calls; this estimation is obtained using the OpenAI GPT-4 cost structure, and applying this same structure to the number of tokens measured in our experiments. At the time of writing, OpenAI costs were $0.03/1,000 prompt tokens and $0.06/1,000 generated tokens. To estimate the token count if GPT-4 were used, we average the number of prompt tokens and generated tokens from Table 2 across Flan-T5 models. The _setwise.bubblesort_ and _pairwise.heapsort_ methods show comparable NDCG@10, but _pairwise.heapsort_ is cheaper. On the other hand, our _setwise.heapsort_ provides a reduction of \(\approx 62\%\) in cost by only marginally reducing NDCG@10 (a 0.8% loss).

Table 3: NDCG@10 on the nine BEIR datasets (including Covid, NFCorpus, Touche, DBPedia, SciFact, Signal, News, and Robust04) for all methods and model sizes.

### Impact of Using Output Logits on Setwise

Similar to _Pairwise_ methods, if the model output logits are accessible, our _Setwise_ approaches can also utilize these logits to estimate the likelihood of the most relevant document label. This approach eliminates the need for token generation, requiring only a single LLM forward inference to yield the output results, thus offering a more efficient process. To assess the impact of incorporating model output logits in our _Setwise_ approaches, we conducted experiments on the TREC DL 2019 dataset, with results presented in Table 4. The findings indicate that using model logits resulted in no change in ranking effectiveness, but did lead to lower query latency. This improvement stems from the absence of generated tokens for likelihood estimation. Hence, we conclude that if access to the model output is available, employing likelihood can further enhance the efficiency of our _Setwise_ approach.
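A sketch of this logits-based variant, assuming a Hugging Face Flan-T5-style `model` and `tokenizer` (illustrative; the single-token passage labels 'A', 'B', ... are an assumption of this sketch):

```python
import torch

# Pick the most relevant document by comparing the logits the model assigns
# to each passage label at the first decoding position -- one forward pass,
# no token generation.
@torch.no_grad()
def pick_best_by_logits(model, tokenizer, prompt, num_docs):
    labels = list("ABCDEFGHI")[:num_docs]
    label_ids = [
        tokenizer(l, add_special_tokens=False).input_ids[0] for l in labels
    ]
    enc = tokenizer(prompt, return_tensors="pt")
    dec = torch.tensor([[model.config.decoder_start_token_id]])
    logits = model(**enc, decoder_input_ids=dec).logits[0, -1]
    return int(torch.argmax(logits[label_ids]))   # index of the chosen doc
```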
Hence, we conclude that if access to the model output is available, employing likelihood estimation can further enhance the efficiency of our _Setwise_ approach.

### Effectiveness and Efficiency Trade-offs

Our _Setwise_ prompting is characterized by a hyperparameter \(c\) controlling the number of compared documents within the prompt at each step of the sorting algorithms. In the previous experiments, we always set \(c=3\). Adjusting this hyperparameter allows one to further enhance efficiency by incorporating more compared documents into the prompt, thereby reducing the number of LLM inference calls. However, we acknowledge that there is an input length limitation for LLMs (in our experiments this is 512 prompt tokens), and setting \(c\) to a large value may require more aggressive document truncation, likely impacting effectiveness. To investigate the trade-off between effectiveness and efficiency inherent in our _Setwise_ approach, we set \(c=3,5,7,9\) while truncating the documents in the prompt to \(128,85,60,45\) tokens, respectively (see footnote 2). The NDCG@10, along with query latency for all models while varying \(c\), is visualized in Figure 2(a) for the TREC DL datasets. As expected, larger \(c\) reduces query latency but often degrades effectiveness. Notably, the heap sort algorithm consistently proves more efficient than bubble sort. For instance, with Flan-T5-xl and \(c=9\), heap sort achieves strong NDCG@10 with a query latency of \(\approx\)3 seconds. When compared to the other methods outlined in Table 2, this represents the lowest query latency, except for the _Pointwise_ approaches with Flan-T5-large, albeit with superior ranking effectiveness. It is worth noting that the decline in ranking effectiveness with larger \(c\) values could also be attributed to the increased truncation of passages. LLMs with extended input length capacity might potentially yield improved ranking effectiveness for larger \(c\). This area warrants further exploration in future studies.

Footnote 2: This reduction in document length is necessary to ensure the prompt size is not exceeded.

Similarly, the _Listwise_ approaches balance effectiveness and efficiency through the adjustment of the repetition count \(r\) for the sliding window. In our prior experiments, we consistently set \(r=5\) to ensure that at least 10 of the most relevant documents can be brought to the top. In Figure 2(b), we investigate the influence of varying \(r\) on _Listwise_ approaches. Latency exhibits a linear relationship with \(r\), which aligns with expectations. A larger value of \(r\) can enhance the effectiveness of _listwise.generate_, but beyond \(r=5\) the improvement levels off. Conversely, the _listwise.likelihood_ approach, which leverages our _Setwise_ prompting, showcases notably higher effectiveness and efficiency. Even with a small value of \(r\), the performance of _listwise.likelihood_ exceeds that of _listwise.generate_, with the highest performance achieved around \(r=5\).

### Sensitivity to the Initial Ranking

The ranking effectiveness of the original _Listwise_ and _Pairwise_ methods is influenced by the initial ranking order (Kang et al., 2018; Wang et al., 2019). To investigate this aspect in relation to our approach, we consider different orderings of the initial BM25 list; specifically, 1) the initial BM25 ranking; 2) the inverted BM25 ranking; and 3) a randomly shuffled BM25 ranking (a minimal sketch of how these orderings can be constructed follows below). Each of these initial rankings was used to test different re-ranking methods using Flan-T5-large. The results are presented in Figure 4.
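For concreteness, the three initial orderings can be produced as in the following sketch; the `run` list of `(doc_id, score)` pairs is a hypothetical stand-in for an actual BM25 run, and the fixed seed is only for reproducibility:

```python
import random

# Hypothetical BM25 run for one query: (doc_id, score) pairs, best first.
run = [("d17", 12.3), ("d4", 11.7), ("d52", 9.8), ("d9", 7.2)]

original = list(run)                 # 1) initial BM25 ranking
inverted = list(reversed(run))       # 2) inverted BM25 ranking
shuffled = list(run)                 # 3) randomly shuffled BM25 ranking
random.Random(42).shuffle(shuffled)

# Each ordering is then passed to the same re-ranker, and NDCG@10 of the
# re-ranked lists is compared across the three starting conditions.
```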
Different initial ranking orders negatively impact _listwise.generate_, _pairwise.heapsort_ and _pairwise.bubblesort_; _pairwise.heapsort_ is the most robust of these methods. These findings align with the literature (Kang et al., 2018; Wang et al., 2019). In contrast, _Setwise_ prompting is far more robust to variations in the initial ranking order. Both _listwise.likelihood_ and _setwise.bubblesort_ exhibit large improvements over _listwise.generate_ and _pairwise.bubblesort_ in the case of the inverted BM25 ranking and the randomly shuffled BM25 ranking. Moreover, they demonstrate a level of robustness similar to that of _pairwise.heapsort_. This leads us to the conclusion that our _Setwise_ prompting approach substantially enhances the robustness of zero-shot re-ranking with LLMs with respect to the initial ranking.

## 6. Conclusion

We undertook a comprehensive study of existing LLM-based zero-shot document ranking methods, employing strict and consistent experimental conditions. Our primary emphasis was on evaluating both their ranking effectiveness and their efficiency in terms of computational cost and runtime latency, factors that are often disregarded in previous studies. Our findings unveil some unforeseen insights and effectiveness-efficiency trade-offs between different methods. This information equips practitioners with valuable guidance when selecting the most appropriate method for their specific applications. To further boost the efficiency of LLM-based zero-shot document ranking, we introduced an innovative _Setwise_ prompting strategy. _Setwise_ has the potential to enhance both effectiveness and efficiency for _Listwise_ approaches, provided the model logits are accessible. _Setwise_ also notably enhances the efficiency of sorting-based _Pairwise_ approaches. Furthermore, _Setwise_ prompting offers a straightforward way to balance effectiveness and efficiency by incorporating more documents for comparison in the prompt. Additionally, approaches equipped with _Setwise_ prompting demonstrated strong robustness to variation in the initial retrieval set used for re-ranking. Future work should focus on evaluating the _Setwise_ prompting approach on a wider array of LLMs, including LLaMA models (Zhou et al., 2022; Zhou et al., 2022) as well as the OpenAI LLM APIs. Additionally, recent advanced self-supervised prompt learning techniques (Zhou et al., 2022; Zhou et al., 2022) could be used to refine the _Setwise_ approach. We make our code and results publicly available at [https://github.com/ielab/llm-rankers](https://github.com/ielab/llm-rankers).
2308.13685
The local solubility for homogeneous polynomials with random coefficients over thin sets
Let $d$ and $n$ be natural numbers greater or equal to $2$. Let $\langle \boldsymbol{a}, \nu_{d,n}(\boldsymbol{x})\rangle\in \mathbb{Z}[\boldsymbol{x}]$ be a homogeneous polynomial in $n$ variables of degree $d$ with integer coefficients $\boldsymbol{a}$, where $\langle\cdot,\cdot\rangle$ denotes the inner product, and $\nu_{d,n}: \mathbb{R}^n\rightarrow \mathbb{R}^N$ denotes the Veronese embedding with $N=\binom{n+d-1}{d}$. Consider a variety $V_{\boldsymbol{a}}$ in $\mathbb{P}^{n-1}$, defined by $\langle \boldsymbol{a}, \nu_{d,n}(\boldsymbol{x})\rangle=0.$ In this paper, we examine a set of these varieties defined by $$\mathbb{V}^{P}_{d,n}(A)=\{ V_{\boldsymbol{a}}\subset \mathbb{P}^{n-1}|\ P(\boldsymbol{a})=0,\ \|\boldsymbol{a}\|_{\infty}\leq A\},$$ where $P\in \mathbb{Z}[\boldsymbol{x}]$ is a non-singular form in $N$ variables of degree $k$ with $2 \le k\leq C({n,d})$ for some constant $C({n,d})$ depending at most on $n$ and $d$. Suppose that $P(\boldsymbol{a})=0$ has a nontrivial integer solution. We confirm that the proportion of varieties $V_{\boldsymbol{a}}$ in $\mathbb{V}^{P}_{d,n}(A)$, which are everywhere locally soluble, converges to a constant $c_P$ as $A\rightarrow \infty.$ In particular, if there exists $\boldsymbol{b}\in \mathbb{Z}^N$ such that $P(\boldsymbol{b})=0$ and the variety $V_{\boldsymbol{b}}$ in $\mathbb{P}^{n-1}$ admits a smooth $\mathbb{Q}$-rational point, the constant $c_P$ is positive.
Heejong Lee, Seungsu Lee, Kiseok Yeon
2023-08-25T21:57:56Z
http://arxiv.org/abs/2308.13685v1
# The local solubility for homogeneous polynomials with random coefficients over thin sets

###### Abstract.

Let \(d\) and \(n\) be natural numbers greater than or equal to \(2\). Let \(\langle\mathbf{a},\nu_{d,n}(\mathbf{x})\rangle\in\mathbb{Z}[\mathbf{x}]\) be a homogeneous polynomial in \(n\) variables of degree \(d\) with integer coefficients \(\mathbf{a}\), where \(\langle\cdot,\cdot\rangle\) denotes the inner product, and \(\nu_{d,n}:\mathbb{R}^{n}\to\mathbb{R}^{N}\) denotes the Veronese embedding with \(N=\binom{n+d-1}{d}\). Consider a variety \(V_{\mathbf{a}}\) in \(\mathbb{P}^{n-1}\), defined by \(\langle\mathbf{a},\nu_{d,n}(\mathbf{x})\rangle=0.\) In this paper, we examine a set of these varieties defined by \[\mathbb{V}_{d,n}^{P}(A)=\{V_{\mathbf{a}}\subset\mathbb{P}^{n-1}|\ P(\mathbf{a})=0,\ \|\mathbf{a}\|_{\infty}\leq A\},\] where \(P\in\mathbb{Z}[\mathbf{x}]\) is a non-singular form in \(N\) variables of degree \(k\) with \(2\leq k\leq C(n,d)\) for some constant \(C(n,d)\) depending at most on \(n\) and \(d\). Suppose that \(P(\mathbf{a})=0\) has a nontrivial integer solution. We confirm that the proportion of varieties \(V_{\mathbf{a}}\) in \(\mathbb{V}_{d,n}^{P}(A)\), which are everywhere locally soluble, converges to a constant \(c_{P}\) as \(A\to\infty.\) In particular, if there exists \(\mathbf{b}\in\mathbb{Z}^{N}\) such that \(P(\mathbf{b})=0\) and the variety \(V_{\mathbf{b}}\) in \(\mathbb{P}^{n-1}\) admits a smooth \(\mathbb{Q}\)-rational point, the constant \(c_{P}\) is positive.

Key words and phrases: Projective variety, Local solubility

2020 Mathematics Subject Classification: 11E76, 14G25

## 1. Introduction

In this article, we study the \(p\)-adic and real solubility for projective varieties defined by forms with integer coefficients. Denote by \(f(\mathbf{x})\in\mathbb{Z}[x_{1},x_{2},\ldots,x_{n}]\) a homogeneous polynomial of degree \(d\) and let us write \(V\subset\mathbb{P}^{n-1}\) for the projective variety defined by \(f(\mathbf{x})=0\). A conjecture of Artin [1] asserts that the variety \(V\) admits a \(\mathbb{Q}_{p}\)-point for every \(p\) if \(n\geq d^{2}+1\). However, this conjecture is known to be false in general, and the first counterexample was found by Terjanian [25]. Alternatively, one may expect that there exists \(n_{0}:=n_{0}(d)\in\mathbb{N}\) such that whenever \(n>n_{0}(d)\), the variety \(V\) admits a \(\mathbb{Q}_{p}\)-point. The current state of knowledge supports this expectation. In particular, it is known by Wooley [26] that it suffices to take \(n_{0}=d^{2^{d}}\) (see also [6], [12], [16]). As a different approach concerning the \(p\)-adic solubility for the variety \(V\), the Ax-Kochen theorem [2] shows that a homogeneous polynomial \(f(\mathbf{x})\in\mathbb{Z}[x_{1},\ldots,x_{n}]\) of degree \(d\) with \(n\geq d^{2}+1\) has a solution in \(\mathbb{Q}_{p}^{n}\setminus\{\mathbf{0}\}\) for every prime \(p\) sufficiently large in terms of \(d\) (see also [7]). It is also known that if \(f(\mathbf{x})\in\mathbb{Z}[x_{1},x_{2},\ldots,x_{n}]\) is an absolutely irreducible form of degree \(d\) over \(\mathbb{F}_{p}\) with a prime \(p\) sufficiently large in terms of the degree \(d\), it follows by applying the Lang-Weil estimate (see [17] and [28, Theorem 3]) and Hensel's lemma that the equation \(f(\boldsymbol{x})=0\) has a solution in \(\mathbb{Q}_{p}^{n}\setminus\{\boldsymbol{0}\}\). As for the real solubility for the variety \(V\), one sees that if the degree \(d\) is odd, the variety \(V\) always admits a real point.
When \(d\) is even, the real solubility for the variety \(V\) merely depends on the choice of the coefficients of \(f(\boldsymbol{x})\). One infers from the previous paragraph that if the number of variables \(n\) is not large enough, the main difficulties for verifying the \(p\)-adic solubility for the variety \(V\) occur from the case of small prime \(p.\) Nevertheless, thanks to the density lemma introduced in [19, Lemma 20], we are capable of obtaining some information about the \(p\)-adic solubility for the variety \(V\) even with small primes \(p.\) In order to describe this information, we temporarily pause and introduce some notation. Let \(n\) and \(d\) be natural numbers with \(d\geq 2.\) Let \(N:=N_{d,n}=\binom{n+d-1}{d}.\) Let \(\nu_{d,n}:\mathbb{R}^{n}\to\mathbb{R}^{N}\) denote the Veronese embedding, defined by listing all the monomials of degree \(d\) in \(n\) variables with the lexicographical ordering. We denote a homogeneous polynomial in \(n\) variables of degree \(d\) with integer coefficients, by \(\langle\boldsymbol{a},\nu_{d,n}(\boldsymbol{x})\rangle\) with integer vectors \(\boldsymbol{a}\in\mathbb{Z}^{N}\) where \(\langle\cdot,\cdot\rangle\) is the inner product. Here and throughout this paper, we write \(f_{\boldsymbol{a}}(\boldsymbol{x})=\langle\boldsymbol{a},\nu_{d,n}( \boldsymbol{x})\rangle\) for simplicity. Consider a variety \(V_{\boldsymbol{a}}\subset\mathbb{P}^{n-1}\) defined by \(f_{\boldsymbol{a}}(\boldsymbol{x})=0\). For \(A\in\mathbb{R}_{>0}\), we define a set of varieties \(V_{\boldsymbol{a}}\) given by \[\mathbb{V}_{d,n}(A):=\{V_{\boldsymbol{a}}\subset\mathbb{P}^{n-1}|\ \boldsymbol{a}\in[-A,A]^{N}\cap\mathbb{Z}^{N}\}. \tag{1.1}\] For any \(V_{\boldsymbol{a}}\), write \(V_{\boldsymbol{a}}\bigg{(}\mathbb{R}^{n}\times\prod_{p\text{ prime}}\mathbb{Z} _{p}^{n}\bigg{)}\) for the set of points in \(\mathbb{R}^{n}\times\prod_{p\text{ prime}}\mathbb{Z}_{p}^{n}\) that the variety \(V_{\boldsymbol{a}}\) admits. Define a quantity \[\varrho_{d,n}^{\text{loc}}(A):=\frac{\#\bigg{\{}V_{\boldsymbol{a}}\in\mathbb{ V}_{d,n}(A)\bigg{|}\ V_{\boldsymbol{a}}\bigg{(}\mathbb{R}^{n}\times\prod_{p\text{ prime}}\mathbb{Z}_{p}^{n}\bigg{)}\neq\emptyset\bigg{\}}}{\# \mathbb{V}_{d,n}(A)}. \tag{1.2}\] We notice here that the quantity \(\varrho_{d,n}^{\text{loc}}(A)\) is the proportion of varieties \(V_{\boldsymbol{a}}\) in \(\mathbb{V}_{d,n}(A)\) that are _locally soluble_, i.e. that admit a real point and a \(p\)-adic point for all primes \(p\). Thus, the behavior of \(\varrho_{d,n}^{\text{loc}}(A)\) provides the information about \(p\)-adic solubility for the varieties \(V_{\boldsymbol{a}}\) in \(\mathbb{V}_{d,n}(A)\) even with small primes \(p\). We also define \(c_{\infty}\) (resp. \(c_{p}\)) to be the proportion of varieties \(V_{\boldsymbol{a}}\) with \(\boldsymbol{a}\) varying in \([-1,1]^{N}\) (resp. in \(\mathbb{Z}_{p}^{N}\)) admitting a real point (resp. a \(p\)-adic point). By using the density lemmas [19, Lemmas 20 and 21], Poonen and Voloch [20, Theorem 3.6] proved that whenever \(n,d\geq 2\), one has \[\lim_{A\to\infty}\varrho_{d,n}^{\text{loc}}(A)=c, \tag{1.3}\] where \(c\) is the product of \(c_{\infty}\) and \(c_{p}\) for all primes \(p\). In this paper, we investigate the proportion of locally soluble varieties \(V_{\boldsymbol{a}}\) in a thinner set than the set \(\mathbb{V}_{d,n}(A).\) To describe the thin set of our interest, let \(P(\mathbf{t})\in\mathbb{Z}[t_{1},\ldots,t_{N}]\) be a non-singular form in \(N\) variables of degree \(k\geq 2\). 
On recalling the definition of \(V_{\boldsymbol{a}}\subset\mathbb{P}^{n-1}\), we examine a set of varieties \(V_{\boldsymbol{a}}\) defined by \[\mathbb{V}_{d,n}^{P}(A):=\{V_{\boldsymbol{a}}\subset\mathbb{P}^{n-1}|\ \boldsymbol{a}\in[-A,A]^{N}\cap\mathbb{Z}^{N},\ P( \boldsymbol{a})=0\}.\] Analogous to the quantity \(\varrho_{d,n}^{\text{loc}}(A)\), we have the proportion of locally soluble varieties \(V_{\boldsymbol{a}}\) of our concern defined as \[\varrho_{d,n}^{P,\text{loc}}(A):=\frac{\#\bigg{\{}V_{\boldsymbol{a}}\in \mathbb{V}_{d,n}^{P}(A)\bigg{|}\ V_{\boldsymbol{a}}\bigg{(}\mathbb{R}^{n}\times \prod_{p\text{ prime}}\mathbb{Z}_{p}^{n}\bigg{)}\neq\emptyset\bigg{\}}}{\# \mathbb{V}_{d,n}^{P}(A)}. \tag{1.4}\] In order to describe our main theorems, we define \[T_{\infty}:=\left\{\boldsymbol{a}\in[-1,1]^{N}\cap\mathbb{R}^{N}\big{|}\ \exists\ \boldsymbol{x}\in\mathbb{R}^{n}\setminus\{\boldsymbol{0}\}\text{ such that }f_{\boldsymbol{a}}(\boldsymbol{x})=0\right\} \tag{1.5}\] and \[T_{p}:=\left\{\boldsymbol{a}\in\mathbb{Z}_{p}^{N}\big{|}\ \exists\ \boldsymbol{x}\in \mathbb{Z}_{p}^{n}\setminus\{\boldsymbol{0}\}\text{ such that }f_{\boldsymbol{a}}( \boldsymbol{x})=0\right\}, \tag{1.6}\] for every prime \(p.\) Let \(\mu_{p}\) denote the Haar measure on \(\mathbb{Z}_{p}^{N}\) normalized to have total mass \(1\). For given measurable sets \(S_{p}\subseteq\mathbb{Z}_{p}^{N}\) and \(S_{\infty}\subseteq\mathbb{R}^{N}\) with the Haar measure \(\mu_{p}\) and the Lebesgue measure, we define \[\sigma_{p}(S_{p}):=\lim_{r\to\infty}p^{-r(N-1)}\#\left\{\boldsymbol{a}\ (\text{ mod }p^{r})\right|\ \boldsymbol{a}\in S_{p}\text{ and }P(\boldsymbol{a})\equiv 0\ (\text{ mod }p^{r})\right\}\] and \[\sigma_{\infty}(S_{\infty})=\lim_{\eta\to 0+}(2\eta)^{-1}V_{\infty}(\eta),\] where \(V_{\infty}(\eta)\) is the volume of the subset of \(\boldsymbol{y}\in S_{\infty}\) satisfying \(|P(\boldsymbol{y})|<\eta\). Furthermore, for \(d_{1},d_{2}\in\mathbb{N}\), we define \(C_{n,d}(d_{1},d_{2})\) a rational number such that \[C_{n,d}(d_{1},d_{2})=\frac{d!(n+d_{1}-1)!}{d_{1}!(n+d-1)!}+\frac{d!(n+d_{2}-1)!}{d_{2}!(n+d-1)!}.\] \(C_{n,d}(d_{1},d_{2})\) belongs to \((0,1)\) and maximizes at \((1,d-1)\) and \((d-1,1)\), see Lemma 2.5. Our first main theorem shows that the proportion of locally soluble varieties \(V_{\boldsymbol{a}}\) in the thin set \(\mathbb{V}_{d,n}^{P}(A)\) converges to the product of "local proportions" as \(A\to\infty\). **Theorem 1.1**.: _Suppose that \(P(\mathbf{t})\in\mathbb{Z}[t_{1},\ldots,t_{N}]\) is a non-singular form in \(N\) variables of degree \(k\) with \(2\leq k<\lfloor(1-C_{n,d}(1,d-1))N\rfloor\) and \((k-1)2^{k}<N\) where \(C_{n,d}(d_{1},d_{2})\) is as above. Suppose that \(P(\mathbf{t})=0\) has a nontrivial integer solution. Then, whenever \(n\geq 2\) and \(d\geq 2\), one has_ \[\lim_{A\to\infty}\varrho_{d,n}^{P,\text{loc}}(A)=c_{P},\] _where_ \[c_{P}:=\frac{\sigma_{\infty}(T_{\infty})\cdot\prod_{p}\sigma_{p}(T_{p})}{ \sigma_{\infty}([-1,1]^{N})\cdot\prod_{p}\sigma_{p}(\mathbb{Z}_{p})}.\] _Remark 1_.: Since \(P(\mathbf{t})\) is a non-singular form in \(N\) variables of degree \(k\) with \((k-1)2^{k}<N\) and \(P(\mathbf{t})=0\) has a nontrivial integer solution, the classical argument (see [5] and [22, the proof of Theorem 1.3]) reveals that the quantity \(\sigma_{\infty}([-1,1]^{N})\cdot\prod_{p\text{ prime}}\sigma_{p}(\mathbb{Z}_{p})\) is convergent and is bounded above and below by non-zero constants, respectively, depending on the polynomial \(P(\mathbf{t})\). 
Then, on observing that \[0\leq\sigma_{\infty}(T_{\infty})\leq\sigma_{\infty}([-1,1]^{N})\text{ and }0\leq\sigma_{p}(T_{p})\leq\sigma_{p}(\mathbb{Z}_{p}),\] we infer that the infinite product \(\sigma_{\infty}(T_{\infty})\cdot\prod_{p}\sigma_{p}(T_{p})\) in the numerator of \(c_{P}\) converges to a non-negative constant. Our second main theorem shows that under an additional condition on the polynomial \(P\) defining the thin set, the product of local proportions is strictly positive. **Theorem 1.2**.: _In addition to the setup of Theorem 1.1, suppose that there exists \(\boldsymbol{b}\in\mathbb{Z}^{N}\setminus\{\boldsymbol{0}\}\) with \(P(\boldsymbol{b})=0\) such that the variety \(V_{\boldsymbol{b}}\) in \(\mathbb{P}^{n-1}\) admits a smooth \(\mathbb{Q}\)-point. Then, the constant \(c_{P}\) is positive._ ## Structure of the paper and notation In section 2, we provide auxiliary lemmas and a proposition. Lemma 2.2 is used to prove Lemma 2.3. Lemma 2.4 and Lemma 2.5 are required to prove Proposition 2.6. In section 3, we also provide an auxiliary lemma and a proposition, and provide the proofs of Theorem 1.1 and Theorem 1.2 by making use of lemmas and propositions obtained in section 2 and 3. In particular, by making use of Lemma 2.1, Lemma 2.3, Lemma 3.1 and Proposition 2.6, we prove Theorem 1.1 in section 3. By utilizing Theorem 1.1, Lemma 2.3 and Proposition 3.2, we prove Theorem 1.2 in section 3. For a given vector \(\boldsymbol{v}\in\mathbb{R}^{N}\), we write the \(i\)-th coordinate of \(\boldsymbol{v}\) by \((\boldsymbol{v})_{i}\) or \(v_{i}\). We use \(\langle\cdot,\cdot\rangle\) for the inner product. We write \(0\leq\boldsymbol{x}\leq X\) or \(\boldsymbol{x}\in[0,X]^{s}\) to abbreviate the condition \(0\leq x_{1},\ldots,x_{s}\leq X\). For a prime \(p\) and vectors \(\boldsymbol{v}\in\mathbb{R}^{n}\), we use \(p^{h}\|\boldsymbol{v}\) when one has \(p^{h}|v_{i}\) for all \(1\leq i\leq n\) but \(p^{h+1}\nmid v_{i}\) for some \(1\leq i\leq n.\) Throughout this paper, we use \(\gg\) and \(\ll\) to denote Vinogradov's well-known notation, and write \(e(z)\) for \(e^{2\pi iz}\). We use \(A\asymp B\) when both \(A\gg B\) and \(A\ll B\) hold. We adopt the convention that when \(\epsilon\) appears in a statement, then the statement holds for each \(\epsilon>0\), with implicit constants depending on \(\epsilon\). ## Acknowledgement The authors acknowledge support from NSF grant DMS-2001549 under the supervision of Trevor Wooley. The third author would like to thank James Cumberbatch for helpful discussion and suggestion for the proof of Lemma 3.1. Especially, the third author would like to thank Trevor Wooley for his constant encouragement to complete this work. ## 2. Auxiliary lemmas and propositions Throughout this section, we fix a non-singular form \(P(\mathbf{t})\in\mathbb{Z}[t_{1},\ldots,t_{N}]\) in \(N\) variables of degree \(k\geq 2\). We also assume that \(N>(k-1)2^{k}\). Let \(\mathfrak{B}\) be a box in \([-1,1]^{N}\cap\mathbb{R}^{N}\). Denote by \(\mathbb{Z}_{\mathrm{prim}}^{N}\) the set of primitive integer vectors. 
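Before turning to the counting function introduced below, it may help to make the basic objects concrete. The following minimal sketch enumerates the degree-\(d\) monomials in \(n\) variables under one fixed lexicographical convention (the paper's ordering may differ in detail), evaluates \(f_{\boldsymbol{a}}(\boldsymbol{x})=\langle\boldsymbol{a},\nu_{d,n}(\boldsymbol{x})\rangle\), and checks that the number of monomials is \(N=\binom{n+d-1}{d}\); the function names are ours, chosen for illustration only:

```python
from itertools import combinations_with_replacement
from math import comb

def veronese_exponents(n, d):
    """Exponent vectors of all degree-d monomials in n variables,
    listed in one fixed lexicographical order."""
    exps = []
    for combo in combinations_with_replacement(range(n), d):
        e = [0] * n
        for i in combo:
            e[i] += 1
        exps.append(tuple(e))
    return exps

def f_a(a, x, n, d):
    """Evaluate f_a(x) = <a, nu_{d,n}(x)> for integer coefficient vector a."""
    monomials = veronese_exponents(n, d)
    assert len(a) == len(monomials) == comb(n + d - 1, d)  # N = binom(n+d-1, d)
    total = 0
    for coeff, e in zip(a, monomials):
        term = coeff
        for xi, ei in zip(x, e):
            term *= xi ** ei
        total += term
    return total

# Example: n = 2, d = 2 gives N = 3 and f_a(x) = a1*x1^2 + a2*x1*x2 + a3*x2^2.
print(veronese_exponents(2, 2))       # [(2, 0), (1, 1), (0, 2)]
print(f_a([1, -2, 1], [3, 1], 2, 2))  # (3 - 1)^2 = 4
```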
For given \(A,B>0\) and \(\boldsymbol{r}\in\mathbb{Z}^{N}\) with \(0\leq\boldsymbol{r}\leq B-1\), we define \[\mathcal{N}(A,\mathfrak{B},B,\boldsymbol{r},P)=\#\left\{\boldsymbol{a}\in A \mathfrak{B}\cap\mathbb{Z}_{\mathrm{prim}}^{N}\right|\,P(\boldsymbol{a})=0, \ \boldsymbol{a}\equiv\boldsymbol{r}\ (\mathrm{mod}\ B)\right\}.\] The first lemma in this section provides the asymptotic formula for \[\mathcal{N}(A,\mathfrak{B},B,\boldsymbol{r},P)\text{ as }A\to\infty.\] In advance of the statement of Lemma 2.1, we define \(v_{p}(L)\) with \(L\in\mathbb{Z}\) and \(p\) prime as the integer \(s\) such that \(p^{s}\|L\). **Lemma 2.1**.: _Let \(B=p_{1}^{r_{1}}p_{2}^{r_{2}}\cdots p_{m}^{r_{m}}\) with \(p_{i}\) distinct prime numbers and \(r_{i}\geq 0\) for \(i=1,2,\ldots,m\). Put \(M=\{p_{1},p_{2},\ldots,p_{m}\}.\) Then, for a given \(\boldsymbol{r}\in\mathbb{Z}^{N}\) with \(0\leq\boldsymbol{r}\leq B-1\) and for sufficiently large \(A>0\), there exists \(\delta>0\) such that_ \[\mathcal{N}(A,\mathfrak{B},B,\boldsymbol{r},P)=\frac{1}{\zeta(N-k)}\prod_{p \notin M}\sigma_{p}\cdot\prod_{p\in M}\sigma_{p}^{B,\boldsymbol{r}}\cdot \sigma_{\infty}\cdot A^{N-k}+O(A^{N-k-\delta}),\] _where_ \[\sigma_{p} :=\lim_{r\to\infty}p^{-r(N-1)}\#\left\{1\leq\boldsymbol{a}\leq p^{r} \right|\,P(\boldsymbol{a})\equiv 0\ (\text{mod}\ p^{r})\right\},\] \[\sigma_{p}^{B,\boldsymbol{r}} :=\lim_{r\to\infty}p^{-r(N-1)}\#\left\{1\leq\boldsymbol{a}\leq p^ {r}\right|\,P(\boldsymbol{a})\equiv 0\ (\text{mod}\ p^{r}),\ \boldsymbol{a}\equiv \boldsymbol{r}\ (\text{mod}\ p^{v_{p}(B)})\right\}\] _and_ \[\sigma_{\infty}:=\sigma_{\infty}(\mathfrak{B})=\lim_{\eta\to 0+}(2\eta)^{-1}V_{ \infty}(\eta)\] _in which \(V_{\infty}(\eta)\) is the volume of the subset of \(\boldsymbol{y}\subseteq\mathfrak{B}\) satisfying \(|P(\boldsymbol{y})|<\eta\). In particular, the implicit constant in \(O(A^{N-k-\delta})\) depends on \(B\) and \(P(\mathbf{t})\)._ We record this lemma without proof because it is readily obtained by the previous results as follows. By repeating the argument of Birch in [5] replaced with the variables \(\boldsymbol{x}\) imposed on the congruence condition \(\boldsymbol{x}\equiv\boldsymbol{r}\ (\mathrm{mod}\ B)\), we obtain an asymptotic formula for the number of integer solutions \(\boldsymbol{x}\in[-A,A]^{N}\) of \(P(\boldsymbol{x})=0\) with the congruence condition \(\boldsymbol{x}\equiv\boldsymbol{r}\ (\mathrm{mod}\ B)\) as \(A\to\infty\). Then, by the Mobius inversion formula, one has the asymptotic formula for \(\mathcal{N}(A,\mathfrak{B},B,\boldsymbol{r},P)\) (see [21, Lemma 2.2] for an exposition of the application of the Mobius inversion formula). The main term of this asymptotic formula includes the product of \(p\)-adic densities and the singular integral. By applying the strategy proposed by Schmidt [23, 24] (see also a refined version [10, section 9]), the singular integral can be replaced by the real density \(\sigma_{\infty}\). Furthermore, we readily deduce by the coprimality between \(p\) and \(B\) that the product of \(p\)-adic densities in the main term becomes \(\prod_{p\notin M}\sigma_{p}\cdot\prod_{p\in M}\sigma_{p}^{B,\boldsymbol{r}}\). Thus, this yields the asymptotic formula for \(N(A,\mathfrak{B},B,\boldsymbol{r},P)\) as desired in Lemma 2.1. **Lemma 2.2**.: _Suppose that \(C\) is a sufficiently large constant and that \(A\) and \(Q\) are positive numbers with \(CQ\leq A\). 
Then, for a given \(\boldsymbol{c}\in\mathbb{Z}^{N}\) with \(1\leq\boldsymbol{c}\leq Q\), we have_ \[\#\left\{\boldsymbol{a}\in[-A,A]^{N}\cap\mathbb{Z}^{N}\big{|}\ P(\boldsymbol{a})=0\ \text{and}\ \boldsymbol{a}\equiv\boldsymbol{c}\ (\text{mod}\ Q)\right\}\ll(A/Q)^{N-k},\] _where the implicit constant may depend on \(P(\mathbf{t}).\)_ Proof.: By orthogonality, we have \[\#\left\{\boldsymbol{a}\in[-A,A]^{N}\cap\mathbb{Z}^{N}\big{|}\ P( \boldsymbol{a})=0\ \text{and}\ \boldsymbol{a}\equiv\boldsymbol{c}\ (\text{mod}\ Q)\right\}\] \[=\int_{0}^{1}\sum_{-A\leq Q\boldsymbol{y}+\boldsymbol{c}\leq A}e( \alpha P(Q\boldsymbol{y}+\boldsymbol{c}))d\alpha. \tag{2.1}\] By the change of variable \(\alpha=\beta/Q^{k}\), the last expression is seen to be \[Q^{-k}\int_{0}^{Q^{k}}\sum_{-A\leq Q\boldsymbol{y}+\boldsymbol{ c}\leq A}e(\beta P(\boldsymbol{y}+\boldsymbol{c}/Q))d\beta\] \[\ll\sup_{\begin{subarray}{c}1\leq l\leq Q^{k}\\ l\in\mathbb{N}\end{subarray}}\int_{l-1}^{l}\sum_{-A\leq Q\boldsymbol{y}+ \boldsymbol{c}\leq A}e(\beta P(\boldsymbol{y}+\boldsymbol{c}/Q))d\beta\] \[=\sup_{\begin{subarray}{c}1\leq l\leq Q^{k}\\ l\in\mathbb{N}\end{subarray}}\int_{l-1}^{l}\sum_{-A\leq Q\boldsymbol{y}+ \boldsymbol{c}\leq A}e(\beta(P(\boldsymbol{y})+g(\boldsymbol{y})))d\beta, \tag{2.2}\] where \(g\in\mathbb{Q}[\boldsymbol{y}]\) is a polynomial of degree at most \(k-1\). For a given \(l\in\mathbb{Z}\), we define the major arcs \(\mathfrak{M}^{l}_{\delta}\) by \[\mathfrak{M}^{l}_{\delta}=\bigcup_{\begin{subarray}{c}0\leq a\leq q\leq(A/Q) ^{\delta}\\ (q,a)=1\end{subarray}}\mathfrak{M}^{l}(q,a),\] where \[\mathfrak{M}^{l}(q,a)=\left\{\beta\in[l-1,l)|\ \left|\beta-(l-1)-\frac{a}{q} \right|\leq\frac{(A/Q)^{\delta}}{q(A/Q)^{k}}\right\}.\] Furthermore, we define the minor arcs \(\mathfrak{m}^{l}_{\delta}:=[l-1,l)\setminus\mathfrak{M}^{l}_{\delta}.\) Then, on writing \[S(\beta):=\sum_{-A\leq Q\boldsymbol{y}+\boldsymbol{c}\leq A}e(\beta(P( \boldsymbol{y})+g(\boldsymbol{y}))), \tag{2.3}\] we see that \[\int_{l-1}^{l}S(\beta)d\beta=\int_{\mathfrak{M}^{l}_{\delta}}S(\beta)d\beta+ \int_{\mathfrak{m}^{l}_{\delta}}S(\beta)d\beta.\] Note by [18, Lemma 3.6] and the fact that \(P(\mathbf{t})\) is a non-singular form that \[\sup_{\beta\in\mathfrak{m}^{l}_{\delta}}S(\beta)\ll(A/Q)^{N-N\delta/(2^{k-1} (k-1))+\epsilon}. \tag{2.4}\] One infers by repeating the argument in [5] with the bound (2.4) that whenever \(N>(k-1)2^{k}\) and \(A/Q\) is sufficiently large, we have \[\int_{l-1}^{l}S(\beta)d\beta\ll(A/Q)^{N-k}. \tag{2.5}\] Recall that \[\#\left\{\boldsymbol{a}\in[-A,A]^{N}\cap\mathbb{Z}^{N}\big{|}\ P(\boldsymbol{a })=0\ \text{and}\ \boldsymbol{a}\equiv\boldsymbol{c}\ (\text{mod}\ Q)\right\}\] is bounded above by the last expression in (2.2), and recall the definition (2.3) of \(S(\beta)\). Therefore, by substituting the bound (2.5) into the last expression of (2.2), we conclude that \[\#\left\{\boldsymbol{a}\in[-A,A]^{N}\cap\mathbb{Z}^{N}\big{|}\ P(\boldsymbol{a })=0\ \text{and}\ \boldsymbol{a}\equiv\boldsymbol{c}\ (\text{mod}\ Q)\right\}\ll(A/Q)^{N-k}.\] The following lemma provides an upper asymptotic estimate for the number of integer points on the variety cut out by the polynomial \(P(\mathbf{t})\in\mathbb{Z}[\mathbf{t}]\) that reduce modulo \(p\), for some sufficiently large \(p>M,\) to an \(\mathbb{F}_{p}\)-point of another given variety \(Y\subset\mathbb{A}^{N}\) defined over \(\mathbb{Z}\).
**Lemma 2.3**.: _Let \(\mathfrak{B}\) be a compact region in \(\mathbb{R}^{N}\) having a finite measure, and let \(Y\) be any closed subscheme of \(\mathbb{A}^{N}_{\mathbb{Z}}\) of codimension \(r\geq 1.\) Let \(A\) and \(M\) be positive real numbers. Suppose that \(r-1>k\) and that \(P(\mathbf{t})=0\) has a non-trivial integer solution. Then, there exists \(A_{0}:=A_{0}(P(\mathbf{t}))\in\mathbb{R}_{>0}\) such that whenever \(A>A_{0}\), we have_ \[\#\left\{\boldsymbol{a}\in A\mathfrak{B}\cap\mathbb{Z}^{N}\,\middle|\, \begin{array}{l}(i)\ \boldsymbol{a}\ (\text{mod}\ p)\in Y(\mathbb{F}_{p})\ \text{for some prime}\ p>M\\ (ii)\ P(\boldsymbol{a})=0\end{array}\right\}\] \[\ll\frac{A^{N-k}}{M^{r-k-1}\log M}+A^{N-r+1}, \tag{2.6}\] _where the implicit constant may depend on \(\mathfrak{B}\) and \(Y\)._ In [4, Theorem 3.3], Bhargava provided the upper asymptotic estimate \[\#\left\{\boldsymbol{a}\in A\mathfrak{B}\cap\mathbb{Z}^{N}\,\middle|\ \boldsymbol{a}\ (\text{mod}\ p)\in Y(\mathbb{F}_{p})\ \text{for some prime}\ p>M\right\}\] \[\ll\frac{A^{N}}{M^{r-1}\log M}+A^{N-r+1}. \tag{2.7}\] Furthermore, as alluded to in [4, Remark 3.4], the bound in (2.7) can be achieved for suitable choices of \(Y\). Thus, this bound is essentially optimal. In the proof of Lemma 2.3, we mainly adopt the argument in [4, Theorem 3.3], though we freely admit that the bound in (2.6) does not seem to be of optimal order of magnitude. In particular, the second term is trivially inherited from that in (2.7). We are independently interested in a sharper bound in (2.6) and expect that one might be able to improve this bound. Nevertheless, since the strength of the upper asymptotic estimate in Lemma 2.3 is sufficient for our purpose, we do not put effort into optimizing this upper bound in this paper. Proof of Lemma 2.3.: We can and do assume that \(Y\) is irreducible. Otherwise, we can take its irreducible components and sum the bounds (2.6) to deduce the general case. Since \(Y\) has codimension \(r\), there exist \(f_{1},\ldots,f_{r}\in\mathbb{Z}[t_{1},\ldots,t_{N}]\) such that the vanishing locus \(V(f_{1},\ldots,f_{r})\) contains an irreducible component of codimension \(r\) containing \(Y\). Indeed, we can assume that \(Y\) equals the irreducible component, as they have the same underlying reduced subscheme, and we only consider \(\mathbb{Z}\)- or \(\mathbb{F}_{p}\)-points of them. By [4, Lemma 3.1], the number of \(\boldsymbol{a}\in A\mathfrak{B}\cap Y(\mathbb{Z})\) is \(\ll A^{N-r}\). (Note that \(A\mathfrak{B}\cap Y(\mathbb{Z})\) equals \(A\mathfrak{B}\cap\mathbb{Z}^{N}\cap Y(\mathbb{R})\).) Thus, it suffices to show that \[\#\left\{\boldsymbol{a}\in A\mathfrak{B}\cap\mathbb{Z}^{N}\,\middle|\, \begin{array}{l}(i)\ P(\boldsymbol{a})=0\\ (ii)\ \boldsymbol{a}\ (\text{mod}\ p)\in Y(\mathbb{F}_{p})\ \text{for some prime}\ p>M\\ (iii)\ \boldsymbol{a}\notin Y(\mathbb{Z})\end{array}\right\}\] \[\ll\frac{A^{N-k}}{M^{r-k-1}\log M}+A^{N-r+1}.\] Note that \(r>k+1\), since \(r-1>k\) by the hypothesis in the statement of Lemma 2.3. We shall establish the corresponding bound for a slightly larger set of pairs, namely \[\#\left\{(\boldsymbol{a},p)\,\middle|\,\begin{array}{l}(i)\ \boldsymbol{a}\in A\mathfrak{B}\cap\mathbb{Z}^{N}\ \text{and}\ P(\boldsymbol{a})=0\\ (ii)\ p>M\ \text{a prime and}\ \boldsymbol{a}\ (\text{mod}\ p)\in Y(\mathbb{F}_{p})\\ (iii)\ \boldsymbol{a}\notin Y(\mathbb{Z})\end{array}\right\}\] \[\ll\frac{A^{N-k}}{M^{r-k-1}\log M}+A^{N-r+1}.
\tag{2.8}\] First, we count the pairs \((\boldsymbol{a},p)\) on the left-hand side of (2.8) for each prime \(p\) satisfying \(p\leq A\); such primes arise only when \(A>M.\) Meanwhile, by Lemma 2.2 we find that for a given \(\boldsymbol{c}\in[1,p]^{N}\), the number of integer solutions \(\boldsymbol{a}\in[-A,A]^{N}\) of \(P(\boldsymbol{a})=0\) with the congruence condition \(\boldsymbol{a}\equiv\boldsymbol{c}\ (\text{mod}\ p)\) is \(O((A/p)^{N-k}).\) Then, since \(\#Y(\mathbb{F}_{p})=O(p^{N-r})\), we see that the number of \(\boldsymbol{a}\in[-A,A]^{N}\cap\mathbb{Z}^{N}\) with \(P(\boldsymbol{a})=0\) such that \(\boldsymbol{a}\ (\text{mod}\ p)\) is in \(Y(\mathbb{F}_{p})\) is \(O(p^{N-r})\cdot O((A/p)^{N-k})=O(A^{N-k}/p^{r-k}).\) Thus, the total number of desired pairs \((\boldsymbol{a},p)\) with \(p\leq A\) is at most \[\#\left\{(\boldsymbol{a},p)\,\middle|\,\begin{array}{l}(i)\ \boldsymbol{a}\in A\mathfrak{B}\cap\mathbb{Z}^{N}\ \text{and}\ P(\boldsymbol{a})=0\\ (ii)\ A\geq p>M\ \text{a prime and}\ \boldsymbol{a}\ (\text{mod}\ p)\in Y(\mathbb{F}_{p})\\ (iii)\ \boldsymbol{a}\notin Y(\mathbb{Z})\end{array}\right\}\] \[\ll\sum_{M<p\leq A}O\left(\frac{A^{N-k}}{p^{r-k}}\right)=O\left(\frac{A^{N-k}}{M^{r-k-1}\log M}\right). \tag{2.9}\] Next, we count the pairs \((\mathbf{a},p)\) with \(p>A.\) It follows (see equation (17) in the proof of [4, Theorem 3.3]) that \[\#\left\{(\mathbf{a},p)\,\middle|\,\begin{array}{l}(i)\ \mathbf{a}\in A\mathfrak{B}\cap\mathbb{Z}^{N}\ \text{and}\ P(\mathbf{a})=0\\ (ii)\ p>A\ \text{a prime and}\ \mathbf{a}\ (\bmod\ p)\in Y(\mathbb{F}_{p})\\ (iii)\ \mathbf{a}\notin Y(\mathbb{Z})\end{array}\right\}\] \[\ll\#\left\{(\mathbf{a},p)\,\middle|\,\begin{array}{l}(i)\ \mathbf{a}\in A\mathfrak{B}\cap\mathbb{Z}^{N}\\ (ii)\ p>A\ \text{a prime and}\ \mathbf{a}\ (\bmod\ p)\in Y(\mathbb{F}_{p})\\ (iii)\ \mathbf{a}\notin Y(\mathbb{Z})\end{array}\right\}\ll A^{N-r+1}. \tag{2.10}\] Therefore, we find by (2.9) and (2.10) that the inequality (2.8) holds. Hence, we complete the proof of Lemma 2.3. Next, we will prove that most of the polynomials \(f_{\mathbf{a}}(\mathbf{x})\) are irreducible. Let \(Y\) be the subset of \(\mathbb{A}_{\mathbb{Z}}^{N}\) defined by \[Y:=\left\{\mathbf{a}\in\mathbb{A}_{\mathbb{Z}}^{N}\,\middle|\ f_{\mathbf{a}}(\mathbf{x})\ \text{is reducible over}\ \mathbb{C}\right\}. \tag{2.11}\] Our goal here is to show that \(Y\) is, in fact, an algebraic variety and that the codimension of \(Y\) is strictly greater than a constant depending on \(n\) and \(d\). Here, we fix \(r:=\operatorname{codim}_{\mathbb{A}_{\mathbb{Z}}^{N}}Y\). To show the claim, we record the following two useful lemmas. **Lemma 2.4**.: _For any integers \(n\geq 3\) and \(d\geq 3\),_ \[\frac{(n+d)(n+d-1)}{n-1}<\binom{n+d}{d}.\] Proof.: We proceed by induction on \(n\). When \(n=3\), we have \[3<3+d-2,\] so that \[\frac{3(3+d)(3+d-1)}{2}<\frac{(3+d)(3+d-1)(3+d-2)}{2},\] and hence \[\frac{(3+d)(3+d-1)}{2}<\frac{(3+d)(3+d-1)(3+d-2)}{3\cdot 2}=\binom{3+d}{d}.\] Suppose that the lemma is true for \(n\). Then, \[\frac{(n+d+1)(n+d)}{n} =\frac{(n+d+1)(n+d)(n+d-1)(n-1)}{n(n-1)(n+d-1)}\] \[<\frac{(n+d)!}{n!d!}\cdot\frac{(n+d+1)(n-1)}{n(n+d-1)}\] \[=\frac{(n+d+1)!}{(n+1)!d!}\cdot\frac{(n+1)(n-1)}{n(n+d-1)}\] \[<\frac{(n+d+1)!}{(n+1)!d!}.\] The last inequality follows from \(\frac{(n-1)(n+1)}{n(n+d-1)}<1.\) **Lemma 2.5**.: _Let \(n\geq 3\) and \(d\geq 3\) be integers. Suppose that \(d_{1}\) and \(d_{2}\) are natural numbers with \(d=d_{1}+d_{2}\). 
Let \(C_{n,d}(d_{1},d_{2})\) be the rational number defined as_ \[C_{n,d}(d_{1},d_{2})=\frac{d!(n+d_{1}-1)!}{d_{1}!(n+d-1)!}+\frac{d!(n+d_{2}-1)!}{d_{2}!(n+d-1)!}.\] _Then, for given \(n\) and \(d,\) the quantity \(C_{n,d}(d_{1},d_{2})\) attains its maximum value when \((d_{1},d_{2})=(1,d-1)\) or \((d_{1},d_{2})=(d-1,1)\). Furthermore, its maximum is strictly less than \(1\)._ Proof.: Without loss of generality, we assume that \(d_{1}\leq d_{2}.\) We shall first show that one has \[C_{n,d}(d_{1},d_{2})\leq C_{n,d}(d_{1}-1,d_{2}+1). \tag{2.12}\] In order to verify the inequality (2.12), we observe that, since \(d_{1}\leq d_{2}\), whenever \(n\geq 3\) one has \[(n-1)\cdot\frac{(n+d_{1}-2)!}{d_{1}!}\leq(n-1)\cdot\frac{(n+d_{2}-1)!}{(d_{2}+1)!}.\] Equivalently, this is seen to be \[\frac{(n+d_{1}-1)!}{d_{1}!}-\frac{(n+d_{1}-2)!}{(d_{1}-1)!}\leq\frac{(n+d_{2})!}{(d_{2}+1)!}-\frac{(n+d_{2}-1)!}{d_{2}!},\] and thus \[\frac{(n+d_{1}-1)!}{d_{1}!}+\frac{(n+d_{2}-1)!}{d_{2}!}\leq\frac{(n+d_{2})!}{(d_{2}+1)!}+\frac{(n+d_{1}-2)!}{(d_{1}-1)!}. \tag{2.13}\] Therefore, we find from (2.13) that \[\frac{(n+d-1)!}{d!}C_{n,d}(d_{1},d_{2})\leq\frac{(n+d-1)!}{d!}C_{n,d}(d_{1}-1,d_{2}+1).\] This confirms the inequality (2.12). Then, by applying (2.12) iteratively, we conclude that the quantity \(C_{n,d}(d_{1},d_{2})\) attains its maximum value when \((d_{1},d_{2})=(1,d-1)\) or \((d_{1},d_{2})=(d-1,1)\). Next, we deduce by applying Lemma 2.4 that \[\begin{split} C_{n,d}(1,d-1)&=\frac{d!n!}{(n+d-1)!}+\frac{d!(n+d-2)!}{(d-1)!(n+d-1)!}\\ &=\frac{d!n!}{(n+d-1)!}+\frac{d}{n+d-1}\\ &=\frac{n+d}{\binom{n+d}{d}}+\frac{d}{n+d-1}\\ &<\frac{(n-1)(n+d)}{(n+d)(n+d-1)}+\frac{d}{n+d-1}\\ &=\frac{n-1}{n+d-1}+\frac{d}{n+d-1}=1.\end{split} \tag{2.14}\] Therefore, this completes the proof of Lemma 2.5. **Proposition 2.6**.: _Recall the definition of \(Y\subseteq\mathbb{A}_{\mathbb{Z}}^{N}\) in (2.11). Then, \(Y\) is an affine variety. Further, let \(r\) be the codimension of \(Y\). Then, one has \(r>\lfloor(1-C_{n,d}(1,d-1))N\rfloor\)._ Proof.: Let \(\langle\mathbf{a},\nu_{d,n}(\mathbf{x})\rangle\) be a reducible polynomial. Then, we write \(\langle\mathbf{a},\nu_{d,n}(\mathbf{x})\rangle=f^{(1)}(\mathbf{x})f^{(2)}(\mathbf{x})\). Here, \(f^{(1)}(\mathbf{x})\) and \(f^{(2)}(\mathbf{x})\) are homogeneous polynomials whose degrees are strictly less than \(d\). Let \(d_{i}=\deg(f^{(i)}(\mathbf{x}))\), let \(t=\binom{n+d_{1}-1}{n-1}\), and write \(M(d_{1},d_{2}):=\binom{n+d_{1}-1}{n-1}+\binom{n+d_{2}-1}{n-1}\). Then, \[\langle\mathbf{a},\nu_{d,n}(\mathbf{x})\rangle=\underbrace{(u_{1}x_{1}^{d_{1}}+\cdots+u_{t}x_{n}^{d_{1}})}_{=f^{(1)}(\mathbf{x})}\underbrace{(u_{t+1}x_{1}^{d_{2}}+\cdots+u_{M(d_{1},d_{2})}x_{n}^{d_{2}})}_{=f^{(2)}(\mathbf{x})}. \tag{2.15}\] Let \(Y_{(d_{1},d_{2})}\subset Y\) be the subset where \(\langle\mathbf{a},\nu_{d,n}(\mathbf{x})\rangle\) factors as in (2.15). Comparing the coefficients on both sides of (2.15), we obtain each \(a_{i}\) as a polynomial in \(u_{1},\ldots,u_{M(d_{1},d_{2})}\). Now, let us write \(a_{i}=g_{i}(u_{1},\ldots,u_{M(d_{1},d_{2})})\). Consider the map \(\varphi_{(d_{1},d_{2})}:\mathbb{Z}[t_{1},\ldots,t_{N}]\to\mathbb{Z}[u_{1},\ldots,u_{M(d_{1},d_{2})}]\) sending \(t_{i}\mapsto g_{i}(u_{1},\ldots,u_{M(d_{1},d_{2})})\). Then, by the construction, \(Y_{(d_{1},d_{2})}=V(\ker\varphi_{(d_{1},d_{2})})\), and so \(Y=\bigcup Y_{(d_{1},d_{2})}\) is an affine variety. Now, we prove \(r>\lfloor(1-C_{n,d}(1,d-1))N\rfloor\). We will instead find an upper bound for the dimension of \(Y\). Let \(M=\max M(d_{1},d_{2})\). Since \(\mathbb{Z}[t_{1},\ldots,t_{N}]/\ker\varphi_{(d_{1},d_{2})}\) injects into \(\mathbb{Z}[u_{1},\ldots,u_{M(d_{1},d_{2})}]\), \(\dim Y\) is strictly less than \(M\). 
Hence, it suffices to show \(M\leq N-\lfloor(1-C_{n,d}(1,d-1))N\rfloor\). Note that we have \(M(d_{1},d_{2})=C_{n,d}(d_{1},d_{2})N\leq N-\lfloor(1-C_{n,d}(1,d-1))N\rfloor\). Indeed, we have \[M(d_{1},d_{2}) =\binom{n+d_{1}-1}{n-1}+\binom{n+d_{2}-1}{n-1}\] \[=\frac{(n+d_{1}-1)!}{(n-1)!d_{1}!}+\frac{(n+d_{2}-1)!}{(n-1)!d_{2}!}\] \[=\frac{(n+d-1)!}{d!(n-1)!}\underbrace{\left(\frac{d!(n+d_{1}-1)!}{d_{1}!(n+d-1)!}+\frac{d!(n+d_{2}-1)!}{d_{2}!(n+d-1)!}\right)}_{C_{n,d}(d_{1},d_{2})}\] \[=C_{n,d}(d_{1},d_{2})N\leq N-\lfloor(1-C_{n,d}(1,d-1))N\rfloor.\] The latter inequality follows from Lemma 2.5. Now, since \(\dim Y<M\leq N-\lfloor(1-C_{n,d}(1,d-1))N\rfloor\), we have \(\lfloor(1-C_{n,d}(1,d-1))N\rfloor<N-\dim Y=r\), as desired. ## 3. Proof of Theorem 1.1 and Theorem 1.2 In this section, we provide the proofs of Theorem 1.1 and Theorem 1.2. Recall the definitions (1.5) and (1.6) of the sets \(T_{\infty}\) and \(T_{p}\) for every prime \(p\). We begin this section with a lemma which says that the sets \(T_{\infty}\) and \(T_{p}\) are measurable in \(\mathbb{R}^{N}\) and \(\mathbb{Z}_{p}^{N}\) with the Lebesgue measure and the Haar measure \(\mu_{p}\), respectively. **Lemma 3.1**.: _The sets \(T_{\infty}\subseteq\mathbb{R}^{N}\) and \(T_{p}\subseteq\mathbb{Z}_{p}^{N}\) are measurable with the Lebesgue measure and the Haar measure \(\mu_{p},\) respectively._ Proof.: We shall first prove that \(T_{\infty}\) is Lebesgue measurable. Write \(\mathcal{S}:=\{\boldsymbol{x}\in\mathbb{R}^{n}|\ \|\boldsymbol{x}\|_{\infty}=1\}.\) On observing that \(f_{\boldsymbol{a}}(c\boldsymbol{x})=c^{d}f_{\boldsymbol{a}}(\boldsymbol{x})\) for any \(c\in\mathbb{R},\) and on replacing \(\boldsymbol{x}\) by \(\boldsymbol{x}/\|\boldsymbol{x}\|_{\infty},\) we note that \[T_{\infty}=\left\{\boldsymbol{a}\in[-1,1]^{N}\cap\mathbb{R}^{N}\big{|}\ \exists\ \boldsymbol{x}\in\mathcal{S}\text{ such that }f_{\boldsymbol{a}}(\boldsymbol{x})=0\right\}.\] We claim that \[T_{\infty}=[-1,1]^{N}\cap\bigcap_{k\in\mathbb{N}}\bigcup_{\boldsymbol{x}\in\mathcal{S}}\{\boldsymbol{a}\in\mathbb{R}^{N}|\ |f_{\boldsymbol{a}}(\boldsymbol{x})|<1/k\}, \tag{3.1}\] where \(|\cdot|\) is the absolute value on \(\mathbb{R}\). Obviously, the set \(T_{\infty}\) is included in the set on the right-hand side of (3.1). Conversely, for any element \(\boldsymbol{a}\in[-1,1]^{N}\) contained in the set on the right-hand side of (3.1), there exists a sequence \(\boldsymbol{x}_{m}\in\mathcal{S}\) such that \(f_{\boldsymbol{a}}(\boldsymbol{x}_{m})\) converges to \(0\) as \(m\to\infty\). Since \(\mathcal{S}\) is a compact set, the sequence has a limit point \(\boldsymbol{x}\in\mathcal{S}\), and by the continuity of \(f_{\boldsymbol{a}}\) one has \(f_{\boldsymbol{a}}(\boldsymbol{x})=0.\) This means that \(\boldsymbol{a}\in T_{\infty}.\) Hence, we confirm that (3.1) holds. Meanwhile, since the set \(\mathcal{S}\cap\mathbb{Q}^{n}\) is dense in \(\mathcal{S}\) (each face of the cube \([-1,1]^{n}\) contains a dense set of rational points), we see that \[T_{\infty}=[-1,1]^{N}\cap\bigcap_{k\in\mathbb{N}}\bigcup_{\boldsymbol{x}\in\mathcal{S}\cap\mathbb{Q}^{n}}\{\boldsymbol{a}\in\mathbb{R}^{N}|\ |f_{\boldsymbol{a}}(\boldsymbol{x})|<1/k\}. \tag{3.2}\] For fixed \(k\in\mathbb{N}\) and \(\boldsymbol{x}\in\mathcal{S}\cap\mathbb{Q}^{n},\) the set \(\{\boldsymbol{a}\in\mathbb{R}^{N}|\ |f_{\boldsymbol{a}}(\boldsymbol{x})|<1/k\}\) is an open set. Therefore, on noting from (3.2) that \(T_{\infty}\) is obtained from such sets and the closed box \([-1,1]^{N}\) by taking countable unions and countable intersections, one finds that the set \(T_{\infty}\) is Lebesgue measurable. Next, we shall prove that \(T_{p}\) is measurable with \(\mu_{p},\) by following the same method used in the previous paragraph. 
Write \(U_{p}:=\{\boldsymbol{x}\in\mathbb{Z}_{p}^{n}|\ \max_{1\leq i\leq n}|x_{i}|_{p}=1\}.\) On scaling \(\boldsymbol{x}\) by a suitable power of \(p\), one sees that \[T_{p}=\left\{\boldsymbol{a}\in\mathbb{Z}_{p}^{N}\big{|}\ \exists\ \boldsymbol{x}\in U_{p}\text{ such that }f_{\boldsymbol{a}}(\boldsymbol{x})=0\right\}.\] We claim that \[T_{p}=\bigcap_{k\in\mathbb{N}}\bigcup_{\boldsymbol{x}\in U_{p}}\{\boldsymbol{a}\in\mathbb{Z}_{p}^{N}|\ |f_{\boldsymbol{a}}(\boldsymbol{x})|_{p}<p^{-k}\}. \tag{3.3}\] Clearly, the set \(T_{p}\) is included in the set on the right-hand side of (3.3). Conversely, for any element \(\boldsymbol{a}\in\mathbb{Z}_{p}^{N}\) in the set on the right-hand side of (3.3), there exists a sequence \(\boldsymbol{x}_{m}\in U_{p}\) such that \(|f_{\boldsymbol{a}}(\boldsymbol{x}_{m})|_{p}\) converges to \(0\) as \(m\to\infty.\) Since \(U_{p}\) is compact, the sequence has a limit point \(\boldsymbol{x}\in U_{p}\), and by continuity \(f_{\boldsymbol{a}}(\boldsymbol{x})=0.\) This means that \(\boldsymbol{a}\in T_{p}.\) Thus, we confirm that (3.3) holds. Meanwhile, since the set \(\mathbb{Z}^{n}\cap U_{p}\) is dense in \(U_{p}\), we find that \[T_{p}=\bigcap_{k\in\mathbb{N}}\bigcup_{\boldsymbol{x}\in\mathbb{Z}^{n}\cap U_{p}}\{\boldsymbol{a}\in\mathbb{Z}_{p}^{N}|\ |f_{\boldsymbol{a}}(\boldsymbol{x})|_{p}<p^{-k}\}. \tag{3.4}\] For fixed \(k\in\mathbb{N}\) and \(\boldsymbol{x}\in\mathbb{Z}^{n}\cap U_{p}\), the set \(\{\boldsymbol{a}\in\mathbb{Z}_{p}^{N}|\ |f_{\boldsymbol{a}}(\boldsymbol{x})|_{p}<p^{-k}\}\) is open and thus is measurable with \(\mu_{p}\). Therefore, on noting from (3.4) that \(T_{p}\) is obtained by taking countable unions and countable intersections of the sets \(\{\boldsymbol{a}\in\mathbb{Z}_{p}^{N}|\ |f_{\boldsymbol{a}}(\boldsymbol{x})|_{p}<p^{-k}\},\) we discern that the set \(T_{p}\) is measurable with \(\mu_{p}\). In advance of the proofs of Theorem 1.1 and Theorem 1.2, we provide some definitions and observations. We consider the natural map \[\Phi^{A}:\ [-A,A]^{N}\cap\mathbb{Z}_{\text{prim}}^{N} \to[-1,1]^{N}\times\prod_{p\text{ prime}}\mathbb{Z}_{p}^{N}\] \[\boldsymbol{a} \mapsto\left(\frac{\boldsymbol{a}}{A},\boldsymbol{a},\ldots,\boldsymbol{a},\ldots\right).\] We sometimes use a different order of primes in the product \(\prod_{p\text{ prime}}\mathbb{Z}_{p}^{N},\) for notational convenience. Furthermore, for a given subset \(\mathcal{U}\) of \([-1,1]^{N}\times\prod_{p\text{ prime}}\mathbb{Z}_{p}^{N}\) and a given polynomial \(P(\mathbf{t})\in\mathbb{Z}[t_{1},\ldots,t_{N}],\) we define a quantity \(\boldsymbol{d}(\mathcal{U},A;P)\) by \[\boldsymbol{d}(\mathcal{U},A;P):=\frac{\#\left\{\boldsymbol{a}\in(\Phi^{A})^{-1}(\mathcal{U})\middle|\ P(\boldsymbol{a})=0\right\}}{\#\left\{\boldsymbol{a}\in[-A,A]^{N}\cap\mathbb{Z}_{\text{prim}}^{N}\ \middle|\ P(\boldsymbol{a})=0\right\}}. \tag{3.5}\] We say that a subset of \(\mathbb{Z}_{p}\) is an open interval if it has the form \(\{x\in\mathbb{Z}_{p}|\ |x-a|_{p}\leq b\}\) for some \(a\in\mathbb{Z}_{p}\) and \(b\in\mathbb{R}.\) Furthermore, by an open box \(I_{p}\) in \(\mathbb{Z}_{p}^{N}\) with a given prime \(p,\) we mean a Cartesian product of open intervals. 
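As a concrete illustration of these open intervals, for \(a\in\mathbb{Z}_{p}\) and \(m\in\mathbb{N}\cup\{0\}\) the interval \(\{x\in\mathbb{Z}_{p}|\ |x-a|_{p}\leq p^{-m}\}\) consists of the \(x\) with \(x\equiv a\ (\text{mod}\ p^{m})\) and has Haar measure \(\mu_{p}\) equal to \(p^{-m}\). A minimal sketch checking this by counting residues modulo \(p^{r}\) (the helper name is ours, for illustration only):

```python
def interval_measure(p, a, m, r):
    """Fraction of residues x (mod p^r) lying in the open interval
    {x in Z_p : |x - a|_p <= p^(-m)}, i.e. with x = a (mod p^m).
    For any r >= m this fraction equals mu_p of the interval."""
    assert r >= m >= 0
    count = sum(1 for x in range(p ** r) if (x - a) % (p ** m) == 0)
    return count / p ** r

# mu_5 of {x in Z_5 : |x - 3|_5 <= 5^(-2)} is 5^(-2) = 0.04,
# independently of the precision r used for the count:
print(interval_measure(5, 3, 2, 3), interval_measure(5, 3, 2, 4))
```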
Let \(\mathfrak{B}\) be an open box in \([-1,1]^{N}\cap\mathbb{R}^{N}.\) For a given finite set \(\mathfrak{p}\) of prime numbers, it follows that \[\boldsymbol{d}\bigg{(}\mathfrak{B}\times\prod_{p\in\mathfrak{p}}I_{p}\times\prod_{p\notin\mathfrak{p}}\mathbb{Z}_{p}^{N},A;P\bigg{)}\] \[=\frac{\#\bigg{\{}\boldsymbol{a}\in(\Phi^{A})^{-1}\bigg{(}\mathfrak{B}\times\prod_{p\in\mathfrak{p}}I_{p}\times\prod_{p\notin\mathfrak{p}}\mathbb{Z}_{p}^{N}\bigg{)}\bigg{|}\ P(\boldsymbol{a})=0\bigg{\}}}{\#\left\{\boldsymbol{a}\in[-A,A]^{N}\cap\mathbb{Z}_{\text{prim}}^{N}\middle|\ P(\boldsymbol{a})=0\right\}}.\] We observe that the set \((\Phi^{A})^{-1}\bigg{(}\mathfrak{B}\times\prod_{p\in\mathfrak{p}}I_{p}\times\prod_{p\notin\mathfrak{p}}\mathbb{Z}_{p}^{N}\bigg{)}\) can be viewed as the set of integer vectors in \(A\mathfrak{B}\cap\mathbb{Z}_{\text{prim}}^{N}\) satisfying certain congruence conditions associated with the radii and the centers of the open intervals defining the \(I_{p}\). Thus, we infer from the Chinese remainder theorem that there exist \(B\in\mathbb{Z}\) whose prime divisors are in \(\mathfrak{p},\) and \(\boldsymbol{r}\in\mathbb{Z}^{N}\) such that \[\boldsymbol{d}\bigg{(}\mathfrak{B}\times\prod_{p\in\mathfrak{p}}I_{p}\times\prod_{p\notin\mathfrak{p}}\mathbb{Z}_{p}^{N},A;P\bigg{)}=\frac{\#\bigg{\{}\boldsymbol{a}\in A\mathfrak{B}\cap\mathbb{Z}_{\text{prim}}^{N}\bigg{|}\ P(\boldsymbol{a})=0,\ \boldsymbol{a}\equiv\boldsymbol{r}\ (\text{mod}\ B)\bigg{\}}}{\#\left\{\boldsymbol{a}\in[-A,A]^{N}\cap\mathbb{Z}_{\text{prim}}^{N}\middle|\ P(\boldsymbol{a})=0\right\}}.\] Then, by applying Lemma 2.1, we obtain \[\begin{split}&\boldsymbol{d}\bigg{(}\mathfrak{B}\times\prod_{p\in\mathfrak{p}}I_{p}\times\prod_{p\notin\mathfrak{p}}\mathbb{Z}_{p}^{N},A;P\bigg{)}\\ &=\frac{\frac{1}{\zeta(N-k)}\prod_{p\notin\mathfrak{p}}\sigma_{p}\cdot\prod_{p\in\mathfrak{p}}\sigma_{p}^{B,\boldsymbol{r}}\cdot\sigma_{\infty}(\mathfrak{B})+O(A^{-\delta})}{\frac{1}{\zeta(N-k)}\prod_{p}\sigma_{p}\cdot\sigma_{\infty}([-1,1]^{N})+O(A^{-\delta})}.\end{split} \tag{3.6}\] For a given measurable set \(S_{p}\subseteq\mathbb{Z}_{p}^{N}\), recall the definition of \(\sigma_{p}(S_{p})\) in the preamble to the statement of Theorem 1.1. We observe that \(\sigma_{p}^{B,\boldsymbol{r}}=\sigma_{p}(I_{p})\) and \(\sigma_{p}=\sigma_{p}(\mathbb{Z}_{p})\). Therefore, we find from (3.6) that \[\lim_{A\to\infty}\boldsymbol{d}\bigg{(}\mathfrak{B}\times\prod_{p\in\mathfrak{p}}I_{p}\times\prod_{p\notin\mathfrak{p}}\mathbb{Z}_{p}^{N},A;P\bigg{)}=\frac{\prod_{p\in\mathfrak{p}}\sigma_{p}(I_{p})\cdot\sigma_{\infty}(\mathfrak{B})}{\prod_{p\in\mathfrak{p}}\sigma_{p}(\mathbb{Z}_{p})\cdot\sigma_{\infty}([-1,1]^{N})}. \tag{3.7}\] Proof of Theorem 1.1.: Recall that \[T_{\infty} =\big{\{}\boldsymbol{a}\in[-1,1]^{N}\cap\mathbb{R}^{N}\big{|}\ \exists\ \boldsymbol{x}\in\mathbb{R}^{n}\setminus\{\boldsymbol{0}\}\ \text{such that}\ f_{\boldsymbol{a}}(\boldsymbol{x})=0\big{\}}\] \[T_{p} =\big{\{}\boldsymbol{a}\in\mathbb{Z}_{p}^{N}\big{|}\ \exists\ \boldsymbol{x}\in\mathbb{Z}_{p}^{n}\setminus\{\boldsymbol{0}\}\ \text{such that}\ f_{\boldsymbol{a}}(\boldsymbol{x})=0\big{\}}\,.\] On recalling the definition (3.5) of \(\boldsymbol{d}(\cdot,A;P)\), we infer that \[\varrho_{d,n}^{P,\text{loc}}(A)=\boldsymbol{d}\bigg{(}T_{\infty}\times\prod_{p\text{ prime}}T_{p},A;P\bigg{)}.\] Thus, it suffices to show that \[\lim_{A\to\infty}\boldsymbol{d}\bigg{(}T_{\infty}\times\prod_{p\text{ prime}}T_{p},A;P\bigg{)}=c_{P}. 
\tag{3.8}\] In order to verify that the equality (3.8) holds, we introduce \[\boldsymbol{d}(A,M)=\boldsymbol{d}\bigg{(}T_{\infty}\times\prod_{p<M}T_{p}\times\prod_{p\geq M}\mathbb{Z}_{p}^{N},A;P\bigg{)}.\] One sees by applying the triangle inequality that \[\lim_{A\to\infty}\bigg{|}\boldsymbol{d}\bigg{(}T_{\infty}\times\prod_{p\text{ prime}}T_{p},A;P\bigg{)}-c_{P}\bigg{|}\leq\lim_{A\to\infty}|d_{1}(A,M)|+\lim_{A\to\infty}|d_{2}(A,M)|, \tag{3.9}\] where \[d_{1}(A,M)=\boldsymbol{d}\bigg{(}T_{\infty}\times\prod_{p\text{ prime}}T_{p},A;P\bigg{)}-\boldsymbol{d}(A,M)\] and \[d_{2}(A,M)=\boldsymbol{d}(A,M)-c_{P}.\] First, we analyze the quantity \[\lim_{A\to\infty}|d_{1}(A,M)|.\] We readily see from the definition of \(\boldsymbol{d}(A,M)\) that \[|d_{1}(A,M)|=\boldsymbol{d}\bigg{(}T_{\infty}\times\prod_{p<M}T_{p}\times\bigg{(}\prod_{p\geq M}T_{p}\bigg{)}^{c},A;P\bigg{)}.\] Furthermore, we find that \[|d_{1}(A,M)|\leq\frac{\#\left\{\boldsymbol{a}\in[-A,A]^{N}\cap\mathbb{Z}^{N}\,\middle|\,\begin{array}{l}(i)\ f_{\boldsymbol{a}}(\boldsymbol{x})=0\text{ has no solution in }\mathbb{Z}_{p}^{n}\setminus\{\boldsymbol{0}\}\text{ for some prime }p\geq M\\ (ii)\ P(\boldsymbol{a})=0\end{array}\right\}}{\#\left\{\boldsymbol{a}\in[-A,A]^{N}\cap\mathbb{Z}_{\text{prim}}^{N}\,\middle|\,P(\boldsymbol{a})=0\right\}}. \tag{3.10}\] Meanwhile, for sufficiently large prime \(p\), whenever \(f_{\boldsymbol{a}}(\boldsymbol{x})\) is irreducible over \(\overline{\mathbb{Z}/p\mathbb{Z}}\), the Lang-Weil estimate [17] (see also [28, Theorem 3]) ensures the existence of a smooth point \(\boldsymbol{x}\in(\mathbb{Z}/p\mathbb{Z})^{n}\setminus\{\boldsymbol{0}\}\) satisfying \(f_{\boldsymbol{a}}(\boldsymbol{x})=0\). Then, by Hensel's lemma, we have a point \(\boldsymbol{x}\in\mathbb{Z}_{p}^{n}\setminus\{\boldsymbol{0}\}\) satisfying \(f_{\boldsymbol{a}}(\boldsymbol{x})=0.\) Therefore, we conclude from (3.10) that for sufficiently large \(M>0\), one has \[|d_{1}(A,M)|\leq\frac{\#\left\{\boldsymbol{a}\in[-A,A]^{N}\cap\mathbb{Z}^{N}\,\middle|\,\begin{array}{l}(i)\ f_{\boldsymbol{a}}(\boldsymbol{x})\text{ is reducible over }\overline{\mathbb{Z}/p\mathbb{Z}}\text{ for some prime }p\geq M\\ (ii)\ P(\boldsymbol{a})=0\end{array}\right\}}{\#\left\{\boldsymbol{a}\in[-A,A]^{N}\cap\mathbb{Z}_{\text{prim}}^{N}\,\middle|\,P(\boldsymbol{a})=0\right\}}. \tag{3.11}\] We shall apply Lemma 2.3 with \(Y\) defined in (2.11). With this \(Y\) in mind, we find from (3.11) that \[|d_{1}(A,M)|\leq\frac{\#\left\{\boldsymbol{a}\in[-A,A]^{N}\cap\mathbb{Z}^{N}\,\middle|\,\begin{array}{l}(i)\ \boldsymbol{a}\ (\text{mod}\ p)\in Y(\mathbb{F}_{p})\text{ for some prime }p\geq M\\ (ii)\ P(\boldsymbol{a})=0\end{array}\right\}}{\#\left\{\boldsymbol{a}\in[-A,A]^{N}\cap\mathbb{Z}_{\text{prim}}^{N}\,\middle|\,P(\boldsymbol{a})=0\right\}}. \tag{3.12}\] Note by the classical argument (see [5] and [22, the proof of Theorem 1.3]) that the fact that \(P(\mathbf{t})=0\) has a nontrivial integer solution implies that \(\sigma_{\infty}([-1,1]^{N})\cdot\prod_{p}\sigma_{p}(\mathbb{Z}_{p})\asymp 1\). Then, one infers by Lemma 2.1 that \[\#\left\{\boldsymbol{a}\in[-A,A]^{N}\cap\mathbb{Z}_{\text{prim}}^{N}\,\middle|\,P(\boldsymbol{a})=0\right\}\asymp A^{N-k}.\] Proposition 2.6 together with the hypothesis in the statement of Theorem 1.1 that \(k<\lfloor(1-C_{n,d}(1,d-1))N\rfloor\) reveals that the codimension \(r\) of \(Y\) is strictly greater than \(k+1\). Hence, we find by applying Lemma 2.3 that \[|d_{1}(A,M)|\ll\frac{1}{M^{r-k-1}\log M}+A^{k-r+1}.\] Therefore, we obtain \[\lim_{A\to\infty}|d_{1}(A,M)|\ll\frac{1}{M^{r-k-1}\log M}. \tag{3.13}\] Next, we turn to estimate \(\lim_{A\to\infty}|d_{2}(A,M)|.\) For simplicity, we temporarily write \[\lim_{A\to\infty}|d_{2}(A,M)|=\varphi(M). 
\tag{3.14}\] One infers by (3.7) together with Lemma 3.1 that \[\lim_{A\to\infty}\boldsymbol{d}(A,M)=\frac{\sigma_{\infty}(T_{\infty})\cdot\prod_{p<M}\sigma_{p}(T_{p})}{\sigma_{\infty}([-1,1]^{N})\cdot\prod_{p<M}\sigma_{p}(\mathbb{Z}_{p})},\] and thus, by the definition of \(c_{P}\), we discern that \[\varphi(M)\to 0, \tag{3.15}\] as \(M\to\infty\). Hence, we conclude from (3.9), (3.13) and (3.14) that \[\lim_{A\to\infty}\biggl{|}\boldsymbol{d}\biggl{(}T_{\infty}\times\prod_{p\text{ prime}}T_{p},A;P\biggr{)}-c_{P}\biggr{|}\ll\frac{1}{M^{r-k-1}\log M}+\varphi(M).\] By letting \(M\to\infty\), we see from (3.15) that \[\lim_{A\to\infty}\biggl{|}\boldsymbol{d}\biggl{(}T_{\infty}\times\prod_{p\text{ prime}}T_{p},A;P\biggr{)}-c_{P}\biggr{|}=0,\] which gives (3.8). This completes the proof of Theorem 1.1. For the proof of Theorem 1.2, we require a proposition that plays an important role in guaranteeing the positivity of \(c_{P}\). In order to describe this proposition, it is convenient to define \[S_{\infty}:=\left\{\boldsymbol{a}\in\mathbb{R}^{N}\big{|}\ \exists\ \boldsymbol{x}\in\mathbb{R}^{n}\setminus\{\boldsymbol{0}\}\text{ such that }f_{\boldsymbol{a}}(\boldsymbol{x})=0\right\}\] and recall the definition (1.6) of \(T_{p}\) for every prime \(p.\) Also, for a given \(\boldsymbol{b}\in\mathbb{Z}^{N}\), we define \[B_{\infty}(\boldsymbol{b},\eta) =\left\{\boldsymbol{a}\in\mathbb{R}^{N}\big{|}\ |a_{i}-b_{i}|<\eta\text{ for }1\leq i\leq N\right\}\] \[B_{p}(\boldsymbol{b},\eta) =\left\{\boldsymbol{a}\in\mathbb{Z}_{p}^{N}\big{|}\ |a_{i}-b_{i}|_{p}<\eta\text{ for }1\leq i\leq N\right\},\] for every prime \(p.\) Furthermore, we consider two functions \(\Psi_{\infty}:\mathbb{R}^{N}\times\mathbb{R}^{n}\to\mathbb{R}\) and \(\Psi_{p}:\mathbb{Z}_{p}^{N}\times\mathbb{Z}_{p}^{n}\to\mathbb{Z}_{p}\) defined by \[\Psi_{\infty}(\boldsymbol{a},\boldsymbol{x}):=f_{\boldsymbol{a}}(\boldsymbol{x})\text{ and }\Psi_{p}(\boldsymbol{a},\boldsymbol{x}):=f_{\boldsymbol{a}}(\boldsymbol{x}), \tag{3.16}\] for every prime \(p.\) For the gradient vectors associated with the polynomial \(f_{\mathbf{a}}(\mathbf{x}),\) we use the notation \[\nabla f_{\mathbf{a}}(\mathbf{x})=(\partial_{x_{1}}f_{\mathbf{a}}(\mathbf{x}),\dots,\partial_{x_{n}}f_{\mathbf{a}}(\mathbf{x})). \tag{3.17}\] **Proposition 3.2**.: _Suppose that there exists \(\mathbf{b}\in\mathbb{Z}^{N}\setminus\{\mathbf{0}\}\) such that the variety \(V_{\mathbf{b}}\) in \(\mathbb{P}^{n-1}\) admits a smooth \(\mathbb{Q}\)-rational point. Then, for each prime \(p\) there exists a positive number \(\eta_{p}\leq 1\), and there exists a positive number \(\eta_{\infty}\leq 1\), such that_ \[B_{p}(\mathbf{b},\eta_{p})\subseteq T_{p}\quad\text{and}\quad B_{\infty}(\mathbf{b},\eta_{\infty})\subseteq S_{\infty}.\] For the proof of Proposition 3.2, we mainly use a version of Hensel's lemma for \(p\)-adic numbers [15, Theorem 25] and the implicit function theorem for real numbers. Proof.: First, we shall show that for each prime \(p\), there exists \(\eta_{p}>0\) such that \[B_{p}(\mathbf{b},\eta_{p})\subseteq T_{p}. \tag{3.18}\] Recall the definition (3.16) of \(\Psi_{p}\) and the notation (3.17) for the gradient vector. Note from the hypothesis that the variety \(V_{\mathbf{b}}\) in \(\mathbb{P}^{n-1}\) admits a smooth \(\mathbb{Q}\)-rational point. Since \(V_{\mathbf{b}}\) is a projective variety, on clearing denominators it also admits a smooth integer point. 
Then, if we write this integer point by \(\mathbf{y}\in\mathbb{Z}^{n},\) we find that \[\Psi_{p}(\mathbf{b},\mathbf{y})=f_{\mathbf{b}}(\mathbf{y})=0\text{ and }\nabla f_{\mathbf{b}}(\mathbf{y })\neq\mathbf{0}\in\mathbb{Z}_{p}^{n}. \tag{3.19}\] Then, we infer that there exists \(j_{0}\) with \(1\leq j_{0}\leq n\) such that \[\left|\left(\nabla f_{\mathbf{b}}(\mathbf{y})\right)_{j_{0}}\right|_{p}\neq 0. \tag{3.20}\] Let us write \(\left|\left(\nabla f_{\mathbf{b}}(\mathbf{y})\right)_{j_{0}}\right|_{p}=p^{-\alpha}\) with \(\alpha\in\mathbb{N}\cup\{0\}.\) We find by applying [15, Theorem 25] with the function \(\Psi_{p}\) that for all \(p\)-adic numbers \(a_{i}\in\mathbb{Z}_{p}\)\((1\leq i\leq N)\) with \(|a_{i}-b_{i}|_{p}<p^{-2\alpha}\) and for all \(p\)-adic numbers \(x_{j}\in\mathbb{Z}_{p}\)\((1\leq j\leq n,\ j\neq j_{0})\) with \(|x_{j}-y_{j}|_{p}<p^{-2\alpha},\) there is a unique \(p\)-adic number \(x_{j_{0}}\) with \(|x_{j_{0}}-y_{j_{0}}|_{p}<p^{-\alpha}\) such that \[\Psi_{p}(\mathbf{a},\mathbf{x})=f_{\mathbf{a}}(\mathbf{x})=0. \tag{3.21}\] Therefore, by setting \(\eta_{p}=p^{-2\alpha},\) we conclude that for all \(\mathbf{a}\in B_{p}(\mathbf{b},\eta_{p}),\) there exists \(\mathbf{x}\in\mathbb{Z}_{p}^{n}\) such that \(f_{\mathbf{a}}(\mathbf{x})=0.\) In other words, we have \[B_{p}(\mathbf{b},\eta_{p})\subseteq T_{p}. \tag{3.22}\] Next, we shall show that there exists \(\eta_{\infty}>0\) such that \[B_{\infty}(\mathbf{b},\eta_{\infty})\subseteq S_{\infty}.\] In order to do this, we use the same argument leading from (3.18) to (3.22) with the implicit function theorem in real numbers in place of [15, Theorem 25]. By using the same notation \(\mathbf{y}\) for a smooth \(\mathbb{Q}\)-rational point that the variety \(V_{\mathbf{b}}\in\mathbb{P}^{n-1}\) admits, we infer by applying the argument leading from (3.18) to (3.20) that there exists \(j_{0}\) with \(1\leq j_{0}\leq n\) such that \[\left|\left(\nabla f_{\mathbf{b}}(\mathbf{y})\right)_{j_{0}}\right|\neq 0.\] Then, it follows by the implicit function theorem that there exists \(\gamma>0\) having the property that for all real numbers \(a_{i}\in\mathbb{R}\)\((1\leq i\leq N)\) with \(|a_{i}-b_{i}|<\gamma\) and for all real numbers \(x_{j}\in\mathbb{R}\)\((1\leq j\leq n,\ j\neq j_{0})\) with \(|x_{j}-y_{j}|<\gamma\) there is a unique real number \(x_{j_{0}}\) such that \[\Psi_{\infty}(\mathbf{a},\mathbf{x})=f_{\mathbf{a}}(\mathbf{x})=0.\] Therefore, by setting \(\eta_{\infty}=\gamma,\) we conclude that for all \(\mathbf{a}\in B_{\infty}(\mathbf{b},\eta_{\infty}),\) there exists \(\mathbf{x}\in\mathbb{R}^{n}\) such that \(f_{\mathbf{a}}(\mathbf{x})=0.\) In other words, we have \[B_{\infty}(\mathbf{b},\eta_{\infty})\subseteq S_{\infty}.\] This completes the proof of Proposition 3.2. 
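The \(p\)-adic lifting performed in the proof of Proposition 3.2 can be made completely explicit in the simplest case \(\alpha=0\), where the chosen partial derivative is a \(p\)-adic unit. The following minimal sketch performs the corresponding Newton iteration modulo increasing powers of \(p\) for a one-variable slice of \(f_{\boldsymbol{a}}\); it illustrates only the classical simple-root case of Hensel's lemma, not the general statement of [15, Theorem 25], and the function names are ours:

```python
def hensel_lift(f, df, x0, p, r):
    """Lift a root x0 of f modulo p (with f'(x0) a unit mod p)
    to a root modulo p^r by Newton iteration; a minimal sketch of
    the simple-root (alpha = 0) case of Hensel's lemma."""
    assert f(x0) % p == 0 and df(x0) % p != 0
    x, k = x0, 1
    while k < r:
        k = min(2 * k, r)                       # precision roughly doubles
        mod = p ** k
        x = (x - f(x) * pow(df(x), -1, mod)) % mod
    assert f(x) % p ** r == 0
    return x

# f_a(x, 1) = x^2 - 2 for the binary quadratic form with a = (1, 0, -2):
# the root 3 of x^2 - 2 (mod 7) lifts to a 7-adic square root of 2.
f  = lambda x: x * x - 2
df = lambda x: 2 * x
print(hensel_lift(f, df, 3, 7, 6))  # a root of x^2 - 2 modulo 7^6
```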
Proof of Theorem 1.2.: Recall the natural map \[\Phi^{A}:\ [-A,A]^{N}\cap\mathbb{Z}^{N}_{\text{prim}} \to[-1,1]^{N}\times\prod_{p\text{ prime}}\mathbb{Z}^{N}_{p}\] \[\mathbf{a} \mapsto\left(\frac{\mathbf{a}}{A},\mathbf{a},\dots,\mathbf{a},\dots\right).\] Furthermore, we recall the definitions \[S_{\infty} =\left\{\mathbf{a}\in\mathbb{R}^{N}\right|\ \exists\ \mathbf{x}\in\mathbb{R}^{n}\setminus\{\mathbf{0}\}\text{ such that }f_{\mathbf{a}}(\mathbf{x})=0\right\}\] \[T_{p} =\left\{\mathbf{a}\in\mathbb{Z}^{N}_{p}\right|\ \exists\ \mathbf{x}\in\mathbb{Z}^{n}_{p}\setminus\{\mathbf{0}\}\text{ such that }f_{\mathbf{a}}(\mathbf{x})=0\right\},\] for every prime \(p.\) Recall from the hypothesis in the statement of Theorem 1.1 that there exists \(\mathbf{b}\in\mathbb{Z}^{N}\) with \(P(\mathbf{b})=0\) such that the variety \(V_{\mathbf{b}}\) in \(\mathbb{P}^{n-1}\) admits a smooth \(\mathbb{Q}\)-rational point. One sees by applying Proposition 3.2 that for each prime \(p\) there exist positive numbers \(\eta_{p}\) and \(\eta_{\infty}\), less than or equal to \(1\), such that \(B_{p}(\mathbf{b},\eta_{p})\subseteq T_{p}\) and \(B_{\infty}(\mathbf{b},\eta_{\infty})\subseteq S_{\infty}.\) We choose a sufficiently large number \(C:=C(\mathbf{b})>0\) such that \(B_{\infty}(\mathbf{b}/C,\eta_{\infty}/C)\subseteq[-1,1]^{N}.\) Furthermore, on observing the relation that \(f_{\mathbf{b}/C}(\mathbf{x})=(1/C)\cdot f_{\mathbf{b}}(\mathbf{x}),\) we infer that \(B_{\infty}(\mathbf{b}/C,\eta_{\infty}/C)\subseteq S_{\infty}.\) Therefore, on noting that \[T_{\infty}=S_{\infty}\cap[-1,1]^{N},\] one deduces that \[B_{\infty}(\mathbf{b}/C,\eta_{\infty}/C)\subseteq T_{\infty}.\] Meanwhile, it follows by Theorem 1.1 that \[\lim_{A\to\infty}\varrho_{d,n}^{P,\text{loc}}(A)=c_{P},\] and thus, we find that \[\lim_{A\to\infty}\varrho_{d,n}^{P,\mathrm{loc}}(A)\geq\frac{\prod_{p<M}\sigma_{p}(B_{p}(\boldsymbol{b},\eta_{p}))\cdot\prod_{p\geq M}\sigma_{p}(T_{p})\cdot\sigma_{\infty}(B_{\infty}(\boldsymbol{b}/C,\eta_{\infty}/C))}{\prod_{p}\sigma_{p}(\mathbb{Z}_{p})\cdot\sigma_{\infty}([-1,1]^{N})}, \tag{3.23}\] for any \(M>0.\) Then, it suffices to show that the right-hand side in (3.23) is greater than \(0\). We shall prove this by showing that there exists \(M>0\) such that \[\frac{\prod_{p<M}\sigma_{p}(B_{p}(\boldsymbol{b},\eta_{p}))\cdot\sigma_{\infty}(B_{\infty}(\boldsymbol{b}/C,\eta_{\infty}/C))}{\prod_{p<M}\sigma_{p}(\mathbb{Z}_{p})\cdot\sigma_{\infty}([-1,1]^{N})}>0 \tag{3.24}\] and \[\frac{\prod_{p\geq M}\sigma_{p}(T_{p})}{\prod_{p\geq M}\sigma_{p}(\mathbb{Z}_{p})}>0. \tag{3.25}\] First, we shall show that the inequality (3.24) holds. For a given \(B\in\mathbb{N}\) and \(\boldsymbol{r}\in\mathbb{Z}^{N}\), we recall the definition of \(\sigma_{p}^{B,\boldsymbol{r}}\) in the statement of Lemma 2.1. Note that there exists \(B\in\mathbb{N}\cup\{0\}\) such that \[\prod_{p<M}\sigma_{p}(B_{p}(\boldsymbol{b},\eta_{p}))=\prod_{p<M}\sigma_{p}^{B,\boldsymbol{b}}. \tag{3.26}\] Furthermore, the \(p\)-adic densities \(\sigma_{p}(\mathbb{Z}_{p})\), \(\sigma_{p}^{B,\boldsymbol{b}}\) and the real densities \(\sigma_{\infty}([-1,1]^{N})\), \(\sigma_{\infty}(B_{\infty}(\boldsymbol{b}/C,\eta_{\infty}/C))\) are greater than \(0\) by an application of Hensel's lemma and the implicit function theorem (see [22, the proof of Theorem 1.3] or [8, Lemma 5.7]). Therefore, one sees from (3.26) that the inequality (3.24) holds for any \(M>0\). Next, we shall show that (3.25) holds. 
For any \(M_{1}\) with \(M_{1}>M,\) we find from (3.7) with \([M,M_{1})\) and \([-1,1]^{N}\) in place of \(\mathfrak{p}\) and \(\mathfrak{B}\) that \[\lim_{A\to\infty}\boldsymbol{d}\bigg{(}[-1,1]^{N}\times\prod_{p\in[M,M_{1})}T_{p}\times\prod_{p\notin[M,M_{1})}\mathbb{Z}_{p}^{N},A;P\bigg{)}=\frac{\prod_{p\in[M,M_{1})}\sigma_{p}(T_{p})}{\prod_{p\in[M,M_{1})}\sigma_{p}(\mathbb{Z}_{p})}. \tag{3.27}\] Meanwhile, we see that \[\boldsymbol{d}\bigg{(}[-1,1]^{N}\times\bigg{(}\prod_{p\in[M,M_{1})}T_{p}\bigg{)}^{c}\times\prod_{p\notin[M,M_{1})}\mathbb{Z}_{p}^{N},A;P\bigg{)}\leq\frac{\#\left\{\boldsymbol{a}\in[-A,A]^{N}\cap\mathbb{Z}^{N}_{\text{prim}}\,\middle|\,\exists\,p\geq M\text{ s.t. }f_{\boldsymbol{a}}(\boldsymbol{x})=0\text{ has no solution in }\mathbb{Z}_{p}^{n}\right\}}{\#\left\{\boldsymbol{a}\in[-A,A]^{N}\cap\mathbb{Z}_{\text{prim}}^{N}\,\middle|\,P(\boldsymbol{a})=0\right\}}.\] Then, it follows by the same argument leading from (3.10) to (3.13) that \[\lim_{A\to\infty}\boldsymbol{d}\bigg{(}[-1,1]^{N}\times\bigg{(}\prod_{p\in[M,M_{1})}T_{p}\bigg{)}^{c}\times\prod_{p\notin[M,M_{1})}\mathbb{Z}_{p}^{N},A;P\bigg{)}\ll\frac{1}{M^{r-k-1}\log M}. \tag{3.28}\] Thus, on noting that \[\boldsymbol{d}\bigg{(}[-1,1]^{N}\times\prod_{p\in[M,M_{1})}T_{p}\times\prod_{p\notin[M,M_{1})}\mathbb{Z}_{p}^{N},A;P\bigg{)}=1-\boldsymbol{d}\bigg{(}[-1,1]^{N}\times\bigg{(}\prod_{p\in[M,M_{1})}T_{p}\bigg{)}^{c}\times\prod_{p\notin[M,M_{1})}\mathbb{Z}_{p}^{N},A;P\bigg{)},\] we find from (3.28) that \[\lim_{A\to\infty}\boldsymbol{d}\bigg{(}[-1,1]^{N}\times\prod_{p\in[M,M_{1})}T_{p}\times\prod_{p\notin[M,M_{1})}\mathbb{Z}_{p}^{N},A;P\bigg{)}>1/2, \tag{3.29}\] for sufficiently large \(M>0.\) Therefore, it follows from (3.27) and (3.29) that for sufficiently large \(M>0,\) one has \[\frac{\prod_{p\geq M}\sigma_{p}(T_{p})}{\prod_{p\geq M}\sigma_{p}(\mathbb{Z}_{p})}=\lim_{M_{1}\to\infty}\frac{\prod_{p\in[M,M_{1})}\sigma_{p}(T_{p})}{\prod_{p\in[M,M_{1})}\sigma_{p}(\mathbb{Z}_{p})}=\lim_{M_{1}\to\infty}\lim_{A\to\infty}\boldsymbol{d}\bigg{(}[-1,1]^{N}\times\prod_{p\in[M,M_{1})}T_{p}\times\prod_{p\notin[M,M_{1})}\mathbb{Z}_{p}^{N},A;P\bigg{)}>1/2.\] Hence, the inequality (3.25) holds. By using the inequalities (3.24) and (3.25), one finds that \[\frac{\prod_{p<M}\sigma_{p}(B_{p}(\boldsymbol{b},\eta_{p}))\cdot\prod_{p\geq M}\sigma_{p}(T_{p})\cdot\sigma_{\infty}(B_{\infty}(\boldsymbol{b}/C,\eta_{\infty}/C))}{\prod_{p}\sigma_{p}(\mathbb{Z}_{p})\cdot\sigma_{\infty}([-1,1]^{N})}>0,\] and thus we conclude from (3.23) that \[\lim_{A\to\infty}\varrho_{d,n}^{P,\text{loc}}(A)>0.\]
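For intuition only, one can estimate local solvability proportions of this kind by brute force. The sketch below is our own illustration, not from the paper: the binary quadratic shape of \(f_{\boldsymbol{a}}\), the prime, and the level \(p^{k}\) are arbitrary assumptions, and the exact normalization of the densities \(\sigma_{p}\) used above differs; the code merely counts coefficient tuples modulo \(p^{k}\) that admit a primitive zero.

```python
# A crude numerical stand-in (our illustration) for a local solvable proportion:
# count coefficient tuples a mod p^k for which f_a(x, y) = 0 mod p^k has a
# solution (x, y) that is primitive mod p (i.e. not both coordinates in pZ_p).

from itertools import product

def has_primitive_zero_mod(coeffs, p, k):
    """f_a(x, y) = a0*x^2 + a1*x*y + a2*y^2; seek a primitive zero mod p^k."""
    a0, a1, a2 = coeffs
    m = p**k
    for x, y in product(range(m), repeat=2):
        if (x % p, y % p) != (0, 0) and (a0*x*x + a1*x*y + a2*y*y) % m == 0:
            return True
    return False

p, k = 3, 2
m = p**k
count = sum(has_primitive_zero_mod(a, p, k) for a in product(range(m), repeat=3))
print(count / m**3)   # proportion of solvable coefficient tuples at level p^k
```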
2301.01952
Quantum Bayesian Inference in Quasiprobability Representations
Bayes' rule is a crucial piece of logical inference in information and physical sciences alike. Its extension into the quantum regime has been the object of several recent works. These quantum versions of Bayes' rule have been expressed in the language of Hilbert spaces. In this paper, we derive the expression of the Petz recovery map within any quasiprobability representation, with explicit formulas for the two canonical choices of normal quasiprobability representations (which include Discrete Wigner representations) and of representations based on symmetric, informationally complete positive operator-valued measures (SIC-POVMs). By using the same mathematical syntax of (quasi-)stochastic matrices acting on (quasi-)stochastic vectors, the core difference in logical inference between classical and quantum theory is found in the manipulation of the reference prior rather than in the representation of the channel.
Clive Cenxin Aw, Kelvin Onggadinata, Dagomir Kaszlikowski, Valerio Scarani
2023-01-05T08:16:50Z
http://arxiv.org/abs/2301.01952v2
# Quantum Bayesian Inference in Quasiprobability Representations ###### Abstract Bayes' rule is a crucial piece of logical inference in information and physical sciences alike. Its extension into the quantum regime has been the object of several recent works. These quantum versions of Bayes' rule have been expressed in the language of Hilbert spaces. In this paper, we derive the expression of the Petz recovery map within any quasiprobability representation, with explicit formulas for the two canonical choices of "normal quasiprobability representations" (which include Discrete Wigner representations) and of representations based on symmetric, informationally complete positive operator-valued measures (SIC-POVMs). By using the same mathematical syntax of (quasi-)stochastic matrices acting on (quasi-)stochastic vectors, this construction brings to the fore the structural similarities and the core differences in logical inference between classical and quantum theory. ## I Introduction Inference is a logical necessity in every science. In information theory and physics, the fundamentality of inference is particularly overt in notions of process reversibility and state recovery. Here, the most empirically applied and canonical approach is Bayes' rule: \[\tilde{\mathcal{E}}_{\gamma}(a|a^{\prime})=\mathcal{E}(a^{\prime}|a)\frac{\gamma(a)}{\tilde{\gamma}(a^{\prime})}. \tag{1}\] This relation gives us a recipe for obtaining various probability-theoretic objects [1; 2; 3; 4]. Of particular note, we may use it to obtain the "reverse" transition \(\tilde{\mathcal{E}}_{\gamma}\) given (i) the forward process or _transformation_ \(\mathcal{E}\), and (ii) the reference _prior_ \(\gamma\) on the input of said process. The _posterior_, \(\tilde{\gamma}(a^{\prime})=\sum_{a}\mathcal{E}(a^{\prime}|a)\gamma(a)\), emerges from these two objects. This typical form of Bayes' rule works only for classical information theory. The extension to quantum theory requires some work: as one possible reason for this, notice that in a classical process \(a\to a^{\prime}\) one can retain information on both input and output, and thus define the joint probability distribution \(P(a,a^{\prime})\); while nothing of the sort can be done for the quantum process \(\alpha\to\alpha^{\prime}=\mathcal{E}(\alpha)\), where \(\mathcal{E}\) is a completely positive trace preserving (CPTP) map. Various proposals have been presented over the years, and we refer to a very recent consolidating framework for all the references [5]. A special role is played by the _Petz recovery map_ [6; 7; 8]: \[\hat{\mathcal{E}}_{\gamma}[\bullet]=\sqrt{\gamma}\,\mathcal{E}^{\dagger}\left[\frac{1}{\sqrt{\mathcal{E}[\gamma]}}\bullet\frac{1}{\sqrt{\mathcal{E}[\gamma]}}\right]\sqrt{\gamma}. \tag{2}\] This recovery channel is defined for any CPTP map \(\mathcal{E}\) and a reference density operator \(\gamma\). Notably, when reference priors, input states and the channel share the same eigenbases, the Petz map reduces to the classical Bayes rule [8; 9]. This and other properties pertaining to what may be called the "conservation of divergences" (which is what led to its conception) have built up this recovery map's reputation as the "quantum Bayes' rule" [10]; a reputation recently vindicated in an axiomatic approach [11]. The Petz map construction appears also naturally in the definition of fluctuation theorems in thermodynamics [12; 13; 14]. 
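Before moving to the quantum case, it may help to see Eq. (1) in action. The following is a minimal numerical sketch (our own illustration; the channel entries and the prior are arbitrary assumptions) of the classical Bayesian reverse transition:

```python
# A minimal sketch of Eq. (1): given a forward channel E(a'|a) as a
# column-stochastic matrix and a prior gamma, build the reverse transition.

import numpy as np

E = np.array([[0.9, 0.2],
              [0.1, 0.8]])          # E[a_out, a_in] = E(a'|a); columns sum to 1
gamma = np.array([0.3, 0.7])        # reference prior on the input
gamma_out = E @ gamma               # the induced posterior distribution E[gamma]

# Bayes' rule, Eq. (1): E_rev[a, a'] = E(a'|a) * gamma(a) / gamma_out(a')
E_rev = (E.T * gamma[:, None]) / gamma_out[None, :]

assert np.allclose(E_rev.sum(axis=0), 1.0)       # reverse map is stochastic
assert np.allclose(E_rev @ gamma_out, gamma)     # the reference prior is recovered
```

The final assertion already displays the hallmark of Bayesian retrodiction discussed below: the reference prior itself is always recovered, even though generic inputs are not.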
Now, having said this, it seems that what exactly makes the Petz map similar to (or different from) the classical Bayesian update has not been formalized as well as it could be. From an information-theoretical perspective, there are correspondences between the action of these recipes. Yet, we know that there are key regime-differences in the woodwork. This lack of formal comparison across these regimes is at least partially because the Petz map has thus far only been formalized in terms of CPTP maps and density operators, living in a Hilbert space. Meanwhile, the classical Bayes rule exists as a stochastic matrix mapping stochastic vectors, living in a real vector space. In this paper, we attempt to close this gap by investigating the Petz map in _quasiprobability representation_ (QPR) [15; 16]. This formalism provides a complete description of quantum theories while sharing the familiar mathematical equipment found in classical probability theory. The distinction is that _quasiprobabilities_ (or "negative probabilities") are generally necessary in the quantum case [17]. This negativity has been identified as a resource for advantage in quantum computation [18; 19; 20]. As such, we seek to put the Petz map in the same formal habitat as that of classical Bayesian inversion, and in an expression that is comparable to it. From there we may discuss the similarities, differences and interpretations wherever appropriate. We believe this work makes a formal step in understanding the essential distinctions between classical and quantum inference. This paper is sectioned as follows. In Section II, we review features of Bayesian inference for classical and quantum transformations. In Section III, we review the formalisms of QPR in quantum theory. Readers familiar with the formal content here may skim through these sections. In Section IV, we work towards the key expression of the Petz map in QPR, stating relevant theorems along the way. In Section V, we discuss consequent theoretical observations, contrasting notable formal features of the expression with the classical Bayesian update. We also introduce "transition graphs" that can help visualize the implications of our results. Finally, in Section VI, we summarize our findings and state some open lines of inquiry. ## II Classical & quantum Bayesian inference In the context of classical mechanics and probability theory, a physical transformation can be expressed by conditional probabilities \(\mathcal{E}(a^{\prime}|a)\) mapping probability distributions of inputs \(p(a)\) to distributions of outputs \(\tilde{p}(a^{\prime})=\sum_{a}\mathcal{E}(a^{\prime}|a)p(a)\) residing in some given state space \(A\) [21]. This can be captured compactly by a stochastic matrix \(S^{\mathcal{E}}=\{\mathcal{E}(a^{\prime}|a)\}\), mapping \(v^{p}=\{p(a)\}\) to \(v^{\tilde{p}}=\{\tilde{p}(a^{\prime})\}\). As already discussed, if we want to acquire a stochastically valid and logically sound "reverse" of this transformation \(\mathcal{E}\), we must invoke not only the channel in question but also a _reference prior_ \(\gamma\) on the input. This is essentially a pre-existing best guess of the inputs, for which the Bayesian inverse is constructed. This process of acquiring \(\tilde{\mathcal{E}}_{\gamma}\) from \(\mathcal{E}\) and \(\gamma\) can be referred to as performing "retrodiction" (inference about the past, in contrast to prediction, inferring about the future) on \(\mathcal{E}\) with the prior \(\gamma\). 
Meanwhile, \(S^{\tilde{\mathcal{E}}_{\gamma}}v^{\tilde{p}}\) gives the "retrodicted input" given an observation \(\tilde{p}\). It may also be referred to as the "Bayesian update on \(\gamma\) given \(\tilde{p}\)". For each _individual_ transition \(a\to a^{\prime}\), we may consult (1) for the corresponding retrodiction \(a^{\prime}\to a\). For the mapping of _distributions_, it is more instructive to write the retrodiction map as a stochastic matrix: \[S^{\tilde{\mathcal{E}}_{\gamma}}_{\mathbf{CL}}=D_{\gamma}(S^{\mathcal{E}})^{\mathrm{T}}D^{-1}_{\mathcal{E}[\gamma]} \tag{3}\] Here \(D_{p}\) is a diagonal matrix with entries corresponding to some distribution \(p\). As introduced in Section I, the counterpart to Bayes' rule in quantum theory is the Petz map (2). It is well-defined and CPTP for any full-rank \(\mathcal{E}[\gamma]\) [22]. It may also be expressed as \[\hat{\mathcal{E}}_{\gamma}=\mathcal{M}_{\gamma^{1/2}}\circ\mathcal{E}^{\dagger}\circ\mathcal{M}_{\mathcal{E}[\gamma]^{-1/2}}, \tag{4}\] where \(\mathcal{M}_{\alpha^{r}}[\bullet]=\alpha^{r}\bullet\alpha^{r}\) for any density operator \(\alpha\) and \(r\in\mathbb{R}\), and \(\mathcal{E}^{\dagger}\) is the adjoint of \(\mathcal{E}\). This is the unique map for which \[\mathrm{Tr}(\mathcal{E}[\rho]\sigma)=\mathrm{Tr}\big{(}\mathcal{E}^{\dagger}[\sigma]\rho\big{)} \tag{5}\] for all \(\rho,\sigma\). Before continuing, it is important to stress that _Bayesian inference is generically not inversion_. Inference is possible for any map, while inversion is only possible for invertible (information-preserving) maps - and even then, the two operations are generally not the same, since the inverse of a map is generically not a valid map. In fact, it can be proved that inference and inversion coincide if and only if \(S^{\mathcal{E}}\) is a permutation (for the classical case), or \(\mathcal{E}\) is a unitary channel (in the quantum case) [14]. In general therefore, \(S^{\tilde{\mathcal{E}}_{\gamma}}_{\mathbf{CL}}S^{\mathcal{E}}v^{\rho}\neq v^{\rho}\) and \(\hat{\mathcal{E}}_{\gamma}\circ\mathcal{E}[\rho]\neq\rho\); although the reference state is recovered: \(S^{\tilde{\mathcal{E}}_{\gamma}}_{\mathbf{CL}}S^{\mathcal{E}}v^{\gamma}=v^{\gamma}\) and \(\hat{\mathcal{E}}_{\gamma}\circ\mathcal{E}[\gamma]=\gamma\) for all \(\gamma\). ## III Quasiprobability representations ### Generalities We now move on to provide a brief review of the essential elements of QPRs for quantum theory. To bridge quantum-theoretic objects in a \(d\)-dimensional Hilbert space to a QPR, the architectural core is given by the so-called _frame_ \(\{F_{j}\}_{j\in\Lambda}\), which is a set of Hermitian operators spanning the space of Hermitian operators, equipped with its inner product. We denote by \(\Lambda\) the discrete state space, which has a minimal cardinality of \(d^{2}\) [23]. Frames saturating this lower bound are referred to as minimal bases, which we will assume for the remainder of this paper. A counterpart to the frame is known as the _dual frame_ \(\{G_{j}\}_{j\in\Lambda}\), which is defined such that: \[\forall\,A,B:\ \sum_{j}\mathrm{Tr}[F_{j}A]\,\mathrm{Tr}[G_{j}B]=\mathrm{Tr}[AB]\,. \tag{6}\] In general, the dual is not unique given a frame. However, for a minimal basis, the frame and dual always enjoy an orthogonality relation \(\mathrm{Tr}[F_{j}G_{k}]=\delta_{jk}\). As long as these objects are known to the user, we can describe all Hilbert space objects in terms of QPR. The morphisms are summarized in Table 1. 
By requiring that our state quasiprobability be normalized, \(\sum_{a}v^{\rho}_{a}=1\), we immediately obtain a constraint on the frame operators: \(\sum_{a}F_{a}=\mathbb{1}\). Moreover, with each POVM \(\{E_{m}\}\) satisfying a unity sum \(\sum_{m}E_{m}=\mathbb{1}\), we also have a constraint for the dual frame operators: \(\mathrm{Tr}[G_{j}]=1\) for all \(j\in\Lambda\). Likewise, it is the case that \(\mathrm{Tr}[F_{j}]=1/d\) for all \(j\in\Lambda\). As such, the QPR of any CPTP map \(\mathcal{E}\) is a quasi-stochastic matrix \(S^{\mathcal{E}}\). With a slight abuse of notation, for ease of correspondence with the classical formalism, we shall also denote the elements of the quasi-stochastic matrix as \(S^{\mathcal{E}}_{a^{\prime}a}\equiv\mathcal{E}(a^{\prime}|a)\). Now, despite the vast plurality of valid representations that adhere to these rules, there are two canonical choices of QPR used in the relevant literature. To these, we turn. ### Normal quasiprobability representation The first class of representations comprises those for which the frame and dual frame operators are proportional to each other up to some scaling factor \(c\), i.e., \(G_{j}=cF_{j}\) for all \(j\). For minimal bases, the constant \(c\) is equal to the Hilbert space dimension \(d\). The class of representations satisfying this is known as _normal quasiprobability representation_ (NQPR) [24]. An example of NQPR, and perhaps the most widely used representation, is the _discrete Wigner_ (DW) representation [25; 26; 27], which is well-defined for prime dimensions \(d\) and their composites. For a qubit system (\(d=2\)), the frame has a simple expression given by \[F_{k}=F_{r,s}=\frac{1}{4}\Big{[}\mathbb{1}+(-1)^{r}\sigma_{x}+(-1)^{s}\sigma_{z}+(-1)^{r+s}\sigma_{y}\Big{]}, \tag{7}\] where \(k=(r,s)\in\mathbb{Z}_{2}\times\mathbb{Z}_{2}\). For composite \(d=d_{1}\times d_{2}\times\cdots\times d_{L}\), where \(d_{1},d_{2},\ldots,d_{L}\) are primes, a tensor structure applies for the total frame. That is, the frame operators decompose as \[F_{k}=F_{k_{1}}\otimes F_{k_{2}}\otimes\cdots\otimes F_{k_{L}},\] where \(k\to(k_{1},k_{2},\ldots,k_{L})\) with each \(k_{l}=(r_{l},s_{l})\in\mathbb{Z}_{d_{l}}\times\mathbb{Z}_{d_{l}}\). 
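As a concrete check of this frame machinery, here is a short numerical sketch (our own illustration) of the qubit DW frame of Eq. (7) with the NQPR dual \(G_{j}=dF_{j}\) for \(d=2\); the Bloch vector below is an arbitrary assumption, picked so that this frame choice exhibits negativity:

```python
# Qubit DW frame of Eq. (7), its NQPR dual, and a negatively represented state.

import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Frame ordered as k = (r, s) for r, s in {0, 1}
F = [(I2 + (-1)**r * sx + (-1)**s * sz + (-1)**(r + s) * sy) / 4
     for r in (0, 1) for s in (0, 1)]
G = [2 * f for f in F]                       # NQPR dual: G_j = d F_j with d = 2

# Duality Tr[F_j G_k] = delta_jk, and the frame resolves the identity
assert np.allclose([[np.trace(f @ g).real for g in G] for f in F], np.eye(4))
assert np.allclose(sum(F), I2)

# A Bloch direction for which this frame yields a negative quasiprobability
n = -np.ones(3) / np.sqrt(3)
rho = (I2 + n[0] * sx + n[1] * sy + n[2] * sz) / 2
v = np.array([np.trace(rho @ f).real for f in F])
print(v, v.sum())                            # normalized, with one negative entry
```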
This tensor structure is enjoyed by any NQPR and thus affords an aesthetic benefit when dealing with composite states and purifications. \begin{table} \begin{tabular}{|l|l|l|} \hline **Object** & **Hilbert space formalism** & **Quasiprobability formalism** \\ \hline State & \(\rho=\sum_{i}\lambda_{i}\ket{\lambda_{i}}\bra{\lambda_{i}}\) & \(v^{\rho}\ :\ v_{a}^{\rho}=\mathrm{Tr}[\rho\,F_{a}]\) \\ \hline POVM & \(\{E_{m}\,|\,E_{m}\geq 0\,,\sum_{m}E_{m}=\mathbb{1}\}\) & \(\bar{v}^{m}\ :\ \bar{v}_{a^{\prime}}^{m}=\mathrm{Tr}[E_{m}\,G_{a^{\prime}}]\) \\ \hline Unitary & \(\mathcal{U}[\bullet]=U\bullet U^{\dagger},\ UU^{\dagger}=\mathbb{1}\) & \(S^{\mathcal{U}}:S^{\mathcal{U}}_{a^{\prime}a}=\mathrm{Tr}\big{[}F_{a^{\prime}}UG_{a}U^{\dagger}\big{]}\) \\ \hline Channel & \(\mathcal{E}[\bullet]=\sum_{l}\kappa_{l}\bullet\kappa_{l}^{\dagger},\ \sum_{l}\kappa_{l}^{\dagger}\kappa_{l}=\mathbb{1}\) & \(S^{\mathcal{E}}\ :\ S^{\mathcal{E}}_{a^{\prime}a}=\mathrm{Tr}[F_{a^{\prime}}\mathcal{E}[G_{a}]]\) \\ \hline Born Rule & \(\mathrm{Tr}[\rho E_{m}]\) & \(v^{\rho}\cdot\bar{v}^{m}\in[0,1]\) \\ \hline Dimensionality & \(\mathtt{dim}[\mathbb{C}^{d}]=d\) & \(\mathtt{dim}[\mathbb{Z}_{d}\otimes\mathbb{Z}_{d}]=d^{2}\) \\ \hline \end{tabular} \end{table} Table 1: Morphisms between the Hilbert space formalism and the quasiprobability formalism. \(v_{a}^{p}=p(a)\) indicates the \(a\)-th entry of the distribution \(p\). \(S^{\mathcal{E}}_{a^{\prime}a}=\mathcal{E}(a^{\prime}|a)\) indicates the entry in the \(a^{\prime}\)-th row and \(a\)-th column of the matrix \(S^{\mathcal{E}}\). ### SIC-POVM representation The second canonical choice of QPR is built on a symmetric, informationally complete POVM: a set of \(d^{2}\) subnormalized rank-one projectors \(F_{j}=\Pi_{j}/d\), whose dual frame is given by \[G_{j}=d(d+1)F_{j}-\mathbb{1}. \tag{8}\] A canonical choice of SIC-POVM frame for the qubit scenario is the tetrahedron \[F_{0}=\frac{1}{4}\left[\mathbb{1}+\frac{1}{\sqrt{3}}(1,-1,1)\cdot\vec{\sigma}\right], \tag{9}\] \[F_{1}=\frac{1}{4}\left[\mathbb{1}+\frac{1}{\sqrt{3}}(1,1,-1)\cdot\vec{\sigma}\right], \tag{10}\] \[F_{2}=\frac{1}{4}\left[\mathbb{1}+\frac{1}{\sqrt{3}}(-1,1,1)\cdot\vec{\sigma}\right], \tag{11}\] \[F_{3}=\frac{1}{4}\left[\mathbb{1}+\frac{1}{\sqrt{3}}(-1,-1,-1)\cdot\vec{\sigma}\right], \tag{12}\] where \(\vec{\sigma}=(\sigma_{x},\sigma_{y},\sigma_{z})\) is the vector of Pauli matrices. In our calculations, the choice of representation, when relevant, will be stated in context and distinguished. If not, the derivation will apply generally to all representations. ## IV The Petz map in quasiprobability formalisms Now, our task is to express the Petz recovery map in its QPR, which we denote as \(S^{\hat{\mathcal{E}}_{\gamma}}\). This obviously can be done by invoking the morphism for channels in Table 1 and then connecting it with (2). This gives: \[S^{\hat{\mathcal{E}}_{\gamma}}_{aa^{\prime}}=\mathrm{Tr}\Bigg{[}F_{a}\sqrt{\gamma}\mathcal{E}^{\dagger}\left[\frac{1}{\sqrt{\mathcal{E}[\gamma]}}G_{a^{\prime}}\frac{1}{\sqrt{\mathcal{E}[\gamma]}}\right]\sqrt{\gamma}\Bigg{]} \tag{13}\] But, of course, this affords us no new insight. We are still relying entirely on the Hilbert space formalism. Nothing novel can be said in comparison to classical Bayesian inference as found in (3). Our specific task is as illustrated in FIG. 1: write the Petz map in a way that _only quasiprobability-theoretic objects_ (quasi-stochastic vectors, matrices and frames) _are required_. The naive guess that \(S^{\hat{\mathcal{E}}_{\gamma}}\) could be obtained by grafting the quasiprobabilistic formalism onto the classical Bayesian inverse (3) is easily dismissed: the \(S^{\tilde{\mathcal{E}}_{\gamma}}_{\mathbf{CL}}\) obtained by such a recipe is in general not a valid map (see Appendix E for explicit counterexamples). 
Rather, taking a hint from (4), we note that the channel isomorphism works also when a map is _not_ CPTP. Hence it is the case that \[S^{\hat{\mathcal{E}}_{\gamma}}=M_{\gamma^{1/2}}\left(S^{\mathcal{E}^{\dagger}}\right)M_{\mathcal{E}[\gamma]^{-1/2}}\,, \tag{14}\] with \[S^{\mathcal{M}_{\alpha^{r}}}_{a^{\prime}a}\coloneqq(M_{\alpha^{r}})_{a^{\prime}a}=\mathrm{Tr}[F_{a^{\prime}}\alpha^{r}G_{a}\alpha^{r}]\,. \tag{15}\] Now, it is crucial for our goals that all objects entering (14) can be constructed within the quasiprobability formalism: so we have to prove that this holds for \(M_{\alpha^{r}}\). As a first check, we notice that all the entries of these matrices are real. Indeed, one can rewrite \((M_{\alpha^{r}})_{a^{\prime}a}=\mathrm{Tr}[\mathcal{F}_{a^{\prime}}^{r}\mathcal{G}_{a}^{r}]\) with \(\mathcal{F}_{a}^{r}=\alpha^{r/2}F_{a}\alpha^{r/2}\) and \(\mathcal{G}_{a}^{r}=\alpha^{r/2}G_{a}\alpha^{r/2}\). These are Hermitian operators, and so \(\mathrm{Tr}[\mathcal{F}_{a^{\prime}}^{r}\mathcal{G}_{a}^{r}]=\frac{1}{2}\mathrm{Tr}[\{\mathcal{F}_{a^{\prime}}^{r},\mathcal{G}_{a}^{r}\}]\) is real. Next, we provide a recipe to explicitly compute the \(M_{\alpha^{r}}\) (recalling that we shall need it for \(r=\pm\frac{1}{2}\)). For \(r=1\), it is relatively straightforward to see that \[(M_{\alpha})_{a^{\prime}a} = \mathrm{Tr}[F_{a^{\prime}}\alpha G_{a}\alpha] \tag{16}\] \[= \sum_{xy}v_{x}^{\alpha}v_{y}^{\alpha}\mathrm{Tr}[F_{a^{\prime}}G_{x}G_{a}G_{y}]\] (17) \[=: \sum_{xy}v_{x}^{\alpha}v_{y}^{\alpha}\xi_{a^{\prime}xay} \tag{18}\] where the \(\xi_{pqrs}=\mathrm{Tr}[F_{p}G_{q}G_{r}G_{s}]\) are referred to as _structure coefficients_. Here we have invoked the fact that every density operator \(\alpha\) can be reconstructed from \(v^{\alpha}\) as \(\alpha=\sum_{x}v_{x}^{\alpha}G_{x}\). While such a closed expression cannot be found for \(r=\pm\frac{1}{2}\), fortunately for any \(r\in\mathbb{R}\) one can prove (see Appendix A) that \[M_{\alpha^{r}}=M_{\alpha}^{r}\,. \tag{19}\] Thus, to compute the \(M_{\alpha^{r}}\) for \(r=\pm\frac{1}{2}\), one first writes down \(M_{\alpha}\) and then takes the suitable roots. The resulting matrices are guaranteed to contain only real entries by the remark above, which was valid for every \(r\). In summary, we have obtained our main result: **Result**.: _The Petz map in any QPR reads_ \[S^{\hat{\mathcal{E}}_{\gamma}}_{\mathbf{QM}}=M_{\gamma}^{1/2}\left(S^{\mathcal{E}^{\dagger}}\right)M_{\mathcal{E}[\gamma]}^{-1/2} \tag{20}\] _where_ \[\left(M_{\gamma}\right)_{a^{\prime}a} =\sum_{xy}v_{x}^{\gamma}v_{y}^{\gamma}\,\xi_{a^{\prime}xay}\] \[\left(M_{\mathcal{E}[\gamma]}\right)_{a^{\prime}a} =\sum_{xy}(S^{\mathcal{E}}v^{\gamma})_{x}(S^{\mathcal{E}}v^{\gamma})_{y}\,\xi_{a^{\prime}xay}\] _and \(\xi_{pqrs}=\mathrm{Tr}[F_{p}G_{q}G_{r}G_{s}]\) are structure coefficients determined by the specific QPR. Everything is expressed exclusively in the quasiprobabilistic formalism: no knowledge of Hilbert space renditions of the quantum channel or reference state is required._ For the two canonical choices of QPR introduced above, we prove in Appendix B that \[\mathbf{NQPR}\,:\,S^{\mathcal{E}^{\dagger}}_{\mathbf{NQ}}{=}(S^{\mathcal{E}})^{\mathrm{T}} \tag{21}\] Figure 1: The task, illustrated commutatively. 
\[\mathbf{SIC-POVM}:\ S_{\mathbf{SP}}^{\mathcal{E}^{\dagger}}\!=\!(S^{\mathcal{E}})^{\mathrm{T}}+K_{\mathcal{E}} \tag{22}\] where \((K_{\mathcal{E}})_{ij}=\frac{1}{d}(\sum_{a}\mathcal{E}(j|a)-1)\); whence explicitly \[S_{\mathbf{NQ}}^{\hat{\mathcal{E}}_{\gamma}} =M_{\gamma}^{1/2}(S^{\mathcal{E}})^{\mathrm{T}}M_{\mathcal{E}[\gamma]}^{-1/2} \tag{23}\] \[S_{\mathbf{SP}}^{\hat{\mathcal{E}}_{\gamma}} =M_{\gamma}^{1/2}\left[(S^{\mathcal{E}})^{\mathrm{T}}+K_{\mathcal{E}}\right]M_{\mathcal{E}[\gamma]}^{-1/2} \tag{24}\] Since the QPRs of unital maps (i.e. \(\mathcal{E}[\mathbb{1}]=\mathbb{1}\)) are quasi-bistochastic matrices (that is, \(\sum_{a}\mathcal{E}(j|a)=1\) for all \(j\)), for such maps \(K_{\mathcal{E}}\) vanishes and the expressions for the NQPR and SIC-POVM representations are formally identical. ## V Discussion ### Formal Comparisons Across Regimes Here we discuss and compare formal features across classical and quantum Bayesian inference, as expressed in (3) and (20). The key points of comparison are summarized in Table 2. We first express (3) in the following form: \[S_{\mathbf{CL}}^{\tilde{\mathcal{E}}_{\gamma}}=W_{\gamma}^{1/2}\left(S^{\mathcal{E}^{\dagger}}\right)W_{\mathcal{E}[\gamma]}^{-1/2} \tag{25}\] Here, we have highlighted two things about the classical retrodiction map. Firstly, (25) highlights the fact that one can always write \(D_{\gamma}\) as the square root of its own square \(W_{\gamma}=D_{\gamma}^{2}\). This cosmetic change has advantages for comparing with (20) later. We also leave a reminder that \(D_{\gamma}\) is a diagonal matrix with entries corresponding to the distribution of \(\gamma\) (i.e. \((D_{\gamma})_{ij}=v_{i}^{\gamma}\delta_{ij}\)). Secondly, (25) highlights the fact that (in parallel with the analogous relation (21) found in NQPR) for classical channels the transpose of the channel corresponds to the adjoint, \((S^{\mathcal{E}})^{\mathrm{T}}=S^{\mathcal{E}^{\dagger}}\). That is, \((S^{\mathcal{E}})^{\mathrm{T}}\) _satisfies the relation (5) by morphism_ (see Appendix C). With this, there are a few similarities and differences worth noting. Firstly, the retrodiction maps, across both regimes, feature the same structure: a central "adjoint" matrix \(S^{\mathcal{E}^{\dagger}}\), a prior-dependent matrix (i.e. \(M_{\gamma}^{1/2}\), \(W_{\gamma}^{1/2}=D_{\gamma}\)) acting on its left, and a posterior-dependent matrix (i.e. \(M_{\mathcal{E}[\gamma]}^{-1/2}\), \(W_{\mathcal{E}[\gamma]}^{-1/2}=D_{\mathcal{E}[\gamma]}^{-1}\)) acting on its right. Secondly, the central adjoint object in \(S_{\mathbf{NQ}}^{\hat{\mathcal{E}}_{\gamma}}\) and in \(S_{\mathbf{CL}}^{\tilde{\mathcal{E}}_{\gamma}}\) is in both cases the transpose of the channel matrix itself. For \(S_{\mathbf{SP}}^{\hat{\mathcal{E}}_{\gamma}}\), the additional \(K_{\mathcal{E}}\) term may be thought of as correcting for the positivity of the states. Thirdly, the prior (and posterior) dependent matrices \(X_{\gamma}\) differ structurally between classical and quantum inference. Having expressed \(D_{\gamma}\) as a function of \(W_{\gamma}\), we see how these matrices can be generally defined as \[(X_{\gamma})_{ij}=\sum_{xy}v_{x}^{\gamma}v_{y}^{\gamma}\xi_{ixjy}. \tag{26}\] With this, it becomes clear that the key difference between these two domains of inference is the nature of the "structure coefficients" \(\xi_{pqrs}\). While in the quantum scenario \(\xi_{pqrs}=\mathrm{Tr}[F_{p}G_{q}G_{r}G_{s}]\), classical Bayesian inference calls for something much more reductive: \(\xi_{pqrs}=\delta_{pq}\delta_{rs}\delta_{pr}\). 
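The main result is easy to verify numerically. The following sketch is our own illustration (the amplitude-damping Kraus pair and the reference state are arbitrary test assumptions): it checks, in the DW representation, that the Hilbert-space Petz map (2) has exactly the quasi-stochastic matrix \(M_{\gamma}^{1/2}(S^{\mathcal{E}})^{\mathrm{T}}M_{\mathcal{E}[\gamma]}^{-1/2}\) of Eq. (23), verifying Eq. (19) along the way.

```python
# Numerical check of Eqs. (19), (20)/(23) in the DW representation.

import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
F = [(I2 + (-1)**r * sx + (-1)**s * sz + (-1)**(r + s) * sy) / 4
     for r in (0, 1) for s in (0, 1)]
G = [2 * f for f in F]                        # NQPR dual, d = 2

p = 0.4                                       # amplitude-damping test channel
K = [np.array([[1, 0], [0, np.sqrt(1 - p)]]), np.array([[0, np.sqrt(p)], [0, 0]])]
chan = lambda X: sum(k @ X @ k.conj().T for k in K)
adj = lambda X: sum(k.conj().T @ X @ k for k in K)

S_of = lambda phi: np.array([[np.trace(F[i] @ phi(G[j])).real
                              for j in range(4)] for i in range(4)])
M_of = lambda a: np.array([[np.trace(F[i] @ a @ G[j] @ a).real
                            for j in range(4)] for i in range(4)])

def op_pow(A, r):                             # Hermitian operator power via eigenbasis
    w, V = np.linalg.eigh(A)
    return (V * np.power(w, r)) @ V.conj().T

def mat_pow(M, r):                            # M_alpha is symmetric PSD in an NQPR
    w, V = np.linalg.eigh((M + M.T) / 2)
    return (V * np.power(w, r)) @ V.T

gamma = np.array([[0.6, 0.1], [0.1, 0.4]], dtype=complex)
out = chan(gamma)

assert np.allclose(M_of(op_pow(gamma, 0.5)), mat_pow(M_of(gamma), 0.5))   # Eq. (19)

petz = lambda X: (op_pow(gamma, 0.5)
                  @ adj(op_pow(out, -0.5) @ X @ op_pow(out, -0.5))
                  @ op_pow(gamma, 0.5))                                   # Eq. (2)
lhs = S_of(petz)
rhs = mat_pow(M_of(gamma), 0.5) @ S_of(chan).T @ mat_pow(M_of(out), -0.5) # Eq. (23)
assert np.allclose(lhs, rhs)
```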
This singular difference in the classical structure coefficients casts out many structural features necessary in the general quantum case. These may be enumerated: * Firstly, the classical matrix neglects the dependence (present in the quantum matrix) of every entry on the _aggregation_ of every parameter in the prior distribution. * Secondly and relatedly, the classical case only has diagonal entries, each depending only on the corresponding parameter in the prior distribution. Meanwhile, the quantum matrix has non-diagonal "_coherences_". * Thirdly, while the entries of the classical matrix correspond trivially to values in the prior distribution, entries in the quantum matrix are _weighted_, depending on the representation, via the trace of four frame and dual operators. * Finally, the presence of coherences makes it such that quantum retrodiction requires _finding the root_ of \(M_{\gamma}\). In the classical scenario, \(W_{\gamma}\) is already diagonal. \begin{table} \[S_{\mathbf{RT}}^{\hat{\mathcal{E}}_{\gamma}}=X_{\gamma}^{1/2}\big{(}S^{\mathcal{E}^{\dagger}}\big{)}X_{\mathcal{E}[\gamma]}^{-1/2},\qquad(X_{\gamma})_{ij}=\sum_{xy}v_{x}^{\gamma}v_{y}^{\gamma}\xi_{ixjy}\] \begin{tabular}{|c|c|c|} \hline **Object** & **Quantum** & **Classical** \\ \hline \(S^{\mathcal{E}^{\dagger}}\) & \(\mathbf{NQ}:(S^{\mathcal{E}})^{\mathrm{T}}\); \(\ \mathbf{SP}:(S^{\mathcal{E}})^{\mathrm{T}}+K_{\mathcal{E}}\) & \((S^{\mathcal{E}})^{\mathrm{T}}\) \\ \hline \(\xi_{ixjy}\) & \(\mathrm{Tr}[F_{i}G_{x}G_{j}G_{y}]\) & \(\delta_{ix}\delta_{jy}\delta_{ij}\) \\ \hline \end{tabular} \end{table} Table 2: The general retrodictive expression and its quantum and classical instantiations. All these features emerge simply because of the differences between the structure coefficients present in these scenarios. We elaborate on the significance of these differences in Section VI. There are other resultant properties of \(M_{\alpha}\), on the matrix level, that may be worth noting. Generally, it is a real, positive semi-definite matrix with unit trace. That is, so defined, \(M_{\alpha}\geq 0\) and \(\mathrm{Tr}[M_{\alpha}]=1\). For SIC-POVM, it is not generally symmetric and thus not Hermitian. For NQPR, however, it does have symmetry and is thus a density operator under such a representation. That said, \(\mathrm{Tr}[M_{\alpha}^{r}]\neq 1\) when \(\mathtt{rank}(M_{\alpha})\neq 1\). Finally, since the square roots of \(M_{\gamma}\) and \(M_{\mathcal{E}[\gamma]}\) are certainly functions of the \(\mathcal{E}(a^{\prime}|a)\) and the \(\gamma(a)\), the quantum Bayes rule can _in principle_ be written as \[\hat{\mathcal{E}}_{\gamma}(a|a^{\prime})=f\Big{(}\{\mathcal{E}(a^{\prime}|a)\},\,\{\gamma(a)\}\Big{)} \tag{27}\] in full analogy to Eq. (1). But writing down this expression _in practice_ requires the explicit roots. For the simplest quantum case (the qubit) we would be solving for the roots of a quartic characteristic equation, whose general solution by radicals is too unwieldy to be written out usefully. ### Visualizing Quantum Inference via QPR #### iv.2.1 Introducing Transition Graphs A notable advantage of stochastic maps is their ease of visualization. 
One can draw what might be called "transition graphs", where transitions from \(a_{i}\) to \(a^{\prime}_{j}\) are depicted by arrows going from the former to the latter. The probability weights on these transitions may then be depicted by a number or by a colour function. These kinds of graphs are not straightforward to draw for the standard Hilbert space formalism. This is simply due to the use of complex terms, probability amplitudes and the plurality of possible basis choices. With QPR, we can illustrate transformations and their quantum Bayesian inverses with transition graphs just as we would for classical stochastic channels, albeit with the added task of depicting negativity in these transitions. In Appendix F and this section, we consider some choices of \(\mathcal{E}\) that give rise to \(S^{\mathcal{E}}\) and their retrodictions \(S^{\hat{\mathcal{E}}_{\gamma}}_{\mathbf{DW}}\) and \(S^{\hat{\mathcal{E}}_{\gamma}}_{\mathbf{SP}}\). These are then depicted as transition graphs. We have chosen to include, in particular, a Half-SWAP channel with a \(|1\rangle\!\langle 1|\) ancilla to visually illustrate and explore the properties of quantum retrodiction. Other transformations are also noted in passing, with their graphs and expressions consolidated in Appendix F. Before these, we note some illustrative elements of these figures. Firstly, with transition arrows we depict negative (positive) quasiprobabilities with cooler (warmer) shades. Furthermore, these negative (positive) arrows will be drawn with dashed (solid) lines. A colour legend is included in FIG. 2a. Secondly, in order to get a sense of how irreversible a forward map is and which states it tends to erase toward, we add coloured "bubbles" around the _output_ side (denoted \(\{a^{\prime}_{j}\}\)) of every graph for a given \(S^{\mathcal{E}}\). The intensity and colour of the bubbles are weighted according to the quasiprobability distribution of the state \(\mathcal{E}[\mathbb{1}/d]\). Hence, one should expect these bubbles to be coloured uniformly for all unital maps. Thirdly, a similar feature is added for the retrodictive transition graphs, drawn for \(S^{\hat{\mathcal{E}}_{\gamma}}\) matrices. Crucial for understanding the Bayesian inverse is the reference prior. Hence, for Bayesian-inverting transition graphs we add coloured bubbles on the _input_ (denoted \(\{a_{j}\}\), that is, the input of the _forward_ map) side of the graph, weighted according to the distribution of \(\gamma\). Finally, for simplicity, we stick to channels acting on qubits. We also use the most canonical choices of frames for both DW (\(r,s\) starting from \(0\)) and SIC-POVM representations (consolidated in (9)). #### iv.2.2 Fully Reversible & Fully Irreversible As depicted in Figures 5 and 6 (found in Appendix F), we observe the provable property that \(S^{\hat{\mathcal{U}}_{\gamma}}=S^{\mathcal{U}^{\dagger}}=(S^{\mathcal{U}})^{\mathrm{T}}\) for unitary channels \(\mathcal{U}\). The Bayesian inverses simply reflect the transition trajectories back, doing so with equal probability and negativity, regardless of what reference prior is chosen. More interesting features occur for non-unitary channels. 
We may write any CPTP map as a dilation defined by a global unitary \(U\) acting on an extended state space \(\mathcal{H}_{A}\otimes\mathcal{H}_{B}\), on which the input system \(\bullet_{A}\) and an environment or ancilla \(\beta_{B}\) are defined: \[\mathcal{E}[\bullet]=\mathrm{Tr}_{B}[U\bullet\otimes\!\beta\,U^{\dagger}] \tag{28}\] We stick to the case where both the target and the ancilla are qubits. Arbitrary qubits may be written as \[\gamma(w,\theta,\phi)=w\ket{\psi}\!\bra{\psi}+(1-w)\ket{\psi^{\perp}}\!\bra{\psi^{\perp}}, \tag{29}\] where \(\ket{\psi}=\cos(\theta/2)\ket{0}+e^{i\phi}\sin(\theta/2)\ket{1}\) and \(\ket{\psi^{\perp}}=e^{-i\phi}\sin(\theta/2)\ket{0}-\cos(\theta/2)\ket{1}\). In maximal contrast to unitary channels, one may consider a quantum total erasure channel. This is simply a kind of replacement map where a Full-SWAP acts on a qubit and an ancilla and we trace out the environment. The Bayesian inverse of such quantum channels follows their classical counterparts: they erase back to the reference prior [14]. Since the channel is totally irreversible, the quantum Bayes rule simply reverts our inference to our best guess about the initial state (illustrated by FIG. 4). #### iv.2.3 Liminally (Ir)reversible For a more conceptually involved and instructive scenario, we consider the Half-SWAP \(U_{\!\!\lambda}\), which may be represented in the computational basis as: \[U_{\!\!\lambda}\,\,\hat{=}\,\frac{1}{\sqrt{2}}\begin{pmatrix}\sqrt{2}&0&0&0\\ 0&1&1&0\\ 0&1&-1&0\\ 0&0&0&\sqrt{2}\end{pmatrix} \tag{30}\] As depicted in FIG. 2, we have the forward and retrodictive transition graphs for a channel given by \(\mathcal{E}[\bullet]=\mathrm{Tr}_{B}[U_{\!\!\lambda}\,\bullet\otimes|1\rangle\!\langle 1|\,U_{\!\!\lambda}^{\dagger}]\). To understand the retrodictive action given by the Petz map, we can gain some intuition by writing out these mappings: \[\ket{01}\stackrel{{U_{\!\!\lambda}}}{{\longrightarrow}}\frac{1}{\sqrt{2}}\big{(}\ket{01}+\ket{10}\big{)}\stackrel{{\mathrm{Tr}_{B}}}{{\longrightarrow}}\frac{\mathbb{1}}{2}\,,\qquad\ket{11}\stackrel{{U_{\!\!\lambda}}}{{\longrightarrow}}\ket{11}\stackrel{{\mathrm{Tr}_{B}}}{{\longrightarrow}}\ket{1}\!\bra{1}\,.\] We see that if the reference state is \(\gamma=|0\rangle\!\langle 0|\) or \(|+\rangle\!\langle+|\), then any state is compatible with its output (the corresponding outputs \(\mathcal{E}[\gamma]\) are unambiguously full rank in \(\mathbb{C}^{2}\)). Hence, the Petz map erases all (output) states back to the reference, in full consistency with the earlier comments about the quantum total erasure channel. This is depicted in Figures 2(c) and 2(d). A very different situation occurs for \(\gamma=|1\rangle\!\langle 1|\). In this case only \(|1\rangle\!\langle 1|\) is allowed as an output. Thus, the Petz map sends \(|1\rangle\!\langle 1|\) to itself while all other states are retrodicted in (complicated but logically consistent) ways dependent on the channel's forward transitions, reflected in FIG. 2(e). 
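The two mappings written out above are quick to confirm numerically. The following minimal sketch (our own illustration) builds the Half-SWAP channel of Eq. (30) with the \(|1\rangle\!\langle 1|\) ancilla and checks the erasure of \(|0\rangle\!\langle 0|\) and the fixedness of \(|1\rangle\!\langle 1|\):

```python
# The Half-SWAP channel of Eq. (30): |0><0| is erased to 1/2 while |1><1| is fixed.

import numpy as np

U = np.array([[np.sqrt(2), 0, 0, 0],
              [0, 1, 1, 0],
              [0, 1, -1, 0],
              [0, 0, 0, np.sqrt(2)]], dtype=complex) / np.sqrt(2)
assert np.allclose(U @ U.conj().T, np.eye(4))

anc = np.array([[0, 0], [0, 1]], dtype=complex)             # ancilla |1><1|

def chan(rho):
    big = U @ np.kron(rho, anc) @ U.conj().T
    return big.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)  # partial trace over B

zero, one = np.diag([1.0, 0j]), np.diag([0j, 1.0])
assert np.allclose(chan(zero), np.eye(2) / 2)   # |01> -> (|01>+|10>)/sqrt(2) -> 1/2
assert np.allclose(chan(one), one)              # |11> -> |11> -> |1><1|
```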
To explain this retrodictive behaviour more symmetrically: in the former two scenarios, all outputs are compatible with the absolute conviction (as enforced by state purity) given to the reference state, hence all outputs are retrodicted to it. Meanwhile, in this latter case, _only one pure output_ (which just so happens to be the same as the reference) is compatible with the pure reference state. Hence, all _other states_ (beside the expected output) are retrodicted in accordance with the channel without any regard for the reference, since the reference already excludes the possibility of such states. These more complicated Bayesian inversions come together and culminate in a vertical reflection of the forward channel, as FIG. 2(e) depicts. For an arbitrary \(\gamma\), we get a classical mixture of all these key effects together. We depict the case where \(\gamma=\gamma(\frac{\pi}{16},\frac{\pi}{5},\frac{\pi}{3})\) in FIG. 2(f). It should be said that the interplay of reference and channel dependencies we have reviewed here is fundamental in classical retrodiction scenarios as well. The Half-SWAP illustrates that these same fundamental Bayesian principles hold in the quantum regime via the inferential structure of the Petz map, even when complementarity and entanglement are introduced. ## VI Conclusions By expressing the Petz recovery map as a decomposition of matrices given by (20), we have situated quantum Bayesian inference in the same formal language as that of its classical counterpart given by (25). We have also highlighted what we have found to be the most noteworthy (and interpretation-neutral) differences between these two levels of inference. It should be clear to the reader that, in keeping with the Bayesian character of the Petz map, the crucial formal difference is found in the prior-dependent (and in turn, posterior-dependent) matrices \(X_{\gamma}\). The properties of these objects encode the most significant differences between classical and quantum inference. In particular, they formalize how prior and posterior variables are taken into account around the central adjoint map, structurally speaking. In the classical case we neglect the total aggregation (all parameters are involved in every entry), the "coherences" (non-diagonal terms with sums of product pairs of the parameters, weighted depending on the choice of representation) and the "eigenstructure" (finding the matrix root of the prior-dependent matrix is not generally circumvented) found in the quantum case. This is all emergent from the simple differences between the "structure coefficients" found across these regimes. Adding to all this the characteristic notion of negativity embedded in all three component matrices, we have a sense of just how wide a conceptual gap we have on our hands. The reduction in structure that we find in the classical regime may be understood as a kind of classical epistemic prejudice, which leads to absurdities if applied to general quantum scenarios. This prejudice (or reduction) is, of course, unsurprising. Classical Bayesian thinking is deeply intuitive and subconscious for us, whether we are mathematically initiated or not. Nevertheless, articulating or formalizing such strong intuitions already has its complications. Given the orders of differences between this kind of inference and that of the quantum regime, we see why the classical prejudice presides over our everyday experience. That said, this work also illustrates noteworthy similarities. 
Despite profound structural differences, many Bayesian intuitions seem to nevertheless come together, as illustrated in the transition graphs. The prior- and posterior-dependent matrices and the role of the channel's adjoint persist in both regimes. Under this exploration, we take a step closer to what a regime-independent Bayesian foundation could be. Many open questions remain. For one, we can look into the overlap between classical and quantum inference: for which (quantum) processes do classical and quantum retrodiction become equivalent (that is, for every choice of reference prior)? It is easily checked that this holds for channels that give permutative \(S^{\mathcal{E}}\) and for quantum total erasure channels. Aside from these two extreme cases, are there other channels where this retrodiction property holds? Related to this, it may be worth investigating whether, for some frames or under some gauge transformations, the decomposition (20) may be simplified into a more classically intuitive form. No such simpler alternative has been found for every channel and every reference. Finally, throughout this paper we have identified the Petz map with quantum Bayesian inference. Nevertheless, other transpose maps exist that have been seen as retrodiction channels. It could be noteworthy to perform a similar decomposition on those maps under quasiprobability schemes. ###### Acknowledgements. This research was supported by the National Research Foundation and the Ministry of Education, Singapore, under the Research Centres of Excellence programme (till 6 December 2022); and by the National Research Foundation, Singapore, and A\({}^{*}\)Star under the CQT Bridging Grant (from 7 December 2022 onwards). We also thank Zaw Lin Htoo and Eugene Koh for helpful discussions. ## Appendix A \(M_{\alpha^{r}}=M_{\alpha}^{r}\) for all \(r\in\mathbb{R}\) We know that in general, \[(A_{ij}(z)\to A)\not\Rightarrow(A_{ij}(z^{r})\to A^{r})\,. \tag{10}\] Informally speaking, powers on the level of entry parameters do not necessarily translate to powers on the level of matrices. Thankfully, this does obtain in our case. The derivation is as follows. We first note that \[(M_{\alpha^{r}})_{ij}=\operatorname{Tr}[F_{i}\alpha^{r}G_{j}\alpha^{r}] \tag{11}\] It may be tempting to invoke that since \[S^{\mathcal{E}}S^{\mathcal{F}}=S^{\mathcal{E}\circ\mathcal{F}} \tag{12}\] we would already have the desired theorem. However, this will only do for \(r\in\mathbb{Z}^{+}\). If we wish to do so for values of \(r\in\mathbb{R}\), we must be careful not to already assume the conclusion in our derivation. Our proof comes in three main steps. Firstly, \[(M_{\alpha^{1/2}}M_{\alpha^{1/2}})_{ij} =\sum_{k}(M_{\alpha^{1/2}})_{ik}(M_{\alpha^{1/2}})_{kj}\] \[=\sum_{k}\operatorname{Tr}\bigl{[}F_{i}\sqrt{\alpha}G_{k}\sqrt{\alpha}\bigr{]}\operatorname{Tr}\bigl{[}F_{k}\sqrt{\alpha}G_{j}\sqrt{\alpha}\bigr{]}\] \[=\sum_{k}\operatorname{Tr}\bigl{[}G_{k}\sqrt{\alpha}F_{i}\sqrt{\alpha}\bigr{]}\operatorname{Tr}\bigl{[}F_{k}\sqrt{\alpha}G_{j}\sqrt{\alpha}\bigr{]}\] \[=\operatorname{Tr}\bigl{[}\sqrt{\alpha}F_{i}\sqrt{\alpha}\sqrt{\alpha}G_{j}\sqrt{\alpha}\bigr{]}\] \[=\operatorname{Tr}[F_{i}\alpha G_{j}\alpha]=(M_{\alpha})_{ij}\] Crucially, in the fourth equality we use the property (6) of a QPR. Hence, \[M_{\alpha^{1/2}}=M_{\alpha}^{1/2}\] By reiterating this (i.e. 
sending \(\alpha\to\sqrt{\alpha}\)), we obtain the more general relation that \[\forall x\in\mathbb{Z}^{+}:\ M_{\alpha^{1/2^{x}}}=M_{\alpha}^{1/2^{x}}\] Which is really just to say: \[M_{\alpha^{\epsilon}}=M_{\alpha}^{\epsilon},\] for an arbitrarily small and positive real number \(\epsilon\). Invoking (12) and the definition of \(M_{\alpha^{r}}\), for \(N\in\mathbb{Z}^{+}\), we have \[M_{\alpha^{N\epsilon}} =S^{\mathcal{M}_{\alpha^{N\epsilon}}}\] \[=S^{\overbrace{\mathcal{M}_{\alpha^{\epsilon}}\circ\mathcal{M}_{\alpha^{\epsilon}}\circ\cdots\circ\mathcal{M}_{\alpha^{\epsilon}}}^{N\text{ times}}}\] \[=\prod_{N}M_{\alpha^{\epsilon}}=\prod_{N}M_{\alpha}^{\epsilon}.\] Together, we may thus conclude that for \(N\in\mathbb{Z}^{+}\): \[M_{\alpha^{N\epsilon}}=M_{\alpha}^{N\epsilon}. \tag{13}\] Secondly, we note that \[(M_{\alpha}M_{\alpha^{-1}})_{ij} =\sum_{k}\operatorname{Tr}[F_{i}\alpha G_{k}\alpha]\operatorname{Tr}\bigl{[}F_{k}\alpha^{-1}G_{j}\alpha^{-1}\bigr{]}\] \[=\sum_{k}\operatorname{Tr}[G_{k}\alpha F_{i}\alpha]\operatorname{Tr}\bigl{[}F_{k}\alpha^{-1}G_{j}\alpha^{-1}\bigr{]}\] \[=\operatorname{Tr}\bigl{[}\alpha F_{i}\alpha\alpha^{-1}G_{j}\alpha^{-1}\bigr{]}\] \[=\operatorname{Tr}[F_{i}G_{j}]=\delta_{ij}=\mathbb{1}_{ij}\] Hence, \(M_{\alpha^{-1}}=M_{\alpha}^{-1}\). Repeating this, we can easily see that for any \(N^{\prime}\in\mathbb{Z}^{+}\): \[M_{\alpha^{-N^{\prime}}}=M_{\alpha}^{-N^{\prime}} \tag{14}\] Finally, taking from (12), (13) and (14), we find that: \[M_{\alpha^{N\epsilon-N^{\prime}}} =S^{\mathcal{M}_{\alpha^{N\epsilon-N^{\prime}}}}=S^{\mathcal{M}_{\alpha^{N\epsilon}}\circ\mathcal{M}_{\alpha^{-N^{\prime}}}}\] \[=M_{\alpha^{N\epsilon}}M_{\alpha^{-N^{\prime}}}=M_{\alpha}^{N\epsilon}M_{\alpha}^{-N^{\prime}}\] \[=M_{\alpha}^{N\epsilon-N^{\prime}}\] Since this holds for any arbitrarily small, positive \(\epsilon\) and for any arbitrarily large positive integers \(N\) and \(N^{\prime}\), we write \(N\epsilon-N^{\prime}\in\mathbb{R}\) and denote this \(N\epsilon-N^{\prime}\to r\). Thus we obtain our desired result: \(M_{\alpha^{r}}=M_{\alpha}^{r}\) for any \(r\in\mathbb{R}\). ## Appendix B \(S^{\mathcal{E}^{\dagger}}\) in terms of \((S^{\mathcal{E}})^{\text{T}}\) for QPRs We derive the QPR expressions for \(S^{\mathcal{E}^{\dagger}}\) for some CPTP map \(\mathcal{E}[\bullet]=\sum_{l}\kappa_{l}\bullet\kappa_{l}^{\dagger}\). For NQPRs, we find easily that: \[(S^{\mathcal{E}^{\dagger}}_{\mathbf{NQ}})_{ij} =\operatorname{Tr}\bigl{[}F_{i}\mathcal{E}^{\dagger}[G_{j}]\bigr{]}=\operatorname{Tr}\Biggl{[}F_{i}\sum_{l}\kappa_{l}^{\dagger}G_{j}\kappa_{l}\Biggr{]}\] \[=\sum_{l}\operatorname{Tr}\Bigl{[}G_{j}\kappa_{l}F_{i}\kappa_{l}^{\dagger}\Bigr{]}=\sum_{l}\operatorname{Tr}\Bigl{[}F_{j}\kappa_{l}G_{i}\kappa_{l}^{\dagger}\Bigr{]}\] \[=\operatorname{Tr}\Biggl{[}F_{j}\sum_{l}\kappa_{l}G_{i}\kappa_{l}^{\dagger}\Biggr{]}=\operatorname{Tr}[F_{j}\mathcal{E}[G_{i}]]=S_{ji}^{\mathcal{E}}\] Thus, for NQPRs \[S^{\mathcal{E}^{\dagger}}_{\mathbf{NQ}}=(S^{\mathcal{E}})^{\text{T}} \tag{15}\] For SIC-POVM representations, we have a more complicated expression. We first use (8), \[(S^{\mathcal{E}^{\dagger}}_{\mathbf{SP}})_{ij} =\operatorname{Tr}\!\left[F_{i}\mathcal{E}^{\dagger}[G_{j}]\right]\] \[=\operatorname{Tr}\!\left[\frac{G_{i}+\mathbb{1}}{d(d+1)}\,\mathcal{E}^{\dagger}\left[d(d+1)F_{j}-\mathbb{1}\right]\right]\] By expanding the terms and noting the unitality of every adjoint map (i.e. 
\(\mathcal{E}^{\dagger}[\mathbb{1}]=\mathbb{1}\)), we arrive at the expression: \[(S^{\mathcal{E}^{\dagger}}_{\mathbf{SP}})_{ij} =\operatorname{Tr}\!\left[F_{j}\sum_{l}\kappa_{l}G_{i}\kappa_{l}^{\dagger}\right]+\operatorname{Tr}\!\left[\mathcal{E}^{\dagger}[F_{j}]\right]-\operatorname{Tr}[F_{i}]\] \[=S^{\mathcal{E}}_{ji}+\operatorname{Tr}\!\left[\mathcal{E}^{\dagger}[F_{j}]\right]-\frac{1}{d}\] By taking note of the isomorphisms found in Table 1, we may write \(\operatorname{Tr}\!\left[\mathcal{E}^{\dagger}[F_{j}]\right]=\sum_{l}\operatorname{Tr}\!\left[\kappa_{l}^{\dagger}F_{j}\kappa_{l}\right]=\sum_{l}\operatorname{Tr}\!\left[F_{j}\kappa_{l}\mathbb{1}\kappa_{l}^{\dagger}\right]=\operatorname{Tr}\!\left[F_{j}\mathcal{E}[\mathbb{1}]\right]=\frac{1}{d}\sum_{a}\mathcal{E}(j|a)\). Hence, we can write the total expression of each entry for the SIC-POVM representation as: \[(S^{\mathcal{E}^{\dagger}}_{\mathbf{SP}})_{ij}=S^{\mathcal{E}}_{ji}+\frac{1}{d}\left(\sum_{a}\mathcal{E}(j|a)-1\right) \tag{10}\] This can be written, on the matrix level, as (22). ## Appendix C \((S^{\mathcal{E}})^{\mathbf{T}}\) as \(S^{\mathcal{E}^{\dagger}}\) for Classical Channels In the previous section we proved that for quantum channels (expressed in QPRs), we can express the adjoint channel in terms of the transpose of the channel. Here, we prove the converse direction for classical channels: that the transpose of a classical channel is the adjoint of that channel. Namely, the transpose map is the map for which (5) is fulfilled in classical scenarios. Noting first the commutative diagram found in FIG. 3 (which invokes the morphisms found in Table 1), we see how (5) is fulfilled by a map for which \[(S^{\mathcal{E}}v^{\rho})\cdot\bar{v}^{\sigma}=(S^{\mathcal{E}^{\dagger}}v^{\sigma})\cdot\bar{v}^{\rho}, \tag{11}\] for all \(\rho\) and \(\sigma\). With this, we first expand the LHS of (11): \[(S^{\mathcal{E}}v^{\rho})\cdot\bar{v}^{\sigma} =\sum_{y}(S^{\mathcal{E}}v^{\rho})_{y}\bar{v}^{\sigma}_{y} \tag{12}\] \[=\sum_{xy}S^{\mathcal{E}}_{yx}v^{\rho}_{x}\bar{v}^{\sigma}_{y} \tag{13}\] Next we expand the following, in order to check whether the transpose qualifies as the adjoint: \[((S^{\mathcal{E}})^{\mathrm{T}}v^{\sigma})\cdot\bar{v}^{\rho}=\sum_{xy}(S^{\mathcal{E}})^{\mathrm{T}}_{yx}v^{\sigma}_{x}\bar{v}^{\rho}_{y} \tag{14}\] \[=\sum_{xy}S^{\mathcal{E}}_{xy}v^{\sigma}_{x}\bar{v}^{\rho}_{y} \tag{15}\] \[=\sum_{xy}S^{\mathcal{E}}_{yx}\bar{v}^{\rho}_{x}v^{\sigma}_{y} \tag{16}\] Now, for classical scenarios the trace of two states, if treated like quantum states in Hilbert space, would simply be the inner product of their density spectra: \(v^{\rho}\cdot v^{\sigma}=\operatorname{Tr}[\rho\,\sigma]\). This is because the states, being classical distributions, would be diagonalized in the same way. Thus we could have replaced \(\bar{v}^{\rho}\) with \(v^{\rho}\) in all the above calculations and in FIG. 3. The reason why we have written \(\bar{v}^{\rho}\) as opposed to \(v^{\rho}\) is simply to highlight that, while indeed (13) is identical to (16) for classical scenarios because \(\bar{v}^{\rho}=v^{\rho}\) there (and for NQPR, for that matter, since \(\bar{v}^{\rho}=c\,v^{\rho}\)), the same does _not_ hold for SIC-POVM. The transpose qualifies as an adjoint for both NQPR and classical channels, but not for SIC-POVM. Hence, the relation proved here for classical states and channels does not contradict the ones proved in the previous section for QPRs. 
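The adjoint relations derived in Appendices B and C are easy to confirm numerically. The sketch below is our own illustration (the amplitude-damping channel is an arbitrary, deliberately non-unital test case): it checks Eq. (21) for the DW frame and Eq. (22) for the tetrahedron SIC-POVM frame.

```python
# Numerical check of S^{E†} = (S^E)^T for DW (Eq. (21)) and
# S^{E†} = (S^E)^T + K_E for SIC-POVM (Eq. (22)).

import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

F_dw = [(I2 + (-1)**r*sx + (-1)**s*sz + (-1)**(r+s)*sy)/4
        for r in (0, 1) for s in (0, 1)]
G_dw = [2*f for f in F_dw]
dirs = np.array([(1, -1, 1), (1, 1, -1), (-1, 1, 1), (-1, -1, -1)]) / np.sqrt(3)
F_sp = [(I2 + n[0]*sx + n[1]*sy + n[2]*sz)/4 for n in dirs]
G_sp = [6*f - I2 for f in F_sp]              # Eq. (8) with d = 2

p = 0.3                                      # non-unital amplitude-damping channel
K = [np.array([[1, 0], [0, np.sqrt(1-p)]]), np.array([[0, np.sqrt(p)], [0, 0]])]
chan = lambda X: sum(k @ X @ k.conj().T for k in K)
adj = lambda X: sum(k.conj().T @ X @ k for k in K)

def S_of(phi, F, G):
    return np.array([[np.trace(F[i] @ phi(G[j])).real for j in range(4)]
                     for i in range(4)])

S, Sd = S_of(chan, F_dw, G_dw), S_of(adj, F_dw, G_dw)
assert np.allclose(Sd, S.T)                               # Eq. (21)

S, Sd = S_of(chan, F_sp, G_sp), S_of(adj, F_sp, G_sp)
K_E = np.tile((S.sum(axis=1) - 1) / 2, (4, 1))            # (K_E)_{ij} depends only on j
assert np.allclose(Sd, S.T + K_E)                         # Eq. (22)
```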
Figure 3: Relations between the formalisms pertaining to the adjoint map, illustrated commutatively. ## Appendix D Properties of \(M_{\alpha}\) Here we note some interesting properties of \(M_{\alpha}\); namely, that it is a matrix with all real entries and non-negative eigenvalues that sum to \(1\). ### Real Entries It can be shown that all the entries of \(M_{\alpha}\) are real: \((M_{\alpha})_{ij}=\operatorname{Tr}[F_{i}\alpha G_{j}\alpha]\in\mathbb{R}\). A proof was given in the main text, valid for any \(M_{\alpha^{r}}\); we repeat it here for completeness. We first note that the anticommutator of any two Hermitian operators \(A\) and \(B\) is always also Hermitian: \(\{A,B\}^{\dagger}=\{A,B\}\); while the trace of the commutator of any two operators is always zero (in finite dimension) due to cyclicity: \(\mathrm{Tr}\big{[}[A,B]\big{]}=\mathrm{Tr}[AB]-\mathrm{Tr}[BA]=0\). Hence \[\mathrm{Tr}[AB]=\mathrm{Tr}\Big{[}\frac{\left\{A,B\right\}}{2}+\frac{\left[A,B\right]}{2}\Big{]}=\frac{1}{2}\mathrm{Tr}[\left\{A,B\right\}]\in\mathbb{R} \tag{10}\] Noting that \(\sqrt{\alpha}F_{i}\sqrt{\alpha}\) and \(\sqrt{\alpha}G_{j}\sqrt{\alpha}\) are both Hermitian (frame and dual operators are always Hermitian, and \(\alpha\) is a density operator in Hilbert space), we apply (10) to \((M_{\alpha})_{ij}\). The entries of \(M_{\alpha}\) are thus proven to be always real. ### Positive Semi-Definiteness For NQPR, we can always write \[(M_{\alpha})_{ij}=c\,\mathrm{Tr}\Big{[}\sqrt{\alpha}F_{i}\sqrt{\alpha}\left(\sqrt{\alpha}F_{j}\sqrt{\alpha}\right)^{\dagger}\Big{]}\] Hence, \(M_{\alpha}\) is a Gram matrix with some positive factor \(c\). Thus it is positive semi-definite. For SIC-POVM, we expand \((M_{\alpha})_{ij}\) via (8), arriving at: \[(M_{\alpha})_{ij}=\frac{1}{d(d+1)}\Big{(}\mathrm{Tr}\Big{[}\sqrt{\alpha}G_{i}\sqrt{\alpha}\left(\sqrt{\alpha}G_{j}\sqrt{\alpha}\right)^{\dagger}\Big{]}+\mathrm{Tr}\big{[}G_{j}\alpha^{2}\big{]}\Big{)}\] The first term, as with the NQPR case, corresponds to a Gram matrix, which is positive semi-definite. One can then note that the second term corresponds to a matrix \(J_{\alpha}\) (i.e. \((J_{\alpha})_{ij}=\mathrm{Tr}\big{[}G_{j}\alpha^{2}\big{]}\)) with identical rows (the \(j\)-th column is filled with the identical entry \(\mathrm{Tr}[G_{j}\alpha^{2}]\)). This simply implies that the only non-zero eigenvalue is the sum of the entries of any given row, which just means: \(\mathrm{eig}[J_{\alpha}]=\big{\{}\sum_{j}\mathrm{Tr}\big{[}G_{j}\alpha^{2}\big{]}\,,0\big{\}}=\big{\{}d\,\mathrm{Tr}\big{[}\alpha^{2}\big{]}\,,0\big{\}}\geq 0\), since \(\sum_{j}G_{j}=d\,\mathbb{1}\). So \(M_{\alpha}\) is the sum of two positive semi-definite matrices, and thus we may conclude \(M_{\alpha}\geq 0\) for SIC-POVM as well. ### Unit Trace The trace of \(M_{\alpha}\) is given by \[\mathrm{Tr}[M_{\alpha}]=\sum_{i}(M_{\alpha})_{ii}=\mathrm{Tr}\Big{[}\underbrace{\sum_{i}F_{i}\alpha G_{i}}_{\mathbb{I}}\,\alpha\Big{]}=1\] To prove the relation invoked for the final equality, we use a result previously found in [32]. Consider the superoperator \[\Lambda[\bullet]=\sum_{i=1}^{d^{2}}\Pi_{i}\bullet\Pi_{i}\,. \tag{11}\] It can be shown that \[\Lambda[\Pi_{i}]=\frac{d}{d+1}(\Pi_{i}+\mathbb{I}) \tag{12}\] Since the set \(\{\Pi_{i}\}\) forms a basis, we can express the superoperator as \[\Lambda=\frac{d}{d+1}(\mathcal{I}+\mathrm{id}) \tag{13}\] where \(\mathcal{I}[A]=\mathrm{Tr}[A]\,\mathbb{I}\) and \(\mathrm{id}\) is the identity superoperator. Using this we can easily show that for the SIC-POVM representation we have \[\sum_{i}F_{i}\alpha G_{i}=\frac{d+1}{d}\sum_{i}\Pi_{i}\alpha\Pi_{i}-\frac{1}{d}\Big{(}\sum_{i}\Pi_{i}\Big{)}\alpha=\mathbb{I}\,. \tag{14}\] For the discrete Wigner representation, Zhu [24] showed that the dual frame can always be expressed as such: \[G_{i}=-\sqrt{d+1}\Pi_{i}+\left(\frac{1+\sqrt{d+1}}{d}\right)\mathbb{I} \tag{15}\] Thus, it can also be easily shown that \(\sum_{i}F_{i}\alpha G_{i}=\mathbb{I}\) in this representation. ## Appendix E Examples for \(S^{\hat{\mathcal{E}}_{\gamma}}\neq S^{\tilde{\mathcal{E}}_{\gamma}}_{\mathbf{CL}}\) As discussed in Section V.2.3, it is the case that \(\hat{\mathcal{E}}_{+}[\rho]=|+\rangle\!\langle+|\) for all \(\rho\) when \(\mathcal{E}[\bullet]=\mathrm{Tr}_{B}[U_{\!\!\lambda}\,\bullet\otimes|1\rangle\!\langle 1|\,U_{\!\!\lambda}^{\dagger}]\) and \(\gamma=|+\rangle\!\langle+|\). Yet we can easily find that, for the canonical state representations for DW and SIC-POVM, we have: \[S^{\hat{\mathcal{E}}_{+}}_{\mathbf{DW}} =\begin{pmatrix}1&\frac{1}{7}\left(3-\sqrt{2}\right)&1&\frac{1}{7}\left(\sqrt{2}+3\right)\\ 0&\frac{1}{7}\left(\sqrt{2}+4\right)&0&\frac{1}{7}\left(4-\sqrt{2}\right)\\ 0&0&0&0\\ 0&0&0&0\end{pmatrix}\] \[\neq\frac{1}{2}\begin{pmatrix}1&1&1&1\\ 1&1&1&1\\ 0&0&0&0\\ 0&0&0&0\end{pmatrix}=S^{\tilde{\mathcal{E}}_{+}}_{\mathbf{DW}}\] Likewise, \[S^{\hat{\mathcal{E}}_{+}}_{\mathbf{SP}} =\begin{pmatrix}0.925&0.183&-0.264&0.353\\ 0.0744&0.744&0.275&0.168\\ -0.0191&0.0491&0.915&0.0947\\ 0.0199&0.0233&0.0737&0.384\end{pmatrix}\]
Using this we can easily show that for the SIC-POVM representation we have \[\sum_{i}F_{i}\alpha G_{i} =\frac{d+1}{d}\sum_{i}\Pi_{i}\alpha\Pi_{i}-\frac{1}{d}\alpha\sum_ {i}\Pi_{i}=\mathbb{1}\,. \tag{14}\] For the discrete Wigner representation, Zhu [24] showed that the dual frame can always be expressed as: \[G_{i}=-\sqrt{d+1}\Pi_{i}+\left(\frac{1+\sqrt{d+1}}{d}\right)\mathbb{1} \tag{15}\] Thus, it can also be easily shown that \(\sum_{i}F_{i}\alpha G_{i}=\mathbb{1}\) in this representation.

## Appendix E Examples for \(S^{\hat{\mathcal{E}}_{\gamma}}\neq S_{\mathbf{CL}}^{\hat{\mathcal{E}}_{\gamma}}\)

As discussed in Section V.2.3, it is the case that \(\hat{\mathcal{E}}_{\gamma}[\rho]=\hat{\mathcal{E}}_{+}[\rho]=|+\rangle\!\langle+|\) for all \(\rho\) when \(\mathcal{E}[\bullet]=\mathrm{Tr}_{B}[U_{\mathbb{J}}\bullet\otimes|1\rangle \!\langle 1|\,U_{\mathbb{J}}^{\dagger}]\) and \(\gamma=|+\rangle\!\langle+|\). Yet we can easily find that, for the canonical state representations for DW and SIC-POVM, we have: \[S^{\hat{\mathcal{E}}_{+}}_{\mathbf{DW}} =\begin{pmatrix}1&\frac{1}{7}\left(3-\sqrt{2}\right)&1&\frac{1}{7 }\left(\sqrt{2}+3\right)\\ 0&\frac{1}{7}\left(\sqrt{2}+4\right)&0&\frac{1}{7}\left(4-\sqrt{2}\right)\\ 0&0&0&0\\ 0&0&0&0\end{pmatrix}\] \[\neq\frac{1}{2}\begin{pmatrix}1&1&1&1\\ 1&1&1&1\\ 0&0&0&0\\ 0&0&0&0\end{pmatrix}=S_{\mathbf{CL}}^{\hat{\mathcal{E}}_{+}}\] Likewise, \[S^{\hat{\mathcal{E}}_{+}}_{\mathbf{SP}} =\begin{pmatrix}0.925&0.183&-0.264&0.353\\ 0.0744&0.744&0.275&0.168\\ -0.0191&0.0491&0.915&0.0947\\ 0.0199&0.0233&0.0737&0.384\end{pmatrix}\] \[\neq\frac{1}{12}\begin{pmatrix}\sqrt{3}+3&\sqrt{3}+3&\sqrt{3}+3&\sqrt{3}+3\\ \sqrt{3}+3&\sqrt{3}+3&\sqrt{3}+3&\sqrt{3}+3\\ 3-\sqrt{3}&3-\sqrt{3}&3-\sqrt{3}&3-\sqrt{3}\\ 3-\sqrt{3}&3-\sqrt{3}&3-\sqrt{3}&3-\sqrt{3}\end{pmatrix}=S_{\mathbf{CL}}^{ \hat{\mathcal{E}}_{+}}\] (here \(S_{\mathbf{CL}}^{\hat{\mathcal{E}}_{+}}\) denotes the classical retrodiction expressed in the respective representation). Indeed, for some channels one can find states for which the post-measurement probabilities violate acceptable bounds. This means \(S_{\mathbf{CL}}^{\hat{\mathcal{E}}_{\gamma}}\) fails to represent a generally valid quantum transformation. For instance, for the unitary transformation \(\mathcal{U}[\bullet]=U\bullet U^{\dagger}\) where \(U=\frac{i}{2}\left(\begin{smallmatrix}\sqrt{3}&-1\\ 1&\sqrt{3}\end{smallmatrix}\right)\), we find that \[(S_{\mathbf{CL}}^{\hat{\mathcal{U}}_{+}}v^{+})\cdot\bar{v}^{0} =\frac{1}{2}(1+\sqrt{3})>1\quad\text{(in DW)},\] \[(S_{\mathbf{CL}}^{\hat{\mathcal{U}}_{+}}v^{0})\cdot\bar{v}^{+} =\frac{1}{13}(2-5\sqrt{3})<0\quad\text{(in SIC-POVM)}.\] \(S^{\hat{\mathcal{E}}_{\gamma}}\neq S_{\mathbf{CL}}^{\hat{\mathcal{E}}_{\gamma}}\) is thus easily shown.

## Appendix F Other Transition Graphs

In this appendix, we include illustrative cases of \(S^{\mathcal{E}}\), some of their retrodictions, and the corresponding transition graphs. In FIG. 5, the transition graphs are depicted for very familiar Pauli rotations. It so happens that these unitaries translate to \(S^{\mathcal{E}}\) matrices that are permutations. This is seen in the bold bijective transition arrows. Like other unitary channels, all retrodictions are reference-prior independent. Transition graphs of such retrodictions are thus always mirror images of the corresponding forward transition graph. That said, most unitaries do not enjoy the permutative structure that exists for these SU(2) rotations.
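Before turning to the specific gates below, the identities invoked in Appendices D and E can be spot-checked numerically for \(d=2\). Here is a minimal sketch we add, assuming the standard tetrahedron construction of the qubit SIC-POVM with canonical dual \(G_{i}=(d+1)\Pi_{i}-\mathbb{1}\); this construction and all names are ours and need not match the paper's canonical frame orientation:

```python
import numpy as np

# Qubit SIC-POVM from the tetrahedron Bloch directions (assumed construction).
I2 = np.eye(2, dtype=complex)
sig = [np.array([[0, 1], [1, 0]], dtype=complex),       # sigma_x
       np.array([[0, -1j], [1j, 0]], dtype=complex),    # sigma_y
       np.array([[1, 0], [0, -1]], dtype=complex)]      # sigma_z
dirs = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)

d = 2
Pi = [(I2 + sum(s[a] * sig[a] for a in range(3))) / 2 for s in dirs]
F = [P / d for P in Pi]               # POVM elements: sum_i F_i = identity
G = [(d + 1) * P - I2 for P in Pi]    # canonical dual: rho = sum_i Tr[F_i rho] G_i

# A generic density operator alpha.
rng = np.random.default_rng(7)
X = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
alpha = X @ X.conj().T
alpha /= np.trace(alpha)

# Frame identity behind the unit-trace proof: sum_i F_i alpha G_i = identity.
assert np.allclose(sum(Fi @ alpha @ Gi for Fi, Gi in zip(F, G)), I2)

# M_alpha and the Appendix D claims.
M = np.array([[np.trace(Fi @ alpha @ Gj @ alpha) for Gj in G] for Fi in F])
assert np.allclose(M.imag, 0)                     # real entries
assert np.isclose(np.trace(M.real), 1)            # unit trace
print(np.sort(np.linalg.eigvals(M.real).real))    # eigenvalues (cf. Appendix D)
```

The printed eigenvalues should be non-negative and sum to 1, in line with Appendix D.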
The Hadamard gate, for instance, is defined by the following operator in the computational basis, and gives the respective quasi-stochastic matrix: \[U_{\mathrm{H}}\,\hat{=}\,\frac{1}{\sqrt{2}}\begin{pmatrix}1&1\\ 1&-1\end{pmatrix},\quad S^{\mathcal{H}}=\frac{1}{2}\begin{pmatrix}1&1&1&-1\\ 1&-1&1&1\\ 1&1&-1&1\\ -1&1&1&1\end{pmatrix},\] which is consistent across the canonical choices of the DW and SIC-POVM representations. Likewise, an arbitrarily chosen unitary \(U_{\mathrm{eg}}\): \[U_{\mathrm{eg}}\,\hat{=}\,\frac{1}{4}\begin{pmatrix}i\big{(}\sqrt{3}+2i\big{)}&3i\\ -3i&2+i\sqrt{3}\end{pmatrix}\] has the following quasiprobability objects: \[S_{\mathbf{DW}}^{\mathcal{U}_{\mathrm{eg}}}=\frac{1}{16}\left(\,\cdots\,\right)\]

Figure 6: Transition Graphs for the Hadamard gate and an arbitrarily chosen qubit unitary and their respective retrodictions.
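As a complement to the examples above, the channel matrices themselves are straightforward to generate numerically. The sketch below is ours; it assumes the convention \((S^{\mathcal{E}})_{ij}=\operatorname{Tr}[F_{i}\,\mathcal{E}[G_{j}]]\) and the same tetrahedron SIC frame as in the previous sketch, so the entries will generally differ from the printed matrices by a frame rotation. It verifies two structural facts: every column of \(S^{\mathcal{E}}\) sums to 1 for a trace-preserving channel, and \(v^{\mathcal{E}[\rho]}=S^{\mathcal{E}}v^{\rho}\):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
dirs = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
d = 2
Pi = [(I2 + sum(s[a] * sig[a] for a in range(3))) / 2 for s in dirs]
F = [P / d for P in Pi]
G = [(d + 1) * P - I2 for P in Pi]

def S_matrix(U):
    """(S^E)_ij = Tr[F_i E[G_j]] for the unitary channel E[.] = U . U^dagger."""
    return np.array([[np.trace(Fi @ U @ Gj @ U.conj().T).real for Gj in G]
                     for Fi in F])

U_H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
U_eg = np.array([[1j * (np.sqrt(3) + 2j), 3j],
                 [-3j, 2 + 1j * np.sqrt(3)]], dtype=complex) / 4

rng = np.random.default_rng(3)
X = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = X @ X.conj().T
rho /= np.trace(rho)

for U in (U_H, U_eg):
    S = S_matrix(U)
    assert np.allclose(S.sum(axis=0), 1)          # quasi-stochastic columns
    v_in = np.array([np.trace(Fi @ rho).real for Fi in F])
    v_out = np.array([np.trace(Fi @ U @ rho @ U.conj().T).real for Fi in F])
    assert np.allclose(S @ v_in, v_out)           # v^{E[rho]} = S^E v^rho
    print(np.round(S, 3))
```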
2302.09686
On cohomological dimension of group homomorphisms
The (co)homological dimension of homomorphism $\phi:G\to H$ is the maximal number $k$ such that the induced homomorphism is nonzero for some $H$-module. The following theorems are proven: THEOREM 1. For every homomorphism $\phi:G\to H$ of a geometrically finite group $G$ the homological dimension of $\phi$ equals the cohomological dimension, $hd(\phi)= cd(\phi)$. THEOREM 2. For every homomorphism $\phi:G\to H$ of geometrically finite groups $cd(\phi\times\phi)=2cd(\phi)$.
Aditya De Saha, Alexander Dranishnikov
2023-02-19T22:57:14Z
http://arxiv.org/abs/2302.09686v2
# On cohomological dimension of homomorphisms

###### Abstract.

The (co)homological dimension of a homomorphism \(\phi:G\to H\) is the maximal number \(k\) such that the induced homomorphism in degree \(k\) is nonzero for some \(H\)-module. The following theorems are proven.

**0.1 Theorem**.: _For every homomorphism \(\phi:G\to H\) of a geometrically finite group \(G\) the homological dimension of \(\phi\) equals the cohomological dimension, \(\operatorname{hd}(\phi)=\operatorname{cd}(\phi)\)._

**0.2 Theorem**.: _For every homomorphism \(\phi:G\to H\) of geometrically finite groups \(\operatorname{cd}(\phi\times\phi)=2\operatorname{cd}(\phi)\)._

Key words and phrases: cohomological dimension of groups, aspherical manifolds, classifying spaces. 2020 Mathematics Subject Classification: Primary 20J05, Secondary 55N25, 20J06. The second author was supported by the Simons Foundation.

It is known that for every geometrically finite group \(G\), that is, a group for which there is a finite classifying CW-complex \(BG=K(G,1)\), there is the equality \(\operatorname{hd}(G)=\operatorname{cd}(G)\). In this paper we prove this equality for homomorphisms of geometrically finite groups.

### Theorem

_For every homomorphism \(\phi:G\to H\) of a geometrically finite group \(G\) we have \(\operatorname{hd}(\phi)=\operatorname{cd}(\phi)\)._

It is known that the logarithmic law for the cohomological dimension of the product does not hold even for geometrically finite groups [Dr1], though the formula \(\operatorname{cd}(G\times G)=2\operatorname{cd}(G)\) holds [Dr2]. In this paper we extend the latter equality to group homomorphisms between geometrically finite groups.

### Theorem

_For every homomorphism \(\phi:G\to H\) between geometrically finite groups_ \[\operatorname{cd}(\phi\times\phi)=2\operatorname{cd}(\phi).\]

Among our main tools in proving the above theorems are Davis' trick with aspherical manifolds and a characterization of the cohomological dimension of a group homomorphism in terms of projective resolutions which we give in this paper.

## 2. Preliminaries

### Cohomology with local coefficients

Let \(\pi=\pi_{1}(X)\), and let \(\mathbb{Z}\pi\) be the group ring of \(\pi\). We will refer to a \(\mathbb{Z}\pi\)-module as a \(\pi\)-module. There is a bijection between local coefficient systems (locally trivial sheaves) \(\mathcal{A}\) on \(X\) and \(\pi\)-modules, where \(\mathcal{A}\) corresponds to the stalk \(A=\mathcal{A}_{x}\). A map \(f:X\to Y\) and a local coefficient system \(\mathcal{A}\) on \(Y\) define a local coefficient system on \(X\), denoted \(f^{*}\mathcal{A}\). Given a pair of coefficient systems \(\mathcal{A}\) and \(\mathcal{B}\) on \(X\), the tensor product \(\mathcal{A}\otimes\mathcal{B}\) is defined by setting \((\mathcal{A}\otimes\mathcal{B})_{x}=\mathcal{A}_{x}\otimes\mathcal{B}_{x}\). On the \(\pi\)-module level it is the tensor product of abelian groups \(A\otimes B\) with the diagonal action of \(\pi\). We recall the definition of the (co)homology groups with local coefficients via \(\pi\)-modules [Ha]: \[H^{k}(X;\mathcal{A})\cong H^{k}(\operatorname{Hom}_{\mathbb{Z}\pi}(C_{*}( \widetilde{X}),A),\delta)\] and \[H_{k}(X;\mathcal{A})\cong H_{k}(A\otimes_{\mathbb{Z}\pi}C_{*}(\widetilde{X}), 1\otimes\partial)\] where \((C_{*}(\widetilde{X}),\partial)\) is the chain complex of the universal cover \(\widetilde{X}\) of \(X\), \(A\) is the stalk of the local coefficient system \(\mathcal{A}\), and \(\delta\) is the coboundary operator.
When \(\mathcal{A}\) is a trivial coefficient system, the above definition coincides with the usual definition in terms of the chain complex \(C_{*}(X)\) of \(X\). Moreover, there is the following intermediate statement.

**2.1 Proposition**.: _Let \(p:X^{\prime}\to X\) be a normal covering and \(\mathcal{A}\) a local coefficient system on \(X\) such that \(p^{*}\mathcal{A}\) is a trivial system. Then the cohomology and homology groups of \(X\) with coefficients in \(\mathcal{A}\) can be defined by means of the chain complex \((C_{*}(X^{\prime}),\partial)\) as_ \[H^{k}(X;\mathcal{A})\cong H^{k}(\operatorname{Hom}_{\mathbb{Z}\pi^{\prime}}(C _{*}(X^{\prime}),A),\delta)\] _and_ \[H_{k}(X;\mathcal{A})\cong H_{k}(A\otimes_{\mathbb{Z}\pi^{\prime}}C_{*}(X^{ \prime}),1\otimes\partial)\] _where \(\pi^{\prime}=\pi_{1}(X)/\pi_{1}(X^{\prime})\) is the structure group of \(p\)._

Proposition 2.1 follows from the isomorphisms of (co)chain complexes \[(\operatorname{Hom}_{\mathbb{Z}\pi}(C_{*}(\widetilde{X}),A),\delta)\cong( \operatorname{Hom}_{\mathbb{Z}\pi^{\prime}}(C_{*}(X^{\prime}),A),\delta)\] and \[(A\otimes_{\mathbb{Z}\pi}C_{*}(\widetilde{X}),1\otimes\partial)\cong(A\otimes _{\mathbb{Z}\pi^{\prime}}C_{*}(X^{\prime}),1\otimes\partial).\]

We refer to [Bre] for the definition of the cup product \[\cup:H^{i}(X;\mathcal{A})\otimes H^{j}(X;\mathcal{B})\to H^{i+j}(X; \mathcal{A}\otimes\mathcal{B})\] and the cap product \[\cap:H_{i}(X;\mathcal{A})\otimes H^{j}(X;\mathcal{B})\to H_{i-j}(X; \mathcal{A}\otimes\mathcal{B}).\]

### The Kunneth Formula and UCF

For abelian groups \(A\) and \(B\) we use the notation \(A*B\) for \(\operatorname{Tor}(A,B)\).

**2.2 Theorem** (Kunneth [Bre]).: _Let \(\mathcal{A}\) and \(\mathcal{B}\) be local coefficient systems on finite complexes \(X\) and \(Y\) with \(\mathcal{A}_{x}*\mathcal{B}_{y}=0\). Then there is a natural short exact sequence_ \[0\to\bigoplus_{p}H^{p}(X;\mathcal{A})\otimes H^{r-p}(Y;\mathcal{B})\to H^{r}( X\times Y;\mathcal{A}\hat{\otimes}\mathcal{B})\to\bigoplus_{p}H^{p+1}(X; \mathcal{A})*H^{r-p}(Y;\mathcal{B})\to 0.\]

When \(Y=pt\) the Kunneth theorem turns into the Universal Coefficient Formula:

**2.3 Theorem** (UCF [Bre]).: _Let \(\mathcal{A}\) be a local coefficient system on a finite complex \(X\) and let \(B\) be an abelian group with \(A*B=0\). Then there is a natural short exact sequence_ \[0\to H^{n}(X;\mathcal{A})\otimes B\to H^{n}(X;\mathcal{A}\otimes B)\to H^{n+1 }(X;\mathcal{A})*B\to 0.\]

### Cohomological dimension of groups

By definition the cohomology of a discrete group \(\pi\) with coefficients in a \(\pi\)-module \(A\) is the cohomology of its classifying space, \(H^{*}(\pi,A)=H^{*}(B\pi;\mathcal{A})\), where \(\mathcal{A}\) is the corresponding local system. Also we will be using the notation \(H^{*}(B\pi;A)\) for \(H^{*}(\pi,A)\). It is customary to separate coefficients for group (co)homology by a comma and for the space cohomology by a semicolon.
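As a quick illustration of how the Tor term in Theorem 2.3 contributes (a standard example we add here, not from the original text), take \(X=\mathbb{RP}^{2}\), \(\mathcal{A}=\mathbb{Z}\) the trivial system, \(B=\mathbb{Z}_{2}\) (so \(A*B=\mathbb{Z}*\mathbb{Z}_{2}=0\)), and \(n=1\):

\[0\to\underbrace{H^{1}(\mathbb{RP}^{2};\mathbb{Z})}_{=0}\otimes\mathbb{Z}_{2}\to H^{1}(\mathbb{RP}^{2};\mathbb{Z}_{2})\to\underbrace{H^{2}(\mathbb{RP}^{2};\mathbb{Z})}_{=\mathbb{Z}_{2}}*\,\mathbb{Z}_{2}\to 0,\]

so \(H^{1}(\mathbb{RP}^{2};\mathbb{Z}_{2})\cong\mathbb{Z}_{2}*\mathbb{Z}_{2}=\mathbb{Z}_{2}\), even though the integral \(H^{1}\) vanishes.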
The cohomological dimension of a group \(\pi\) is defined as follows: \[\operatorname{cd}(\pi)=\max\{n\mid H^{n}(\pi,A)\neq 0\ \text{ for some }\ \mathbb{Z}\pi\text{-module }A\}.\] Similarly the homological dimension is defined as \[\operatorname{hd}(\pi)=\max\{n\mid H_{n}(\pi,A)\neq 0\ \text{ for some }\ \mathbb{Z}\pi\text{-module }A\}.\] The geometric dimension \(\operatorname{gd}(\pi)\) is defined as the minimal dimension of CW-complexes serving as a classifying space \(B\pi\) for \(\pi\): \[\operatorname{gd}(\pi)=\min\{n\mid\exists\ B\pi\ \text{ with }\ \dim B\pi=n\}.\] Consider the short exact sequence \(0\to\mathbf{I}_{\pi}\to\mathbb{Z}\pi\xrightarrow{\epsilon}\mathbb{Z}\to 0\) defined by the augmentation map \(\epsilon\). The _Berstein-Schwarz class_ \(\beta_{\pi}\in H^{1}(\pi,\mathbf{I}_{\pi})\) of a discrete group \(\pi\) is the image \(\delta(1)\) of \[1\in\mathbb{Z}=H^{0}(B\pi;\mathbb{Z})\xrightarrow{\delta}H^{1}(B\pi;\mathbf{I }_{\pi})\] under the connecting homomorphism in the coefficients long exact sequence. Here \(\mathbf{I}_{\pi}\) is the augmentation ideal of the group ring \(\mathbb{Z}\pi\) [Be],[Sch]. The Berstein-Schwarz class is universal in the following sense.

**2.4 Theorem** (Universality [DR],[Sch]).: _For any cohomology class \(\alpha\in H^{k}(\pi,L)\) there is a homomorphism of \(\pi\)-modules \(\mathbf{I}_{\pi}^{k}\to L\) such that the induced homomorphism for cohomology takes \(\beta_{\pi}^{k}\in H^{k}(\pi,\mathbf{I}_{\pi}^{k})\) to \(\alpha\), where \(\mathbf{I}_{\pi}^{k}=\mathbf{I}_{\pi}\otimes\cdots\otimes\mathbf{I}_{\pi}\) and \(\beta_{\pi}^{k}=\beta_{\pi}\cup\cdots\cup\beta_{\pi}\)._

The Universality Theorem implies

**2.5 Corollary**.: \[\operatorname{cd}(\pi)=\max\{k\mid\beta_{\pi}^{k}\neq 0\}.\]

Everywhere above the background ring \(\mathbb{Z}\) can be replaced by any commutative ring with unit \(\mathbf{k}\), and the dimensions \(\operatorname{cd}_{\mathbf{k}}(\pi)\) and \(\operatorname{hd}_{\mathbf{k}}(\pi)\) can be defined accordingly. Thus, in our notations \(\operatorname{cd}(\pi)=\operatorname{cd}_{\mathbb{Z}}(\pi)\). In this paper we use, along with \(\mathbb{Z}\), the fields \(\mathbf{k}=\mathbb{Z}_{p}\) and \(\mathbf{k}=\mathbb{Q}\).

**2.6 Proposition**.: _For all \(\mathbf{k}\) and \(\pi\),_ \[\operatorname{cd}_{\mathbf{k}}(\pi)\leq\operatorname{cd}(\pi).\]

Proof.: This follows from the fact that any \(\mathbf{k}\pi\)-module is canonically a \(\mathbb{Z}\pi\)-module. So \[\operatorname{cd}_{\mathbf{k}}\pi =\max\{n:H^{n}(\pi;M)\neq 0\text{ for some }\mathbf{k}\pi\text{-module }M\}\leq\max\{n:H^{n}(\pi;M)\neq 0\text{ for some }\mathbb{Z}\pi\text{-module }M\}=\operatorname{cd}\pi.\]

### Projective resolutions

Let \(\pi\) be a discrete group. A projective resolution \(P_{*}(\pi)\) of \(\mathbb{Z}\) for the group \(\pi\) is an exact sequence of projective \(\pi\)-modules \[\cdots\to P_{n}(\pi)\to P_{n-1}(\pi)\to\cdots\to P_{2}(\pi)\to P_{1}(\pi)\to \mathbb{Z}\pi\overset{\epsilon}{\to}\mathbb{Z}.\] The homology and cohomology of \(\pi\) with coefficients in a \(\pi\)-module \(M\) can be defined as the homology of the chain and cochain complexes \(P_{*}(\pi)\otimes_{\pi}M\) and \(\operatorname{Hom}_{\pi}(P_{*}(\pi),M)\). The cohomological dimension \(\operatorname{cd}(\pi)\) equals the shortest length of a projective resolution \(P_{*}(\pi)\) [Bro]. The same holds true for any commutative ground ring with unit \(\mathbf{k}\).

## 3. Cohomological dimension of group homomorphisms

Let \(\phi:G\to H\) be a group homomorphism, and let \(\mathbf{k}\) be a ring.
We define (see [Gr]) \[\operatorname{cd}(\phi)=\max\{n\mid 0\neq\phi^{*}:H^{n}(H,A)\to H^{n}(G,A)\ \text{ for some }\ \mathbb{Z}H\text{-module }A\}.\] Similarly the homological dimension is defined as \[\operatorname{hd}(\phi)=\max\{n\mid 0\neq\phi_{*}:H_{n}(G,A)\to H_{n}(H,A)\ \text{ for some }\ \mathbb{Z}H\text{-module }A\}.\] As before, everywhere the background ring \(\mathbb{Z}\) can be replaced with a commutative ring \(\mathbf{k}\) with identity, to get \(\operatorname{cd}_{\mathbf{k}}(\phi)\) and \(\operatorname{hd}_{\mathbf{k}}(\phi)\). Note that for the identity homomorphism \(1_{G}:G\to G\) we recover the definitions of \(\operatorname{cd}(G)\) and \(\operatorname{hd}(G)\). Note that the proof of 2.6 translates directly to the proof of the following:

**3.1 Proposition**.: _For any commutative ring \(\mathbf{k}\) with identity, and group homomorphism \(\phi:G\to H\) we have_ \[\operatorname{cd}_{\mathbf{k}}(\phi)\leq\operatorname{cd}(\phi).\]

Any group homomorphism \(\phi:G\to H\) can be realized by a cellular map \(\bar{\phi}:BG\to BH\) as \(\phi=\bar{\phi}_{*}:\pi_{1}(BG)\to\pi_{1}(BH)\). The geometric dimension \(\operatorname{gd}(\phi)\) of \(\phi\) is defined as the minimal \(n\) such that \(\bar{\phi}\) can be deformed to the \(n\)-skeleton \(BH^{(n)}\): \[\operatorname{gd}(\phi)=\min\{n\mid\bar{\phi}\sim f:BG\to BH^{(n)}\}.\] The Lusternik-Schnirelmann category \(\operatorname{cat}(f)\) of a map \(f:X\to Y\) is defined as the minimal number \(k\) such that \(X\) can be covered by \(k+1\) open sets \(U_{0},U_{1},\dots,U_{k}\) with null-homotopic restrictions \(f|_{U_{i}}:U_{i}\to Y\) for all \(i\). This definition can be extended to group homomorphisms \(\phi:G\to H\) as \(\operatorname{cat}(\phi)=\operatorname{cat}(\bar{\phi})\) [Sc]. The following is well-known (see for example Proposition 4.3 in [Dr3]).

**3.2 Proposition**.: _For a group homomorphism \(\phi:G\to H\) with realizing map \(\bar{\phi}:BG\to BH\) the following are equivalent:_

_(1) \(\operatorname{gd}(\phi)\leq n\);_

_(2) \(\operatorname{cat}(\bar{\phi})\leq n\)._

We note that in the case of the identity homomorphism \(1_{G}:G\to G\) our definition of the geometric dimension gives the category \(\operatorname{cat}(G)\), which by the Eilenberg-Ganea theorem [EG] coincides with \(\operatorname{cd}(G)\). The latter equals \(\operatorname{gd}(G)\) with a potential exception when \(\operatorname{cd}(G)=2\).

We recall that any group homomorphism \(\varphi:G\to H\) defines a chain map between projective resolutions \(\varphi_{*}:P_{*}(G)\to P_{*}(H)\) which induces homomorphisms of (co)homology groups [Bro]. A chain homotopy between two chain maps \(\varphi_{*},\psi_{*}:P_{*}(G)\to P_{*}(H)\) is a sequence of homomorphisms \(D_{*}:P_{*}(G)\to P_{*+1}(H)\) such that \(\partial D_{*}+D_{*}\partial=\varphi_{*}-\psi_{*}\). Two chain homotopic chain maps induce the same homomorphisms for homology and cohomology [Ha].

**3.3 Theorem**.: _Let \(\varphi:G\to H\) be a group homomorphism with \(\operatorname{cd}(\varphi)=n\) and let \(\varphi_{*}:(P_{*}(G),\partial_{*})\to(P_{*}(H),\partial^{\prime}_{*})\) be the chain map between projective resolutions of \(\mathbb{Z}\) for \(G\) and \(H\) induced by \(\varphi\). Then \(\varphi_{*}\) is chain homotopic to \(\psi_{*}:P_{*}(G)\to P_{*}(H)\) with \(\psi_{k}=0\) for \(k>n\)._

Proof.: Let \(d^{\prime}_{n+1}:P_{n+1}(H)\to K_{n}=\operatorname{Im}(\partial^{\prime}_{n+1})\) be the range restriction of \(\partial^{\prime}_{n+1}\).
We have the following commutative diagram: \[\begin{CD}\cdots@>{\partial_{n+3}}>{}>P_{n+2}(G)@>{\partial_{n+2}}>{}>P_{n+1}(G)@>{\partial_{n+1}}>{}>P_{n}(G)@>{\partial_{n}}>{}>\cdots\\ @.@VV{\varphi_{n+2}}V@VV{\varphi_{n+1}}V@VV{\varphi_{n}}V@.\\ \cdots@>{\partial^{\prime}_{n+3}}>{}>P_{n+2}(H)@>{\partial^{\prime}_{n+2}}>{}>P_{n+1}(H)@>{\partial^{\prime}_{n+1}}>{}>P_{n}(H)@>{\partial^{\prime}_{n}}>{}>\cdots\end{CD}\] Since \(d^{\prime}_{n+1}\partial^{\prime}_{n+2}=0\), we obtain that \(d^{\prime}_{n+1}\) is a cocycle in the cochain complex \(\operatorname{Hom}_{H}(P_{*}(H),K_{n})\). Therefore, it represents an element \([d^{\prime}_{n+1}]\in H^{n+1}(H,K_{n})\). Since \(\operatorname{cd}(\varphi)=n\), we get that \(\varphi^{*}[d^{\prime}_{n+1}]=0\). That is, \[d_{n}:=d^{\prime}_{n+1}\varphi_{n+1}:P_{n+1}(G)\to K_{n}\] is a coboundary in the cochain complex \(\operatorname{Hom}_{G}(P_{*}(G),K_{n})\). So there exists a map \(h_{n}:P_{n}(G)\to K_{n}\) such that \(h_{n}\partial_{n+1}=d_{n}\). But since \(P_{n}(G)\) is a projective \(G\)-module and \(d^{\prime}_{n+1}\) is surjective, the map \(h_{n}\) can be lifted to a \(G\)-homomorphism \(D_{n}:P_{n}(G)\to P_{n+1}(H)\) such that \(d^{\prime}_{n+1}D_{n}=h_{n}\). Note that the \(D_{n}\) as constructed has the property \[\partial^{\prime}_{n+1}(\varphi_{n+1}-D_{n}\partial_{n+1})=\partial^{\prime}_{ n+1}\varphi_{n+1}-\partial^{\prime}_{n+1}D_{n}\partial_{n+1}=\partial^{\prime}_{ n+1}\varphi_{n+1}-h_{n}\partial_{n+1}=0.\] Now assume that we have constructed \(D_{n+i-1}\) such that \[\partial^{\prime}_{n+i}(\varphi_{n+i}-D_{n+i-1}\partial_{n+i})=0.\] Since the bottom row is exact, \[\operatorname{Im}(\varphi_{n+i}-D_{n+i-1}\partial_{n+i})\subset K_{n+i}= \operatorname{Im}(\partial^{\prime}_{n+i+1}).\] Since \(P_{n+i}(G)\) is a projective \(G\)-module, the \(G\)-homomorphism \[\varphi_{n+i}-D_{n+i-1}\partial_{n+i}:P_{n+i}(G)\to K_{n+i}\] can be lifted to a \(G\)-homomorphism \(D_{n+i}:P_{n+i}(G)\to P_{n+i+1}(H)\). Since \[\partial^{\prime}_{n+i+1}D_{n+i}=\varphi_{n+i}-D_{n+i-1}\partial_{n+i} \tag{3.1}\] we obtain \[\partial^{\prime}_{n+i+1}(\varphi_{n+i+1}-D_{n+i}\partial_{n+i+1}) =\partial^{\prime}_{n+i+1}\varphi_{n+i+1}-\partial^{\prime}_{n+i +1}D_{n+i}\partial_{n+i+1}\] \[=\partial^{\prime}_{n+i+1}\varphi_{n+i+1}-(\varphi_{n+i}-D_{n+i- 1}\partial_{n+i})\partial_{n+i+1}\] \[=\partial^{\prime}_{n+i+1}\varphi_{n+i+1}-\varphi_{n+i}\partial_{ n+i+1}-D_{n+i-1}\partial_{n+i}\partial_{n+i+1}\] \[=\partial^{\prime}_{n+i+1}\varphi_{n+i+1}-\varphi_{n+i}\partial_{ n+i+1}\] \[=0\] Here we are using the facts that \(\partial\partial=0\) and \(\varphi_{*}\) is a chain map. Therefore we can inductively construct the \(D_{k}\)'s for all \(k\geq n\). Define \(D_{k}:P_{k}(G)\to P_{k+1}(H)\) to be the zero homomorphism for all \(k<n\). Therefore we have constructed a chain homotopy \(D_{*}:P_{*}(G)\to P_{*+1}(H)\) between \(\varphi_{*}\) and a chain map \(\psi_{*}\) defined by \[\psi_{k}=\varphi_{k}-\partial^{\prime}_{k+1}D_{k}-D_{k-1}\partial_{k}.\] In view of (3.1) we obtain that \(\psi_{k}=0\) for all \(k>n\).

## 4. Reduction to a map of aspherical manifolds

**4.1 Proposition**.: _Let \(F\) be any of the numerical invariants \(\operatorname{cd}_{\mathbf{k}}\), \(\operatorname{hd}_{\mathbf{k}}\), or \(\operatorname{gd}\). Then for homomorphisms \(A\stackrel{{ g}}{{\to}}G\stackrel{{ f}}{{\to}}H\)_ \[F(f\circ g)\leq\min\{F(f),F(g)\}.\]

Proof.: The statement is obvious for the cohomological and homological dimensions. Clearly, \(\operatorname{gd}(f\circ g)\leq\operatorname{gd}(f)\). We may assume that the map \(\bar{f}:BG\to BH\) realizing \(f\) is cellular. Then the inequality \(\operatorname{gd}(f\circ g)\leq\operatorname{gd}(g)\) follows.

**4.2 Corollary**.: _Suppose that \(r:BA\to BG\) is a retraction._
_Then for any \(f:BG\to BH\) and any above choice of the numerical invariant \(F\), \(F(f\circ r)=F(f)\)._

Proof.: Let \(i:BG\to BA\) be the inclusion for which \(r\circ i=1\). Then \[F(f\circ r)\leq F(f)=F((f\circ r)\circ i)\leq F(f\circ r).\]

**4.3 Corollary**.: _For every homomorphism \(\phi:G\to H\) of a geometrically finite group \(G\) there is a closed aspherical orientable manifold \(M\) containing \(BG\) as a retract and a map \(g:M\to BH\) such that \(\phi=(g|_{BG})_{*}:\pi_{1}(BG)\to\pi_{1}(BH)\) and \(F(\phi)=F(g)\) for the above choice of \(F\)._

Proof.: By Davis' trick [D], for every group \(G\) with a finite complex \(BG\) there is a closed aspherical orientable manifold \(M\) containing \(BG\) as a retract. For a retraction \(r:M\to BG\) we set \(g=\bar{\phi}\circ r\); then \(\phi=(g|_{BG})_{*}\) and \(F(\phi)=F(g)\) by Corollary 4.2.

**4.4 Theorem**.: _Let a homomorphism \(\phi:G\to H\) factor as \(\phi=j\circ\phi^{\prime}\) where \(\phi^{\prime}\) is surjective and \(j\) is injective. Then for any choice \(F\) of the numerical invariants \(\operatorname{cd}_{\mathbf{k}}\), \(\operatorname{hd}_{\mathbf{k}}\), or \(\operatorname{gd}\) we have \(F(\phi)=F(\phi^{\prime})\)._

Proof.: In view of the equality \(\operatorname{gd}(\phi)=\operatorname{cat}(\phi)\), Theorem 3.2, and Theorem 3.3 in [DK], the statement is proven when \(F=\operatorname{gd}\) and \(F=\operatorname{cd}\). By Proposition 4.1, \(\operatorname{hd}(\phi)\leq\operatorname{hd}(\phi^{\prime})\). Let \(\operatorname{hd}(\phi^{\prime})=k\). We show that \(\operatorname{hd}(\phi)\geq k\). Let \(\pi=\phi^{\prime}(G)\) and let \(\phi^{\prime}_{*}:H_{k}(G,M)\to H_{k}(\pi,M)\) be a nonzero homomorphism for some \(\pi\)-module \(M\). Let \(i\) denote the inclusion \(M\cong 1\otimes M\overset{\subset}{\to}\mathbb{Z}H\otimes_{\pi}M= \operatorname{Ind}_{\pi}^{H}M\) of \(M\) into the induced \(H\)-module. We consider the following commutative diagram generated by \(i\), the inclusion \(j:\pi\to H\), and \(\phi^{\prime}\): \[\begin{CD}H_{k}(G,M)@>{i_{*}}>{}>H_{k}(G,\operatorname{Ind}_{\pi}^{H}M)\\ @V{\phi^{\prime}_{*}}V{}V@V{\phi^{\prime}_{*}}V{}V\\ H_{k}(\pi,M)@>{i_{*}}>{}>H_{k}(\pi,\operatorname{Ind}_{\pi}^{H}M)@>{j_{*}}>{}>H_{k}(H, \operatorname{Ind}_{\pi}^{H}M).\end{CD}\] The bottom composition \(j_{*}i_{*}\) is the Shapiro Lemma isomorphism [Bro]. Therefore, \(j_{*}i_{*}\phi^{\prime}_{*}\neq 0\). Thus, the composition \[\phi_{*}=j_{*}\phi^{\prime}_{*}:H_{k}(G,\operatorname{Ind}_{\pi}^{H}M)\to H_{ k}(H,\operatorname{Ind}_{\pi}^{H}M)\] is not zero. Hence \(\operatorname{hd}(\phi)\geq k\). The same proof works for any ground ring \(\mathbf{k}\).

**4.5 Corollary**.: _Let a homomorphism \(\phi:G\to H\) factor as \(\phi=j\circ\phi^{\prime}\) where \(j\) is injective. Then for any choice \(F\) of the numerical invariants \(\operatorname{cd}_{\mathbf{k}}\), \(\operatorname{hd}_{\mathbf{k}}\), or \(\operatorname{gd}\) we have \(F(\phi)=F(\phi^{\prime})\)._

Proof.: Let \(\phi^{\prime}:G\to H^{\prime}\), \(j:H^{\prime}\to H\), and let \(\phi^{\prime\prime}:G\to\operatorname{Im}\phi^{\prime}\) denote the range restriction of \(\phi^{\prime}\). Let \(i:\operatorname{Im}(\phi^{\prime})\to H^{\prime}\) be the inclusion.
Then by Theorem 4.4 applied to \(\phi=(ji)\phi^{\prime\prime}\) and \(\phi^{\prime}=i\phi^{\prime\prime}\), \[F(\phi)=F((ji)\phi^{\prime\prime})=F(\phi^{\prime\prime})=F(i\phi^{\prime \prime})=F(\phi^{\prime}).\]

**4.6 Corollary**.: _If the group \(H\) in Corollary 4.3 is geometrically finite, then there is a closed orientable aspherical manifold \(N\) containing \(BH\) and a map \(f:M\to N\) such that \(f(BG)\subset BH\), \(\phi=(f|_{BG})_{*}:\pi_{1}(BG)\to\pi_{1}(BH)\) and \(F(\phi)=F(f)\) for the above choice of \(F\)._

Proof.: If \(H\) is geometrically finite we consider the embedding \(i_{H}:BH\to N\) of \(BH\) into a closed aspherical orientable manifold from Davis' trick. Let \(f=i_{H}\circ g:M\to N\). By Theorem 4.4, \(F(f)=F(g)=F(\phi)\).

We call a group homomorphism \(\phi:G\to H\) a _subhomomorphism_ of a group homomorphism \(\phi^{\prime}:G^{\prime}\to H^{\prime}\) if there are inclusions as subgroups \(i:G\to G^{\prime}\) and \(j:H\to H^{\prime}\) such that \(j\phi=\phi^{\prime}i\).

**4.7 Proposition**.: _Let \(\phi:G\to H\) be a subhomomorphism of \(\phi^{\prime}:G^{\prime}\to H^{\prime}\). Then \(F(\phi)\leq F(\phi^{\prime})\) for the above choice of \(F\)._

Proof.: By Corollary 4.5, the equality \(j\phi=\phi^{\prime}i\), and Proposition 4.1 we obtain \[F(\phi)=F(j\phi)=F(\phi^{\prime}i)\leq F(\phi^{\prime}).\]

**4.8 Proposition**.: _Suppose that a local coefficient system \(\mathcal{A}\) on a CW complex \(X\) pulls back to a trivial system \((p^{\prime})^{*}\mathcal{A}=X^{\prime}\times A\) on \(X^{\prime}\) by a normal covering map \(p^{\prime}:X^{\prime}\to X\). Then every homology class \(a\in H_{k}(X;\mathcal{A})\) can be detected by a cohomology class \(\gamma\in H^{k}(X;\mathcal{B})\) with some local coefficient system \(\mathcal{B}\) such that \((p^{\prime})^{*}\mathcal{B}\) is trivial on \(X^{\prime}\), that is, \(0\neq a\cap\gamma\in H_{0}(X,\mathcal{A}\otimes\mathcal{B})\)._

Proof.: Consider the chain complex \[\begin{CD}\dots @>{}>{}>C_{k+1}(X^{\prime})@>{\partial_{k+1}^{\prime}}>{}>C_{k}( X^{\prime})@>{\partial_{k}^{\prime}}>{}>C_{k-1}(X^{\prime})@>{}>{}>\dots\end{CD}.\] We set \(B^{\prime}:=C_{k}(X^{\prime})/\operatorname{Im}\partial_{k+1}^{\prime}\). Let \(\mathcal{B}\) be the corresponding local system on \(X\). Let \(p:\widetilde{X}\to X\) be the universal covering and let \(p=p^{\prime}\circ q\). The map \(q\) defines the following commutative diagram: \[\begin{CD}C_{k+1}(\widetilde{X})@>{\partial_{k+1}}>{}>C_{k}(\widetilde{X})@>{ \widetilde{f}}>{}>\widetilde{B}@>{}>{}>0\\ @V{q_{*}}V{}V@V{q_{*}}V{}V@V{q_{*}}V{}V\\ C_{k+1}(X^{\prime})@>{\partial_{k+1}^{\prime}}>{}>C_{k}(X^{\prime})@>{f^{ \prime}}>{}>B^{\prime}@>{}>{}>0\end{CD}\] where \(\widetilde{B}:=C_{k}(\widetilde{X})/\operatorname{Im}\partial_{k+1}\). Since the chain complex \(C_{*}(\widetilde{X})\) is exact, \(\widetilde{B}=C_{k}(\widetilde{X})/\operatorname{Ker}\partial_{k}= \operatorname{Im}\partial_{k}\). Then \(\widetilde{f}\) can be identified with the range restriction of \(\partial_{k}\). We define \(f=f^{\prime}\circ q_{*}:C_{k}(\widetilde{X})\to B^{\prime}\). Since \[\delta f(x)=f(\partial_{k+1}(x))=q_{*}\big{(}(\widetilde{f}\circ\partial_{k+1})(x)\big{)}=q_{*}\big{(}(\partial_{k}\circ\partial_{k+1})(x)\big{)}=0,\] the homomorphism \(f\) can be regarded as a \(k\)-cocycle with values in \(B^{\prime}\). Let \(\gamma:=[f]\in H^{k}(X;\mathcal{B})\) be the cohomology class of \(f\).
Now we prove that \(a\cap\gamma\neq 0\). In view of Proposition 2.1 we can represent the class \(a\) by a cycle \(z\in A\otimes_{\pi}C_{k}(\widetilde{X})\), where \(A\) is the stalk of \(\mathcal{A}\). Since \(z\notin\operatorname{Im}(1\otimes\partial_{k+1})\), we conclude that \((1\otimes\widetilde{f})(z)\neq 0\in A\otimes_{\pi}\widetilde{B}\). The tensor product of the above diagram with \(A\) over the group ring \(\mathbb{Z}\pi\), where \(\pi=\pi_{1}(X)\), gives the following commutative diagram with exact rows: \[\begin{CD}A\otimes_{\pi}C_{k+1}(\widetilde{X})@>{1\otimes\partial_{k+1}}>{}>A \otimes_{\pi}C_{k}(\widetilde{X})@>{1\otimes\widetilde{f}}>{}>A\otimes_{\pi} \widetilde{B}@>{}>{}>0\\ @V{1\otimes q_{*}}V{\cong}V@V{1\otimes q_{*}}V{\cong}V@V{1\otimes q_{*}}V{}V\\ A\otimes_{\pi}C_{k+1}(X^{\prime})@>{1\otimes\partial_{k+1}^{\prime}}>{}>A \otimes_{\pi}C_{k}(X^{\prime})@>{1\otimes f^{\prime}}>{}>A\otimes_{\pi}B^{ \prime}@>{}>{}>0\end{CD}\] Since the action of the group \(\pi_{1}(X^{\prime})\subset\pi\) on \(A\) is trivial, the left two vertical arrows in the diagram are isomorphisms. By the Five Lemma the right vertical arrow is an isomorphism as well. Then \[(1\otimes f)(z)=(1\otimes f^{\prime})(1\otimes q_{*})(z)=(1\otimes q_{*})(1\otimes \widetilde{f})(z)\neq 0\in A\otimes_{\pi}B^{\prime}=H_{0}(X;\mathcal{A} \otimes\mathcal{B}).\] Thus, for the cohomology class \(\gamma\) of \(f\) we have \(a\cap\gamma\neq 0\).

**4.9 Theorem**.: _For any choice of commutative ring with unit \(\mathbf{k}\), and for every homomorphism \(\phi:G\to H\) of a geometrically finite group \(G\),_ \[\operatorname{hd}_{\mathbf{k}}(\phi)=\operatorname{cd}_{\mathbf{k}}(\phi).\]

Proof.: Since the proof in the case of a general ground ring \(\mathbf{k}\) is the same as for \(\mathbb{Z}\), one may assume that everything below is performed over the ring of integers. In view of Theorem 4.4 it suffices to prove this theorem for the case when \(\phi\) is an epimorphism. Let \(r:M\to BG\) be a retraction of a closed orientable aspherical manifold as in Corollary 4.3. We define \(g=\bar{\phi}r:M\to BH\), where \(\bar{\phi}:BG\to BH\) is a map realizing \(\phi\) on the fundamental groups. In view of Corollary 4.3, it suffices to show that \(\operatorname{hd}(g)=\operatorname{cd}(g)\).

Let \(M^{\prime}=g^{*}EH\) be the pull-back of the universal covering of \(BH\). Suppose that \(\operatorname{cd}(g)=k\). Then \(g^{*}(\beta_{H}^{k})\neq 0\). By Poincaré duality with local coefficients, \(a=[M]\cap g^{*}(\beta_{H}^{k})\neq 0\). We note that the local coefficient system \(\mathcal{A}\) on \(M\) for the class \(g^{*}(\beta_{H}^{k})\) pulls back by \(g\) from the system on \(BH\) defined by the \(H\)-module \(\mathbf{I}_{H}^{k}\). The same system serves as coefficients for the dual homology class \(a\in H_{|a|}(M;\mathcal{A})\). Since \(EH\) is contractible, the system \(\mathcal{A}\) pulls back to a trivial system on \(M^{\prime}\) by the covering \(M^{\prime}\to M\). By Proposition 4.8 there is a cohomology class \(\gamma\in H^{|a|}(M;\mathcal{B})\), with a local system \(\mathcal{B}\) on \(M\) that pulls back to a trivial system on \(M^{\prime}\), detecting the homology class \(a\). Thus, \[([M]\cap\gamma)\cap g^{*}\beta_{H}^{k}\neq 0\in H_{0}(M;\mathcal{A}\otimes\mathcal{B}).\] Since all coefficients are coming from \(BH\), the induced homomorphism \(g_{*}\) for homology is well-defined. Since \(g\) is an epimorphism of the fundamental groups, \(g_{*}\) is an isomorphism in dimension \(0\).
Thus, we obtain \[0\neq g_{*}(([M]\cap\gamma)\cap g^{*}\beta_{H}^{k})=g_{*}([M]\cap\gamma)\cap \beta_{H}^{k}.\] Hence, \(g_{*}([M]\cap\gamma)\neq 0\). Note that \([M]\cap\gamma\in H_{k}(M;\mathcal{B})\). Thus, \(\operatorname{hd}(g)\geq k\).

By Theorem 3.3 the chain map \(g_{*}:C_{*}(\widetilde{M})\to C_{*}(EH)\) is chain homotopic to a chain map \(q_{*}\) with \(q_{i}=0\) for \(i>k\). Then for any \(H\)-module \(A\), \(1_{A}\otimes q_{i}=0\) for \(i>k\). Therefore, the induced homomorphism of homology is \(0\) in dimensions greater than \(k\). Thus, \(\operatorname{hd}(g)\leq k\).

We note that the proof of the inequality \(\operatorname{hd}(\phi)\leq\operatorname{cd}(\phi)\) works for any group homomorphism.

## 5. Cohomological dimension with respect to a field

For a discrete group \(\pi\) we consider the chain complex \((C_{*}(E\pi),\partial_{*})\). We use the notations \(C_{k}=C_{k}(\pi)=C_{k}(E\pi)\) for the group of \(k\)-chains and \(B_{k}=B_{k}(\pi)=\operatorname{Im}\partial_{k}\) for the group of \((k-1)\)-boundaries. Since the chain complex \((C_{*}(E\pi),\partial_{*})\) is acyclic, there are short exact sequences \[0\to B_{k+1}\to C_{k}\to B_{k}\to 0. \tag{5.1}\]

**5.1 Proposition**.: _Let \(\Lambda=\pi_{1}(M)\) be the fundamental group of a closed orientable aspherical \(n\)-manifold \(M\). Then \(H^{k}(\Lambda,B_{k}(\Lambda))=\mathbb{Z}\) for \(k\leq n\)._

Proof.: We may assume that \(M\) has one \(n\)-dimensional cell and, hence, \(C_{n}=\mathbb{Z}\Lambda\). Note that \(B_{n}\cong C_{n}\) and hence \(H^{n}(\Lambda,B_{n})=H^{n}(M;\mathbb{Z}\Lambda)=H^{n}_{c}(\widetilde{M};\mathbb{Z})=\mathbb{Z}\). By results of McMillan [Mc], McMillan-Zeeman [McZ], and Stallings [St], proved almost simultaneously in the early 1960s, we obtain \(\widetilde{M}\times\mathbb{R}^{m}\cong\mathbb{R}^{n+m}\) for some (all) \(m\geq 1\). Hence the reduced suspension \(\Sigma^{m}(\alpha\widetilde{M})\) of the one point compactification \(\alpha\widetilde{M}\) of the universal covering \(\widetilde{M}\) of \(M\) is homeomorphic to \(S^{n+m}\). Therefore, \[H^{k}(\Lambda,\mathbb{Z}\Lambda)=H^{k}_{c}(\widetilde{M};\mathbb{Z})=\bar{H}^ {k}(\alpha\widetilde{M})=0\] for \(k<n\). Since the chain groups \(C_{i}(\Lambda)=\oplus^{m_{i}}\mathbb{Z}\Lambda\) are finitely generated free \(\Lambda\)-modules, we obtain from the coefficient exact sequence (5.1) that \(H^{k-1}(\Lambda,B_{k-1})=H^{k}(\Lambda,B_{k})\) for \(k<n\). Then the equality \(H^{0}(\Lambda,B_{0})=\mathbb{Z}\) implies the equality \(H^{k}(\Lambda,B_{k})=\mathbb{Z}\) for \(k<n\).

The following proposition is a refinement of the inequality \(\operatorname{hd}(\phi)\leq\operatorname{cd}(\phi)\).

**5.2 Proposition**.: _Let \(\phi:\Gamma\to\pi\) be a group homomorphism. Assume that \(\phi_{*}:H_{k}(\Gamma,A)\to H_{k}(\pi,A)\) is a nontrivial homomorphism for some \(\pi\)-module \(A\). Then \(\phi^{*}:H^{k}(B\pi;B_{k}(\pi))\to H^{k}(B\Gamma;B_{k}(\pi))\) is a nontrivial homomorphism._

Proof.: Clearly, the natural epimorphism \(f:C_{k}(\pi)\to B_{k}(\pi)=B_{k}\) is a cocycle. Let \(\alpha=[f]\in H^{k}(B\pi;B_{k})\). We show that \(\phi^{*}(\alpha)\neq 0\). In fact we show that \(a\cap\phi^{*}(\alpha)\neq 0\) for any \(a\in H_{k}(\Gamma,A)\) with \(\phi_{*}(a)\neq 0\). Let \(z\in A\otimes_{\Gamma}C_{k}(\Gamma)\) be a cycle representing \(a\). Then \(z^{\prime}=(1\otimes\phi_{*})(z)\in A\otimes_{\pi}C_{k}(\pi)\) is a cycle representing \(\phi_{*}(a)\neq 0\). Therefore, \(z^{\prime}\notin\operatorname{Im}(1\otimes\partial_{k+1})\).
Hence, \[0\neq(1\otimes f)(z^{\prime})\in A\otimes_{\pi}B_{k}=H_{0}(\pi,A\otimes B_{k}).\] Thus, \(\phi_{*}(a)\cap\alpha\neq 0\). Note that \(\phi_{*}(a)\cap\alpha=\phi_{*}(a\cap\phi^{*}(\alpha))\). Therefore, \(a\cap\phi^{*}(\alpha)\neq 0\) and hence \(\phi^{*}(\alpha)\neq 0\).

**5.3 Theorem**.: _For any homomorphism \(\varphi:\Gamma\to\pi\) between geometrically finite groups, there is a field \(\mathbf{k}\) such that \(\operatorname{cd}(\varphi)=\operatorname{cd}_{\mathbf{k}}(\varphi)\)._

Proof.: By Corollary 4.6 we may assume that \(\varphi\) is realized by a map of closed aspherical orientable manifolds. Let \(\operatorname{cd}(\varphi)=k\). Then by Theorem 4.9, \(\operatorname{hd}(\varphi)=k\). By Proposition 5.2, \[\varphi^{*}:H^{k}(B\pi;B_{k}(\pi))\to H^{k}(B\Gamma;B_{k}(\pi))\] is a nontrivial homomorphism. By virtue of Proposition 5.1 the image \[A=\varphi^{*}(H^{k}(\pi,B_{k}(\pi)))\neq 0\] is a cyclic group.

Case 1: The group \(A=\mathbb{Z}\). Then \(A\otimes\mathbb{Q}\neq 0\), and considering the \(\mathbb{Q}\Gamma\)-module \(B_{k}(\pi)\otimes\mathbb{Q}\), we see that \(\operatorname{cd}_{\mathbb{Q}}(\varphi)\geq k\). Using Proposition 3.1 we get \(\operatorname{cd}_{\mathbb{Q}}(\varphi)=k\), and we are done.

Case 2: The group \(A\) is of finite order \(m\). Let a prime \(p\) be a divisor of \(m\). Then the homomorphism \[H^{k}(\pi,B_{k}(\pi))\otimes\mathbb{Z}_{p}\to A\otimes\mathbb{Z}_{p}\] is a nontrivial epimorphism. Since the group \(B_{k}(\pi)\subset C_{k-1}(\pi)\) is torsion free, we have \(B_{k}(\pi)*\mathbb{Z}_{p}=0\). Therefore, the Universal Coefficient Formula (UCF) holds for \(B\pi\). By the UCF there is a commutative diagram of exact sequences \[\begin{CD}0@>{}>{}>H^{k}(B\pi;B_{k}(\pi))\otimes\mathbb{Z}_{p}@>{}>{}>H^{k}(B\pi;B_{k}(\pi)\otimes\mathbb{Z}_{p})@>{}>{}>H^{k+1}(B\pi;B_{k}(\pi))*\mathbb{Z}_{p}@>{}>{}>0\\ @.@V{\varphi^{*}\otimes 1}V{}V@V{\varphi_{p}^{*}}V{}V@V{}V{}V\\ 0@>{}>{}>H^{k}(B\Gamma;B_{k}(\pi))\otimes\mathbb{Z}_{p}@>{}>{}>H^{k}(B\Gamma;B_{k}(\pi)\otimes\mathbb{Z}_{p})@>{}>{}>H^{k+1}(B\Gamma;B_{k}(\pi))*\mathbb{Z}_{p}@>{}>{}>0\end{CD}\] which implies that the homomorphism \[\varphi_{p}^{*}:H^{k}(B\pi;B_{k}(\pi)\otimes\mathbb{Z}_{p})\to H^{k}(B\Gamma;B _{k}(\pi)\otimes\mathbb{Z}_{p})\] is nonzero. Therefore, taking \(M=B_{k}(\pi)\otimes\mathbb{Z}_{p}\) as our \(\mathbb{Z}_{p}\pi\)-module, we see that the map \(H^{k}(\pi,M)\to H^{k}(\Gamma,M)\) induced by \(\varphi\) is nonzero. Therefore, \(\operatorname{cd}_{\mathbb{Z}_{p}}(\varphi)\geq k\). Again, by Proposition 3.1 we conclude that \(\operatorname{cd}_{\mathbb{Z}_{p}}(\varphi)=\operatorname{cd}(\varphi)\).

## 6. Cohomological dimension of the product

**6.1 Lemma**.: _For any two homomorphisms \(\phi:\Gamma\to\pi\) and \(\phi^{\prime}:\Gamma^{\prime}\to\pi^{\prime}\) and any commutative ring \(R\) there is the inequality \(\operatorname{cd}_{R}(\phi\times\phi^{\prime})\leq\operatorname{cd}_{R}\phi+ \operatorname{cd}_{R}\phi^{\prime}\)._

Proof.: Consider projective resolutions \(P_{*}(\Gamma)\), \(P_{*}(\Gamma^{\prime})\), \(P_{*}(\pi)\), and \(P_{*}(\pi^{\prime})\) of \(R\) over \(R\Gamma\), \(R\Gamma^{\prime}\), \(R\pi\), and \(R\pi^{\prime}\), respectively. Let \(\phi_{*}:P_{*}(\Gamma)\to P_{*}(\pi)\) and \(\phi_{*}^{\prime}:P_{*}(\Gamma^{\prime})\to P_{*}(\pi^{\prime})\) be chain maps between projective resolutions generated by \(\phi\) and \(\phi^{\prime}\). Let \(\operatorname{cd}_{R}(\phi)=m\) and \(\operatorname{cd}_{R}(\phi^{\prime})=n\). By Theorem 3.3, \(\phi_{*}\) is chain homotopic to a chain map \(\psi_{*}\) with \(\psi_{i}(P_{i}(\Gamma))=0\) for \(i>m\).
Similarly, \(\phi_{*}^{\prime}\) is chain homotopic to a chain map \(\psi_{*}^{\prime}\) with \(\psi_{j}^{\prime}(P_{j}(\Gamma^{\prime}))=0\) for \(j>n\). Then the chain map \[(\phi\times\phi^{\prime})_{*}:(P(\Gamma)\otimes_{R}P(\Gamma^{\prime}))_{*}\to( P(\pi)\otimes_{R}P(\pi^{\prime}))_{*}\] is chain homotopic to \((\psi\times\psi^{\prime})_{*}\). Note that \((\psi\times\psi^{\prime})_{k}=0\) for \(k>m+n\). Therefore, the induced homomorphism \[(\psi\times\psi^{\prime})^{*}=(\phi\times\phi^{\prime})^{*}:H^{k}(\pi\times \pi^{\prime},M)\to H^{k}(\Gamma\times\Gamma^{\prime},M)\] is \(0\) for \(k>m+n\) for any \(R(\pi\times\pi^{\prime})\)-module \(M\). Hence \(\operatorname{cd}_{R}(\phi\times\phi^{\prime})\leq m+n\).

**6.2 Lemma**.: _For any two homomorphisms \(\varphi:\Gamma\to\pi\) and \(\psi:\Gamma^{\prime}\to\pi^{\prime}\) between geometrically finite groups and for any field \(\mathbf{k}\) there is the inequality_ \[\operatorname{cd}_{\mathbf{k}}(\varphi\times\psi)\geq\operatorname{cd}_{ \mathbf{k}}(\varphi)+\operatorname{cd}_{\mathbf{k}}(\psi).\]

Proof.: We recall that there is the Kunneth Formula for the sheaf cohomology of compact spaces [Bre], which in the case of a field degenerates into a natural isomorphism. Thus, for any geometrically finite groups \(G\) and \(G^{\prime}\), any \(\mathbf{k}G\)-module \(M\), and any \(\mathbf{k}G^{\prime}\)-module \(M^{\prime}\) over a field \(\mathbf{k}\) there is an isomorphism: \[\bigoplus_{p+q=r}H^{p}(G;M)\otimes_{\mathbf{k}}H^{q}(G^{\prime};M^{\prime})\to H^{r}(G\times G^{\prime};M\otimes_{\mathbf{k}}M^{\prime}).\]

Let \(m=\operatorname{cd}_{\mathbf{k}}(\varphi)\) and let \(n=\operatorname{cd}_{\mathbf{k}}(\psi)\). Let \(M\) be a \(\mathbf{k}\pi\)-module for which \(\varphi^{*}:H^{m}(\pi;M)\to H^{m}(\Gamma;M)\) is nonzero and let \(M^{\prime}\) be a \(\mathbf{k}\pi^{\prime}\)-module for which \(\psi^{*}:H^{n}(\pi^{\prime};M^{\prime})\to H^{n}(\Gamma^{\prime};M^{\prime})\) is nonzero. We obtain the following commutative diagram: \[\begin{CD}0@>{}>{}>\bigoplus_{p+q=r}H^{p}(\pi;M)\otimes_{\mathbf{k}}H^{q}(\pi^{ \prime};M^{\prime})@>{}>{}>H^{r}(\pi\times\pi^{\prime};M\otimes_{\mathbf{k}}M^{ \prime})@>{}>{}>0\\ @.@V{\bigoplus\,\varphi^{*}\otimes\psi^{*}}V{}V@V{(\varphi\times\psi)^{*}}V{}V\\ 0@>{}>{}>\bigoplus_{p+q=r}H^{p}(\Gamma;M)\otimes_{\mathbf{k}}H^{q}(\Gamma^{ \prime};M^{\prime})@>{}>{}>H^{r}(\Gamma\times\Gamma^{\prime};M\otimes_{\mathbf{k}}M^{ \prime})@>{}>{}>0\end{CD}\] For \(r=m+n\), on the left hand side the summand with \(p=m\) and \(q=n\) gives us a nonzero homomorphism of vector spaces over \(\mathbf{k}\). By commutativity, we get that the right vertical map is also nonzero in dimension \(m+n\). Therefore, \(\operatorname{cd}_{\mathbf{k}}(\varphi\times\psi)\geq m+n\).

**6.3 Corollary**.: _For any two homomorphisms \(\varphi:\Gamma\to\pi\) and \(\psi:\Gamma^{\prime}\to\pi^{\prime}\) between geometrically finite groups and for any field \(\mathbf{k}\) there is the equality_ \[\operatorname{cd}_{\mathbf{k}}(\varphi\times\psi)=\operatorname{cd}_{\mathbf{ k}}(\varphi)+\operatorname{cd}_{\mathbf{k}}(\psi).\]

Proof.: Apply Lemma 6.1 and Lemma 6.2.

**6.4 Corollary**.: _For any homomorphism \(\varphi:\Gamma\to\pi\) between two geometrically finite groups there is the inequality \(\operatorname{cd}(\varphi\times\varphi)\geq 2\operatorname{cd}(\varphi)\)._

Proof.: By Theorem 5.3 there is a field \(\mathbf{k}\) such that \(\operatorname{cd}(\varphi)=\operatorname{cd}_{\mathbf{k}}(\varphi)\).
Then, in view of Proposition 3.1 and Corollary 6.3, \[\operatorname{cd}(\varphi\times\varphi)\geq\operatorname{cd}_{\mathbf{k}}( \varphi\times\varphi)=2\operatorname{cd}_{\mathbf{k}}(\varphi)=2\operatorname {cd}(\varphi).\]

**6.5 Theorem**.: _For any homomorphism \(\varphi:\Gamma\to\pi\) between two geometrically finite groups there is the equality \(\operatorname{cd}(\varphi\times\varphi)=2\operatorname{cd}(\varphi)\)._

Proof.: Apply Lemma 6.1 with \(R=\mathbb{Z}\) and Corollary 6.4.

## 7. On the analog of the Eilenberg-Ganea theorem for homomorphisms

The Eilenberg-Ganea equality [Bro] \(\operatorname{cd}(\pi)=\operatorname{gd}(\pi)\) holds true whenever \(\operatorname{cd}(\pi)\geq 3\). The Eilenberg-Ganea conjecture extends this equality to the case of \(\operatorname{cd}(\pi)=2\). A potential counter-example should have \(\operatorname{gd}(\pi)=3\). In [DK] a map \(f:W^{4}\to T^{3}\) of an aspherical \(4\)-manifold \(W^{4}\) onto the \(3\)-torus is constructed satisfying \(\operatorname{cd}(f)=2\) and \(\operatorname{gd}(f)=3\). Below we show that the fact that the numbers \(2\) and \(3\) in this example are the same as in the Eilenberg-Ganea conjecture is rather coincidental.

We recall the notation \[\pi_{s}^{k}(X)=\lim_{\to}[\Sigma^{n}X,S^{n+k}]\] for the stable \(k\)-cohomotopy group of \(X\).

**7.1 Proposition**.: _Suppose that the map \(f:W\to T\) induces a nontrivial homomorphism \(f^{*}:\pi_{s}^{k}(T)\to\pi_{s}^{k}(W)\) and satisfies \(\operatorname{cd}(f)<k\). Then the map \(f\times 1:W\times S^{1}\to T\times S^{1}\) induces a nontrivial homomorphism_ \[(f\times 1)^{*}:\pi_{s}^{k+1}(T\times S^{1})\to\pi_{s}^{k+1}(W\times S^{1})\] _and satisfies the inequality \(\operatorname{cd}(f\times 1)<k+1\)._

Proof.: Let \(S^{1}=I_{+}\cup I_{-}\) with \(I_{+}\cap I_{-}=S^{0}=\{x_{0},-x_{0}\}\). We use the relative Mayer-Vietoris sequence for \(\pi_{s}^{*}\) to show the isomorphism \[\pi_{s}^{i-1}(Y)=\pi_{s}^{i-1}(Y\times\{-x_{0}\})=\pi_{s}^{i-1}(Y\times S^{0},Y \times x_{0})\to\pi_{s}^{i}(Y\times S^{1},Y\times x_{0}).\] The retraction \(Y\times S^{1}\to Y\times x_{0}\) defines a splitting, natural in \(Y\): \[\pi_{s}^{i}(Y\times S^{1})=\pi_{s}^{i}(Y)\oplus\pi_{s}^{i}(Y\times S^{1},Y \times x_{0})=\pi_{s}^{i}(Y)\oplus\pi_{s}^{i-1}(Y).\] The commutative diagram \[\begin{CD}\pi_{s}^{i}(T\times S^{1})@>{\cong}>{}>\pi_{s}^{i}(T)\oplus\pi_{s}^{ i-1}(T)\\ @V{(f\times 1)^{*}}V{}V@V{f^{*}\oplus f^{*}}V{}V\\ \pi_{s}^{i}(W\times S^{1})@>{\cong}>{}>\pi_{s}^{i}(W)\oplus\pi_{s}^{i-1}(W)\end{CD}\] with \(i=k+1\) implies that the homomorphism \[(f\times 1)^{*}:\pi_{s}^{k+1}(T\times S^{1})\to\pi_{s}^{k+1}(W\times S^{1})\] is not \(0\). The inequality \(\operatorname{cd}(f\times 1)<k+1\) follows from Lemma 6.1.

**7.2 Corollary**.: _For any \(k>2\) there is a map between aspherical manifolds \(f_{k}:W^{k+1}\to T^{k}\) with \(\operatorname{cd}(f_{k})<k\) and \(\operatorname{gd}(f_{k})=k\)._

Proof.: We note that the map \(f:W^{4}\to T^{3}\) from [DK] induces a nontrivial homomorphism \(f^{*}:\pi_{s}^{3}(T)\to\pi_{s}^{3}(W^{4})\). We cross it with \(S^{1}\) and apply Proposition 7.1. Then we iterate this process until the range is the \(k\)-torus \(T^{k}\).
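As a toy sanity check of the main theorems (our addition, not from the original text; note \(\mathbb{Z}\) is geometrically finite since \(B\mathbb{Z}=S^{1}\)): take \(\varphi_{m}:\mathbb{Z}\to\mathbb{Z}\), \(x\mapsto mx\), with \(m\neq 0\). With trivial coefficients,

\[\varphi_{m}^{*}:H^{1}(\mathbb{Z},\mathbb{Z})\to H^{1}(\mathbb{Z},\mathbb{Z})\ \text{ is multiplication by }m\neq 0,\]

so \(\operatorname{cd}(\varphi_{m})=1\) (and likewise \(\operatorname{hd}(\varphi_{m})=1\), consistent with Theorem 0.1, since \(\operatorname{cd}(\varphi_{m})\leq\operatorname{cd}(\mathbb{Z})=1\)). Moreover \(\varphi_{m}\times\varphi_{m}\) acts on \(H^{2}(\mathbb{Z}^{2},\mathbb{Z})\cong\mathbb{Z}\) by \(m^{2}\neq 0\), so \(\operatorname{cd}(\varphi_{m}\times\varphi_{m})=2=2\operatorname{cd}(\varphi_{m})\), as Theorem 0.2 predicts.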
2305.19432
Observational signatures of forming young massive clusters: continuum emission from dense HII regions
Young massive clusters (YMCs) are the most massive star clusters forming in nearby galaxies and are thought to be a young analogue to the globular clusters. Understanding the formation process of YMCs leads to looking into very efficient star formation in high-redshift galaxies suggested by recent JWST observations. We investigate possible observational signatures of their formation stage, particularly when the mass of a cluster is increasing via accretion from a natal molecular cloud. To this end, we study the broad-band continuum emission from ionized gas and dust enshrouding YMCs, whose formation is followed by recent radiation-hydrodynamics simulations. We perform post-process radiative transfer calculations using simulation snapshots and find characteristic spectral features at radio and far-infrared frequencies. We show that a striking feature is long-lasting, strong free-free emission from a $\sim$ 10pc-scale HII region with a large emission measure of $\gtrsim 10^7 \mathrm{cm}^{-6} \ \mathrm{pc}$, corresponding to the mean electron density of $\gtrsim 10^3~\mathrm{cm}^{-3}$. There is a turnover feature below $\sim$ 10 GHz, a signature of the optically-thick free-free emission, often found in Galactic ultra-compact HII regions. These features come from the peculiar YMC formation process, where the cluster's gravity effectively traps photoionized gas for a long duration and enables continuous star formation within the cluster. Such large and dense HII regions show distinct distribution on the density-size diagram, apart from the standard sequence of Galactic HII regions. This is consistent with the observational trend inferred for extragalactic HII regions associated with YMCs.
Mutsuko Inoguchi, Takashi Hosokawa, Hajime Fukushima, Kei E. I. Tanaka, Hidenobu Yajima, Shin Mineshige
2023-05-30T22:01:41Z
http://arxiv.org/abs/2305.19432v2
# Observational signatures of forming young massive clusters: continuum emission from dense H ii regions

###### Abstract

Young massive clusters (YMCs) are the most massive star clusters forming in nearby galaxies and are thought to be a young analogue to the globular clusters. Understanding the formation process of YMCs leads to looking into very efficient star formation in high-redshift galaxies suggested by recent JWST observations. We investigate possible observational signatures of their formation stage, particularly when the mass of a cluster is increasing via accretion from a natal molecular cloud. To this end, we study the broad-band continuum emission from ionized gas and dust enshrouding YMCs, whose formation is followed by recent radiation-hydrodynamics simulations. We perform post-process radiative transfer calculations using simulation snapshots and find characteristic spectral features at radio and far-infrared frequencies. We show that a striking feature is long-lasting, strong free-free emission from a \(\sim 10\) pc-scale H ii region with a large emission measure of \(\gtrsim 10^{7}\)cm\({}^{-6}\) pc, corresponding to the mean electron density of \(\gtrsim 10^{3}\) cm\({}^{-3}\). There is a turnover feature below \(\sim 10\) GHz, a signature of the optically-thick free-free emission, often found in Galactic ultra-compact H ii regions. These features come from the peculiar YMC formation process, where the cluster's gravity effectively traps photoionized gas for a long duration and enables continuous star formation within the cluster. Such large and dense H ii regions show distinct distribution on the density-size diagram, apart from the standard sequence of Galactic H ii regions. This is consistent with the observational trend inferred for extragalactic H ii regions associated with YMCs.

## 1 Introduction

Young massive clusters (YMCs), also known as super star clusters, are the most massive star clusters forming in the present-day universe (Portegies Zwart et al., 2010; Longmore et al., 2014). Their typical mass (\(\gtrsim 10^{4}\) M\({}_{\odot}\)) and density (\(\gtrsim 10^{3}\) M\({}_{\odot}\) pc\({}^{-3}\)) are much larger than those of open clusters (OCs), the typical star clusters in our Galaxy (Beck, 2015). YMCs are often found in nearby starburst and interacting galaxies (e.g. Whitmore et al., 1993, 1999; Whitmore, 2000), and they are considered a young analogue to the globular clusters. Therefore, revealing the YMC formation process in nearby galaxies offers a glimpse of star formation in the early universe. Some of the latest JWST observations resolve individual YMCs in distant galaxies even at redshifts \(z\simeq 4-6\) (e.g. Vanzella et al., 2022, 2020), suggesting that YMCs should contribute a large fraction of the star-formation activity in such galaxies.

Star clusters in their early formation stage are called "embedded clusters" (Lada and Lada, 2003). Previous optical and infrared (IR) observations report candidates of YMCs in the embedded stage in nearby galaxies (e.g. Gorjian et al., 2001; Beck et al., 2002; Galliano et al., 2005). The cold gas and dust associated with these candidates are confirmed by recent high-resolution observations using the Atacama Large Millimeter/sub-millimeter Array (ALMA) (Johnson et al., 2015; Turner et al., 2017; Leroy et al., 2018; Finn et al., 2019).
More recently, the latest JWST observations report discoveries of new populations of YMCs, many of which seem to be in an early embedded stage (Whitmore et al., 2023). Recent ALMA observations by He et al. (2022) report possible candidates of individual \(\sim\) 10 pc-scale H ii regions associated with embedded YMCs in the Antennae galaxy.

Theoretical models suggest that YMCs and progenitors of globular clusters often form in highly pressurized environments (Elmegreen and Efremov, 1997; Kruijssen, 2015), and some observations point to cloud-cloud collisions as a key physical process (Tsuge et al., 2021, 2020). However, the exact mechanism of the YMC formation, including the evolution of a cluster under radiative feedback from high-mass cluster-member stars (Krumholz et al., 2019), remains uncertain. Recently, Fukushima and Yajima (2021) (FY21 hereafter) studied conditions required for the YMC formation, systematically performing a suite of radiation-hydrodynamic (RHD) simulations of the cluster formation and cloud destruction by the stellar radiative feedback (see also, e.g. Dale et al., 2012; Kim et al., 2018; He et al., 2019; Dobbs et al., 2020; Grudic et al., 2021; Menon et al., 2022). As a result of examining cases with different parameters, such as the initial cloud mass, size, and metallicity, they found a critical gas surface density above which the YMC formation occurs, \(\Sigma\simeq 200\) M\({}_{\odot}\)pc\({}^{-2}\). Recent observations report that molecular clouds with such high surface densities are ubiquitously found in distant galaxies at redshifts \(z\sim 1\) (Dessauges-Zavadsky et al., 2023). In the cases of the YMC formation, a star cluster rapidly becomes massive enough to gravitationally trap the hot photoionized gas before the entire cloud is blown away by an expanding H ii bubble. The cluster mass continues to increase even after the emergence of the H ii bubble in such a case (Bressert et al., 2012), resulting in a high star-formation efficiency (SFE), i.e., the mass ratio of the cluster to the natal molecular cloud.

FY21 suggest characteristic features of the formation stage of the YMC, i.e., the stage when the cluster mass increases through accretion from the natal cloud. The gravity from the growing massive cluster keeps the H ii bubble compact and dense around it for a long duration (Keto, 2003). We aim to derive observational signatures of such a characteristic evolutionary stage. To this end, we conduct various post-process radiation transfer calculations using simulation snapshots. In this paper, we particularly investigate the continuum spectra expected for the YMC formation stage. We study the free-free radio spectrum from dense H ii regions associated with forming YMCs, with which the mean density of the photoionized gas has been observationally inferred (e.g. Garay and Lizano, 1999; Kim and Koo, 2001). Intriguingly, Hunt and Hirashita (2009) (hereafter HH09) demonstrate that H ii regions found in Blue Compact Dwarf galaxies obey a density-size relation apart from that for Galactic H ii regions (see also Gilbert and Graham, 2007). They show that such extragalactic H ii regions are distinctively denser than their Galactic counterparts with similar sizes, suggesting a different population of H ii regions in extreme starburst environments. We show that our post-process calculations for OC- and YMC-forming cases well explain such observational trends regarding the density-size relations.

We organize the rest of the paper as follows. In Section 2, we briefly review our numerical methods on the radiation hydrodynamic simulations by FY21. We also describe the method for post-process radiation transfer calculations to give the continuum spectra based on the simulation data. In Section 3.1, we compare the two representative cases of the OC and YMC formation, which we mainly consider throughout the paper. In Section 3.2, we consider the evolution of the continuum spectra from radio to IR wavelengths, with which we consider possible observational signatures of the YMC formation. Finally, Sections 4 and 5 provide discussions and concluding remarks.

## 2 Methods

### Radiation-hydrodynamics simulations

We here briefly describe the method of the three-dimensional (3D) RHD simulations of the cluster formation and cloud destruction (see also FY21). They use a modified version of the adaptive mesh refinement (AMR) code sfumato (Matsumoto, 2007; Matsumoto et al., 2015), for which the M1 method (e.g. Levermore, 1984) is implemented to handle the radiative transfer (dubbed sfumato-m1). They adopt the reduced speed of light approximation with \(\tilde{c}=3\times 10^{-4}c\), where \(c\) is the light speed (Rosdahl et al., 2013). Photoionization of atoms, photodissociation of molecules, and radiative heating of gas and dust around a light source are solved by considering the transfer of extreme ultraviolet (EUV; \(13.6\) eV \(<h\nu\)), Lyman-Werner (LW; \(11.2\) eV \(<h\nu<13.6\) eV), far-ultraviolet (FUV; \(6\) eV \(<h\nu<13.6\) eV), and infrared (IR) photons (e.g., Hosokawa and Inutsuka, 2006). The chemistry solver is based on the scheme of Sugimura et al. (2020), and it is extended to include the network of Nelson and Langer (1997) for CO formation. The chemical network also includes O\({}^{0}\), O\({}^{+}\), and O\({}^{2+}\), whose abundances are solved with the same procedure as in Fukushima et al. (2020).

We insert sink particles representing a small star cluster when the density exceeds a threshold value and other conditions are satisfied (Federrath et al., 2010). We assign photon emissivity to each sink particle in the following two ways. One is the same as in FY21, where the luminosity and spectrum are given by averaging the stellar isochrone of Chen et al. (2015) over the Chabrier initial mass function (IMF, Chabrier, 2003). The other is stochastic sampling, where the total cluster mass is distributed into stellar mass bins in a probability-weighted manner based on the IMF (Fukushima and Yajima, 2022).

### Cases examined

Table 1 summarizes the cases considered. Each simulation run starts from a homogeneous cloud with the mass \(M_{\rm cl}\), surface density \(\Sigma_{\rm cl}\), and radius \(R_{\rm cl}\). We also assume that a turbulent velocity field fills the cloud. Throughout the paper, we particularly focus on two representative cases, M6R28st and M6R56st, for which the cloud mass is the same, \(M_{\rm cl}=10^{6}\) M\({}_{\odot}\). Case M6R28st starts from a relatively compact cloud with \(\Sigma_{\rm cl}=400\) M\({}_{\odot}\)pc\({}^{-2}\), or with the radius \(R_{\rm cl}\simeq 28\) pc. The free-fall timescale for the initial cloud is \(t_{\rm ff}\simeq 2.5\) Myr. The other case M6R56st starts from a cloud with \(\Sigma_{\rm cl}=100\) M\({}_{\odot}\)pc\({}^{-2}\), or with the radius \(R_{\rm cl}\simeq 56\) pc. The corresponding free-fall timescale is \(t_{\rm ff}\simeq 7\) Myr.
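The derived columns of Table 1 follow from \((M_{\rm cl},\Sigma_{\rm cl})\) alone via \(R_{\rm cl}=\sqrt{M_{\rm cl}/(\pi\Sigma_{\rm cl})}\) and \(t_{\rm ff}=\sqrt{3\pi/(32G\rho_{0})}\). A short script we add for reference (it assumes a mean mass per hydrogen nucleus of \(1.4\,m_{\rm H}\), which reproduces the tabulated \(n_{0}\); the constants are ours, not quoted from the paper):

```python
import numpy as np

# Physical constants (cgs) -- values we assume, not quoted from the paper.
G    = 6.674e-8      # gravitational constant [cm^3 g^-1 s^-2]
Msun = 1.989e33      # solar mass [g]
pc   = 3.086e18      # parsec [cm]
m_H  = 1.673e-24     # hydrogen mass [g]
mu   = 1.4           # assumed mean mass per H nucleus, in units of m_H
Myr  = 3.156e13      # megayear [s]

def cloud_properties(M_cl, Sigma_cl):
    """Derived columns of Table 1 from (M_cl [Msun], Sigma_cl [Msun/pc^2])."""
    R_cl = np.sqrt(M_cl / (np.pi * Sigma_cl))              # pc, Sigma = M/(pi R^2)
    rho0 = M_cl * Msun / (4.0/3.0 * np.pi * (R_cl * pc)**3)  # g cm^-3
    n0   = rho0 / (mu * m_H)                               # cm^-3
    t_ff = np.sqrt(3.0 * np.pi / (32.0 * G * rho0)) / Myr  # Myr
    return R_cl, n0, t_ff

for name, M, S in [("M6R28st", 1e6, 400), ("M6R56st", 1e6, 100),
                   ("M6R10", 1e6, 3200), ("M5R20", 1e5, 80), ("M6R60", 1e6, 88)]:
    R, n, t = cloud_properties(M, S)
    print(f"{name}: R_cl = {R:5.1f} pc, n_0 = {n:7.1f} cm^-3, t_ff = {t:4.2f} Myr")
```

Running this reproduces, e.g., \(R_{\rm cl}\simeq 28.2\) pc, \(n_{0}\simeq 309\) cm\({}^{-3}\), and \(t_{\rm ff}\simeq 2.5\) Myr for case M6R28st.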
In Section 2, we briefly review our numerical methods on the radiation hydrodynamic simulations by FY21. We also describe the method for post-process radiation transfer calculations to give the continuum spectra based on the simulation data. In Section 3.1, we compare the two representative cases of the OC and YMC formation, which we mainly consider throughout the paper. In Section 3.2, we consider the evolution of the continuum spectra from radio to IR wavelengths, with which we consider possible observational signatures of the YMC formation. Finally, Sections 4 and 5 provide discussions and concluding remarks. ## 2 Methods ### Radiation-hydrodynamics simulations We here briefly describe the method of three-dimensional (3D) RHD simulations of the cluster formation and cloud destruction (see also FY21). They use a modified version of the adaptive mesh refinement (AMR) code syntaxo(Matsumoto, 2007; Matsumoto et al., 2015), for which the M1 method (e.g. Levermore, 1984) is implemented to handle the radiative transfer (dubbed syntaxo-m1). They adopt the reduced speed of light approximation with \(\tilde{c}=3\times 10^{-4}c\), where \(c\) is the light speed (Rosdahl et al., 2013). Photoionization of atoms, photodissociation of molecules, and radiative heating of gas and dust around a light source are solved by considering the transfer of extreme ultraviolet (EUV; \(13.6\) eV \(<h\nu\)), Lyman-Werner (LW; \(11.2\) eV \(<h\nu<13.6\) eV), far-ultraviolet (FUV; \(6\) eV \(<h\nu<13.6\) eV), and infrared photon (IR) photons (e.g., Hosokawa and Inutsuka, 2006). The chemistry solver is based on the scheme of Sugimura et al. (2020), and it is extended to include the network of Nelson and Langer (1997) for CO formation. The chemical network also includes O\({}^{0}\), O\({}^{+}\), and O\({}^{2+}\), whose abundances are solved with the same procedure as in Fukushima et al. (2020). We insert sink particles representing a small star cluster when the density exceeds a threshold value and other conditions are satisfied (Federrath et al., 2010). We assign photon emissivity to each sink particle in the following two ways. One is the same as in FY21, where the luminosity and spectrum are given by taking the averages of the stellar isochrone of Chen et al. (2015) and the Chabrier initial mass function (IMF, Chabrier, 2003). The other is stochastic sampling, where the total cluster mass is distributed into stellar mass bins in a probability-weighted manner based on the IMF (Fukushima and Yajima, 2022). ### Cases examined Table 1 summarizes the cases considered. Each simulation run starts from a homogeneous cloud with the mass \(M_{\rm cl}\), surface density \(\Sigma_{\rm cl}\), and radius \(R_{\rm cl}\). We also assume that a turbulent velocity field fills the cloud. Throughout the paper, we particularly focus on two representative cases, M6R28st and M6R56st, for which the cloud mass is the same as \(M_{\rm cl}=10^{6}\) M\({}_{\odot}\). Case M6R28st starts from a relatively compact cloud with \(\Sigma_{\rm cl}=400\) M\({}_{\odot}\)pc\({}^{-2}\), or with the radius \(R_{\rm cl}\simeq 28\) pc. The free-fall timescale for the initial cloud is \(t_{\rm ff}\simeq 2.5\) Myr. The other case M6R56st starts from a cloud with \(\Sigma_{\rm cl}=100\) M\({}_{\odot}\)pc\({}^{-2}\), or with the radius \(R_{\rm cl}\simeq 56\) pc. The corresponding free-fall timescale is \(t_{\rm ff}\simeq 7\) Myr. 
As shown in Section 3.1 below, M6R28st and M6R56st represent the typical cases of yielding YMC-like and OC-like clusters, respectively. We newly perform these simulation runs using the stochastic stellar sampling developed in Fukushima and Yajima (2022) (Section 2.1), as indicated by the label "st". Kim et al. (2016) show that the stochasticity of the stellar population impacts the emissivity only when the total stellar mass is smaller than \(10^{4}\) M\({}_{\odot}\) (see also, Fukushima and Yajima, 2022). In our work, the cluster mass is much higher than this value for all the cases examined, and these cases provide essentially the same results as in FY21. Apart from the above cases, we consider three additional cases M6R10, M5R20, and M6R60, identical to those studied in FY21, where the model names have extensions of "Z0A1". For instance, case M6R10 corresponds to M6R10Z0A1 in FY21. ### Post-process continuum radiative transfer calculations For post-process radiative transfer calculations, we convert the original AMR data of a simulation run into a data cube composed of \((128)^{3}\) cells. These cells are distributed homogeneously over the whole computational domain. \begin{table} \begin{tabular}{l c c c c c} \hline \hline Model & \(M_{\rm cl}\) & \(\Sigma_{\rm cl}\) & \(R_{\rm cl}\) & \(n_{\rm 0}\) & \(t_{\rm ff}\) \\ & (M\({}_{\odot}\)) & (M\({}_{\odot}\)/pc\({}^{2}\)) & (pc) & (cm\({}^{-3}\)) & (Myr) \\ \hline M6R28st (YMC) & \(10^{6}\) & 400 & 28.2 & 309 & 2.5 \\ M6R56st (OC) & \(10^{6}\) & 100 & 56.4 & 38.6 & 7.0 \\ M6R10 (YMC) & \(10^{6}\) & 3200 & 10.0 & 7000 & 0.52 \\ M5R20 (OC) & \(10^{5}\) & 80 & 20.0 & 87 & 4.7 \\ M6R60 (OC) & \(10^{6}\) & 88 & 60.0 & 32 & 7.7 \\ \hline \end{tabular} \end{table} Table 1: Cases considered. We map all the physical quantities on the original AMR grids to the cartesian grids by linear interpolation. We solve the transfer equation at a given frequency, which ranges from \(\nu_{\rm min}=0.1\) GHz to \(\nu_{\rm max}=10^{5}\) GHz, along a line of sight chosen as an axis of the data cube, \[\frac{dI_{\nu}}{ds}=-(\kappa_{\nu,\rm ff}+\kappa_{\nu,\rm d})I_{\nu}+\kappa_{\nu,\rm ff}B_{\nu}(T)+\kappa_{\nu,\rm d}B_{\nu}(T_{\rm d}), \tag{1}\] where \(I_{\nu}\) is the intensity, \(\kappa_{\nu,\rm ff}\) the free-free opacity, \(\kappa_{\nu,\rm d}\) the dust opacity, \(B_{\nu}\) the Planck function, and \(T\) and \(T_{\rm d}\) are the gas and dust temperatures. The free-free opacity is written as \[\kappa_{\nu,\rm ff}=\frac{j_{\nu,\rm ff}}{B_{\nu}(T)}, \tag{2}\] where \(j_{\nu,\rm ff}\) is the free-free emission coefficient, which is given by \[j_{\nu,\rm ff}\simeq 5.4\times 10^{-41}\,{\rm erg\,s^{-1}\,cm^{-3}\,Hz^{-1}\,sr^{-1}}\times T_{4}^{-1/2}\,n_{e}n_{p}\exp\left(-\frac{h\nu}{k_{\rm B}T}\right)g_{\rm ff}, \tag{3}\] where \(T_{4}\equiv T/(10^{4}\,{\rm K})\), \(n_{e}\) and \(n_{p}\) are the electron and proton number densities, \(h\) the Planck constant, \(k_{\rm B}\) the Boltzmann constant, and \(g_{\rm ff}\) the Gaunt factor (e.g. Rybicki and Lightman, 1986; Draine, 2011). Regarding the dust opacity \(\kappa_{\nu,\rm d}\), we use \[\kappa_{\nu,\rm d}=0.1\left(\frac{\lambda}{300\,\mu{\rm m}}\right)^{-2}{\rm cm^{2}\,g^{-1}}, \tag{4}\] where \(\lambda\) is the wavelength. The functional form of Eq. (4) is valid for the far-IR range, and it has been typically assumed for the _Herschel_ Infrared Galactic plane survey (Hi-GAL, e.g. Molinari et al., 2010; Elia et al., 2013), which covers 70 \(\mu\)m \(\lesssim\lambda\lesssim\) 500 \(\mu\)m.
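To make the post-process step concrete, the following Python sketch (a minimal illustration of ours, not the actual pipeline) integrates Eq. (1) along a single ray through the data cube, building the free-free opacity from Eqs. (2)-(3) and the dust opacity from Eq. (4). The Gaunt factor is crudely fixed to \(g_{\rm ff}=1.2\) here, whereas the calculations evaluate it properly, and each cell is treated as uniform. Looping such rays over all pixels and frequencies yields the synthetic maps and spectra discussed in Section 3.

```python
import numpy as np

kB, hP, c = 1.380649e-16, 6.62607015e-27, 2.99792458e10  # cgs constants

def B_nu(nu, T):
    """Planck function [erg s^-1 cm^-2 Hz^-1 sr^-1]."""
    return 2.0*hP*nu**3/c**2/np.expm1(hP*nu/(kB*T))

def kappa_ff(nu, T, ne, npr):
    """Free-free opacity [cm^-1] via Kirchhoff's law, Eq. (2), with the
    emission coefficient of Eq. (3); g_ff = 1.2 is a crude constant here."""
    j = 5.4e-41*(T/1e4)**-0.5*ne*npr*np.exp(-hP*nu/(kB*T))*1.2
    return j/B_nu(nu, T)

def kappa_dust(nu, rho_d):
    """Dust opacity [cm^-1] from Eq. (4); rho_d is the dust mass density
    [g cm^-3] (the dust-to-gas ratio is uniform in the simulations)."""
    lam_um = 1e4*c/nu
    return 0.1*(lam_um/300.0)**-2*rho_d

def integrate_ray(nu, ds, ne, npr, T, Td, rho_d):
    """Solve Eq. (1) cell by cell along one ray; ds is the path length
    per cell [cm], the other arguments are 1D arrays along the ray."""
    I = 0.0
    for i in range(len(ne)):
        kf, kd = kappa_ff(nu, T[i], ne[i], npr[i]), kappa_dust(nu, rho_d[i])
        ktot = kf + kd
        if ktot <= 0.0:
            continue
        S = (kf*B_nu(nu, T[i]) + kd*B_nu(nu, Td[i]))/ktot  # source function
        I = S + (I - S)*np.exp(-ktot*ds)  # formal solution over one cell
    return I
```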
Note that in our RHD simulations the dust temperature \(T_{\rm d}\) is computed by solving the monochromatic IR transfer using the Planck mean opacity given by Laor and Draine (1993). ## 3 Results ### Bimodal evolution: OC and YMC formation Here we briefly review the evolution for cases M6R56st and M6R28st, representatives of the OC and YMC formations observed in our RHD simulations. FY21 also provide similar descriptions in more detail. Fig. 1 shows the time evolution of case M6R56st, where the surface density of the cloud is relatively low, \(\Sigma_{\rm cl}=100\) M\({}_{\odot}\)pc\({}^{-2}\). In the early phase, the initial turbulent motion controls the gas dynamics and induces filamentary structures along which the star formation occurs. An H ii region appears around the star cluster by the epoch of \(t\simeq t_{\rm ff}\). The natal cloud gradually disperses owing to the expanding H ii region during \(t_{\rm ff}\lesssim t\lesssim 2t_{\rm ff}\). We see that the distribution of star cluster particles extends during that period. Most hydrogen molecules are destroyed by stellar FUV radiation (Inoguchi et al., 2020). The star cluster is only surrounded by photoionized gas at the final snapshot of \(t\simeq 2t_{\rm ff}\). Figure 1: Birth of a star cluster and destruction of a cloud simulated in the representative case of the OC formation, M6R56st. The columns of panels display snapshots at the different epochs, (1) \(t=7.2\) Myr, (2) 9.2 Myr, (3) 10.8 Myr, and (4) 14.6 Myr from left to right. The top, middle, and bottom rows display the distributions of the total gas surface density \(\Sigma\), column density of electrons \(N_{\rm e}\), and that of hydrogen molecules \(N_{\rm H_{2}}\) measured along the line of sight, \(z\)-axis. The white dots represent the distributions of the star (cluster) particles. Fig. 2 shows the evolution of case M6R28st, where the initial cloud surface density is higher than the above as \(\Sigma_{\rm cl}=400\) M\({}_{\odot}\)pc\({}^{-2}\). While the basic evolution looks similar to that presented in Fig. 1, a denser and more compact star cluster finally appears in this case. As studied in FY21, the gravitational force from the star cluster overwhelms the thermal pressure gradient caused by the H ii regions. The electron density near the cluster centre remains much higher than in case M6R56st. We derive observational signatures of such large and dense H ii regions in Section 3.2 below. Fig. 3 presents the time evolution of the ratio of the cluster mass \(M_{*}\) to the initial cloud mass \(M_{*}/M_{\rm cl}\) and the bound fraction of the star cluster \(f_{\rm bd}\), the mass fraction of star-cluster particles that are gravitationally bound. Both cases show a common evolution in which \(M_{*}/M_{\rm cl}\) reaches \(0.1\) at \(t\simeq 1.3\,t_{\rm ff}\). However, the subsequent evolution is very different between these models. Whereas \(M_{*}/M_{\rm cl}\) saturates at \(\simeq 0.13\) at \(t\simeq 1.5\,t_{\rm ff}\) for case M6R56st, it continuously increases to reach \(0.6\) until \(t\simeq 4\,t_{\rm ff}\) for case M6R28st. Early theoretical studies generally predict that the bound fraction of the star cluster is a sensitive function of the SFE; high \(f_{\rm bd}\) is realized with the SFE exceeding a few \(\times\) 10 % after the dispersal of a natal cloud (e.g., Adams 2000; Baumgardt & Kroupa 2007; Shukirgaliyev et al. 2017). Similarly, FY21 show that the evolution when \(M_{*}/M_{\rm cl}\sim 0.1\) determines the subsequent fate, i.e., whether it leads to the YMC formation.
In the former case of M6R56st (see also Fig. 1), \(M_{*}/M_{\rm cl}\) is relatively small at \(t\simeq 1.3\,t_{\rm ff}\), so that the bound fraction remains less than 0.1 throughout the simulation. The UV radiative feedback effectively suppresses the star formation no later than \(t\simeq 1.5\,t_{\rm ff}\). In the latter case of M6R28st (see also Fig. 2), \(M_{*}/M_{\rm cl}\) becomes relatively large before the stellar UV feedback becomes effective, i.e., before the H ii bubble sweeps a large part of the original cloud. The bound fraction exceeds 0.9 at \(t\simeq 1.5\,t_{\rm ff}\), and the gravitational potential around the star cluster becomes deep enough to capture the ambient gas. Figure 3: Star formation histories against the elapsed time normalized by the free-fall timescale for the representative cases of the OC and YMC formation, M6R56st and M6R28st (panels a and b). In each panel, the solid and dashed lines represent the cluster mass normalized by the initial cloud mass \(M_{*}/M_{\rm cl}\) and mass fraction of gravitationally bound particles \(f_{\rm bd}\), respectively. Figure 2: The same as Fig. 1 but for the representative case of the YMC formation, M6R28st. The columns of panels correspond to the different epochs of (1) \(t=2.5\) Myr, (2) 3.2 Myr, (3) 3.7 Myr, and (4) 5.0 Myr from left to right, the same times as in Fig. 1 if normalized by the initial free-fall timescale \(t_{\rm ff}\). As mentioned in Section 1, FY21 show that different initial cloud properties lead to the bimodal evolution overviewed for the above two cases M6R56st and M6R28st. We thus particularly study observational signatures of these cases, supposing that M6R56st and M6R28st demonstrate the typical OC and YMC formation processes, respectively. ### Continuum emission from ionized gas and dust around forming clusters #### 3.2.1 Continuum spectra Fig. 4 shows the temporal evolution of the continuum spectra for the representative cases of the normal OC formation (top, case M6R56st) and YMC formation (bottom, case M6R28st), corresponding to the same snapshots as in Figs. 1 and 2. These panels show the contributions from regions where the local intensity is more than 0.1 % of the peak values for given frequencies. As shown later in Section 3.2.2, the size of the emitting regions is \(\sim 10\) pc for the YMC-forming case and \(\sim\) a few \(\times 10\) - \(100\) pc for the OC-forming case. The overall shape of the spectrum is common for these cases: one component of the dust thermal emission at \(\nu\gtrsim 10^{2}-10^{3}\) GHz, and the other component of the free-free emission at the lower frequencies. Despite the similarity in the spectrum shape, there are striking differences among these cases, for instance, in the time evolution. In case M6R56st of the OC formation, the emission gradually declines at all frequencies for \(1\leq t/t_{\rm ff}\leq 2.1\), during which the SFE saturates to \(\simeq 0.2\) (Fig. 3). The brightness temperature at \(\simeq 30\) GHz is always less than 10 K throughout the evolution. This is expected from the standard picture of the H ii bubble expansion; the mean density (and also the column density) of the photoionized gas decreases as the bubble expands. In case M6R28st of the YMC formation, in contrast, the emission instead continues to get stronger for almost the same duration of \(1\leq t/t_{\rm ff}\leq 2\), particularly at \(\nu\gtrsim 10^{3}\) GHz corresponding to the dust thermal emission.
The free-free component at the lower frequencies never decreases but rather increases slightly. The brightness temperature at \(\simeq 30\) GHz is much higher than for case M6R56st, staying at \(\sim 100\) K. These features come from the characteristic evolution of the YMC formation described in Section 3.1. The strong gravity of the forming massive cluster prevents the density of the photoionized gas from decreasing. Recall that the cluster mass continues to increase even after \(t=2\,t_{\rm ff}\) (Fig. 3). Another difference is found at the low-frequency end of the free-free continuum spectrum for \(\nu\lesssim 10\) GHz. Whereas the spectrum is almost flat for case M6R56st of the OC formation, it decreases with decreasing frequency for case M6R28st of the YMC formation. This corresponds to the optically thick regime of the free-free emission. Such a feature is known to appear below the turnover frequency \[\nu_{\rm to}\simeq 16.0\ {\rm GHz}\left(\frac{\rm EM}{10^{9}{\rm cm}^{-6}\ {\rm pc}}\right)^{0.48}\left(\frac{T_{\rm i}}{10^{4}\ {\rm K}}\right)^{-0.64} \tag{5}\] (e.g. Mezger & Henderson, 1967; Kurtz, 2005), where \(T_{\rm i}\) is the temperature of the ionized gas and EM represents the emission measure defined as \[{\rm EM}\equiv\int\,n_{\rm e}^{2}\ ds, \tag{6}\] for which the integration is performed along the lines of sight. Figure 4: Time evolution of the continuum spectrum emitted from a central part of cluster-forming regions. Panels (a) and (b) represent the cases of M6R56st and M6R28st, where the OC and YMC formation eventually occur. The vertical axis represents the intensity averaged over a part where the intensity is higher than 0.1 % of the peak value at a given frequency. In both panels, the darker line colors represent the snapshots in the later stages, the same snapshots as in Figs. 1 and 2. The direction of the vertical arrows denotes how the evolution proceeds. The thin dashed curves represent the reference spectrum of the Planck function \(B_{\nu}(T_{\rm eff})\) with the effective temperatures \(T_{\rm eff}=1000\) K, 100 K, and 10 K. Figure 5: EM map for the representative cases of the OC and YMC formation, M6R56st (top row) and M6R28st (bottom row). In each row, the left and right panels illustrate snapshots at the same epochs as in the second and fourth columns of Figs. 1 and 2. The lines of sight are also the same as in these figures. The dashed circle in each panel denotes the half-mass radius of the star cluster measured from its mass center. Fig. 5 shows the evolution of the EM distribution for the cases considered above. In case M6R56st of the OC formation, the peak value of the EM is \(\simeq 10^{7}\) cm\({}^{-6}\) pc in an early stage at \(t\simeq 1.3\)\(t_{\rm ff}\), and it decreases by orders of magnitude by the epoch of \(\simeq 2.1\)\(t_{\rm ff}\). This is consistent with Fig. 4 (a), where the absorption feature only appears at \(\nu\lesssim 1\) GHz in the earliest stage and gradually disappears. In contrast, Fig. 5 shows that for case M6R28st of the YMC formation the EM takes much higher peak values at \(\sim 10^{9}\) cm\({}^{-6}\) pc, which hardly decrease until the latest stage of \(t=2\)\(t_{\rm ff}\). The high EM is due to the trapping of the photoionized gas by the gravity of the forming massive cluster. The persistent turnover feature at \(\nu\lesssim 10\) GHz in Fig. 4 (b) agrees with the EM evolution presented in Fig. 5.
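Plugging the peak EM values quoted above into Eq. (5) gives a quick sense of where the turnover should sit; this is only a rough check of ours, since the spectra in Fig. 4 average over regions spanning a wide range of EM.

```python
def nu_turnover(EM, T_i=1.0e4):
    """Turnover frequency [GHz] from Eq. (5); EM in cm^-6 pc, T_i in K."""
    return 16.0*(EM/1e9)**0.48*(T_i/1e4)**-0.64

print(nu_turnover(1e7))  # ~1.8 GHz: early OC case, turnover only at the
                         # lowest frequencies, fading as the EM drops
print(nu_turnover(1e9))  # 16.0 GHz: YMC case, consistent with the
                         # persistent turnover at nu <~ 10 GHz in Fig. 4 (b)
```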
#### 3.2.2 Density-size diagram of H ii regions One of the important properties provided by radio observations is the density-size relation of H ii regions, which we consider with Fig. 6. It has been well established that the Galactic H ii regions obey the density-size relation \(n_{\rm e}\propto D^{-1}\), where \(n_{\rm e}\) is the mean electron number density of an H ii region and \(D\) is its diameter (Garay & Lizano, 1999; Kim & Koo, 2001). The same correlation continues over H ii regions with sizes that differ by more than four orders of magnitude, from \(\sim 0.1\) pc of ultra-compact H ii regions to over 100 pc of giant regions. The black dashed and dotted straight lines in Fig. 6 represent the best fits to the Galactic samples given by Garay & Lizano (1999) and Kim & Koo (2001), respectively. Their inclination is somewhat shallower than that expected from the simple Strömgren-sphere argument assuming a constant emissivity of ionizing photons, \(n_{\rm e}\propto D^{-1.5}\). Whether a similar correlation exists for extragalactic H ii regions has also been studied for decades (Kennicutt, 1984). Interestingly, some H ii regions associated with YMCs are known to lie in a distinct region of the density-size diagram. Early studies have already suggested the presence of pc-scale H ii regions with high densities above \(n_{\rm e}>10^{3}\) cm\({}^{-3}\) (Kobulnicky & Johnson, 1999; Beck et al., 2002), which are only found for ultra-compact (\(D\lesssim 0.1\) pc) regions in the Galaxy (Churchwell, 2002). HH09 compile a large sample of such H ii regions in the literature and investigate their statistical properties. The open star symbols in Fig. 6 represent their radio samples, many of which are H ii regions found in blue compact dwarf galaxies. HH09 also point out that the density-size relation of these regions approximately follows \(n_{\rm e}\propto D^{-1}\), which corresponds to the solid straight line in Fig. 6. We superpose our simulation data points in Fig. 6. To do that, we extract an area where the radio intensity at 36 GHz is higher than some threshold values, 0.1% and 1% of the peak value for each snapshot (upper and lower panels). We have confirmed that our results do not change if we shift the frequency to 150 GHz, as expected from the flat spectrum in this range (Fig. 4). We compute the effective radius of the area as \[r=\sqrt{\frac{S}{\pi}}, \tag{7}\] where \(S\) is the surface area of the extracted region, and the mean emission measure \(\langle{\rm EM}\rangle\) by simply taking the arithmetic average of EM over cells contained within the area. The mean density \(\langle n_{\rm e}\rangle\) is accordingly given by \[\langle n_{\rm e}\rangle=\sqrt{\frac{\langle{\rm EM}\rangle}{D}}, \tag{8}\] where \(D\) is the effective diameter \(D=2r\). As expected from the bimodal evolution for the OC- and YMC-forming cases (Section 3.1), the simulation data points for these cases are distributed separately in Fig. 6. For case M6R56st of the OC formation (large reddish circles), for instance, the points reside in the relatively lower right portion of each panel. The points move toward the lower-right side as time goes on, representing the standard evolution of an H ii bubble expansion where the electron number density gradually decreases with increasing time (Spitzer, 1978). These points lie near the density-size relation for the Galactic H ii regions.
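The measurement behind Eqs. (7)-(8) is straightforward to reproduce; below is a minimal Python sketch of ours, assuming a 2D EM map on a uniform grid, with the EM map itself standing in for the 36 GHz intensity when selecting the area (a reasonable proxy where the free-free emission is optically thin and flat-spectrum). The Gaussian test map is purely illustrative.

```python
import numpy as np

def density_size(EM_map, dx, threshold=1e-3):
    """Effective diameter D [pc] and mean density <n_e> [cm^-3] following
    Eqs. (7)-(8). EM_map: emission measure [cm^-6 pc]; dx: pixel size [pc];
    threshold: fraction of the peak defining the emitting area."""
    mask = EM_map > threshold*EM_map.max()
    area = mask.sum()*dx**2              # surface S of the selected area
    D = 2.0*np.sqrt(area/np.pi)          # D = 2r, with r from Eq. (7)
    EM_mean = EM_map[mask].mean()        # arithmetic average over the area
    return D, np.sqrt(EM_mean/D)         # <n_e> from Eq. (8)

# illustrative test: a Gaussian EM blob of width 5 pc peaking at 1e9 cm^-6 pc
x = np.linspace(-50.0, 50.0, 256)
X, Y = np.meshgrid(x, x)
EM = 1e9*np.exp(-(X**2 + Y**2)/(2.0*5.0**2))
print(density_size(EM, dx=x[1] - x[0]))  # D ~ 37 pc, <n_e> ~ 2e3 cm^-3
```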
Figure 6: Mean densities and diameters of H ii regions estimated from emission-measure maps at the frequency of 36 GHz. Panels (a) and (b) represent different threshold intensities for taking a central part, 0.1% and 1% of the peak value. In both panels, the large bluish and reddish circles represent cases of M6R28st and M6R56st, where the YMC and OC formation eventually occur. The darker colors represent the later evolutionary stages as in Fig. 4. We also present additional cases of M6R10, M5R20, and M6R60 with different symbols of the small squares, diamonds, and triangles. M6R10 corresponds to the YMC-formation case, and M5R20 and M6R60 to the OC-formation cases. The darker colors indicate the later stages of \(t=1.0\), \(1.3\), \(1.5\), and \(2.1\)\(t_{\rm ff}\) also for these cases. In each panel, the dashed and dotted lines represent the density-size relations of Galactic H ii regions compiled by Garay & Lizano (1999) and Kim & Koo (2001), respectively. The solid line corresponds to the relation for the extragalactic ultra-dense H ii regions given by HH09. The open red star symbols represent actual samples obtained by radio interferometric observations. Although the inclination of these points is steeper than \(n_{\rm e}\propto D^{-1}\), reducing the threshold intensity makes them spread widely around the Galactic correlation (dotted and dashed) lines. In stark contrast, data points for case M6R28st representing the YMC formation (large bluish circles) are in the relatively upper left portion of each panel, near the extragalactic radio samples by HH09. Moreover, these data points hardly move for \(t\lesssim 2\)\(t_{\rm ff}\), unlike the OC-forming case described above. These remarkable features come from the peculiar evolution of the YMC formation suggested by RHD simulations; an H ii bubble bound by the strong gravity of a newborn cluster postpones the dynamic expansion and retains the high electron density. Fig. 6 suggests that our numerical simulations successfully explain the YMC formation occurring in nearby starburst galaxies. In addition to the above two cases representing the OC and YMC formation, we also perform the same post-process calculations for other cases studied in FY21 (Table 1). As indicated by the small filled symbols representing these cases, the distribution of the simulation data is bimodal, reflecting the bimodal evolution of the OC- and YMC-forming cases. We conclude that the distinct distribution of extragalactic H ii regions on the density-size diagram is the signature of the YMC formation, i.e., that of the gravitationally bound, long-lasting dense H ii regions. #### 3.2.3 Dust thermal emission As already suggested by Fig. 4, the dust thermal emission at far-IR wavelengths also shows the characteristic features of the OC and YMC formation. Fig. 7 shows the evolution of the column-density-weighted dust temperature \(\bar{T}_{\rm d}\) and continuum emission maps at \(\lambda=70\)\(\mu\)m and 300 \(\mu\)m for the representative case of the OC formation, M6R56st. The dust temperature is an indicator of the strength of the local stellar radiation field, the heating source for grains. In fact, the dust temperature is highest near the cluster centre at each snapshot. The peak value gradually rises for \(t_{\rm ff}\lesssim t\lesssim 1.5\)\(t_{\rm ff}\), during which the cluster mass increases (Fig. 3). However, it decreases for \(1.5\)\(t_{\rm ff}\lesssim t\lesssim 2\)\(t_{\rm ff}\), because the cluster spatially expands with only a slight increase in mass (Fig. 1).
In contrast, the dust temperature \(\sim 100\) pc away from the cluster centre increases monotonically as the cluster expands and the cloud disperses. For a given snapshot, the emission at \(\lambda=70\)\(\mu\)m is more centrally concentrated than at 300 \(\mu\)m because it traces the distribution of warmer grains. The radial extent of the emission at these wavelengths gradually expands with time as the cluster expands. The emission is always strongest near the cluster centre, but its peak values substantially decrease for \(1.5\)\(t_{\rm ff}\lesssim t\lesssim 2\)\(t_{\rm ff}\). This is because the H ii bubble expansion reduces the column density of the gas and dust, in addition to the decrease of \(\bar{T}_{\rm d}\). Note that the dust-to-gas mass ratio is homogeneous within the computational domain in our simulations as in FY21. Fig. 8 also clearly shows such evolution, presenting mean radial distributions of the dust continuum emission intensities at each snapshot. Figs. 9 and 10 are the same as Figs. 7 and 8 but for the representative case of the YMC formation, M6R28st. Reflecting the different evolution outlined in Section 3.1 between these cases, these figures also show features different from those described above. For instance, Fig. 9 shows that the dust temperature \(\bar{T}_{\rm d}\) near the cluster centre is much higher than in case M6R56st for \(t\gtrsim 1.3\)\(t_{\rm ff}\), because the cluster mass continues to increase until the final snapshot. Since the cluster does not expand, unlike case M6R56st, the stellar radiation field at the cluster center gets stronger and stronger while the cluster grows in mass. The evolution of the emission maps at \(\lambda=70\) and 300 \(\mu\)m also differs from the above. Most remarkably, the maps in the later stages of \(t\gtrsim 1.5\)\(t_{\rm ff}\) show strong and centrally concentrated emission at both wavelengths. In addition to the high \(\bar{T}_{\rm d}\), this is also because of the difference in the gas density structure. Since the cluster's gravity is strong enough to trap even the photoionized gas, dense gas remains in the vicinity of the cluster. The column densities of the gas and dust hardly decrease for a long time. Fig. 10 suggests that for \(t\gtrsim 1.5\)\(t_{\rm ff}\) the peak intensities in case M6R28st are higher than those in case M6R56st by a few orders of magnitude. At the final snapshot at \(t\simeq 2\)\(t_{\rm ff}\), there are sharp cusps within the central \(\sim 10\) pc, which is in stark contrast to the flat core-like distributions seen for case M6R56st (c.f. Fig. 8). ## 4 Discussions ### Alternative signatures: line emission and absorption Our focus has been on the continuum spectra to derive characteristic signatures of the YMC formation, but this should not be the only available observational clue. Alternative probes include line emission seen against the continuum spectra, which remains to be studied further. Various lines spanning from radio to infrared wavelengths are expected to emerge from different gas phases surrounding YMCs during their formation stage. For instance, infrared emission lines from a photodissociated region (PDR) are promising candidates (e.g. Berné et al. 2022, and references therein). Similar to the dust thermal emission studied in Section 3.2.3, these line emission maps will exhibit distinct features that differentiate between OC- and YMC-forming cases. This is one of our next projects in progress.
An advantage of the line features is that they allow us to infer the gas kinematics around the forming YMC. Our preparatory analyses of the simulation data show that, for the YMC-forming cases, the cluster's strong gravity causes the infall motion of the surrounding gas toward the centre of an H ii region. Such a signature of the infall motion has indeed been reported for Galactic ultra-compact H ii regions (Sollins et al. 2005; Beltrán et al. 2006). These studies make use of the absorption feature of foreground ammonia (NH\({}_{3}\)) inversion lines against the free-free continuum emission (Ho & Townes 1983). According to our simulations, the H ii region that emerges in the YMC formation is much larger than the typical ultra-compact H ii regions by two orders of magnitude. Nonetheless, they might show similar observational signatures suggesting the infall motion in future observations. Throughout this paper, we have mainly considered the YMC formation in the nearby universe. As mentioned in Section 1, however, YMC formation may play a more significant role in star formation in the early universe. Recent observations with the JWST have found more galaxies at \(z>10\) than expected (e.g. Naidu et al. 2022; Harikane et al. 2023), which prompts intensive studies and discussions (e.g. Boylan-Kolchin 2022; Yajima et al. 2022; Lovell et al. 2023; Dekel et al. 2023). One possible explanation is that YMC formation, which achieves a high SFE, is the dominant mode of star formation (Inayoshi et al. 2022). In observations of distant galaxies, rest-frame far-IR line emissions such as [O i] 63\(\mu\)m and [C ii] 158\(\mu\)m coming from PDRs are often used because they redshift into the ALMA bands. In the same vein, [O iii] 88\(\mu\)m from H ii regions is a well-known tracer (Inoue et al. 2016; Hashimoto et al. 2018; Witstok et al. 2022). All of these line emissions can be predicted for both OC- and YMC-forming cases using our simulation data. Although individual H ii regions cannot be resolved in distant galaxies, if YMC formation is dominant for the overall star formation in galaxies at \(z>10\), it will be essential to comprehensively examine these line emissions. In such extreme environments, however, other effects that we have not considered in this paper, such as very low metallicity or a different IMF (Zackrisson et al., 2011; Chon et al., 2021, 2022; Fukushima and Yajima, 2023), may also come into play. It is also interesting to study emission line diagnostics for YMC formation, taking into account these effects. ### Possible signatures in an earlier evolutionary stage While our simulation data predict observational signatures consistent with the radio observations (e.g. Fig. 6), the simulation runs start from the idealized initial conditions. Such situations of isolated molecular clouds are not necessarily realized in the YMC formation. Although the actual triggers of the YMC formation are still uncertain, cloud-cloud collisions or large-scale converging flows have been proposed as a possible process (e.g. Maeda et al., 2021; Sameie et al., 2022; Dobbs et al., 2022). Our lack of knowledge about the realistic initial conditions of YMC formation prevents us from predicting observational signatures during the early stages of YMC formation. Rico-Villas et al. (2020) and Rico-Villas et al. (2022) conduct diagnostic analyses with many molecular lines detected toward "proto-super star cluster" candidates in NGC 253, and consider possible evolutionary sequences in the YMC formation.
They suggest that there is a "super hot core" stage in the YMC formation before the appearance of an H ii region, similar to the hot core stage in the individual high-mass star formation (Garay and Lizano, 1999; Hoare et al., 2007). Figure 7: Dust continuum emission maps for the representative case of the OC formation, M6R56st. The top, middle, and bottom rows show the time sequence of the distributions of the column-density weighted dust temperature, and emission intensities at the wavelengths \(\lambda=70\) and \(300\mu\)m. The columns of panels show the snapshots at the same epochs as in Fig. 1. In each panel, the dashed circle denotes the half-mass radius of the star cluster measured from its mass center. Figure 8: Mean radial distributions of the dust continuum emission intensities for the representative case of the OC formation, M6R56st. In each panel, the points represent the intensities averaged over rings centered on the cluster mass center. The blue and red colors correspond to the different wavelengths of \(\lambda=70\) and \(300\mu\)m. The panels show the snapshots at the same epochs as in Figs. 1 and 7. The vertical dashed line in each panel represents the half-mass radius of the star cluster. To improve the accuracy of our predictions and better compare them with such observations, we need to update our simulations to reflect more realistic initial conditions. Nevertheless, we expect that our results on the observational signatures of the gravitationally-bound H ii regions should not depend on details of the initial conditions, as long as the key process enabling continuous star formation against the stellar UV feedback is accurately captured. This also remains to be examined in future studies. ## 5 Summary and Conclusions We have studied possible observational signatures of YMCs in their formation stage, i.e., the evolutionary stage where the cluster mass grows through accretion from the surrounding medium. In particular, we have considered the continuum spectra from \(\sim 10\) pc-scale dense H ii regions created around the accreting YMC, as suggested by recent RHD simulations (FY21). We have performed post-process radiative transfer calculations using the simulation snapshots to derive the spectra from radio to far-IR frequencies. For comparison, we have also performed the same post-process calculations for the cases where normal OCs eventually appear instead of YMCs. Our findings are summarized as follows. For both simulation runs that represent OC and YMC formation, the continuum spectrum commonly consists of two components: one dominated by the thermal dust emission at \(\nu\gtrsim 10^{2}-10^{3}\) GHz, and the other dominated by the free-free emission at the lower frequencies. However, there are remarkable differences between these cases, as illustrated in Fig. 4. Figure 9: The same as in Fig. 7 but for the representative case of the YMC formation, M6R28st. The columns of panels show the snapshots at the same epochs as in Fig. 2. Figure 10: The same as Fig. 8 but for the representative case of the YMC formation, M6R28st. The panels show the snapshots at the same epochs as in Figs. 2 and 9. The spectrum for the normal OC-forming case (M6R56st) represents the standard picture of an H ii bubble expansion, where the electron density gradually decreases as the bubble expands.
The intensities also decline from radio to infrared frequencies for \(t_{\rm ff}\lesssim t\lesssim 2t_{\rm ff}\), during which the radiative feedback by the expanding bubble destroys the natal cloud and quenches the star formation. For the YMC-forming case (M6R28st), in contrast, the intensities gradually rise for the same time interval normalized by \(t_{\rm ff}\) and are always much stronger than those for the OC-forming case. Moreover, there is a turnover feature below \(\sim 10\) GHz resulting from the large emission measure. These all come from the peculiar evolution in the YMC-forming case, i.e., the rapid star formation under weak radiative feedback, which leads to the formation of a dense H ii region that remains trapped by the cluster's gravitational field. Previous radio observations provide density-size relations of Galactic and extragalactic H ii regions, which have been investigated to infer the variety of high-mass star- and cluster-forming environments (Garay & Lizano, 1999; Kim & Koo, 2001). Remarkably, it has been pointed out that some extragalactic H ii regions are distributed separately from Galactic H ii regions (HH09). They are large (\(\sim 10\) pc), dense (\(\sim 10^{4}\) cm\({}^{-3}\)), and associated with YMCs. We have superposed our simulation data points on the density-size diagram based on the synthetic continuum emission maps at 36 GHz (Fig. 6). Reflecting the qualitatively different evolution between the OC- and YMC-forming cases, the corresponding data points are distributed distinctly on the diagram. For the normal OC-forming cases, the simulation points are situated close to the Galactic density-size relationships. As the evolution progresses to later stages, the points shift to the less-dense and larger-size portion, in accordance with the standard expansion law of H ii bubbles. The points for the YMC-forming cases, in contrast, scatter separately from the OC-forming cases and near the observational data of extragalactic radio samples. These points hardly move on the diagram regardless of \(t/t_{\rm ff}\), reflecting that the cluster's gravity traps an H ii bubble and prevents it from dynamically expanding for a while. We propose that previous radio observations have already captured the signatures of the YMC formation suggested by recent RHD simulations. We suggest that similar trends to the above should also be found in the far-IR continuum dust thermal emission maps (Figs. 7-10). We have investigated the evolution of the maps at \(\lambda=70\)\(\mu\)m and \(300\)\(\mu\)m. For the normal OC-forming case, the radial emission distribution gradually broadens with time. As a result, the emission peak associated with the cluster center shows a decrease in intensity. A flat, core-like distribution persists within the half-mass radius of the cluster. This can be attributed to the dynamic expansion of the H ii bubble, as well as to the outward expansion of the star cluster induced by the dispersal of the surrounding molecular cloud. The YMC-forming case, in contrast, shows that the central emission peak does not weaken but rather becomes stronger over time. The radial emission distribution evolves to show a remarkably high degree of central concentration, characterized by a sharp peak at the center of the cluster. This is again due to the emergence of a dense H ii bubble gravitationally bound by the nascent YMC, which continues to accrete mass.
Our study suggests that the peculiar YMC formation process found in recent RHD simulations, i.e., the formation of gravitationally-bound dense H ii regions, should leave observational signatures. We conclude that such signatures have been captured by extragalactic radio observations. As discussed in Section 4, our work can readily be extended to further studies that link simulations and upcoming observations. In particular, considering observational signatures of emission and absorption lines associated with the YMC formation is of great importance, and it is also a target of our next work. ## Acknowledgements The authors sincerely thank Jeong-Gyu Kim, Hiroyuki Hirashita, Eric Keto, Rolf Kuiper, Yurina Nakazato, Akio Inoue, and Shu-ichiro Inutsuka for their comments and discussions. The numerical simulations were carried out on XC50 Aterui II at the Center for Computational Astrophysics (CfCA) of the National Astronomical Observatory of Japan. This research could never have been accomplished without the support of Grants-in-Aid for Scientific Research (TH: 19H01934, 21H00041; HF: 23K13139) from the Japan Society for the Promotion of Science. ## Data Availability The data underlying this article will be shared on reasonable request to the corresponding author.
2310.03804
A decomposition of light's spin angular momentum density
Light carries intrinsic spin angular momentum (SAM) when the electric or magnetic field vector rotates over time. A familiar vector equation calculates the direction of light's SAM density using the right hand rule with reference to the electric and magnetic polarisation ellipses. Using Maxwell's equations, this vector equation can be decomposed into a sum of two distinct terms, akin to the well-known Poynting vector decomposition into orbital and spin currents. We present the first general study of this spin decomposition, showing that the two terms, which we call canonical and Poynting spin, are chiral analogies to the canonical and spin momenta of light in its interaction with matter. Both canonical and Poynting spin incorporate spatial variation of the electric and magnetic fields and are influenced by optical orbital angular momentum (OAM). The decomposition allows us to show that the OAM of a linearly polarised vortex beam can impart a first-order preferential force to chiral matter in the absence of spin.
Alex J. Vernon, Sebastian Golat, Claire Rigouzzo, Eugene A. Lim, Francisco J. Rodríguez-Fortuño
2023-10-05T18:00:08Z
http://arxiv.org/abs/2310.03804v3
# A decomposition of light's spin angular momentum density ###### Abstract Light carries intrinsic spin angular momentum (SAM) when the electric or magnetic field vector rotates over time. A familiar vector equation calculates the direction of light's SAM density using the right hand rule with reference to the electric and magnetic polarisation ellipses. Using Maxwell's equations, this vector equation can be decomposed into a sum of two distinct terms, akin to the well-known Poynting vector decomposition into orbital and spin currents. We present the first general study of this spin decomposition, showing that the two terms, which we call canonical and Poynting spin, are chiral analogies to the canonical and spin momenta of light in its interaction with matter. Both canonical and Poynting spin incorporate spatial variation of the electric and magnetic fields and are influenced by optical orbital angular momentum (OAM). The decomposition allows us to show that the OAM of a linearly polarised vortex beam can impart a first-order preferential force to chiral matter in the absence of spin. ## 1 Introduction Optical angular momentum comprises physically distinct orbital and spin parts [1, 2, 3, 4]. The orbital component can be either extrinsic or intrinsic and is associated with the spatial structure of the light field. Orbital angular momentum (OAM) is most famously stirred into light by optical vortices, which have helical phase fronts twisted around a one-dimensional line singularity [5, 6, 7, 8]. Spin angular momentum (SAM), meanwhile, develops as the electric and magnetic field vectors rotate during oscillation [9], and is an intrinsic quantity, independent of co-ordinate origin. Both OAM and SAM arise from chiral structures in light, and as such, can access the chirality of matter, enhancing, pushing, twisting, and torquing with or without a parity bias [10, 11]. Many commonly observed chiroptical interactions, such as circular dichroism, arise from the coupling of chiral matter to a photon's spin state. Recent attention has also turned to OAM-dependent photon absorption and scattering, with a range of effects theorised [12, 13, 14, 15]. Chiroptical effects involving OAM often hinge on the strong longitudinal field component introduced by beam focussing [16, 17]. Unlike for photons with opposite spins, however, preferential absorption of photons with different OAM handedness does not occur in the dipole approximation for paraxial light [18, 19]. A well-known orbital and spin separation can be performed locally in the Poynting vector [20] (kinetic momentum density, when divided by \(c^{2}\)). The Poynting vector may be expressed as a sum of the orbital current, pointing in the direction of canonical momentum, and the spin current, proportional to the virtual spin momentum. This is a physically meaningful decomposition, tying into the famous Abraham-Minkowski dilemma [21, 22]. Surprisingly to most readers, a similar vector decomposition can be performed on the electromagnetic spin density \(\mathbf{S}\), which is split into, perhaps confusingly, orbital-like and spin-like contributions to the total electromagnetic spin. We name these two terms canonical and Poynting spin.
This little-known decomposition is the focus of our work--amidst discussion by Shi et al. who, without giving the complete decomposition explicitly, linked the two terms to longitudinal and transverse spin in some specific cases [23, 24, 25], no clear picture exists in the literature on the physical significance of the two terms in general, in the same way as for the decomposed Poynting vector. The decomposition as we present it has been expressed previously (e.g. [26]), but with almost no discussion of the function of its terms as component parts of \(\mathbf{S}\). We therefore have two objectives: to clarify and understand the meaning of the orbit-like and spin-like components of \(\mathbf{S}\), and to emphasise their value for the research community in interpreting electromagnetic SAM density. For example, by use of the Maxwell stress tensor, one understands that the canonical and Poynting spin vectors are responsible for different interactions between light and chiral particles. While the two decomposed spin vectors are organised independently in general monochromatic light, a remarkable implication lies in beams carrying OAM, such as vortex beams. OAM imparts a longitudinal component to both canonical and Poynting spins, even in the absence of longitudinal total spin. By applying the decomposition to a linearly polarised vortex beam, we show that a total spin-free longitudinal chiral force exists--a force which can act even under the dipole approximation, despite originating from the beam's OAM. We generalise the spin decomposition to time-dependent fields and four-vector representation, unlocking a deeper understanding of the behaviour of the two terms. We also show that the ability to split spin into two terms appears to be a general feature of wave fields, similarly to the decomposition of the Poynting vector, including those of the linearised theories of gravity [27] and acoustics, which have distinctly different vector structures. Throughout, we use bold latin letters \(\mathbf{E}\) and \(\mathbf{H}\) to represent time-harmonic complex electromagnetic field phasors, which are a function of position only [e.g. \(\mathbf{E}(\mathbf{r})=\mathbf{E}_{0}\exp(i\mathbf{k}\cdot\mathbf{r})\)], and scripted characters \(\boldsymbol{\mathcal{E}}\) and \(\boldsymbol{\mathcal{H}}\) denoting real, time-varying field vectors (e.g. \(\boldsymbol{\mathcal{E}}(\mathbf{r},t)=\text{Re}\{\mathbf{E}(\mathbf{r})\exp(-i\omega t)\}\)). ## 2 Electromagnetic spin decomposition We will first contextualise the decomposition of electromagnetic SAM density by recalling a well-known orbital and spin decomposition which exists in free-space and time-harmonic fields for the time-averaged Poynting vector, \[\mathbf{P}=\frac{1}{2}\text{Re}\{\mathbf{E}^{*}\times\mathbf{H}\}, \tag{1}\]
which represents the flux of active power in the light field. Figure 1: The component parts of light's total spin angular momentum density, visualised at a single point (red circle) in the 3D interference of monochromatic plane waves. Enlarged electric and magnetic field ellipses and coloured vector arrows, representing the total spin \(\mathbf{S}\), electric and magnetic spins \(\mathbf{S}_{e}\) and \(\mathbf{S}_{m}\), and the canonical \(\mathbf{s}_{c}\) and Poynting spins \(\mathbf{s}_{p}\) of the decomposition Eq. (4), are plotted. The electric and magnetic spin vectors are normal to the \(\mathbf{E}\) and \(\mathbf{H}\) polarisation ellipses, according to the right-hand rule and the sense of rotation of the instantaneous field vectors (see separated diagrams below the main combined image). Projected on the \(xy\) plane are the Poynting vector streamlines, which in this case organise a clear curling structure. This provides a visual aid for Poynting spin (large black arrow), which is informed by three-dimensional curling of the Poynting vector. In structured light where the \(\mathbf{E}\) and \(\mathbf{H}\) ellipses are de-coupled, the canonical spin and therefore chiral pressure (Eq. (5)) can point in a completely different direction to any of the total spin or its electric and magnetic contributions. The Poynting vector is separated into the orbital and spin currents [20, 22], \[\mathbf{P}=\underbrace{\frac{c^{2}}{4\omega}\text{Im}\{\epsilon_{0}\mathbf{E}^{*}\cdot(\nabla)\mathbf{E}+\mu_{0}\mathbf{H}^{*}\cdot(\nabla)\mathbf{H}\}}_{\mathbf{p}_{o}}+\underbrace{\frac{c^{2}}{2}\nabla\times\overbrace{\frac{1}{4\omega}\text{Im}\{\epsilon_{0}\mathbf{E}^{*}\times\mathbf{E}+\mu_{0}\mathbf{H}^{*}\times\mathbf{H}\}}^{\mathbf{S}}}_{\mathbf{p}_{s}}, \tag{2}\] defined using the complex phasors \(\mathbf{E}\) and \(\mathbf{H}\). The inner product notation is \(\mathbf{a}\cdot(\nabla)\mathbf{b}=a_{x}\nabla b_{x}+a_{y}\nabla b_{y}+a_{z}\nabla b_{z}\). When divided by \(c^{2}\), \(\mathbf{P}\) has units of a momentum density and is termed the kinetic momentum density, while the orbital current \(\mathbf{p}_{o}\) becomes the canonical momentum density, corresponding to the expectation value of linear momentum carried by photons at each point in space (unlike kinetic momentum, the canonical momentum is a directly measurable quantity). The net flow of the instantaneous Poynting vector into or out of a volume relates to the change in electromagnetic energy density over time, according to a continuity equation. A related conserved quantity of light is its chirality, which has its own continuity equation and an associated chirality flux [28, 29, 30]. Chirality \(\chi\) is one of an infinite hierarchy of conserved quantities in linear, non-dispersive media, to which helicity \(h\) also belongs [9, 30]--helicity and chirality are often conflated because they (and their fluxes) are proportional by a factor of \(\omega^{2}\) in monochromatic waves, though in general they are distinct. In monochromatic light the flux of helicity corresponds to the time-averaged SAM density (often referred to simply as 'spin'), given by, \[\mathbf{S}=\frac{1}{4\omega}\text{Im}\{\epsilon_{0}\mathbf{E}^{*}\times\mathbf{E}+\mu_{0}\mathbf{H}^{*}\times\mathbf{H}\}\equiv\mathbf{S}_{e}+\mathbf{S}_{m}. \tag{3}\] Spin's individual electric and magnetic contributions point in the normal direction to the electric and magnetic polarisation ellipses, drawn over time by the instantaneous vectors \(\boldsymbol{\mathcal{E}}(\mathbf{r},t)\) and \(\boldsymbol{\mathcal{H}}(\mathbf{r},t)\). Like the Poynting vector, and using the same procedure, spin \(\mathbf{S}\) may be split into a sum of two vectors: the canonical spin \(\mathbf{s}_{c}\) and the Poynting spin \(\mathbf{s}_{p}\). This decomposition is our main focus and is, explicitly, \[\mathbf{S}=\underbrace{\frac{1}{4\omega^{2}}\text{Re}\{\mathbf{E}^{*}\cdot(\nabla)\mathbf{H}-\mathbf{H}^{*}\cdot(\nabla)\mathbf{E}\}}_{\mathbf{s}_{c}}+\underbrace{\frac{1}{2\omega^{2}}\nabla\times\overbrace{\frac{1}{2}\text{Re}\{\mathbf{E}^{*}\times\mathbf{H}\}}^{\mathbf{P}}}_{\mathbf{s}_{p}}.
\tag{4}\] Any subsequent mention of 'canonical spin' in this work always refers to the first term, \(\mathbf{s}_{c}=\frac{1}{4\omega^{2}}\text{Re}\{\mathbf{E}^{*}\cdot(\nabla)\mathbf{H}-\mathbf{H}^{*}\cdot(\nabla)\mathbf{E}\}\), and 'Poynting spin' to the second, \(\mathbf{s}_{p}=\frac{1}{2\omega^{2}}\nabla\times\mathbf{P}\). Their sum \(\mathbf{S}\) (Eq. (3)) we refer to as total spin. This naming scheme reflects that of the decomposed kinetic momentum, split into canonical momentum and spin momentum. ## 3 Physical interpretation Previous works [23, 24] have linked the canonical \(\mathbf{s}_{c}\) and Poynting spin \(\mathbf{s}_{p}\) vectors to the longitudinal and transverse spin of light, respectively. However, this is only true in a very limited number of cases such as linearly polarised evanescent waves. It cannot be a general result because well-defined longitudinal and transverse directions do not exist for many-plane-wave or multiple beam interference. In fact, it does not hold even when there is a well-defined longitudinal direction: a circularly polarised evanescent wave has both longitudinal and transverse spin components contained in the Poynting spin vector. One way to gain an understanding of the physical significance of the canonical and Poynting spins is to study the interactions between light and chiral matter. Chiral light can exert preferential forces on enantiomers with opposite handedness, with direct proportionality to the light's SAM. When light shines on a particle much smaller than the wavelength (Rayleigh regime) it gets polarised and acquires an electric \(\mathbf{p}\) and a magnetic \(\mathbf{m}\) dipole. In the linear regime, the dipole moments are proportional to the incident fields, \(\mathbf{p}=\alpha_{\text{e}}\varepsilon\mathbf{E}+i\alpha_{\text{c}}\mathbf{H}/c\) and \(\mathbf{m}=\alpha_{\text{m}}\mathbf{H}-i\alpha_{\text{c}}\mathbf{E}/\eta\), where \(\alpha_{\text{e}}\), \(\alpha_{\text{m}}\) and \(\alpha_{\text{c}}\) are the electric, magnetic, and chiral polarisabilities of the particle. The chiral polarisability \(\alpha_{\text{c}}\) is a pseudoscalar so it changes sign between the two enantiomers (mirror-reflected versions), and vanishes unless the matter is chiral. Under these assumptions, the illuminating light exerts a chiral optical force on the particle (which changes sign for the different enantiomers) given by [31, 32, 33, 34, 35]: \[\mathbf{F}_{\text{chiral}}=\underbrace{\omega\nabla[\text{Re}(\alpha_{\text{c}})h]}_{\text{helicity gradient}}+\underbrace{2\omega k[\text{Im}(\alpha_{\text{c}})\mathbf{s}_{\text{c}}]}_{\text{chiral pressure}}-\underbrace{\omega\frac{k^{4}}{3\pi}[\text{Re}(\alpha_{\text{e}}^{*}\alpha_{\text{c}})\mathbf{S}_{\text{e}}+\text{Re}(\alpha_{\text{m}}^{*}\alpha_{\text{c}})\mathbf{S}_{\text{m}}]}_{\text{spin recoil}}, \tag{5}\]
Poynting spin contributes (together with canoncial spin) only to the higher-order recoil force caused by the unbalanced radiation pattern of an electric-magnetic dipole. This also implies that in an electromagnetic field whose SAM is pure Poynting spin (i.e., zero canonical spin), the particle will experience no chiral pressure, only helicity gradients and relatively weaker spin recoil terms of the force. An alternative route to the physical meaning of the canonical and Poynting spins is found purely in the two terms' mathematical expressions as shown in Eq. (4). The Poynting spin, being proportional to the curl of the Poynting vector, is easily interpreted as the vorticity in energy flow. The expression for the canonical spin is much harder to interpret initially, but begins to unravel if we decompose a general electromagnetic field into circularly polarised plane waves. Any arbitrary electromagnetic field, no matter how complicated, can be expressed in momentum space by an angular spectrum (an infinite sum of plane waves of different wavevectors, weighted by an amplitude function). Using this property, we can further probe the canonical spin term in monochromatic light by separating electric and magnetic fields into two component fields of opposite helicities, indicated by the \(+\) and \(-\) subscripts, \[{\bf E}({\bf r})=\!\!\int\!\!\!\int\!\!\!\int\tilde{\bf E}({\bf k})e^{{\rm i}{ \bf k}\cdot{\bf r}}{\rm d}^{3}k=\!\!\int\!\!\!\int\!\!\!\int[\tilde{E}_{+}({\bf k })\hat{\bf e}_{+}({\bf k})+\tilde{E}_{-}({\bf k})\hat{\bf e}_{-}({\bf k})]e^{{ \rm i}{\bf k}\cdot{\bf r}}{\rm d}^{3}k={\bf E}_{+}({\bf r})+{\bf E}_{-}({\bf r}), \tag{6}\] where \(\hat{\bf e}_{\pm}({\bf k})\) represent the circularly polarised unit vectors for each plane wave with wave-vector \({\bf k}\) (see, e.g., [37]). The magnetic field's associated helicity components are obtained from Faraday's law \(\nabla\times{\bf E}=i\omega\mu{\bf H}\) as, \[{\bf H}({\bf r})=\frac{1}{\eta}\!\int\!\!\!\int\!\!\!\int\frac{{\bf k}}{k} \times\tilde{\bf E}({\bf k})e^{{\rm i}{\bf k}\cdot{\bf r}}{\rm d}^{3}k=\frac{ 1}{\eta}\!\int\!\!\!\int\!\!\!\int[-i\tilde{E}_{+}\hat{\bf e}_{+}({\bf k})+i \tilde{E}_{-}\hat{\bf e}_{-}({\bf k})]e^{{\rm i}{\bf k}\cdot{\bf r}}{\rm d}^{3 }k={\bf H}_{+}({\bf r})+{\bf H}_{-}({\bf r}). \tag{7}\] where we used the property \(({\bf k}/k)\times\hat{\bf e}_{\pm}=\mp i\hat{\bf e}_{\pm}\). Equations (6) and (7) show that the helicity-separated electric and magnetic fields are related by \({\bf H}_{\pm}({\bf r})=\mp i{\bf E}_{\pm}({\bf r})/\eta\), characteristic of a circularly polarised plane wave--this is true not only in the spectral representation, but also in the spatial representation, for any arbitrary field (a property explored in depth in [38]). This allows us to substitute helicity-separated fields into many dual quantities, including canonical momentum and canonical spin, gaining further insight. Simply substituting \({\bf E}={\bf E}_{+}+{\bf E}_{-}\) and \({\bf H}=-i\left({\bf E}_{+}-{\bf E}_{-}\right)/\eta\) into the expression for orbital current [\({\bf p}_{o}\) of Eq. 
(2)], we find, after some algebra, \[{\bf p}_{o}=\frac{c^{2}}{2\omega}\epsilon_{0}{\rm Im}\{{\bf E}_{+}^{*}\cdot(\nabla){\bf E}_{+}+{\bf E}_{-}^{*}\cdot(\nabla){\bf E}_{-}\}=c^{2}({\bf p}_{+}+{\bf p}_{-}), \tag{8}\] showing that the helicity segregation of the \({\bf E}\) and \({\bf H}\) fields translates to a separation of contributions to the field's momentum \({\bf p}={\bf p}_{+}+{\bf p}_{-}\) by photons of positive and negative helicity. Note that orbital current is proportional to canonical momentum by \({\bf p}_{o}=c^{2}{\bf p}\); here, \({\bf p}_{+}\) and \({\bf p}_{-}\) are helicity-separated momentum densities. Making the same substitution in the expression for canonical spin \({\bf s}_{\rm c}\) [first term of Eq. (4)] illuminates its meaning, \[\begin{split}{\bf s}_{\rm c}&=\frac{1}{4\omega^{2}}{\rm Re}\big\{-\frac{i}{\eta}[{\bf E}_{+}^{*}+{\bf E}_{-}^{*}]\cdot(\nabla)[{\bf E}_{+}-{\bf E}_{-}]+\frac{i}{\eta}[{\bf E}_{+}^{*}-{\bf E}_{-}^{*}]\cdot(\nabla)[{\bf E}_{+}+{\bf E}_{-}]\big\}\\ &=\frac{1}{2k\omega}\epsilon_{0}{\rm Im}\{{\bf E}_{+}^{*}\cdot(\nabla){\bf E}_{+}-{\bf E}_{-}^{*}\cdot(\nabla){\bf E}_{-}\}=\frac{1}{k}\left({\bf p}_{+}-{\bf p}_{-}\right).\end{split} \tag{9}\] Canonical spin is proportional to the difference in linear momentum densities carried by photons of oppositely signed helicity. This gives a clear physical interpretation of the canonical spin, and also helps explain why the chiral pressure force acts in the direction of \({\bf s}_{\rm c}\), as photons of opposite helicities are absorbed or scattered in different amounts by chiral particles. In a general structured field, \({\bf p}_{+}-{\bf p}_{-}\neq{\bf p}_{+}+{\bf p}_{-}\) and hence canonical spin is decoupled from any local longitudinal direction defined by canonical momentum. ## 4 Mathematical derivation In this section, we briefly lay out the mathematical steps taken to arrive at Eq. (4), before developing the expression into a more fundamental 4-vector description. For each of the canonical and Poynting spin terms, the 4-vector decomposition incorporates a time component which characterises how the two terms transform differently between reference frames. ### 3-vector decomposition Maxwell's equations enable both decompositions of the Poynting vector Eq. (2) and spin Eq. (4) by bringing the relevant vector from its usual representation into a form which can be separated into two terms using a vector identity, \[\mathbf{a}\times(\nabla\times\mathbf{b})=\mathbf{a}\cdot(\nabla)\mathbf{b}-(\mathbf{a}\cdot\nabla)\mathbf{b}, \tag{10}\] where \(\mathbf{a}\) and \(\mathbf{b}\) are arbitrary vectors, and the previously unseen notation on the right hand side means \((\mathbf{a}\cdot\nabla)\mathbf{b}=a_{x}\partial_{x}\mathbf{b}+a_{y}\partial_{y}\mathbf{b}+a_{z}\partial_{z}\mathbf{b}\). In time-harmonic fields, each of the phasors \(\mathbf{E}\) and \(\mathbf{H}\) can be replaced by the curl of its counterpart via Faraday's and Ampere's laws. Substituting for the un-conjugated phasors in the spin vector definition Eq. (3) gives an expression in the form of Eq. (10), \[\mathbf{S}=\frac{1}{4\omega}\mathrm{Im}\big\{\epsilon_{0}\mathbf{E}^{*}\times\Big(\frac{i}{\omega\epsilon_{0}}\nabla\times\mathbf{H}\Big)-\mu_{0}\mathbf{H}^{*}\times\Big(\frac{i}{\omega\mu_{0}}\nabla\times\mathbf{E}\Big)\big\}. \tag{11}\] Applying the vector identity Eq.
(10), we have, \[\mathbf{S}=\frac{1}{4\omega^{2}}\mathrm{Re}\big{\{}\mathbf{E}^{*}\cdot(\nabla )\mathbf{H}-\mathbf{H}^{*}\cdot(\nabla)\mathbf{E}\big{\}}+\frac{1}{4\omega^{ 2}}\mathrm{Re}\{(\mathbf{H}^{*}\cdot\nabla)\mathbf{E}-(\mathbf{E}^{*}\cdot \nabla)\mathbf{H}\}. \tag{12}\] Gauss' law in free space (\(\nabla\cdot\mathbf{E}=0\) and \(\nabla\cdot\mathbf{H}=0\)), combined with a second vector identity \(\nabla\times(\mathbf{a}\times\mathbf{b})=\mathbf{a}(\nabla\cdot\mathbf{b})- \mathbf{b}(\nabla\cdot\mathbf{a})+(\mathbf{b}\cdot\nabla)\mathbf{a}-( \mathbf{a}\cdot\nabla)\mathbf{b}\), extracts the curl of the Poynting vector from the second \(\mathrm{Re}\{\}\) term of Eq. (12), \[\frac{1}{4\omega^{2}}\mathrm{Re}\{(\mathbf{H}^{*}\cdot\nabla)\mathbf{E}-( \mathbf{E}^{*}\cdot\nabla)\mathbf{H}\}=\frac{1}{2\omega^{2}}\nabla\times\frac {1}{2}\mathrm{Re}\{\mathbf{E}^{*}\times\mathbf{H}\}=\mathbf{s}_{p}, \tag{13}\] completing the final step in obtaining the electromagnetic spin decomposition as we present in Eq. (4). In addition to the monochromatic case, canonical and Poynting spin analogies for the flow of chirality may also be defined in polychromatic fields, expressed using the full time-dependent fields \(\boldsymbol{\mathcal{E}}(\mathbf{r},t)\) and \(\boldsymbol{\mathcal{H}}(\mathbf{r},t)\) and stemming from the instantaneous flow of chirality \(\boldsymbol{\mathcal{F}}(\mathbf{r},t)\)[28, 29, 30, 39]. \[\boldsymbol{\mathcal{F}}=\frac{1}{2}\left(\boldsymbol{\mathcal{E}}\times( \nabla\times\boldsymbol{\mathcal{H}})-\boldsymbol{\mathcal{H}}\times(\nabla \times\boldsymbol{\mathcal{E}})\right). \tag{14}\] Chiral flow \(\boldsymbol{\mathcal{F}}\), unlike the flux of helicity in time-dependent fields, is defined without the vector potentials \(\boldsymbol{\mathcal{A}}\) and \(\boldsymbol{\mathcal{C}}\), which avoids gauge complications and contains a direct relation to the instantaneous Poynting vector \(\boldsymbol{\mathcal{P}=\boldsymbol{\mathcal{E}}\times\boldsymbol{\mathcal{H}}}\). For monochromatic light, \(\boldsymbol{\mathcal{F}}\) is not time-dependent and becomes proportional to the field's SAM density \(\mathbf{S}\). Equation (14) is immediately presented in a form which can be broken into two vectors using the two identities detailed in this section, giving the extension of the time-harmonic spin decomposition to time-dependent fields, \[\boldsymbol{\mathcal{F}}=\frac{1}{2}\left[(\boldsymbol{\mathcal{E}}\cdot( \nabla)\boldsymbol{\mathcal{H}}-\boldsymbol{\mathcal{H}}\cdot(\nabla) \boldsymbol{\mathcal{E}})+\nabla\times(\boldsymbol{\mathcal{E}}\times \boldsymbol{\mathcal{H}})\right]. \tag{15}\] The above is a more general decomposition of chirality flow than Eq. (4), and is valid for polychromatic aperiodic light, at every instant in time. It is worth stressing again that the flow of chirality \(\boldsymbol{\mathcal{F}}\) is a different quantity to the flow of helicity (SAM density) for general time-dependent fields. To obtain the helicity flow equivalent of Eq. (15) in the Coulomb gauge, one simply substitutes \(\boldsymbol{\mathcal{E}}\rightarrow\boldsymbol{\mathcal{A}}\) and \(\boldsymbol{\mathcal{H}}\rightarrow\boldsymbol{\mathcal{C}}\). Equation (15) assumed a source-free medium, such that \(\nabla\cdot\boldsymbol{\mathcal{E}}=\rho/\epsilon=0\). Dropping this assumption, Eq. (15) can be further generalised by adding a third term, \(\frac{1}{2}\left[\frac{\rho}{\epsilon}\boldsymbol{\mathcal{H}}\right]\), to the decomposition. 
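As a quick sanity check on the identity Eq. (10) that underpins the whole decomposition, the short sketch below (our own, using an arbitrarily chosen smooth quadratic test field) verifies it numerically by central finite differences; a constant vector \(\mathbf{a}\) suffices here, since the derivatives act only on \(\mathbf{b}\), mirroring how the identity is applied to the un-conjugated field.

```python
import numpy as np

# Finite-difference check of a x (curl b) = a.(grad)b - (a.grad)b, Eq. (10),
# at a single point, for a smooth quadratic test field of our own construction.
rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))

def b(r):
    # An arbitrary smooth vector field (linear + quadratic terms)
    return M @ r + 0.3 * np.array([r[1] * r[2], r[2] * r[0], r[0] * r[1]])

def jac(f, r, h=1e-6):
    # J[i, j] = d f_j / d x_i by central differences (exact for quadratics)
    J = np.zeros((3, 3))
    for i in range(3):
        dr = np.zeros(3); dr[i] = h
        J[i] = (f(r + dr) - f(r - dr)) / (2 * h)
    return J

r = np.array([0.3, -0.7, 0.5])
a = np.array([1.0, 2.0, -0.5])
J = jac(b, r)
curl_b = np.array([J[1, 2] - J[2, 1], J[2, 0] - J[0, 2], J[0, 1] - J[1, 0]])
lhs = np.cross(a, curl_b)
rhs = J @ a - a @ J   # a.(grad)b has components a_j d_i b_j; (a.grad)b has a_j d_j b_i
print(np.allclose(lhs, rhs, atol=1e-8))   # -> True
```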
### 4-vector representation In an attempt to extract new physical insight, we reproduce the above spin decomposition Eq. (4) in 4-vector notation. This notation is particularly elegant and useful to verify the Lorentz covariance of quantities and equations. We will begin by introducing some common definitions. Electrodynamics is intrinsically relativistic, and several related physical quantities can be expressed via well-known 4-vectors, such as the 4-potential (grouping the scalar and vector potentials) and the 4-current (grouping the charge and current densities). 4-vector notation is well known to provide an efficient way to formulate Maxwell's equations. Indeed, the time-dependent Maxwell's equations (using scripted vectors) in vacuum are a combination of four equations, two scalar equations \(\nabla\cdot\boldsymbol{\mathcal{E}}=0,\nabla\cdot\boldsymbol{\mathcal{H}}=0\) and two vector equations \(\nabla\times\boldsymbol{\mathcal{E}}=-\mu_{0}\frac{\partial\boldsymbol{\mathcal{ H}}}{\partial t},\nabla\times\boldsymbol{\mathcal{H}}=\epsilon_{0}\frac{ \partial\boldsymbol{\mathcal{E}}}{\partial t}\). In 4-vector notation, however, we can write Maxwell's equations as: \[\partial_{\alpha}\mathcal{F}^{\alpha\beta}=0,\qquad\epsilon^{\alpha\beta\gamma \lambda}\partial_{\alpha}\mathcal{F}_{\beta\gamma}=0. \tag{16}\] Throughout, the Greek indices \(\mu,\nu,\alpha,\beta...\) run from 0 to 3, where the \(0^{th}\) component labels the time direction. Repeated indices are summed over. Note that the distinction between subscript and superscript placement of indices is important, as \(A_{\mu}=\eta_{\mu\nu}A^{\nu}\), where the so-called Minkowski metric \(\eta_{\mu\nu}=\text{diag}(-1,1,1,1)\). We also use Roman indices \(i=1,2,3\) to denote the three spatial dimensions. In Eq. (16), \(\epsilon^{\alpha\beta\gamma\lambda}\) is the Levi-Civita symbol [40], while \(\mathcal{F}_{\alpha\beta}\) is the field strength tensor that conveniently packages the electric and magnetic fields: \[\mathcal{F}_{\mu\nu}=\begin{pmatrix}0&-\mathcal{E}_{x}/c&-\mathcal{E}_{y}/c&- \mathcal{E}_{z}/c\\ \mathcal{E}_{x}/c&0&-\mu_{0}\mathcal{H}_{z}&\mu_{0}\mathcal{H}_{y}\\ \mathcal{E}_{y}/c&\mu_{0}\mathcal{H}_{z}&0&-\mu_{0}\mathcal{H}_{x}\\ \mathcal{E}_{z}/c&-\mu_{0}\mathcal{H}_{y}&\mu_{0}\mathcal{H}_{x}&0\end{pmatrix}. \tag{17}\] The field strength tensor can be expressed as \(\mathcal{F}_{\alpha\beta}=\partial_{\alpha}\mathcal{A}_{\beta}-\partial_{ \beta}\mathcal{A}_{\alpha}\), where \(\mathcal{A}_{\alpha}\) is the 4-vector potential, which encases the scalar potential \(\varphi\) and the vector potential \(\mathbf{\mathcal{A}}\), and \(\partial_{\mu}\) is the corresponding generalisation of the gradient: \[\mathcal{A}^{\mu}=\begin{pmatrix}\mathcal{A}^{0}\\ \mathcal{A}^{i}\end{pmatrix}=\begin{pmatrix}\varphi/c\\ \mathbf{\mathcal{A}}\end{pmatrix},\quad\partial_{\mu}=\begin{pmatrix}\frac{ \partial}{c\partial t}\\ \frac{\partial}{\partial i}\end{pmatrix}=\begin{pmatrix}\frac{\partial}{c \partial t}\\ \nabla\end{pmatrix}. \tag{18}\] Equation (16) is a striking example of the neatness of the tensorial notation: the physical content is the same, but the tensorial notation is clearer and exhibits Lorentz covariance. For time-harmonic fields, these tensors will also have a phasor representation (e.g. \(\mathcal{F}_{\mu\nu}(\mathbf{r},t)=\text{Re}\{F_{\mu\nu}(\mathbf{r})\exp(-i \omega t)\}\)), including the time component of the gradient, which for phasors becomes \(\partial_{0}=-i\omega/c\).
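To see this machinery in action, the following sketch (ours, not the authors' code; natural units \(c=\mu_{0}=1\) and an \(x\)-polarised plane wave are assumptions made for illustration) builds the phasor of \(\mathcal{F}_{\mu\nu}=\partial_{\mu}\mathcal{A}_{\nu}-\partial_{\nu}\mathcal{A}_{\mu}\) from a Coulomb-gauge plane-wave potential, confirms the first of Maxwell's equations in Eq. (16) using \(\partial_{0}=-i\omega/c\), and recovers \(E_{i}=i\omega A_{i}\) from the \(F_{0i}\) components.

```python
import numpy as np

# Phasor check of d_alpha F^{alpha beta} = 0, Eq. (16), for a plane wave,
# building F_{mu nu} = d_mu A_nu - d_nu A_mu directly (natural units c = 1).
c = 1.0
k = 2 * np.pi
omega = c * k

# Transverse vector potential of an x-polarised plane wave along z: A = E/(i omega)
E = np.array([1.0, 0.0, 0.0], complex)
A_low = np.concatenate(([0.0], E / (1j * omega)))     # A_mu with A_0 = 0 (Coulomb gauge)

# 4-gradient acting on the phasor e^{i(k z - omega t)}: d_mu = (-i omega/c, 0, 0, i k)
d = np.array([-1j * omega / c, 0.0, 0.0, 1j * k])

F_low = np.outer(d, A_low) - np.outer(A_low, d)       # F_{mu nu}
minkowski = np.diag([-1.0, 1.0, 1.0, 1.0])
F_up = minkowski @ F_low @ minkowski                  # F^{mu nu}

print(np.allclose(d @ F_up, 0))                       # Maxwell, Eq. (16): True
print(np.allclose(-c * F_low[0, 1:], E))              # F_{0i} = -E_i/c, i.e. E = i omega A
```

The check passes because the plane wave satisfies both the wave equation (\(\omega=ck\)) and the transversality condition, so \(\partial_{\alpha}F^{\alpha\beta}=\Box A^{\beta}-\partial^{\beta}(\partial_{\alpha}A^{\alpha})=0\).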
We now have the tools to write, using phasors, the time-averaged spin decomposition Eq. (4) in tensorial notation. The first step is to equivalently express the spin, defined in Eq. (3), using spatial indices to replicate the vector operations, \[S^{i}=\frac{1}{4\omega}\epsilon^{ijk}\text{Im}\{\epsilon_{0}E_{j}^{*}E_{k}+ \mu_{0}H_{j}^{*}H_{k}\}. \tag{19}\] Then, the equivalent of the proposed decomposition in Eq. (4) is: \[S^{i}=\underbrace{\frac{1}{4\omega^{2}}\text{Re}\{E_{j}^{*}\partial_{i}H^{j} -H_{j}^{*}\partial_{i}E^{j}\}}_{s_{c}^{i}}+\underbrace{\frac{1}{2\omega^{2}} \epsilon^{ijk}\partial_{j}\frac{1}{2}\text{Re}\{\epsilon_{klm}E^{*l}H^{m}\}}_ {s_{p}^{i}}. \tag{20}\] The next step is to find a 4-vector, for which the spatial part would reduce to Eq. (20). It turns out that this 4-vector quantity is exactly the helicity density and flux, a 4-current density associated with conserved helicity [41] \[S^{\mu}=\frac{1}{4}\,\text{Re}\left\{A_{\nu}^{*}G^{\nu\mu}+C_{\nu}^{*}F^{\nu \mu}\right\}, \tag{21}\] where \(C^{\mu}=(\psi,-i\omega^{-1}\mu_{0}^{-1}(\nabla\times\mathbf{A}))^{\intercal}\) is the magnetic equivalent to the 4-potential \(A_{\mu}\), and \(G_{\mu\nu}=\partial_{\nu}C_{\mu}-\partial_{\mu}C_{\nu}\) is the corresponding field strength tensor. Note that Eq. (21) is time-averaged and the fields are in their phasor representation. We include the instantaneous variant in the supplementary information. Working in the Coulomb gauge, where the scalar and vector potentials are chosen such that \(\nabla\cdot\mathbf{A}=\phi=0\) and \(\nabla\cdot\mathbf{C}=\psi=0\), and after some algebra (see supplementary Mathematica file [42]), we find that the spatial part of the 4-vector \(S^{\mu}\) reproduces Eq. (20), whilst the temporal part is the cycle-averaged helicity density \(h=-\text{Im}\{\mathbf{E}^{*}\cdot\mathbf{H}\}/(2\omega c)\), \[S^{\mu}=\frac{1}{4\omega}\,\text{Im}\left\{\left(\begin{array}{c}-\frac{2}{ c}\mathbf{E}^{*}\cdot\mathbf{H}\\ \epsilon_{0}\mathbf{E}^{*}\times\mathbf{E}+\mu_{0}\mathbf{H}^{*}\times\mathbf{H} \end{array}\right)\right\}=\left(\begin{array}{c}h\\ \mathbf{S}\end{array}\right)\;. \tag{22}\] This is consistent with the fact that \(h\) and \(\mathbf{S}\) are the density and flux associated with integrated helicity, the conserved quantity associated with the dual symmetry [9, 30] (hence \(\nabla\cdot\mathbf{S}=0\) for free-space monochromatic fields). This four-vector current \(S^{\mu}\) has a decomposition equivalent to Eq. (4), \[S^{\mu}=S^{\mu}_{C}+S^{\mu}_{P}, \tag{23}\] where the individual components are given by, \[\begin{split} S^{\mu}_{C}&=\frac{1}{4}\,\text{Re}\left\{A_{\nu}^{ *}\left(\partial^{\mu}C^{\nu}\right)-C_{\nu}^{*}\left(\partial^{\mu}A^{\nu} \right)\right\}&=\frac{1}{4\omega^{2}}\left(\begin{array}{c}-2(\omega/c) \,\text{Im}\left\{\mathbf{E}^{*}\cdot\mathbf{H}\right\}\\ \text{Re}\left\{\mathbf{E}^{*}\cdot(\nabla)\mathbf{H}-\mathbf{H}^{*}\cdot( \nabla)\mathbf{E}\right\}\end{array}\right)=\left(\begin{array}{c}h\\ \mathbf{s}_{c}\end{array}\right),\\ S^{\mu}_{P}&=\frac{1}{4}\,\text{Re}\left\{C_{\nu}^{*}\left(\partial^{\nu}A^{ \mu}\right)-A_{\nu}^{*}\left(\partial^{\nu}C^{\mu}\right)\right\}=\frac{1}{4 \omega^{2}}\left(\begin{array}{c}0\\ \text{Re}\left\{\left(\mathbf{H}^{*}\cdot\nabla\right)\mathbf{E}-\left(\mathbf{E}^ {*}\cdot\nabla\right)\mathbf{H}\right\}\end{array}\right)=\left(\begin{array}{c}0 \\ \mathbf{s}_{p}\end{array}\right),\end{split} \tag{24}\] reproducing the spin decomposition in 4-vector notation.
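Equation (22) is easily checked directly. The sketch below (our own; natural units \(\epsilon_{0}=\mu_{0}=c=\eta=1\) are a convenience assumption) evaluates the temporal and spatial parts of \(S^{\mu}\) for a single circularly polarised plane wave, recovering the cycle-averaged helicity density \(h=\sigma W/\omega\) and a spin density of magnitude \(W/\omega\) along the propagation direction; all bilinears are independent of the common phase factor \(e^{ikz}\), so the fields can be evaluated at \(z=0\).

```python
import numpy as np

# Components of the helicity 4-current S^mu = (h, S), Eq. (22), for one
# circularly polarised plane wave along z (natural units eps0 = mu0 = c = 1).
eps0, mu0, c, eta = 1.0, 1.0, 1.0, 1.0
k = 2 * np.pi
omega = c * k
sigma = +1                                    # helicity of the wave

e_plus = np.array([1.0, 1j, 0.0]) / np.sqrt(2)
E = e_plus if sigma > 0 else e_plus.conj()
H = -1j * sigma * E / eta                     # H_pm = -/+ i E_pm / eta

h = -np.imag(E.conj() @ H) / (2 * omega * c)              # temporal part
S = np.imag(eps0 * np.cross(E.conj(), E)
            + mu0 * np.cross(H.conj(), H)) / (4 * omega)  # spatial part
W = (eps0 * np.linalg.norm(E)**2 + mu0 * np.linalg.norm(H)**2) / 4

print(np.isclose(h, sigma * W / omega))                   # h = sigma W / omega
print(np.allclose(S, [0, 0, sigma * W / omega]))          # S along k, |S| = W / omega
```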
Note that the temporal part of the total spin, the helicity density, is carried only by the canonical 4-spin term while the time component of the Poynting 4-spin is zero. Let us briefly comment on the choice of the Coulomb gauge. As can be seen from the supplementary information, the decomposition of \(S^{\mu}\) is independent of the choice of frame and gauge. As already discussed in previous work (e.g. [9, 41]), while the local density \(S^{\mu}\) is not gauge independent, upon integration one gets _integrated helicity_, which is a gauge-independent quantity that is conserved in any scenario where there is electromagnetic dual symmetry. Furthermore, the integral depends only on the transverse components of the potentials, which makes the Coulomb gauge the most convenient gauge to work with, and in this gauge, the helicity flux (spatial component of \(S^{\mu}\)) also coincides with the optical spin density. Note, however, that the gauge fixed version of this decomposition is not invariant under Lorentz boosts, unless we simultaneously change our potentials to the Coulomb gauge associated with the new boosted frame. ## 5 Spin decomposition examples It is instructive to apply the spin decomposition in Eq. (4) to some simple examples of light: a general evanescent wave and three commonly known focused beams, namely a Gaussian beam, a radial/azimuthal beam (\(l=0\)), and an \(l=1\) vortex beam (all linearly polarised in the transverse plane). What differentiates each of these fields is their energy density and phase structures, due to one-dimensionally monotonic, doughnut or Gaussian real-space amplitude profiles, and the presence of OAM, two characteristics which reorient the Poynting vector throughout space and generate different amounts of Poynting spin. Poynting spin can emerge even when the instantaneous vectors \(\boldsymbol{\mathcal{E}}\) and \(\boldsymbol{\mathcal{H}}\) do not actually rotate, because its counterpart (canonical spin) is able to counterbalance the total SAM density of the field as necessary. Interplay between spin's canonical and Poynting components creates counter-intuitive effects, such as spin-free chiral interaction forces. ### Evanescent wave The electric and magnetic field phasors of an evanescent wave of arbitrary polarisation, propagating in the \(z\) direction (\(k_{z}>k\)) and decaying along the \(x\) axis with decay constant \(\gamma=\sqrt{k_{z}^{2}-k^{2}}\), are, \[\mathbf{E}=\begin{pmatrix}A_{p}\frac{k_{z}}{k}\\ A_{s}\\ -iA_{p}\frac{\gamma}{k}\end{pmatrix}e^{ik_{z}z-\gamma x}\quad\mathbf{H}=\frac{1 }{\eta}\begin{pmatrix}-A_{s}\frac{k_{z}}{k}\\ A_{p}\\ iA_{s}\frac{\gamma}{k}\end{pmatrix}e^{ik_{z}z-\gamma x}, \tag{25}\] where \(\eta=\sqrt{\frac{\mu_{0}}{\epsilon_{0}}}\). Choosing complex values for \(A_{s}\) and \(A_{p}\), which are the amplitudes of the evanescent wave's TE and TM modes, controls the wave's polarisation--a circularly polarised wave, for instance, has \(A_{p}=\pm iA_{s}\). The energy density of the wave decays in the \(x\) direction and is given by, \[W=\epsilon_{0}\frac{k_{z}^{2}}{k^{2}}e^{-2\gamma x}\frac{1}{2}\left(|A_{s}|^{ 2}+|A_{p}|^{2}\right).
\tag{26}\] Using our formulae, we can calculate the energy-normalised total spin of the evanescent wave, as well as its canonical and Poynting spins, \[\frac{\mathbf{S}}{W}=\frac{1}{\omega k_{z}}\left[\gamma\mathbf{\hat{y}}+k \sigma\mathbf{\hat{z}}\right], \tag{27}\] \[\frac{\mathbf{s}_{c}}{W}=\frac{1}{\omega k_{z}}\left[\frac{k_{z}^{2}}{k} \sigma\mathbf{\hat{z}}\right], \tag{28}\] \[\frac{\mathbf{s}_{p}}{W}=\frac{1}{\omega k_{z}}\left[\gamma\mathbf{\hat{y}}- \frac{\gamma^{2}}{k}\sigma\mathbf{\hat{z}}\right]. \tag{29}\] The parameter \(\sigma=2\text{Im}\{A_{s}A_{p}^{*}\}/(|A_{s}|^{2}+|A_{p}|^{2})\) is the degree of circular polarisation in the sense of a plane wave (\(\sigma=\pm 1\) for circular polarisation, \(\sigma=0\) for linear polarisation). It is well-known that evanescent waves carry transverse spin independently of polarisation [26]. This property is accounted for by the \(\mathbf{\hat{y}}\) component of \(\mathbf{S}\), which is unaffected by the relationship between \(A_{s}\) and \(A_{p}\) and, interestingly, is a product solely of Poynting spin as was discovered in [23]. Meanwhile, the evanescent wave acquires a longitudinal spin component if \(\sigma\neq 0\), a component which is contributed to by both decomposed spins, and in different amounts. The over-generous canonical spin develops a larger \(\mathbf{\hat{z}}\) component than is physical for the total spin of the wave (\(\mathbf{s}_{c}\cdot\mathbf{\hat{z}}>\mathbf{S}\cdot\mathbf{\hat{z}}\)). Compensating, the Poynting spin's \(\hat{\bf z}\) component points backwards to ensure \(({\bf s}_{c}+{\bf s}_{p})\cdot\hat{\bf z}={\bf S}\cdot\hat{\bf z}\) (this becomes clear after substituting \(\gamma^{2}=k_{z}^{2}-k^{2}\)). Decomposing an evanescent wave's spin reveals a physical distinction between its transverse and longitudinal spin components. Since Poynting spin is responsible for the transverse component of \({\bf S}\), transverse chiral forces felt by an enantiomer arise as it recoils from its own radiation, rather than from a direct field interaction. This is consistent with the fact that perpendicular to the wavevector, there is no phase advance to twist the rotating field vectors into helices--the evanescent wave only acquires helicity if it carries longitudinal spin (\(h=W\sigma/\omega\)). Canonical spin, purely longitudinal on the other hand, couples directly to chiral matter to produce a significantly stronger (relatively) preferential force. ### Beams Let us first consider a tightly focussed, linearly polarised Gaussian beam, whose energy density and polarisation in the focal plane is plotted in the top row of Fig. 2(a). In the transverse \(xy\) plane, the projection of the electric and magnetic field vectors are linear (\(x\) and \(y\) polarised respectively), though due to the tight focusing both \({\bf E}\) and \({\bf H}\) have significant longitudinal \(z\) components, such that they are elliptically polarised in 3D and contribute a circulation of transverse spin. The Poynting vector magnitude is not constant across the face of the beam and therefore has a non-zero curl in the transverse plane (except in the local maximum at the beam centre). This, the total spin and the decomposed canonical and Poynting spin of the Gaussian beam are plotted in Fig. 2(a)'s column (descending). 
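Before examining the beams further, the evanescent-wave results can be cross-checked numerically. The sketch below (ours; natural units \(\epsilon_{0}=\mu_{0}=c=\eta=1\), with arbitrarily chosen TE/TM amplitudes) reconstructs \(\mathbf{S}\), \(\mathbf{s}_{c}\) and \(\mathbf{s}_{p}\) directly from the fields of Eq. (25) at \(x=0\), using the analytic gradient \(\nabla\rightarrow(-\gamma,0,ik_{z})\), and confirms both \(\mathbf{s}_{c}+\mathbf{s}_{p}=\mathbf{S}\) and the closed forms of Eqs. (27)-(29).

```python
import numpy as np

# Numerical check of the evanescent-wave spin decomposition, Eqs. (27)-(29),
# at x = 0 (natural units eps0 = mu0 = c = eta = 1; amplitudes are arbitrary).
eps0, mu0, eta = 1.0, 1.0, 1.0
k = 2 * np.pi
omega = k                       # c = 1
kz = 1.3 * k
gamma = np.sqrt(kz**2 - k**2)
As, Ap = 1.0, 0.6j              # TE and TM amplitudes

E = np.array([Ap * kz / k, As, -1j * Ap * gamma / k])
H = np.array([-As * kz / k, Ap, 1j * As * gamma / k]) / eta

# Every component varies as e^{i kz z - gamma x}: grad -> (-gamma, 0, i kz)
g = np.array([-gamma, 0.0, 1j * kz])
dE, dH = np.outer(g, E), np.outer(g, H)

S = np.imag(eps0 * np.cross(E.conj(), E) + mu0 * np.cross(H.conj(), H)) / (4 * omega)
s_c = np.real(dH @ E.conj() - dE @ H.conj()) / (4 * omega**2)
P = 0.5 * np.real(np.cross(E.conj(), H))
s_p = -gamma / omega**2 * np.cross([1.0, 0, 0], P)   # (1/2w^2) curl of P e^{-2 gamma x} at x = 0

W = (eps0 * np.linalg.norm(E)**2 + mu0 * np.linalg.norm(H)**2) / 4
sigma = 2 * np.imag(As * np.conj(Ap)) / (abs(As)**2 + abs(Ap)**2)

print(np.allclose(s_c + s_p, S))                                                   # Eq. (4)
print(np.allclose(S / W, np.array([0, gamma, k * sigma]) / (omega * kz)))          # Eq. (27)
print(np.allclose(s_c / W, np.array([0, 0, kz**2 / k * sigma]) / (omega * kz)))    # Eq. (28)
print(np.allclose(s_p / W, np.array([0, gamma, -gamma**2 / k * sigma]) / (omega * kz)))  # Eq. (29)
```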
Both \({\bf E}\) and \({\bf H}\) fields have \(z\) components and discretely symmetric polarisation ellipses in 3D (after a rotation of one field by \(\pi/2\)), which appears to reduce the canonical spin of the beam to zero, \({\bf s}_{c}={\bf 0}\). All transverse spin developed by focussing of a linearly polarised Gaussian beam stems from Poynting spin, \({\bf S}={\bf s}_{p}\). A similar conclusion could be made for an evanescent wave, making it tempting to argue for a general association between transverse spin and Poynting spin. This notion is not supported by the following example, however. An \(l=0\) azimuthally polarised (in the electric field) beam and its spin decomposition are plotted in Fig. 2(b). Considering dual spin (contributions from both \({\bf E}\) and \({\bf H}\)), the decomposition properties of this beam are identical to those of a radially polarised beam, which has an azimuthal magnetic field. Despite tight focussing, the azimuthal electric field vector does not carry a longitudinal component, as is well-known, and is completely linearly polarised in 3D. This means that the beam's wholly transverse spin is supplied by the magnetic field alone. Combining to give the total spin of the beam, both canonical and Poynting spins are non-zero--this contrasts with the Gaussian beam, whose electric and magnetic fields are both elliptical in 3D. Neither beam considered so far has twisted wavefronts carrying OAM, though due to their azimuthal transverse spin components, both the Gaussian and radial beams possess a certain chiral OAM. A chiral particle could access the spatial structure of the beam's spin field \({\bf S}\), feeling an azimuthal force (clockwise or anticlockwise depending on the enantiomer) which would cause the particle to orbit the beam centre with no need for helical phasefronts. Finally, we treat the \(x\)-polarised \(l=1\) vortex beam [Fig. 2(c)], whose OAM leads to the most surprising spin decomposition features of the three. Numerically simulating such a non-paraxial, focussed beam is a challenging task because there is no universally agreed 3D definition of a vector vortex beam (only scalar and paraxial beams are well-defined). The beams treated in this section are all linearly polarised in the electric field and generated numerically using an angular spectrum integration technique [43] which, by convention, forces the electric field to exactly match the paraxial description of the beam in the focal plane. All unenforced field components (i.e., all magnetic field components and the longitudinal electric component throughout space, and the transverse electric components outside of the focal plane) are subsequently calculated via Maxwell's equations. While the beams are, therefore, physically valid, some magnetic-biased polarisation features develop in the \(x\)-polarised vortex beam considered here. In particular, although the electric field remains perfectly linear, the magnetic field alone gains a slight ellipticity in the transverse plane, contributing to a small longitudinal total spin outside of the beam centre, as has been shown to exist in focussed linearly polarised beams [16]. Longitudinal spin remains zero in the singularity, however. Breaking continuous rotational symmetry via \(x\) polarisation, both the vortex beam's electric and magnetic fields possess a \(z\) component from focussing, in contrast to the azimuthal beam for which the electric field is perfectly linearly polarised in 3D.
The beam has helical wavefronts so that the average phase gradient (local wavevector) has an azimuthal component, also inherited by the Poynting vector (shown in the second row of Fig. 2(c); there is a small transverse component circulating in the \(xy\) plane). The Poynting spin \({\bf s}_{p}\propto{\bf\nabla}\times{\bf P}\), therefore, acquires a longitudinal component (even at the centre of the beam), in outright defiance of the fact that both transverse electric and magnetic fields are nearly linearly polarised, and zero in the centre, as is visible in the lowest plot of Fig. 2(c). To neutralise the longitudinal Poynting spin and suppress the total longitudinal spin, the \(z\) component of canonical spin \({\bf s}_{c}\) must be non-zero too, satisfying \(s_{cz}=-s_{pz}\) for \(S_{z}=s_{cz}+s_{pz}=0\) in the centre of the beam, and \(S_{z}=s_{cz}+s_{pz}\approx 0\) elsewhere (see the plot second from bottom). In the middle of the beam, this counteracting canonical spin points in the negative \(z\) direction as a consequence of the beam's vortex handedness (right, topological charge +1), and would switch sign in a left-handed vortex. Returning now to the chiral force equation Eq. (5), we can infer that the beam's non-zero canonical spin, proportional to chiral momentum, produces a longitudinal chiral force, even though near to the vortex centre the \({\bf E}\) and \({\bf H}\) fields have negligible ellipticity in the transverse plane. Perhaps counter-intuitively, the longitudinal chiral force is strongest in the centre of the beam where the electromagnetic field is virtually zero (due, of course, to the longitudinal curl of the Poynting vector being maximal there). This is another example of spatially chiral light coupling to chiral matter, which, even under the dipole approximation and in a linearly polarised field, can feel a discriminatory force by absorbing or scattering photons carrying OAM of a certain handedness. Only recently has OAM-dependent chirality, able to couple even to chiral dipoles, been demonstrated in focussed beams, resulting from strong longitudinal field components [17, 44]. We point out, however, that the chiral pressure force which we theorise at the centre of a linearly polarised vortex appears only to require that the Poynting vector has an azimuthal component (i.e., no requirement for longitudinal field components), and should still be present in a paraxial beam. ## 6 Analogies in other wave fields Kinetic, canonical and spin momenta analogous to the terms in Eq. (2) can be identified more generally in other wave fields [45, 46, 47]. This, and the ability to split the SAM density of light into two distinct terms, poses another curiosity: what does a spin decomposition of another wave field look like, particularly if, unlike the electromagnetic field, its quanta are not spin-1? Treated in this section are acoustic and gravitational waves, which both depart from light's spin-1 structure. Despite their increased complexity, linearised theories exist for each of these fields under certain conditions. 
Figure 2: Spin decomposition of non-paraxial beams. The beams, each of waist \(1.5\lambda\) and separated in columns, are (a) a linearly polarised Gaussian beam (\(\mathbf{E}^{T}||\mathbf{\hat{x}}\)), (b) an azimuthally polarised (\(\mathbf{E}||\hat{\boldsymbol{\phi}}\)), \(l=0\) doughnut beam, and (c) a linearly polarised (\(\mathbf{E}^{T}||\mathbf{\hat{x}}\)) vortex beam with topological charge \(l=1\). The top row of (isometric) plots across each subfigure column shows the beam energy density in colour, as well as the Poynting vector and electric (blue) and magnetic (green) polarisation ellipses, which are elliptical due to a significant \(z\) field component. Subsequent rows are vector plots of each beam's Poynting vector \(\mathbf{P}\), total spin \(\mathbf{S}\) Eq. (3), canonical spin \(\mathbf{s}_{c}\) and Poynting spin \(\mathbf{s}_{p}\) from Eq. (4), respectively. White arrows are projections of the corresponding vector into the \(xy\) plane, while the red arrows are projections of the vector onto longitudinal \(yz\) and \(xz\) cut planes. Within each beam, arrows in the three spin decomposition plots are drawn to a consistent scale. Each non-paraxial beam is generated using an angular spectrum integration method [43]. Defining the 3D vector vortex beam (c) is a difficult problem and the method we used produces a small and physical longitudinal spin (third row of (c)), contributed by the magnetic field, which would not be present in a (non-physical) perfect paraxial beam. 
In a perfect fluid, acoustic waves are linear and oscillate longitudinally (spin-0), while gravitational waves are tensorial in nature (spin-2) and combine linearly when their amplitudes are sufficiently low. In formulating equivalent expressions for total SAM, canonical and Poynting spins in linearised gravity, we use the Maxwellian representation of gravity in the weak field limit [27], i.e., waves propagating over flat spacetime at a large distance from their source. Gravitational waves detected on earth [48, 49] arrive within this limit, and share some of light's characteristics; both fields have two polarisation degrees of freedom (in gravity these are the \(h_{+}\) and \(h_{\times}\) polarisations) and are massless. Some of light's most striking behaviours, such as those in evanescent fields, have also been predicted with additional properties in gravitational waves [50]. Note that we take the expressions for the helicity and SAM density that are dual-symmetric (i.e., \(\mathbf{S}=\mathbf{S}_{e}+\mathbf{S}_{m}\) rather than \(\mathbf{S}=2\mathbf{S}_{e}\)), the same as in the case of electromagnetism, despite the fact that in the case of gravity we have no experimental evidence favouring its physical relevance over the asymmetric version [51]. However, if we take the point of view of [9] that only the integrated helicity is a physically meaningful quantity, the two will be equivalent. Table 1 summarises differences and similarities between linearised acoustics, electromagnetism, and linearised gravity with a focus on the spin-related quantities. An acoustic wave field can be described by a scalar pressure field \(P\) and a vector velocity field \(\mathbf{v}\) which, in linearised acoustic theory, share a Maxwell-like relation [46, 47]. The derivation of the decomposition for the acoustic field is given in the supplementary material. A gravitational wave can be described using a metric perturbation \(h_{ij}\), which can be thought of as components of a three-by-three symmetric matrix. If we consider the \(i\)-th row or column of this matrix as a vector potential, \(\mathbf{A}^{i}=h^{ij}\hat{\mathbf{e}}_{j}\), then the Maxwellian representation of gravity can be written in vector notation; the derivation of the spin decomposition then becomes simply Eq. (12). The acoustic helicity density is zero, which is a feature of spin-0 fields, but interestingly the canonical spin, which we showed in Eq.
(41) to be related to helicity, is also zero. One can see that electromagnetic and gravitational waves are spin-1 and spin-2 respectively by taking a circularly polarised wave. The helicity density of a circularly polarised wave will be one (two) times the energy density divided by \(\omega\), and the canonical spin will be one (two) times the canonical momentum divided by \(k\), for an electromagnetic (gravitational) wave. \begin{table} \begin{tabular}{c c c c} & **Linearised Acoustics** & **Electromagnetism** & **Linearised gravity** \\ \hline Field phasors & \(P=-i\omega\rho\varphi\) & \(\mathbf{E}=i\omega\mathbf{A}\) & \(\mathbf{E}^{i}=i\omega(h^{ij}\hat{\mathbf{e}}_{j})\) \\ & \(\mathbf{v}=\nabla\varphi\) & \(\mathbf{H}=\frac{1}{\mu_{0}}\nabla\times\mathbf{A}\) & \(\mathbf{H}^{i}=\frac{1}{\mu_{0}}\nabla\times(h^{ij}\hat{\mathbf{e}}_{j})\) \\ \hline Energy density & \(\frac{1}{4}\big{(}\beta|P|^{2}+\rho|\mathbf{v}|^{2}\big{)}\) & \(\frac{1}{4}\big{(}\epsilon_{0}|\mathbf{E}|^{2}+\mu_{0}|\mathbf{H}|^{2}\big{)}\) & \(\frac{1}{4}\big{(}\epsilon_{0}\mathbf{E}_{i}^{*}\cdot\mathbf{E}^{i}+\mu_{0} \mathbf{H}_{i}^{*}\cdot\mathbf{H}^{i}\big{)}\) \\ \hline Helicity density & 0 & \(-\frac{1}{2\omega c}\text{Im}\{\mathbf{E}^{*}\cdot\mathbf{H}\}\) & \(-\frac{1}{\omega c}\text{Im}\{\mathbf{E}_{i}^{*}\cdot\mathbf{H}^{i}\}\) \\ \hline Poynting vector & \(\frac{1}{2}\text{Re}\{P^{*}\mathbf{v}\}\) & \(\frac{1}{2}\text{Re}\{\mathbf{E}^{*}\times\mathbf{H}\}\) & \(\frac{1}{2}\text{Re}\{\mathbf{E}_{i}^{*}\times\mathbf{H}^{i}\}\) \\ \hline SAM density & \(\frac{1}{2\omega}\text{Im}\{\rho\mathbf{v}^{*}\times\mathbf{v}\}\) & \(\frac{1}{4\omega}\text{Im}\{\epsilon_{0}\mathbf{E}^{*}\times\mathbf{E}+\mu_{0} \mathbf{H}^{*}\times\mathbf{H}\}\) & \(\frac{1}{2\omega}\text{Im}\{\epsilon_{0}\mathbf{E}_{i}^{*}\times\mathbf{E}^{i} +\mu_{0}\mathbf{H}_{i}^{*}\times\mathbf{H}^{i}\}\) \\ \hline Canonical spin & 0 & \(\frac{1}{4\omega^{2}}\text{Re}\{\mathbf{E}^{*}\cdot(\nabla)\mathbf{H}-\mathbf{ H}^{*}\cdot(\nabla)\mathbf{E}\}\) & \(\frac{1}{2\omega^{2}}\text{Re}\{\mathbf{E}_{i}^{*}\cdot(\nabla)\mathbf{H}^{i}- \mathbf{H}_{i}^{*}\cdot(\nabla)\mathbf{E}^{i}\}\) \\ \hline Poynting spin & \(\frac{1}{2\omega^{2}}\nabla\times\frac{1}{2}\text{Re}\{P^{*}\mathbf{v}\}\) & \(\frac{1}{2\omega^{2}}\nabla\times\frac{1}{2}\text{Re}\{\mathbf{E}^{*}\times \mathbf{H}\}\) & \(\frac{1}{\omega^{2}}\nabla\times\frac{1}{2}\text{Re}\{\mathbf{E}_{i}^{*}\times \mathbf{H}^{i}\}\) \\ \hline \end{tabular} \end{table} Table 1: Comparison of time-averaged energy, momentum and spin densities between time-harmonic waves in theories of linearised acoustics, electromagnetism and linearised gravity. Table inspired by [45, 46, 50]. For electromagnetism, the potential is considered to be in the Coulomb gauge. For linearised gravity, \(h_{ij}\) are spatial components of the metric perturbation in the transverse-traceless gauge, \(\hat{\mathbf{e}}_{i}\) are basis vectors, and any repeated indices are summed over (Einstein’s convention). Parameters \(\epsilon_{0}=1/(c^{2}\mu_{0})=c^{2}/(32\pi G)\) were chosen such that the time-averaged energy density takes the same form as the expression for the electromagnetic field. A larger version of this table using the index notation introduced in Sec. 4.2 is given in the supplementary material. ## 7 Conclusions We have discussed a known but overlooked decomposition which exists for the spin angular momentum density of light, in a similar way to the Poynting vector, which can be split into orbital and spin currents.
Spin is decomposed into two terms, which we call the canonical spin \({\bf s}_{c}\) and Poynting spin \({\bf s}_{p}\). We have further expressed the decomposition for time-varying fields, as well as in four-vector notation. The canonical spin is proportional to chiral linear momentum which can be transferred to chiral enantiomers in a positive or negative direction, depending on the enantiomer handedness. The resulting force is a chiral analogy to the (achiral) force due to radiation pressure--both are relatively strong, first-order forces and can act on matter in the dipole approximation. The mechanism for preferential photon absorption or scattering by chiral matter is normally associated with the photon's spin state (i.e., circular dichroism), and less so with the sign of its OAM (although these interactions have received recent attention). We emphasised, however, that light is capable of exerting a first-order chiral force which does not directly depend on the total SAM of light, but rather on its canonical spin, which can take a non-zero value even in linearly polarised fields. This is due to the second term in the spin decomposition, the Poynting spin, which by definition (being the curl of the Poynting vector) depends strongly on light's OAM: optical vortices twisting energy flow around the beam axis naturally generate non-zero Poynting spin, regardless of polarisation. We showed that under linear polarisation, an optical vortex's Poynting spin must be compensated by canonical spin, to produce a longitudinal chiral force in the absence of longitudinal SAM. Interestingly, this OAM-dependent chiral force is strongest in the centre of the vortex, where the electromagnetic energy density is minimal, along with achiral forces such as radiation pressure and the gradient force. It is possible to trap atoms and molecules (using a blue-detuned wavelength with respect to a strong resonance) in dark spots [52, 53, 54]. The ability to decompose spin appears to be a general property of wave fields, and as we demonstrated, is straightforward to perform in linearised acoustics and gravity, whose quanta are considered spin-0 and spin-2 respectively. This fact could be of significant interest to a broader community, beyond optics. ## 8 Acknowledgements C. R. acknowledges support from a Science and Technology Facilities Council (STFC) Doctoral Training Grant. F. J. R-F. and S. G. are supported by EIC-Pathfinder-CHIRALFORCE (101046961) funded by Innovate UK Horizon Europe Guarantee (UKRI project 10045438). A. J. V. is supported by EPSRC Grant EP/R513064/1. ## 9 Supplementary Information This section supports the main text with four main components. In section 9.1, we provide an alternative version of Table I of the manuscript, expressed completely using index notation, which highlights the similarities and differences between vector and tensor waves. Secondly, section 9.2 is a discussion of the equivalent SAM density decomposition in linearised acoustics, which informs the first column of Table I of the manuscript. In section 9.3, details of the units of the quantities handled in the manuscript's 4-vector decomposition are given. Finally, in section 9.4, the electromagnetic spin decomposition is presented in a general gauge, supporting comments made in the 4-vector section of the main text. ### Alternative table (index notation) An alternative version of Table 1 from the manuscript is given below, where all quantities are expressed using index notation.
\begin{table} \begin{tabular}{c c c c} & **Linearised acoustics** & **Electromagnetism** & **Linearised gravity** \\ \hline Potential field & Scalar field \(\varphi\) & Vector field \(A^{\mu}\) & Tensor field \(h_{\mu\nu}\) \\ \hline Choice of gauge & n/a & Coulomb gauge: & Transverse traceless gauge: \\ & & \(A^{0}=0\) & \(h_{0\mu}=h^{i}{}_{i}=0\) \\ & & \(\partial_{i}A^{i}=0\) & \(\partial_{i}h^{ij}=0\) \\ \hline Helmholtz equation & \(\nabla^{2}\varphi=-k^{2}\varphi\) & \(\nabla^{2}A^{i}=-k^{2}A^{i}\) & \(\nabla^{2}h^{ij}=-k^{2}h^{ij}\) \\ \hline Fields & \(\begin{array}{c}P=-i\omega\rho\varphi\\ v^{i}=\partial^{i}\varphi\end{array}\) & \(\begin{array}{c}E^{i}=i\omega A^{i}\\ \mu_{0}H^{i}=\epsilon^{ijk}\partial_{j}A_{k}\end{array}\) & \(\begin{array}{c}E^{ij}=i\omega h^{ij}\\ \mu_{0}H^{ij}=\epsilon^{ikm}\partial_{k}h^{j}{}_{m}\end{array}\) \\ \hline Spin & \(\frac{\rho}{2\omega}\epsilon^{ijk}\text{Im}\{v_{j}^{*}v_{k}\}\) & \(\frac{1}{4\omega}\epsilon^{ijk}\text{Im}\{\epsilon_{0}E_{j}^{*}E_{k}+\mu_{0}H_{ j}^{*}H_{k}\}\) & \(\frac{1}{2\omega}\epsilon^{ijk}\text{Im}\{\epsilon_{0}E_{jl}^{*}E_{k}{}^{l}+\mu_{0}H_{ jl}^{*}H_{k}{}^{l}\}\) \\ \hline Canonical spin & 0 & \(\frac{1}{4\omega^{2}}\text{Re}\{E_{j}^{*}\partial_{i}H^{j}-H_{j}^{*}\partial_{ i}E^{j}\}\) & \(\frac{1}{2\omega^{2}}\text{Re}\{E_{jn}^{*}\partial_{i}H^{jn}-H_{jn}^{*}\partial_{ i}E^{jn}\}\) \\ \hline Poynting spin & \(\frac{1}{2\omega^{2}}\epsilon^{ijk}\partial_{j}\frac{1}{2}\text{Re}\{P^{*}v_{ k}\}\) & \(\frac{1}{2\omega^{2}}\epsilon^{ijk}\partial_{j}\frac{1}{2}\text{Re}\{\epsilon_{klm }E^{*l}H^{m}\}\) & \(\frac{1}{\omega^{2}}\epsilon^{ijk}\partial_{j}\frac{1}{2}\text{Re}\{\epsilon_{klm }E^{*l}{}_{n}H^{mn}\}\) \\ \hline Energy density & \(\frac{1}{4}\{\beta P^{*}P+\rho v_{i}^{*}v^{i}\}\) & \(\frac{1}{4}\{\epsilon_{0}E_{j}^{*}E^{j}+\mu_{0}H_{j}^{*}H^{j}\}\) & \(\frac{1}{4}\{\epsilon_{0}E_{jk}^{*}E^{jk}+\mu_{0}H_{jk}^{*}H^{jk}\}\) \\ \hline Maxwell(-like) equations & \(\begin{array}{c}\partial_{i}P=i\omega\rho v^{i}\\ \partial_{i}v^{i}=i\omega\beta P\\ \epsilon^{ijk}\partial_{j}v_{k}=0\end{array}\) & \(\begin{array}{c}\partial_{i}E^{i}=0\\ \partial_{i}H^{i}=0\\ \epsilon^{ijk}\partial_{j}E_{k}=i\omega\mu_{0}H^{i}\\ \epsilon^{ijk}\partial_{j}H_{k}=-i\omega\epsilon_{0}E^{i}\end{array}\) & \(\begin{array}{c}\partial_{i}E^{ij}=0\\ \partial_{i}H^{ij}=0\\ \epsilon^{ikm}\partial_{k}E_{m}{}^{j}=i\omega\mu_{0}H^{ij}\\ \epsilon^{ikm}\partial_{k}H_{m}{}^{j}=-i\omega\epsilon_{0}E^{ij}\end{array}\) \\ \hline \end{tabular} \end{table} Table 2: Comparison between acoustics, electromagnetism and linearised gravity. Table inspired by [45, 46, 50]. The parameter \(\epsilon_{0}=1/(c^{2}\mu_{0})=c^{2}/(32\pi G)\) for linearised gravity was chosen such that the time-averaged energy density takes the same form as for electromagnetism [27]. ### Decomposition in linearised acoustics An acoustic wave field can be described by a scalar pressure field \(P\) and a vector velocity field \(\mathbf{v}\) which, in linearised acoustic theory, share a Maxwell-like relation [46, 47]. The time-harmonic equations are, \[\nabla\cdot\mathbf{v}=i\beta\omega P, \tag{30}\] \[\nabla P=i\rho\omega\mathbf{v}, \tag{31}\] where the constants \(\beta\) and \(\rho\) are the acoustic medium's compressibility and mass density respectively.
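To illustrate the acoustic column of the tables, the following finite-difference sketch (our own construction; units \(\rho=c=\beta=1\), with arbitrarily chosen wave directions and amplitudes) builds an interference of two plane-wave solutions of Eqs. (30)-(31), verifies the two Maxwell-like relations and the longitudinality constraint discussed below, and evaluates the resulting non-zero acoustic SAM density using the table's formula.

```python
import numpy as np

# Two interfering acoustic plane waves obeying Eqs. (30)-(31),
# in units rho = cs = beta = 1 (an assumption for convenience).
rho, cs, beta = 1.0, 1.0, 1.0
k = 2 * np.pi
omega = cs * k
waves = [(1.0, k * np.array([1.0, 0.0, 0.0])),
         (0.7j, k * np.array([0.0, 1.0, 0.0]))]   # (amplitude, wavevector)

def v_field(r):
    # Each wave oscillates along its own wavevector (longitudinal polarisation)
    return sum(a * kv / k * np.exp(1j * kv @ r) for a, kv in waves)

def p_field(r):
    # Pressure phasor; Eq. (31) fixes the amplitude P_w = rho * cs * a_w
    return sum(rho * cs * a * np.exp(1j * kv @ r) for a, kv in waves)

def jac(f, r, h=1e-6):
    # Central-difference Jacobian J[i, j] = d f_j / d x_i of a complex field
    J = np.zeros((3, 3), complex)
    for i in range(3):
        dr = np.zeros(3); dr[i] = h
        J[i] = (f(r + dr) - f(r - dr)) / (2 * h)
    return J

def grad(f, r, h=1e-6):
    g = np.zeros(3, complex)
    for i in range(3):
        dr = np.zeros(3); dr[i] = h
        g[i] = (f(r + dr) - f(r - dr)) / (2 * h)
    return g

r0 = np.array([0.1, 0.2, 0.0])
J = jac(v_field, r0)
div_v = np.trace(J)
curl_v = np.array([J[1, 2] - J[2, 1], J[2, 0] - J[0, 2], J[0, 1] - J[1, 0]])

print(np.isclose(div_v, 1j * beta * omega * p_field(r0)))        # Eq. (30)
print(np.allclose(grad(p_field, r0), 1j * rho * omega * v_field(r0)))  # Eq. (31)
print(np.allclose(curl_v, 0, atol=1e-6))                         # longitudinal field
v = v_field(r0)
print(rho / (2 * omega) * np.imag(np.cross(v.conj(), v)))        # non-zero acoustic SAM
```

Even though each constituent wave is purely longitudinal and carries no spin on its own, the interference pattern acquires a non-zero SAM density, foreshadowing the decomposition derived next.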
Compared to photons, acoustic phonons, which are spin-0 quanta in this regime, do not give acoustic fields as rich a vector structure as light. Constraining the velocity vector is the longitudinality condition, that is \(\nabla\times\mathbf{v}=\mathbf{0}\), a more restrictive analogy to light's transversality condition due to Gauss' law. Yet, \(\mathbf{v}\) can still rotate and generate acoustic SAM, which is expressed (time-averaged) by, \[\mathbf{S}_{ac}=\frac{\rho}{2\omega}\mathrm{Im}\{\mathbf{v}^{*}\times\mathbf{v}\}. \tag{32}\] In time-harmonic acoustic fields, where the velocity \(\mathbf{v}\) and displacement field \(\mathbf{r}\) vectors are related by \(\mathbf{v}=-i\omega\mathbf{r}\), particles in the acoustic medium have an elliptical motion. Meanwhile, an acoustic analogy to the Poynting vector can be defined by mixing the acoustic pressure and velocity fields, \[\mathbf{P}_{ac}=\frac{1}{2}\mathrm{Re}\{P^{*}\mathbf{v}\}. \tag{33}\] Taking the curl of \(\mathbf{P}_{ac}\) and making use of Eq. (31), the acoustic SAM emerges in what appears to be the acoustic analogy to the presented electromagnetic spin decomposition: \[\mathbf{S}_{ac}=-\frac{1}{2\omega^{2}}\mathrm{Re}\{P^{*}(\nabla\times \mathbf{v})\}+\frac{1}{2\omega^{2}}\nabla\times\mathbf{P}_{ac}. \tag{34}\] Due to the longitudinality condition \(\nabla\times\mathbf{v}=\mathbf{0}\), the first term in Eq. (34) vanishes, leaving the acoustic spin entirely proportional to the curl of the acoustic Poynting vector. Transverse phonons for which \(\nabla\times\mathbf{v}\neq\mathbf{0}\) can occur in viscous fluids or solids, although in these media, the acoustic field could no longer be described by a linearised theory. Compared to the electromagnetic spin decomposition, however, Eq. (34) highlights the structural distinction between spin-0 and spin-1 fields as only vorticity in the flow of energy can generate SAM in a linear acoustic field. ### On units Discussions in this section are related to the 4-vector extension of the decomposition presented in section 4.2 of the manuscript. The field strength tensor \(F_{\mu\nu}\) and conjugate field strength tensor \(G_{\mu\nu}\) are not of the same units. The former has SI units \(\mathrm{kg\,s^{-4}\,A^{-1}}\) whilst the latter has SI units \(\mathrm{m^{-1}\,kg\,s^{-5}\,A^{-1}}\). This comes from the form of the conjugate field strength tensor that can be derived using the companion Mathematica notebook [42]: \[G_{\mu\nu}=\left[\begin{array}{cccc}0&H_{x}/c&H_{y}/c&H_{z}/c\\ -H_{x}/c&0&-\epsilon_{0}E_{z}&\epsilon_{0}E_{y}\\ -H_{y}/c&\epsilon_{0}E_{z}&0&-\epsilon_{0}E_{x}\\ -H_{z}/c&-\epsilon_{0}E_{y}&\epsilon_{0}E_{x}&0\end{array}\right].
\tag{35}\] Finally, we summarize all the SI units of the mentioned tensors and constants: ### On the choice of gauge The electric field and magnetic field, respectively \(\mathbf{E}\) and \(\mathbf{H}\), can be expressed in terms of a scalar potential \(\varphi\) and a vector potential \(\mathbf{A}\), \[\begin{split}&\boldsymbol{\mathcal{E}}=-\frac{\partial \boldsymbol{\mathcal{A}}}{\partial t}-\nabla\varphi\\ &\mu_{0}\boldsymbol{\mathcal{H}}=\nabla\times\boldsymbol{ \mathcal{A}}\end{split} \tag{36}\] \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & \(\mathbf{E}\) & \(\mathbf{H}\) & \(\epsilon_{0}\) & \(\mu_{0}\) & \(\mathbf{A}\) & \(\phi\) & \(\mathbf{C}\) \\ \hline Units & \(\mathrm{m\,kg\,s^{-3}\,A^{-1}}\) & \(\mathrm{A\,m^{-1}}\) & \(\mathrm{kg^{-1}\,m^{-3}\,s^{4}\,A^{2}}\) & \(\mathrm{kg\,m\,s^{-2}\,A^{-2}}\) & \(\mathrm{kg\,m\,s^{-2}\,A^{-1}}\) & \(\mathrm{kg\,m^{2}\,s^{-3}\,A^{-1}}\) & \(\mathrm{A\,m^{-1}\,s}\) \\ \hline \end{tabular} \end{table} Table 3: Summary of the SI units of all tensors and constants mentioned in this work and respectively for the conjugate vector potential: \[\begin{split}\epsilon_{0}\mathbf{\mathcal{E}}&=-\nabla \times\mathbf{\mathcal{C}}\\ \mathbf{\mathcal{H}}&=-\frac{\partial\mathbf{\mathcal{C}}}{ \partial t}-\nabla\psi\end{split} \tag{37}\] When doing computations in the Coulomb gauge, where the scalar potentials \(\varphi=\psi=0\), we can easily express the electric and magnetic fields in terms of the vector potentials through: \[E^{j}=i\omega A^{j}(\omega,r),\qquad H^{j}=i\omega C^{j}(\omega,r), \tag{38}\] which implies that we can express the four-vector potentials as: \[A^{\mu}=\frac{1}{i\omega}\begin{pmatrix}0\\ \mathbf{E}\end{pmatrix},\qquad C^{\mu}=\frac{1}{i\omega}\begin{pmatrix}0\\ \mathbf{H}\end{pmatrix}. \tag{39}\] Working in the Coulomb gauge greatly simplifies the form of the spin decomposition, giving clearer physical intuition. However, it is possible to derive all the results presented in the main work without choosing a gauge. In a general gauge, we found that the total spin is given by: \[\begin{split} S^{\mu}&=\frac{1}{4}\operatorname{Re}\left\{A _{\nu}^{*}G^{\nu\mu}+C_{\nu}^{*}F^{\nu\mu}\right\}=\frac{1}{4}\operatorname{ Re}\left\{\left(\begin{array}{c}\frac{1}{c}\left(\mathbf{A}\cdot\mathbf{H}^{*}- \mathbf{C}\cdot\mathbf{E}^{*}\right)\\ \mu\mathbf{H}^{*}\times\mathbf{C}+\varepsilon\mathbf{E}^{*}\times\mathbf{A} \end{array}\right)+\frac{1}{c^{2}}\left(\begin{array}{c}0\\ \phi\mathbf{H}^{*}-\psi\mathbf{E}^{*}\end{array}\right)\right\}\\ &=\frac{1}{4\omega}\operatorname{Im}\left\{\left(\begin{array}{c}-\frac{2}{c }\mathbf{E}^{*}\cdot\mathbf{H}\\ \epsilon_{0}\mathbf{E}^{*}\times\mathbf{E}+\mu_{0}\mathbf{H}^{*}\times\mathbf{ H}\end{array}\right)+\left(\begin{array}{c}\frac{1}{c}\nabla\cdot\left(\psi\mathbf{E}^{*}- \phi\mathbf{H}^{*}\right)\\ \nabla\times\left(\varepsilon\phi\mathbf{E}^{*}+\mu\psi\mathbf{H}^{*}\right) \end{array}\right)\right\}.\end{split} \tag{40}\] Interestingly, the extra terms for a general gauge are total derivatives, meaning that the integrated helicity is gauge invariant up to a boundary term. This is because, generically, total derivatives vanish under suitable boundary conditions.
The canonical and Poynting spins now read as: \[\begin{split} S^{\mu}_{C}&=\frac{1}{4} \operatorname{Re}\left\{A_{\nu}^{*}\left(\partial^{\mu}C^{\nu}\right)-C_{\nu }^{*}\left(\partial^{\mu}A^{\nu}\right)\right\}&=\frac{1}{4} \operatorname{Re}\left\{\left(\begin{array}{c}-2i\omega(\phi^{*}\psi)/c^{3 }+i\omega(\mathbf{A}^{*}\cdot\mathbf{C}-\mathbf{C}^{*}\cdot\mathbf{A})/c\\ \mathbf{A}^{*}\cdot(\nabla)\mathbf{C}-\mathbf{C}^{*}\cdot(\nabla)\mathbf{A}+ \frac{1}{c^{2}}(\psi^{*}\nabla\phi-\phi^{*}\nabla\psi)\end{array}\right) \right\},\\ S^{\mu}_{P}&=\frac{1}{4}\operatorname{Re}\left\{C_{\nu}^{*}\left( \partial^{\nu}A^{\mu}\right)-A_{\nu}^{*}\left(\partial^{\nu}C^{\mu}\right) \right\}&=\frac{1}{4}\operatorname{Re}\left\{\left(\begin{array}{c}2i\omega (\phi^{*}\psi)/c^{3}+\left[(\mathbf{C}\cdot\nabla)^{*}\phi-(\mathbf{A}\cdot \nabla)^{*}\psi\right]/c\\ \nabla\times(\mathbf{A}\times\mathbf{C}^{*})-\mathbf{A}^{*}(\partial_{\mu}C^ {\mu})+\mathbf{C}^{*}(\partial_{\mu}A^{\mu})\end{array}\right)\right\},\end{split} \tag{41}\] where \(\partial_{\mu}A^{\mu}=-i\omega\phi/c^{2}+\nabla\cdot\mathbf{A}\), and similarly for \(C^{\mu}\); this quantity is zero in both the Lorentz and Coulomb gauges. An instantaneous, dual-symmetric, gauge-independent helicity current density four-vector is [41] (42) which can be decomposed into the canonical and Poynting spins in an arbitrary gauge as follows: (43) Note that \((\mathbf{\mathcal{C}}\cdot\nabla)\mathbf{\mathcal{A}}-(\mathbf{\mathcal{A}}\cdot\nabla) \mathbf{\mathcal{C}}=\nabla\times(\mathbf{\mathcal{A}}\times\mathbf{\mathcal{C}})+\mathbf{ \mathcal{C}}(\nabla\cdot\mathbf{\mathcal{A}})-\mathbf{\mathcal{A}}(\nabla\cdot\mathbf{ \mathcal{C}})\) and \(\mathcal{A}^{\mu}\partial_{\mu}\psi=\varphi(\partial\psi/\partial t)/c^{2}+\mathbf{ \mathcal{A}}\cdot\nabla\psi\) and similarly \(\mathcal{C}^{\mu}\partial_{\mu}\varphi=\psi(\partial\varphi/\partial t)/c^{2}+ \mathbf{\mathcal{C}}\cdot\nabla\varphi\).
2307.07075
Adaptive Coding and Modulation Aided Mobile Relaying for Millimeter-Wave Flying Ad-Hoc Networks
The emerging drone swarms are capable of carrying out sophisticated tasks in support of demanding Internet-of-Things (IoT) applications by synergistically working together. However, the target area may be out of the coverage of the ground station and it may be impractical to deploy a large number of drones in the target area due to cost, electromagnetic interference and flight-safety regulations. By exploiting the innate \emph{agility} and \emph{mobility} of unmanned aerial vehicles (UAVs), we conceive a mobile relaying-assisted drone swarm network architecture, which is capable of extending the coverage of the ground station and enhancing the effective end-to-end throughput. Explicitly, a swarm of drones forms a data-collecting drone swarm (DCDS) designed for sensing and collecting data with the aid of their mounted cameras and/or sensors, and a powerful relay-UAV (RUAV) acts as a mobile relay for conveying data between the DCDS and a ground station (GS). Given a time period, in order to maximize the data delivered whilst minimizing the delay imposed, we harness an $\epsilon$-multiple objective genetic algorithm ($\epsilon$-MOGA) assisted Pareto-optimization scheme. Our simulation results demonstrate that the proposed mobile relaying is capable of delivering more data. As specific examples investigated in our simulations, our mobile relaying-assisted drone swarm network is capable of delivering $45.38\%$ more data than the benchmark solutions, when a stationary relay is available, and it is capable of delivering $26.86\%$ more data than the benchmark solutions when no stationary relay is available.
Jiankang Zhang, Sheng Chen, Wei Koong Chai, Lajos Hanzo
2023-07-13T21:56:04Z
http://arxiv.org/abs/2307.07075v1
# Adaptive Coding and Modulation Aided Mobile Relaying for Millimeter-Wave Flying Ad-Hoc Networks ###### Abstract The emerging drone swarms are capable of carrying out sophisticated tasks in support of demanding Internet-of-Things (IoT) applications by synergistically working together. However, the target area may be out of the coverage of the ground station and it may be impractical to deploy a large number of drones in the target area due to cost, electromagnetic interference and flight-safety regulations. By exploiting the innate _agility_ and _mobility_ of unmanned aerial vehicles (UAVs), we conceive a mobile relaying-assisted drone swarm network architecture, which is capable of extending the coverage of the ground station and enhancing the effective end-to-end throughput. Explicitly, a swarm of drones forms a data-collecting drone swarm (DCDS) designed for sensing and collecting data with the aid of their mounted cameras and/or sensors, and a powerful relay-UAV (RUAV) acts as a mobile relay for conveying data between the DCDS and a ground station (GS). Given a time period, in order to maximize the data delivered whilst minimizing the delay imposed, we harness an \(\epsilon\)-multiple objective genetic algorithm (\(\epsilon\)-MOGA) assisted Pareto-optimization scheme. Our simulation results demonstrate that the proposed mobile relaying is capable of delivering more data. As specific examples investigated in our simulations, our mobile relaying-assisted drone swarm network is capable of delivering \(45.38\%\) more data than the benchmark solutions, when a stationary relay is available, and it is capable of delivering \(26.86\%\) more data than the benchmark solutions when no stationary relay is available. Unmanned aerial vehicle, millimeter wave, beamforming, aeronautical communications, drone swarm, adaptive coding and modulation ## I Introduction As an emerging technology, unmanned aerial vehicle (UAV) assisted communications have been proposed for mission-critical scenarios as well as for a range of other paradigms [1, 2]. Furthermore, by autonomously forming a flying ad-hoc network (FANET) [3, 4, 5] from UAVs, the dependence on the conventional terrestrial communication infrastructure can be significantly reduced. Hence FANETs offer a promising solution both for industries and various other sectors of human life [2], including but not limited to emergency communications [1], flying base stations [6], as well as delivery, monitoring and surveillance applications in such scenarios [7]. To elaborate, FANETs can be swiftly and flexibly deployed for providing rapid response to the above-mentioned emergency situations. Although the UAVs in an FANET are capable of communicating with each other relying on UAV-to-UAV communication links, a reliable high-rate communication solution is required to enable them to communicate with the GS, in order for them to complete their missions, including sending back the data collected by their cameras and other sensors as well as for receiving information to be disseminated. Typically, routing relying on multi-hop relaying is an efficient solution for exchanging information between a FANET and a GS when there are sufficiently many UAVs in the FANET for establishing at least one direct end-to-end link. Existing routing strategies may be divided into topology-based [3, 8, 9, 10] and location-based routing protocols [11, 12].
Topology-based routing methods suffer either from a huge overhead required for maintaining a routing table or from a long delay during the route discovery process. By contrast, location-based routing protocols typically suffer from routing holes and blind path problems. Additionally, in order to establish end-to-end routing for both topology-based and location-based routing protocols, each UAV must have at least one other UAV within its communication range. Furthermore, there has to be at least one UAV which can directly communicate with the GS. In this scenario, Do _et al._[13] investigated a UAV-based non-orthogonal multiple access (NOMA) scheme and optimized its outage performance by appropriately adjusting the relay-UAV's location. However, in most cases, it is challenging to deploy a large number of UAVs within a specific area, due to cost, electromagnetic interference and flight-safety regulations. When there is an insufficient number of UAVs to ensure that a direct end-to-end link can be established or there are obstacles, such as hills or large buildings, classical stationary relaying and routing strategies will not work. The highly dynamic topology and high mobility of UAVs also impose challenges both on routing and on link connectivity as well as concerning the signal processing delay. As a remedy, UAVs exhibit nimble maneuverability, which makes them eminently suitable for mobile relaying in delay-tolerant applications. Hence, instead of relying on routing algorithms based on multi-hop relaying, we focus on the new paradigm of _mobile relaying_[14, 15, 16, 17, 18] offered by the controllable flexibility of UAVs. The flexibility and battery-powered nature of UAVs impose some challenges, but also offer some potential opportunities for drone-based communications and data sensing as well as data collecting. Explicitly, coverage, end-to-end throughput, power consumption and link reliability have been the key metrics to be considered, which can be maximized/minimized by optimizing the UAV's position, trajectory and charging/discharging strategy. Explicitly, Frew _et al._[14] proposed to load data, carry it close to the destination and offload the data with the aid of a buffer on the mobile relay node, which was termed a '_data ferry_'. Although the philosophy was pioneered by Frew _et al._, only a simple example of maintaining a reliable communication link between a static source node (SN) and a static destination node (DN) was provided; there was no specific network architecture design or network optimization. As an essential metric for communications, the maximization of the throughput has attracted extensive attention in the study of mobile relaying. Explicitly, Zeng _et al._[17] maximized the throughput of UAV-aided mobile relaying systems by optimizing the source/relay transmit power along with the relay's trajectory. As a further development based on [17], Lin _et al._[21] maximized the throughput by jointly optimizing the source/relay transmit power along with the relay's trajectory as well as the time-slot pairing for each data packet received and forwarded. Li _et al._[19] maximized the throughput in the context of UAV-assisted cognitive mobile relay networks. Explicitly, a UAV acted as a mobile relay between the primary user transmitter (PUT) and primary user receiver (PUR) as well as the secondary user transmitter (SUT) and secondary user receiver (SUR).
As a further advance, the sum rate of all UAVs was maximized by Zhao _et al._[22] by jointly optimizing the UAV trajectory and the non-orthogonal multiple access (NOMA) precoding. As a further development, Liu _et al._[20] maximized the average downlink throughput by jointly optimizing the UAV trajectory, the reconfigurable intelligent surface (RIS) based passive beamforming and the source power allocation for each time slot. By contrast, Pang _et al._[23] proposed to deploy RIS on UAVs for maximizing the average achievable rate by jointly optimizing the trajectory and the RIS phase shifts. Additionally, mobile relaying has also been extended both to NOMA systems [24] and to hybrid free-space optical (FSO) as well as radio frequency (RF) systems [25] in order to maximize the throughput and improve the relaying link reliability, respectively. But naturally, mobile relaying will not be sustainable if the UAV's battery capacity is limited and no additional power supply is available. Hence, their energy efficiency was also considered by researchers. Zhao _et al._[18] aimed for maximizing the efficiency defined as the weighted sum of the energy efficiency during information transmission and the wireless power transmission efficiency. The wireless power transfer from flying energy sources to UAVs was further optimized by Oubbati _et al._[26] by relying on multiagent deep reinforcement learning. Recently, drone swarms equipped with cameras/sensors have become a promising technology in many applications, such as video monitoring, remote sensing, disaster rescue, aerial photography and reconnaissance, which typically require high-rate communication between the drone swarms and the GS. Massive multiple-input multiple-output (MIMO) schemes relying on a large number of antennas constitute a promising solution for serving a swarm of drones in high-rate and high-reliability communications [27]. Explicitly, hundreds of antennas deployed at the GS are capable of focusing the energy into narrow pencil-beams for attaining huge throughput and energy efficiency improvements with the aid of transmit precoding (TPC) for the downlink (DL) and receiver combining (RC) for the uplink (UL). Research efforts have also been devoted to measuring, analysing and remodelling the air-to-ground channel by jointly considering mobility, shadowing, line-of-sight (LoS) and dynamic propagation conditions [28, 29]. The heavily-occupied sub-6 GHz frequency band is no longer sufficient to meet the ultra-high data-traffic requirements of UAV communications; hence, the utilization of the millimeter-wave (mmWave) frequency bands has become a promising direction, which is also feasible for UAV deployment considering the half-wavelength rule of antenna theory. However, the signals transmitted in the centimetre wave and mmWave bands can easily be blocked by obstacles [30]. Thus, the communication distance becomes very limited. Additionally, most UAVs travel at a speed in the range of 30 km/h to 460 km/h heading in random directions, which imposes challenges in terms of their connectivity, coordination, directional communications and link adaptation, etc. [31]. Furthermore, it is challenging to implement the traditional link adaptation of adaptive coding and modulation (ACM), which relies on the near-instantaneous signal-to-noise-ratio (SNR), in practical aeronautical communications. This is because it is required to frequently estimate the instantaneous SNR and frequently change the ACM mode.
Hence, advanced TPC and link adaptation schemes have to be conceived for tackling these challenges of drone swarm communications. Our distance-based ACM of [32] is capable of supporting high-rate aeronautical communication with the aid of a promptly determined communication-distance threshold. In Table I we explicitly compare the main contributions of [14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25], which share the most similar objective of relaying data from a source node to a destination node. By observing Table I, we can see that the existing mobile relaying solutions harness a UAV for mobile relaying between a GS and the terminal users/sensors, either to maximize the end-to-end throughput or to reduce the outage probability. But as a prerequisite, the UAV relay should be able to establish a direct communication link both with the GS and with the terminal users/sensors. When the terminal users/sensors are far away from the GS and hence it is impossible to build up a communication link harnessing a single relay node, multiple relay nodes have to be deployed, which imposes a great challenge on the network in terms of cost, network design, configuration and network optimization. The 'data ferry' philosophy pioneered by Frew [14] is capable of periodically ferrying data from the source node to the destination node, even if the relay node cannot maintain direct communication with the source node and the destination node at the same time. But again, no comprehensive network design aiming for network optimization was provided. Inspired by the _data ferry_ philosophy in the open literature, we conceive a distance-based ACM scheme for relay-assisted drone swarm communications relying on mmWave massive MIMO solutions, which harnesses a swarm of small or micro drones for data sensing/collecting and relies on a powerful fixed-wing UAV as a mobile relaying node. The powerful fixed-wing UAV commutes between the data sensing/collecting drones and the GS to relay their data to the GS. Furthermore, we jointly optimize multiple objectives with the aid of our Pareto-optimization algorithm, rather than optimizing a single objective or converting multiple objectives into alternative sub-optimization problems. Our contributions are summarized as follows:
1. We propose a UAV communication architecture consisting of a data-collecting drone swarm (DCDS), a relay-UAV (RUAV) and a GS. Explicitly, the DCDS collects information via its cameras or sensors across the target area, whilst the RUAV, equipped with a large-scale mmWave antenna array, acts as a mobile relay for conveying data between the DCDS and the GS.
2. We conceive a distance-based ACM aided and relay-assisted drone swarm communication scheme by switching the ACM modes based on the communication distance and by exploiting the controllable mobility of the RUAV. Explicitly, the channel qualities between the RUAV and the GS as well as between the RUAV and the DCDS are dominated by the communication distance, since the channel exhibits Rician fading instead of Rayleigh fading due to the high altitude of RUAVs.
3. We propose a buffer-aided mobile-relay-assisted drone swarm communication protocol for the challenging scenario, where the DCDS-to-RUAV and RUAV-to-GS links do not exist concurrently. We define the _effective end-to-end throughput_ metric, which is then used as one of the multiple objectives of the optimization problem formulated.
Furthermore, in order to maximize the effective end-to-end throughput and to simultaneously minimize the delay imposed, we develop an \(\epsilon\)-multiple objective genetic algorithm (\(\epsilon\)-MOGA) assisted Pareto-optimization scheme for jointly optimizing the data uploading and offloading points, the maximum factor of caching data, and the minimum factor of offloading data, given a specific buffer size as well as the distance between the GS and the DCDS.1 Footnote 1: The factors of caching and off-loading controlling the DCDS and RUAV actions will be explicitly exemplified later.
The rest of this paper is organized as follows. Section II presents our mobile-relaying-aided drone swarm mmWave communications model. Both the throughput of the DCDS-to-RUAV link and the throughput of the RUAV-to-GS link are analyzed in Section III. In Section IV, the multiple-objective optimization problem of relaying-assisted FANETs is formulated, which covers both the static relaying and the mobile relaying scenarios. Our \(\epsilon\)-MOGA assisted Pareto-optimization scheme is also developed in this section. The implementation issues and the computational complexity are also discussed. Section V is devoted to simulation experiments, which include both the scenario where a stationary relay is viable and the one where it is not. In Section VI, we conclude and briefly discuss our future research ideas.
## II System model
We assume that a drone swarm is supported by a UAV relay, where a GS centrally processes the signals collected by the remote drone swarms, and each drone swarm is served by a relay UAV (RUAV). We also assume that the GS is capable of simultaneously serving \(B\) RUAVs and the corresponding \(B\) drone swarms by relying on the ubiquitous orthogonal frequency-division multiple access (OFDMA) protocol for supporting the \(B\) RUAVs and drone swarms. Furthermore, a RUAV simultaneously serves \(K\) drones, that is, there are \(K\) drones in a drone swarm. More specifically, we illustrate an end-to-end link between a drone swarm and the GS in Fig. 1, where the drone swarms collect data via their cameras or other sensors in the target area, whilst a RUAV actively relays the collected data to the GS for central signal processing. Explicitly, each drone only has a single antenna due to its constrained fuselage size and its limited energy. A drone swarm consisting of \(K\) drones simultaneously transmits its collected information to a RUAV, which is more powerful in terms of its flying speed, energy and communication equipment. Furthermore, the RUAV also utilizes the OFDMA protocol for receiving and forwarding the DCDSs' data, which allows it to transmit and receive data at the same time without its own transmitted signals jamming its own received signal. The RUAV has \(N_{\text{total}}\) antennas, of which \(N_{r}\) antennas are data-receiving antennas (DRAs) utilized for receiving data, whilst \(N_{t}\) antennas are data-transmitting antennas (DTAs) used for sending data. We assume that \(N_{t}=K\leq N_{r}\leq N_{\text{total}}\), in line with the maximum attainable spatial degrees of freedom for relaying the DCDS's data. The GS has a large-scale antenna array, having \(N_{g}\) DRAs. The cyclic prefix (CP) of length \(N_{cp}\) is longer than the channel impulse response (CIR), which ensures that there is no inter-symbol interference and that the receiver can process the signals on a subcarrier-by-subcarrier basis.
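To make the above dimensioning concrete, the following minimal Python sketch checks the antenna and CP constraints; the values of \(N_r\) and \(K\) follow our simulation setup of Section V, whilst \(N_{\text{total}}\), \(N_c\), \(N_{cp}\) and the CIR length are merely illustrative assumptions.

```python
# Minimal sketch of the system dimensioning constraints. N_r and K follow
# Section V; N_total, N_c, N_cp and L_cir are illustrative assumptions.
N_total = 128   # total number of RUAV antennas (assumed)
N_r = 64        # data-receiving antennas (DRAs) at the RUAV
K = 8           # drones per swarm, equal to the N_t DTAs activated
N_t = K
N_c = 512       # number of OFDM subcarriers (assumed)
N_cp = 64       # cyclic-prefix length in samples (assumed)
L_cir = 32      # channel impulse response length in samples (assumed)

# Spatial degrees of freedom: the RUAV must be able to separate the K streams.
assert N_t == K <= N_r <= N_total
# The CP must cover the CIR, so that there is no inter-symbol interference and
# the receiver can process the signals on a subcarrier-by-subcarrier basis.
assert N_cp > L_cir
```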
To simplify the notation, we omit the OFDM symbol index and the subcarrier index in our investigation. The end-to-end communication link between the DCDS and GS consists of the DCDS-to-RUAV link and the RUAV-to-GS link, which will be elaborated on in Subsection II-A and Subsection II-B, respectively.
### _Signal model of DCDS-to-RUAV_
The discrete signal received at the RUAV \(\mathbf{r}\in\mathbb{C}^{N_{r}}\) can be formulated as \[\mathbf{r}=\mathbf{H}_{0}^{(d/r)}\big(\mathbf{P}_{\text{rx},0}^{(d,x)}\big)^{\frac{1}{2}}\mathbf{s}_{0}+\sum_{a=1}^{A}\mathbf{H}_{a}^{(d/r)}\big(\mathbf{P}_{\text{rx},a}^{(d,x)}\big)^{\frac{1}{2}}\mathbf{s}_{a}+\mathbf{n}, \tag{1}\] where \(\mathbf{H}_{a}^{(d/r)}\in\mathbb{C}^{N_{r}\times K}\) represents the uplink MIMO channel between the \(a^{\text{th}}\) drone swarm and the RUAV, \(\mathbf{s}_{a}=\big[s_{1}^{(a)}\ s_{2}^{(a)}\cdots s_{K}^{(a)}\big]^{\text{T}}\) is the \(a^{\text{th}}\) drone swarm's transmit signal vector having a normalized transmit power of \(\mathcal{E}\left\{\mathbf{s}_{a}\mathbf{s}_{a}^{\text{H}}\right\}=\mathbf{I}_{K}\), whilst \(\mathbf{n}\in\mathbb{C}^{N_{r}}\) is the additive white Gaussian noise (AWGN) with zero mean vector and covariance matrix of \(\sigma_{n}^{2}\mathbf{I}_{N_{r}}\), and \(\mathbf{P}_{\text{rx},a}^{(d,x)}=\text{diag}\big\{P_{\text{rx},a(1)}^{(d,x)},\cdots,P_{\text{rx},a(K)}^{(d,x)}\big\}\) contains the powers of the \(K\) drones' signals received at the RUAV. In (1), the subscript \(a=0\) denotes the desired drone swarm, whilst the subscript \(a\in\{1,2,\cdots,A\}\) denotes the \(a^{\text{th}}\) co-channel drone swarm contaminating the desired one, with \(A\) being the number of interfering drone swarms. Clearly, \(A\leq B-1\). Furthermore, in the superscript, \(x=p\) represents pilot training and \(x=s\) denotes data transmission. Still referring to (1), the received power \(P_{\text{rx},a(k)}^{(d,x)}\) is a function of the transmit power and of the path loss, which is given by \[P_{\text{rx},a(k)}^{(d,x)}=P_{\text{tx},a(k)}^{(d,x)}10^{-0.1L_{\text{path loss},a,k}}, \tag{2}\] where the path loss \(L_{\text{path loss},a,k}\) of mmWave signals is modelled as [30] \[L_{\text{path loss},a,k}[\text{dB}]=\alpha+\beta 10\log_{10}\left(d_{a,k}\right)+L_{\sigma}. \tag{3}\] In (3), \(\alpha\) is the path loss in decibels (dB) at the reference distance \(d_{0}\) calculated using the Friis free-space path loss model, \(\beta\) is the linear slope, \(d_{a,k}\) is the distance between the RUAV and the \(k\)th drone of the \(a^{\text{th}}\) drone swarm in metres, and \(L_{\sigma}\) is the shadow fading [33], which is a zero-mean Gaussian random variable with standard deviation \(\sigma\) in dB.
Fig. 1: UAV-relay aided drone swarm communications.
The DCDS-to-RUAV MIMO channel is an air-to-air channel, which is dominated by its line-of-sight (LoS) component, but scattered components may still exist that impinge from reflections off mountains/buildings, etc. Hence, the DCDS-to-RUAV MIMO channel is modeled as a Rician channel, which is formulated as \[\mathbf{H}_{a}^{(d/r)}=\nu_{r}\mathbf{H}_{a,\mathrm{d}}^{(d/r)}+\zeta_{r}\mathbf{H}_{a,\mathrm{r}}^{(d/r)}, \tag{4}\] where we have \(\nu_{r}=\sqrt{K_{\text{Rice},r}/\left(1+K_{\text{Rice},r}\right)}\) and \(\zeta_{r}=\sqrt{1/\left(1+K_{\text{Rice},r}\right)}\) with \(K_{\text{Rice},r}\) being the Rician factor, while \(\mathbf{H}_{a,\mathrm{d}}^{(d/r)}\) is the deterministic part of the Rician channel and \(\mathbf{H}_{a,\mathrm{r}}^{(d/r)}\) is the scattered component of the Rician channel.
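As a concrete illustration of (2)-(4), the following Python sketch draws one realization of the mmWave path loss and of the Rician DCDS-to-RUAV channel, generating the scattered part in the correlated form that will be introduced in (5) below; all numerical values (\(\alpha\), \(\beta\), \(\sigma\), the Rician factor, the LoS matrix and the correlation profile) are illustrative assumptions rather than the measured parameters of [30], [33].

```python
import numpy as np

rng = np.random.default_rng(0)

def path_loss_db(d, alpha=61.4, beta=2.0, sigma=5.8):
    """Path loss of (3) in dB: alpha + beta*10*log10(d) + shadowing L_sigma.
    alpha/beta/sigma are illustrative values, not those measured in [30]."""
    return alpha + beta * 10.0 * np.log10(d) + sigma * rng.standard_normal()

def rician_channel(H_los, R_rx, K_rice_db=5.0):
    """One realization of the Rician MIMO channel of (4)."""
    K_rice = 10.0 ** (K_rice_db / 10.0)
    nu = np.sqrt(K_rice / (1.0 + K_rice))   # weight of the LoS part
    zeta = np.sqrt(1.0 / (1.0 + K_rice))    # weight of the scattered part
    N_r, K = H_los.shape
    G = (rng.standard_normal((N_r, K))
         + 1j * rng.standard_normal((N_r, K))) / np.sqrt(2.0)  # i.i.d. CN(0,1)
    R_sqrt = np.linalg.cholesky(R_rx)       # correlated scattered part, cf. (5)
    return nu * H_los + zeta * R_sqrt @ G

# Example: N_r = 64 DRAs, K = 8 single-antenna drones; the LoS part is a
# unit-modulus placeholder rather than a geometrically derived steering matrix.
N_r, K = 64, 8
H_los = np.exp(1j * 2.0 * np.pi * rng.random((N_r, K)))
R_rx = 0.5 ** np.abs(np.subtract.outer(np.arange(N_r), np.arange(N_r)))
H = rician_channel(H_los, R_rx)
# Received power of (2) at d = 2 km: 78 mW transmit power in dBm minus the path loss.
P_rx_dbm = 10.0 * np.log10(78.0) - path_loss_db(2000.0)
```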
Typically, the Rician factor is affected by the altitude of the UAV [5, 28, 34], where a higher UAV is more likely to experience a higher Rician factor, namely a stronger LoS component. Owing to the minimum safety separation distance, the transmit antennas on different drones experience uncorrelated fading, whilst the receive antennas deployed on RUAVs are located at the same site. Hence there may exist correlation between the \(N_{r}\) DRAs. Therefore, the scattered component \(\mathbf{H}_{a,\mathrm{r}}^{(d/r)}\) can be formulated as \[\mathbf{H}_{a,\mathrm{r}}^{(d/r)}=\left(\mathbf{R}_{\text{rx},a}^{d/r}\right)^{\frac{1}{2}}\mathbf{G}_{a}^{d/r}, \tag{5}\] where \(\mathbf{R}_{\text{rx},a}^{d/r}\in\mathbb{C}^{N_{r}\times N_{r}}\) is the spatial correlation matrix of the \(N_{r}\) DRAs, and the entries of \(\mathbf{G}_{a}^{d/r}\in\mathbb{C}^{N_{r}\times K}\) are independent and identically distributed (i.i.d.) complex random variables obeying the distribution \(\mathcal{CN}(0,1)\). The transmitted vector \(\mathbf{s}_{0}\) can be detected by computing the inner product between the received vector \(\mathbf{r}\) and a linear receiver combining (RC) matrix \(\mathbf{W}_{0}^{(d/r)}\), which is expressed as \[\widehat{\mathbf{s}}_{0}=\sqrt{\lambda^{(d/r)}}\mathbf{W}_{0}^{(d/r)}\mathbf{H}_{0}^{(d/r)}\big(\mathbf{P}_{\text{rx},0}^{(d,x)}\big)^{\frac{1}{2}}\mathbf{s}_{0}+\sqrt{\lambda^{(d/r)}}\mathbf{W}_{0}^{(d/r)}\sum_{a=1}^{A}\mathbf{H}_{a}^{(d/r)}\big(\mathbf{P}_{\text{rx},a}^{(d,x)}\big)^{\frac{1}{2}}\mathbf{s}_{a}+\widetilde{\mathbf{n}}, \tag{6}\] where \(\lambda^{(d/r)}=\frac{1}{K}\text{Tr}\big\{\mathcal{E}\big\{\mathbf{W}_{0}^{(d/r)}\big(\mathbf{W}_{0}^{(d/r)}\big)^{\text{H}}\big\}\big\}\) is a normalization factor, and \(\widetilde{\mathbf{n}}=\sqrt{\lambda^{(d/r)}}\mathbf{W}_{0}^{(d/r)}\mathbf{n}\) is the effective noise after applying the RC operation. The RC matrix based on the classical matched filter (MF) is given by \(\mathbf{W}_{0}^{(d/r)}=\big(\widetilde{\mathbf{H}}_{0}^{(d/r)}\big)^{\text{H}}\), where \(\widetilde{\mathbf{H}}_{0}^{(d/r)}\) is the estimate of \(\mathbf{H}_{0}^{(d/r)}\). Upon using the optimal minimum mean square error (MMSE) channel estimator [35], the channel estimate \(\widetilde{\mathbf{H}}_{a}^{(d/r)}\), \(a=0,1,\cdots,A\), is given by Eq.
(7), where \(\widetilde{\mathbf{S}}^{(p)}\in\mathbb{C}^{K\times K}\) is the pilot symbol matrix associated with \(\widetilde{\mathbf{S}}^{(p)}\big(\widetilde{\mathbf{S}}^{(p)}\big)^{\text{H}}=\mathbf{I}_{K}\), and \(\widetilde{\mathbf{N}}\in\mathbb{C}^{N_{r}\times K}\) is the noise matrix over the \(K\) pilots, while \(\mathbf{\Psi}_{\mathbf{H}_{a,\mathrm{r}}^{(d/r)}}\) is the covariance matrix of \(\mathbf{vec}\big(\mathbf{H}_{a,\mathrm{r}}^{(d/r)}\big)\) given by \[\mathbf{\Psi}_{\mathbf{H}_{a,\mathrm{r}}^{(d/r)}}=\mathcal{E}\left\{\mathbf{vec}\big(\mathbf{H}_{a,\mathrm{r}}^{(d/r)}\big)\mathbf{vec}\big(\mathbf{H}_{a,\mathrm{r}}^{(d/r)}\big)^{\text{H}}\right\}=\mathbf{I}_{K}\otimes\mathbf{R}_{\text{rx},a}^{d/r}, \tag{8}\] and \(\mathbf{\Phi}_{\mathbf{H}_{a,\mathrm{r}}^{(d/r)}}\) in (7) is defined by \[\mathbf{\Phi}_{\mathbf{H}_{a,\mathrm{r}}^{(d/r)}}=\left(\sigma_{n}^{2}\big(\mathbf{P}_{\text{rx},a}^{(d,p)}\big)^{-1}\otimes\mathbf{I}_{N_{r}}+\zeta_{r}^{2}\mathbf{\Psi}_{\mathbf{H}_{a,\mathrm{r}}^{(d/r)}}+\zeta_{r}^{2}\sum_{a^{\prime}=0,a^{\prime}\neq a}^{A}\widetilde{\mathbf{P}}_{\text{rx},a^{\prime}}^{(d,p)}\mathbf{\Psi}_{\mathbf{H}_{a^{\prime},\mathrm{r}}^{(d/r)}}\right)^{-1}, \tag{9}\] in which \(\widetilde{\mathbf{P}}_{\text{rx},a^{\prime}}^{(d,p)}\) is given as \[\widetilde{\mathbf{P}}_{\text{rx},a^{\prime}}^{(d,p)}=\left(\mathbf{P}_{\text{rx},a^{\prime}}^{(d,p)}\big(\mathbf{P}_{\text{rx},a}^{(d,p)}\big)^{-1}\right)\otimes\mathbf{I}_{N_{r}}. \tag{10}\] The true channel \(\mathbf{vec}\big(\mathbf{H}_{a}^{(d/r)}\big)\) is equal to the MMSE estimate \(\mathbf{vec}\big(\widetilde{\mathbf{H}}_{a}^{(d/r)}\big)\) plus the channel estimation error \(\mathbf{vec}\big(\bar{\mathbf{H}}_{a}^{(d/r)}\big)\): \[\mathbf{vec}\big(\mathbf{H}_{a}^{(d/r)}\big)=\mathbf{vec}\big(\widetilde{\mathbf{H}}_{a}^{(d/r)}\big)+\mathbf{vec}\big(\bar{\mathbf{H}}_{a}^{(d/r)}\big). \tag{11}\] Clearly, the estimation error \(\mathbf{vec}\big(\bar{\mathbf{H}}_{a}^{(d/r)}\big)\) is independent of the estimate \(\mathbf{vec}\big(\widetilde{\mathbf{H}}_{a}^{(d/r)}\big)\), and it obeys the distribution \(\mathcal{CN}\left(\mathbf{0}_{N_{r}K},\mathbf{\Psi}_{\bar{\mathbf{H}}_{a,\mathrm{r}}^{(d/r)}}\right)\) with the covariance matrix \(\mathbf{\Psi}_{\bar{\mathbf{H}}_{a,\mathrm{r}}^{(d/r)}}\) given by \[\mathbf{\Psi}_{\bar{\mathbf{H}}_{a,\mathrm{r}}^{(d/r)}}=\zeta_{r}^{2}\mathbf{\Psi}_{\mathbf{H}_{a,\mathrm{r}}^{(d/r)}}-\zeta_{r}^{2}\mathbf{\Psi}_{\widetilde{\mathbf{H}}_{a,\mathrm{r}}^{(d/r)}}. \tag{12}\] The covariance matrix \(\mathbf{\Psi}_{\widetilde{\mathbf{H}}_{a,\mathrm{r}}^{(d/r)}}\) of the MMSE channel estimate in (12) is given by \[\mathbf{\Psi}_{\widetilde{\mathbf{H}}_{a,\mathrm{r}}^{(d/r)}}=\mathbf{\Psi}_{\mathbf{H}_{a,\mathrm{r}}^{(d/r)}}\mathbf{\Phi}_{\mathbf{H}_{a,\mathrm{r}}^{(d/r)}}\mathbf{\Psi}_{\mathbf{H}_{a,\mathrm{r}}^{(d/r)}}. \tag{13}\]
### _Signal model of RUAV-to-GS_
The \(K\) drones' signals are detected and forwarded by the RUAV to the GS with the aid of its \(K\) DTAs.
Since there are \(B\) RUAVs, the signal vector received at the GS can be written as \[\mathbf{y}=\sqrt{P_{\text{rx},0}^{(r,x)}}\mathbf{H}_{0}^{(r/g)}\mathbf{x}_{0}+\sum_{b=1}^{B-1}\sqrt{P_{\text{rx},b}^{(r,x)}}\mathbf{H}_{b}^{(r/g)}\mathbf{x}_{b}+\mathbf{v}, \tag{14}\] where \(b=0\) indicates the desired RUAV, and \(b\neq 0\) refers to the interfering RUAVs, while \(\mathbf{H}_{b}^{(r/g)}\in\mathbb{C}^{N_{g}\times K}\) is the MIMO channel matrix between the \(b\)th RUAV and the GS, \(\mathbf{x}_{b}\in\mathbb{C}^{K}\) is the \(b\)th RUAV's transmitted signal vector, and \(\mathbf{v}\in\mathbb{C}^{N_{g}}\) is the AWGN having the covariance matrix \(\sigma_{v}^{2}\mathbf{I}_{N_{g}}\). The received power \(P_{\text{rx},b}^{(r,x)}\) obeys a path loss model similar to (3), but with a different shadowing factor \(L_{\sigma}^{(g)}\), since the local environment of the GS is different from that of the RUAV. The air-to-ground channel of the RUAV-to-GS link is also Rician, but it suffers from stronger scattering and reflection. Hence, the MIMO channel matrix \(\mathbf{H}_{b}^{(r/g)}\) can be expressed as \[\mathbf{H}_{b}^{(r/g)}=\nu_{g}\mathbf{H}_{b,\mathrm{d}}^{(r/g)}+\zeta_{g}\mathbf{H}_{b,\mathrm{r}}^{(r/g)}, \tag{15}\] where \(\nu_{g}=\sqrt{K_{\text{Rice},g}/\left(1+K_{\text{Rice},g}\right)}\) and \(\zeta_{g}=\sqrt{1/\left(1+K_{\text{Rice},g}\right)}\) with \(K_{\text{Rice},g}\) being the Rician factor, while \(\mathbf{H}_{b,\mathrm{d}}^{(r/g)}\) is the deterministic part of the Rician channel and \(\mathbf{H}_{b,\mathrm{r}}^{(r/g)}\) is the scattered component of the Rician channel. Since the RUAV has \(N_{\text{total}}\) antennas, which is much larger than \(K\), it can always select \(K\) uncorrelated DTAs for forwarding its drone swarm's signals to the GS. Again, the scattered component \(\mathbf{H}_{b,\mathrm{r}}^{(r/g)}\) can be expressed as \[\mathbf{H}_{b,\mathrm{r}}^{(r/g)}=\left(\mathbf{R}_{\text{rx},b}^{r/g}\right)^{\frac{1}{2}}\mathbf{G}_{b}^{r/g}, \tag{16}\] where \(\mathbf{R}_{\text{rx},b}^{r/g}\in\mathbb{C}^{N_{g}\times N_{g}}\) is the spatial correlation matrix of the \(N_{g}\) DRAs and \(\mathbf{G}_{b}^{r/g}\in\mathbb{C}^{N_{g}\times K}\) has i.i.d. complex entries obeying the distribution \(\mathcal{CN}(0,1)\). Similar to the DCDS-to-RUAV signal model, the estimate of \(\mathbf{x}_{0}\) can be acquired by applying the MF-based RC, which yields \[\widehat{\mathbf{x}}_{0}=\sqrt{\lambda^{(r/g)}P_{\text{rx},0}^{(r,x)}}\mathbf{W}_{0}^{(r/g)}\mathbf{H}_{0}^{(r/g)}\mathbf{x}_{0}+\sqrt{\lambda^{(r/g)}}\mathbf{W}_{0}^{(r/g)}\sum_{b=1}^{B-1}\sqrt{P_{\text{rx},b}^{(r,x)}}\mathbf{H}_{b}^{(r/g)}\mathbf{x}_{b}+\widetilde{\mathbf{v}}, \tag{17}\] where \(\lambda^{(r/g)}=\frac{1}{K}\text{Tr}\big\{\mathcal{E}\big\{\mathbf{W}_{0}^{(r/g)}\big(\mathbf{W}_{0}^{(r/g)}\big)^{\text{H}}\big\}\big\}\) is the normalization factor, and \(\widetilde{\mathbf{v}}=\sqrt{\lambda^{(r/g)}}\mathbf{W}_{0}^{(r/g)}\mathbf{v}\) is the effective noise. The MF-based RC matrix is given by \(\mathbf{W}_{0}^{(r/g)}=\big(\widetilde{\mathbf{H}}_{0}^{(r/g)}\big)^{\text{H}}\), and the MMSE channel estimate \(\widetilde{\mathbf{H}}_{b}^{(r/g)}\) of the true channel \(\mathbf{H}_{b}^{(r/g)}\) is given by Eq.
(18), where \(\widetilde{\mathbf{X}}^{(p)}\in\mathbb{C}^{K\times K}\) is the pilot symbol matrix associated with \(\widetilde{\mathbf{X}}^{(p)}\big(\widetilde{\mathbf{X}}^{(p)}\big)^{\text{H}}=\mathbf{I}_{K}\), and \(\widetilde{\mathbf{V}}\in\mathbb{C}^{N_{g}\times K}\) is the noise matrix over the \(K\) pilots, while the covariance matrix \(\mathbf{\Psi}_{\mathbf{H}_{b,\mathrm{r}}^{(r/g)}}\) of \(\mathbf{vec}\big(\mathbf{H}_{b,\mathrm{r}}^{(r/g)}\big)\) is given by \[\mathbf{\Psi}_{\mathbf{H}_{b,\mathrm{r}}^{(r/g)}}=\mathcal{E}\left\{\mathbf{vec}\big(\mathbf{H}_{b,\mathrm{r}}^{(r/g)}\big)\mathbf{vec}\big(\mathbf{H}_{b,\mathrm{r}}^{(r/g)}\big)^{\text{H}}\right\}=\mathbf{I}_{K}\otimes\mathbf{R}_{\text{rx},b}^{r/g}, \tag{19}\] and \(\mathbf{\Phi}_{\mathbf{H}_{b,\mathrm{r}}^{(r/g)}}\) in (18) is formulated as \[\mathbf{\Phi}_{\mathbf{H}_{b,\mathrm{r}}^{(r/g)}}=\left(\frac{\sigma_{v}^{2}}{P_{\text{rx},b}^{(r,p)}}\mathbf{I}_{N_{g}K}+\zeta_{g}^{2}\mathbf{\Psi}_{\mathbf{H}_{b,\mathrm{r}}^{(r/g)}}+\frac{\zeta_{g}^{2}}{P_{\text{rx},b}^{(r,p)}}\sum_{b^{\prime}=0,b^{\prime}\neq b}^{B-1}P_{\text{rx},b^{\prime}}^{(r,p)}\mathbf{\Psi}_{\mathbf{H}_{b^{\prime},\mathrm{r}}^{(r/g)}}\right)^{-1}. \tag{20}\] More specifically, the true channel \(\mathbf{vec}\big(\mathbf{H}_{b}^{(r/g)}\big)\) is given by \[\mathbf{vec}\big(\mathbf{H}_{b}^{(r/g)}\big)=\mathbf{vec}\big(\widetilde{\mathbf{H}}_{b}^{(r/g)}\big)+\mathbf{vec}\big(\bar{\mathbf{H}}_{b}^{(r/g)}\big), \tag{21}\] and the channel estimation error obeys \(\mathbf{vec}\big(\bar{\mathbf{H}}_{b}^{(r/g)}\big)\sim\mathcal{CN}\left(\mathbf{0}_{N_{g}K},\mathbf{\Psi}_{\bar{\mathbf{H}}_{b,\mathrm{r}}^{(r/g)}}\right)\), with \(\mathbf{\Psi}_{\bar{\mathbf{H}}_{b,\mathrm{r}}^{(r/g)}}\) given by \[\mathbf{\Psi}_{\bar{\mathbf{H}}_{b,\mathrm{r}}^{(r/g)}}=\zeta_{g}^{2}\mathbf{\Psi}_{\mathbf{H}_{b,\mathrm{r}}^{(r/g)}}-\zeta_{g}^{2}\mathbf{\Psi}_{\widetilde{\mathbf{H}}_{b,\mathrm{r}}^{(r/g)}}, \tag{22}\] and \[\mathbf{\Psi}_{\widetilde{\mathbf{H}}_{b,\mathrm{r}}^{(r/g)}}=\mathbf{\Psi}_{\mathbf{H}_{b,\mathrm{r}}^{(r/g)}}\mathbf{\Phi}_{\mathbf{H}_{b,\mathrm{r}}^{(r/g)}}\mathbf{\Psi}_{\mathbf{H}_{b,\mathrm{r}}^{(r/g)}}. \tag{23}\]
## III Analysis of the achievable throughput
Here, we use the decode-and-forward relaying protocol as an example for analyzing the achievable uplink throughput. Clearly, the end-to-end uplink throughput is the minimum of the DCDS-to-RUAV link's throughput and the RUAV-to-GS link's throughput.
### _The achievable throughput of the DCDS-to-RUAV link_
The ergodic achievable uplink throughput of the \(k\)th drone of the targeted DCDS is formulated as \[C_{k}^{(d/r)}=\mathcal{E}\left\{\log_{2}\left(1+\frac{P_{\mathrm{S},k}^{(d/r)}}{P_{\mathrm{IN},k}^{(d/r)}}\right)\right\}, \tag{24}\] where \(P_{\mathrm{S},k}^{(d/r)}\) and \(P_{\mathrm{IN},k}^{(d/r)}\) are the powers of the desired signal and of the interference-plus-noise, respectively. By invoking _Lemma 1_ of [36], \(C_{k}^{(d/r)}\) in (24) can be approximated as \[C_{k}^{(d/r)}\approx\log_{2}\left(1+\frac{\bar{P}_{\mathrm{S},k}^{(d/r)}}{\bar{P}_{\mathrm{IN},k}^{(d/r)}}\right), \tag{25}\] where \(\bar{P}_{\mathrm{S},k}^{(d/r)}=\mathcal{E}\big\{P_{\mathrm{S},k}^{(d/r)}\big\}\) and \(\bar{P}_{\mathrm{IN},k}^{(d/r)}=\mathcal{E}\big\{P_{\mathrm{IN},k}^{(d/r)}\big\}\).
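The accuracy of the approximation in (25) can readily be verified numerically. The Python sketch below does so for a simplified scalar toy model having Rayleigh-faded desired and interfering links; it merely stands in for the full MIMO expressions, and all of its parameter values are assumptions rather than the system model of Section II.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000  # number of Monte Carlo channel realizations

# Toy stand-in for (24)-(25): desired power P_S = |h|^2 * P and
# interference-plus-noise power P_IN = |g|^2 * P_I + sigma2,
# with |h|^2, |g|^2 ~ Exp(1) modelling Rayleigh fading. Values are assumed.
P, P_I, sigma2 = 1.0, 0.2, 0.1
P_S = P * rng.exponential(size=N)
P_IN = P_I * rng.exponential(size=N) + sigma2

ergodic = np.mean(np.log2(1.0 + P_S / P_IN))      # exact ergodic rate, cf. (24)
approx = np.log2(1.0 + P_S.mean() / P_IN.mean())  # mean-based approximation, cf. (25)
print(f"ergodic: {ergodic:.3f} bps/Hz, approximated: {approx:.3f} bps/Hz")
```

For these toy parameters the two values differ only by a fraction of a bps/Hz, and such mean-based approximations are known to tighten as the number of antennas grows, which is the large-scale MIMO regime of interest here.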
Let \(k^{*}\) represent the investigated drone of the desired drone swarm. Upon invoking _Lemma 1_ of [37] and _Lemma 2_ of [38], \(\mathcal{E}\big\{P_{\mathrm{S},k^{*}}^{(d/r)}\big\}\) can be derived as Eq. (26), in which \(\mathbf{B}_{\mathbf{H}_{0,\mathrm{d}}^{(d/r)}}=\mathbf{vec}\big(\mathbf{H}_{0,\mathrm{d}}^{(d/r)}\big)\mathbf{vec}\big(\mathbf{H}_{0,\mathrm{d}}^{(d/r)}\big)^{\text{H}}\). The interference-plus-noise power \(\mathcal{E}\big\{P_{\mathrm{IN},k^{*}}^{(d/r)}\big\}\) can be expressed as Eq. (28).
### _The achievable throughput of the RUAV-to-GS link_
Similarly, the achievable throughput of the \(k^{*}\)th DTA in the targeted RUAV-to-GS link is given by \[C_{k^{*}}^{(r/g)}\approx\log_{2}\left(1+\frac{\bar{P}_{\mathrm{S},k^{*}}^{(r/g)}}{\bar{P}_{\mathrm{IN},k^{*}}^{(r/g)}}\right), \tag{29}\] where the signal power \(\bar{P}_{\mathrm{S},k^{*}}^{(r/g)}=\mathcal{E}\big\{P_{\mathrm{S},k^{*}}^{(r/g)}\big\}\) and the interference-plus-noise power \(\bar{P}_{\mathrm{IN},k^{*}}^{(r/g)}=\mathcal{E}\big\{P_{\mathrm{IN},k^{*}}^{(r/g)}\big\}\) are given by Eq. (30) and Eq. (31), respectively.
### _Distance-based ACM_
ACM is a powerful link adaptation technique conceived for improving the bandwidth efficiency (BE), which has traditionally been operated in line with the instantaneous SINR. However, upon invoking (2) and (3), it can readily be seen that the received signal power decreases upon increasing the communication distance, which in turn reduces both the uplink SINR and the achievable uplink throughput. By considering the large-scale geographic distribution of aeronautical communications, we follow the philosophy of the distance-based ACM proposed in our earlier work [32]. In the following, we take the DCDS-to-RUAV link as an example for briefly introducing the distance-based ACM, whilst the operations of the distance-based ACM assisted RUAV-to-GS link adaptation follow the same procedure. Note that the RUAV-to-GS link has the same system parameters, including the parameters of the distance-based ACM, as the DCDS-to-RUAV link. Given the total bandwidth \(B_{\text{total}}\), the number of subcarriers \(N_{c}\), the CP length \(N_{\text{cp}}\) and the number of ACM modes \(Q\) associated with a set of distance-based switching thresholds \(\{d_{q}^{(d/r)}\}_{q=0}^{Q}\), we have \(d_{0}^{(d/r)}=D_{\text{max}}^{(d/r)}\) and \(d_{Q}^{(d/r)}=D_{\text{min}}^{(d/r)}\), where \(D_{\text{min}}^{(d/r)}\) and \(D_{\text{max}}^{(d/r)}\) are the minimum safe separation distance and the maximum communication distance, respectively. The key operations of the MF-based RC scheme and of the distance-based ACM are summarized as follows.
1. **Position broadcasting**. The RUAV broadcasts its position to the DCDS.
2. **Pilot training**. The RUAV estimates the channel matrix \(\mathbf{H}_{0}^{(d/r)}\) based on the pilot symbols sent by the DCDS.
3. **ACM mode selection**.
The \(k\)th drone of the desired drone swarm chooses its ACM mode based on its distance from the RUAV, \(d_{0,k}^{(d/r)}\), according to \[\text{If }d_{q}^{(d/r)}\leq d_{0,k}^{(d/r)}<d_{q-1}^{(d/r)}:\ \text{choose mode }q,\quad q\in\{1,2,\cdots,Q\}. \tag{32}\]
4. **Data transmission**. Each drone transmits its signal using the ACM mode chosen according to (32). The data rate of the \(k\)th drone is given by \[R_{\text{total},k}^{(d/r)}=B_{\text{total}}r_{c,q}\log_{2}\left(M_{q}\right)\frac{N_{c}}{N_{c}+N_{\text{cp}}}, \tag{33}\] where \(r_{c,q}\) and \(M_{q}\) are the \(q\)th ACM mode's coding rate and modulation order, respectively; a minimal sketch of this mode selection and rate computation is provided below.
5. **Data reception**. The RUAV detects the DCDS's signal by applying the MF-based RC matrix \(\mathbf{W}_{0}^{(d/r)}\) to the received signal.
## IV Optimization problems of relaying-assisted FANET
We consider a challenging scenario in which the end-to-end communication requires the assistance of an RUAV, because there is no direct communication link between the DCDS and the GS. The DCDS is capable of communicating with the remote GS either continuously or intermittently with the assistance of a RUAV, as illustrated in Fig. 1, depending on the end-to-end distance from the DCDS to the GS. Explicitly, let us denote the maximum communication distances of the DCDS-to-RUAV link and the RUAV-to-GS link as \(D_{\text{max}}^{(d/r)}\) and \(D_{\text{max}}^{(r/g)}\), respectively, whilst the minimum communication distances of the DCDS-to-RUAV link and the RUAV-to-GS link are denoted as \(D_{\text{min}}^{(d/r)}\) and \(D_{\text{min}}^{(r/g)}\), respectively. The total end-to-end distance from the DCDS to the GS is denoted by \(D^{(d/g)}\). Let us assume that the drones in a swarm have the same distance from the RUAV2. Thus, we have \(d_{0,1}^{(d/r)}=d_{0,2}^{(d/r)}=\cdots=d_{0,K}^{(d/r)}=d_{0}^{(d/r)}\), where again \(d_{0,k}^{(d/r)}\) is the communication distance between the \(k\)th drone and the RUAV. The RUAV starts from the point having the minimum distance from the DCDS, say \(A_{0}\), where we have \(d_{0,k}^{(d/r)}=d_{0}^{(d/r)}(0)=D_{\text{min}}^{(d/r)}\), \(k=1,2,\cdots,K\). Footnote 2: This assumption approximately holds, since the drones in a swarm are relatively close to each other and the distances between the drones in a DCDS are much smaller than the distance between the DCDS and the RUAV. Moreover, this assumption may also be made to hold by deploying the drones in a swarm in a coordinated manner, which is controllable by the RUAV for cooperation. There are two scenarios to be investigated depending on the relationship between \(\left(D_{\text{max}}^{(d/r)}+D_{\text{max}}^{(r/g)}\right)\) and \(D^{(d/g)}\), when a relay node is needed, namely that of a **static relay** when \(D_{\text{max}}^{(d/r)}<D^{(d/g)}\leq\left(D_{\text{max}}^{(d/r)}+D_{\text{max}}^{(r/g)}\right)\) and that of a **mobile relay** when \(\left(D_{\text{max}}^{(d/r)}+D_{\text{max}}^{(r/g)}\right)<D^{(d/g)}\).
### _Static relay_
When \(D_{\text{max}}^{(d/r)}<D^{(d/g)}\leq\left(D_{\text{max}}^{(d/r)}+D_{\text{max}}^{(r/g)}\right)\), simultaneous communication links exist for both the DCDS-to-RUAV and the RUAV-to-GS transmissions. In this case, the RUAV may act as a static relay between the GS and the DCDS. OFDMA is used by the RUAV for receiving and forwarding the DCDS's data, and the delay imposed in the static-relay scenario is \(\tau_{s}=0\), when the signal processing delay associated with both reception and forwarding is omitted.
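The distance-based ACM operations listed above can be made concrete by the following minimal Python sketch, which implements the mode-selection rule (32) and the rate formula (33); the threshold distances, coding rates, modulation orders and OFDM parameters are illustrative placeholders rather than the entries of Table III.

```python
import math

D_MAX, D_MIN = 8000.0, 500.0  # d_0 = D_max and d_Q = D_min of Section III-C
ACM_MODES = [                 # (lower edge d_q in metres, r_cq, M_q) -- placeholders
    (6000.0, 0.50, 2),        # mode 1 serves d in [6000, 8000)
    (4000.0, 0.50, 4),
    (2500.0, 0.75, 4),
    (1500.0, 0.75, 16),
    (800.0, 0.75, 64),
    (500.0, 0.90, 64),        # mode Q serves d in [500, 800)
]

def select_mode(d):
    """Rule (32): choose mode q such that d_q <= d < d_{q-1}."""
    if not (D_MIN <= d < D_MAX):
        return None           # out of the admissible communication range
    for q, (d_q, _, _) in enumerate(ACM_MODES):
        if d >= d_q:
            return q
    return None

def rate_bps(d, B_total=6e6, N_c=512, N_cp=64):
    """Rate (33): B_total * r_cq * log2(M_q) * N_c/(N_c + N_cp); zero out of range."""
    q = select_mode(d)
    if q is None:
        return 0.0
    _, r_c, M = ACM_MODES[q]
    return B_total * r_c * math.log2(M) * N_c / (N_c + N_cp)

print(f"{rate_bps(3000.0) / 1e6:.2f} Mbps at d = 3 km")  # falls into mode 3
```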
The position of the RUAV can, however, be optimized in order to maximize the achievable end-to-end throughput. Explicitly, this optimization problem is formulated as \[\text{Find: }\mathbf{d}_{\text{opt}}=\left(D^{(d/r)},D^{(r/g)}\right) \tag{34}\] \[\text{to maximize: }R_{e,\text{sum}}=\sum_{k=1}^{K}R_{e,k}, \tag{35}\] \[\text{subject to: }\left\{\begin{array}{l}D^{(d/r)}+D^{(r/g)}=D^{(d/g)},\\ D_{\text{min}}^{(d/r)}\leq D^{(d/r)}\leq D_{\text{max}}^{(d/r)},\\ D_{\text{min}}^{(r/g)}\leq D^{(r/g)}\leq D_{\text{max}}^{(r/g)},\end{array}\right. \tag{36}\] where \(R_{e,k}=\min\big\{R_{\text{total},k}^{(d/r)}\big(D^{(d/r)}\big),R_{\text{total},k}^{(r/g)}\big(D^{(r/g)}\big)\big\}\) is the \(k\)th drone's effective end-to-end rate, whilst \(R_{\text{total},k}^{(d/r)}\big(D^{(d/r)}\big)\) and \(R_{\text{total},k}^{(r/g)}\big(D^{(r/g)}\big)\) are the DCDS-to-RUAV link data rate corresponding to \(D^{(d/r)}\) and the RUAV-to-GS link data rate corresponding to \(D^{(r/g)}\), respectively. In order to identify both the achievable maximum end-to-end throughput and the optimal position of the RUAV, we define \(D_{o}=D_{\text{max}}^{(d/r)}+D_{\text{max}}^{(r/g)}-D^{(d/g)}\) as the overlapped communication range of the DCDS and the GS, as seen in Fig. 2. The maximum achievable end-to-end throughput and the corresponding position of the RUAV can be found as follows. Given \(D_{o}\), we can readily identify that the RUAV's ACM mode switching thresholds \(d_{0}^{(r/g)},d_{1}^{(r/g)},\cdots,d_{I}^{(r/g)}\) are located within the overlapped communication range of the DCDS and the GS, whilst the DCDS's ACM mode switching thresholds \(d_{0}^{(d/r)},d_{1}^{(d/r)},\cdots,d_{J}^{(d/r)}\) are also located within this overlapped communication range. Because the ACM mode is changed upon crossing a specific switching threshold, we can evaluate the achievable throughput at the points of \(\big(d_{(l)}^{(r/g)},d_{(l)}^{(d/r)}\big)\), \(l=1,2,\cdots,L\), with \(L=I+J-O\). Here, \(O\) is the number of coinciding points, i.e., of points satisfying \(d_{i}^{(r/g)}=D^{(d/g)}-d_{j}^{(d/r)}\) for some \(i=1,2,\cdots,I\) and \(j=1,2,\cdots,J\). Furthermore, \(d_{(l)}^{(r/g)}\) is a value selected from \(\big\{d_{0}^{(r/g)},d_{1}^{(r/g)},\cdots,d_{I}^{(r/g)},D^{(d/g)}-d_{0}^{(d/r)},D^{(d/g)}-d_{1}^{(d/r)},\cdots,D^{(d/g)}-d_{J}^{(d/r)}\big\}\) and we sort \(d_{(l)}^{(r/g)}\), \(l=1,2,\cdots,L\), in the ascending order of \(d_{(1)}^{(r/g)}<d_{(2)}^{(r/g)}<\cdots<d_{(L)}^{(r/g)}\). The maximum achievable end-to-end throughput can be expressed as \[R_{e,\text{sum,max}}=\max\left\{\sum_{k=1}^{K}R_{e,k}\left(d_{(l)}^{(r/g)},d_{(l)}^{(d/r)}\right)\right\},\ l=1,2,\cdots,L. \tag{37}\] The case of a single point achieving the maximum throughput only happens when \(D^{(d/g)}=d_{0}^{(r/g)}+d_{0}^{(d/r)}\). Otherwise there will be at least two critical points, between which it is possible to achieve the maximum throughput, as indicated by the shaded areas in Fig. 2. Let us denote these critical points by \(\big(d_{(l^{*})}^{(r/g)},d_{(l^{*})}^{(d/r)}\big)\), \(l^{*}=1,2,\cdots,L^{*}\), where \(d_{(l^{*})}^{(r/g)}\) and \(d_{(l^{*})}^{(d/r)}\), \(l^{*}=1,2,\cdots,L^{*}\), have been sorted as \(d_{(1)}^{(r/g)}<d_{(2)}^{(r/g)}<\cdots<d_{(L^{*})}^{(r/g)}\) and \(d_{(1)}^{(d/r)}<d_{(2)}^{(d/r)}<\cdots<d_{(L^{*})}^{(d/r)}\), respectively.
Then, the RUAV is capable of achieving the maximum throughput when its distance from the GS is in the range of \(\big[d_{(1)}^{(r/g)},d_{(L^{*})}^{(r/g)}\big]\), as shown by the different patterns in Fig. 2.
Fig. 2: An illustration of the RUAV's optimal position.
### _Mobile relay for data ferry_
Given the maximum RUAV-to-GS communication distance \(D_{\text{max}}^{(r/g)}\) and the maximum RUAV-to-DCDS communication distance \(D_{\text{max}}^{(d/r)}\), if we have \(\left(D_{\text{max}}^{(d/r)}+D_{\text{max}}^{(r/g)}\right)<D^{(d/g)}\), the RUAV can only access either the GS or the DCDS, or in fact neither of them. So, the DCDS-to-RUAV and RUAV-to-GS links do not exist concurrently. In this scenario, the traditional static relaying scheme of _Section IV-A_ will no longer work. In order to relay data from the DCDS to the GS, the RUAV first has to acquire the data at a DCDS point and then fly to the GS to offload it, as illustrated in Fig. 1. Intuitively, a buffer is required by the RUAV for storing data before ferrying it to the GS. We assume that the buffer size is \(T_{b}\) Gigabytes (GB). Once the RUAV has offloaded the data to the GS, it will return to the DCDS point to acquire more data, which completes a whole loop, as shown in Fig. 3. Explicitly, there are eight states in a complete loop. Let us assume that the procedure commences from a DCDS point for data loading; the ensuing state transitions will be detailed in the rest of this subsection by frequently referring to Fig. 3.
1. **Data loading in the vicinity of a DCDS.** In this state, the RUAV hovers at a DCDS for acquiring data from it. The instantaneous data stored in the RUAV's buffer \(T_{d}\) at time \(t\) in seconds (s) is given by \[T_{d}(t)=T_{d}(t-1)+R_{\text{total},k}^{(d/r)}\big(D_{i}^{(d/r)}\big),\ t_{S_{1},i}<t\leq t_{S_{1},i}+t_{L_{i}}, \tag{38}\] where \(D_{i}^{(d/r)}=D_{\text{opt}}^{(d/r)}\), and \(T_{d}(0)=0\), since no data is stored in the buffer at time \(t=0\). The RUAV will then fly to the GS, when the buffer fullness reaches the upper threshold \(T_{th}^{up}\), so the data loading duration \(t_{L_{i}}\) of the \(i\)th loop is given by \[t_{L_{i}}=\left\lfloor\frac{T_{th}^{up}-T_{d}(t_{S_{1},i})}{R_{\text{total},k}^{(d/r)}\big(D_{i}^{(d/r)}\big)}\right\rfloor, \tag{39}\] where again, \(T_{th}^{up}=\alpha T_{b}\) is the upper threshold for loading data at a DCDS with \(0<\alpha\leq 1\) being the maximum factor of caching data, and \(t_{S_{1},i}\) is given by \[t_{S_{1},i}=\left\{\begin{array}{cc}0,&i=0,\\ t_{S_{1},i-1}+t_{P_{i-1}},&i\geq 1,\end{array}\right. \tag{40}\] in which \(t_{P_{i}}=t_{L_{i}}+t_{F,data}^{(d/r),(i)}+t_{F,no-data}^{(i)}+t_{F,data}^{(r/g),(i)}+t_{O_{i}}+t_{B,data}^{(r/g),(i)}+t_{B,no-data}^{(i)}+t_{B,data}^{(d/r),(i)}\) is the period of the RUAV cycling through states 1 to 8 of Fig. 3. Note that we have \(t_{S_{2},0}=t_{L_{0}}\). The definitions of the time periods \(t_{F,data}^{(d/r),(i)}\), \(t_{F,no-data}^{(i)}\), \(t_{F,data}^{(r/g),(i)}\), \(t_{B,data}^{(r/g),(i)}\), \(t_{B,no-data}^{(i)}\) and \(t_{B,data}^{(d/r),(i)}\) can be found in Fig. 3. Moreover, the communication distance between the DCDS and the RUAV remains \(D_{i}^{(d/r)}\), i.e., \(d_{0}^{(d/r)}(t)=D_{i}^{(d/r)}\) for \(t_{S_{1},i}+1\leq t\leq t_{S_{1},i}+t_{L_{i}}\).
The data cumulatively received by the GS is given by \[T_{r}(t)=T_{r}(t-1),\ t_{S_{1},i}<t\leq t_{S_{1},i}+t_{L_{i}}, \tag{41}\] with \(T_{r}(0)=0\).
2. **Flying towards the GS whilst continuing data loading, when the RUAV is within the maximum communication range of the DCDS.** During this specific state of Fig. 3, the RUAV is within the communication range of the DCDS. Hence, the RUAV continues to load the data whilst flying towards the GS. The distance between the RUAV and the DCDS at instant \(t\) is given by \[d_{0}^{(d/r)}(t)=d_{0}^{(d/r)}(t-1)+V,\ t_{S_{2},i}<t\leq t_{S_{2},i}+t_{F,data}^{(d/r),(i)}, \tag{42}\] where \(V\) is the RUAV's flying velocity in m/s and \(t_{S_{2},i}\) is given by \[t_{S_{2},i}=t_{S_{1},i}+t_{L_{i}},\ i=0,1,2,\cdots. \tag{43}\] The data stored in the RUAV's buffer \(T_{d}\) at instant \(t\) is given by \[T_{d}(t)=T_{d}(t-1)+R_{\text{total},k}^{(d/r)}\big(d_{0}^{(d/r)}(t)\big),\ t_{S_{2},i}<t\leq t_{S_{2},i}+t_{F,data}^{(d/r),(i)}. \tag{44}\] The data cumulatively received by the GS is given by \[T_{r}(t)=T_{r}(t-1),\ t_{S_{2},i}<t\leq t_{S_{2},i}+t_{F,data}^{(d/r),(i)}. \tag{45}\]
3. **Flying towards the GS, whilst the RUAV is out of both the DCDS's and the GS's communication range.** In this state of Fig. 3, there are no DCDS-to-RUAV and RUAV-to-GS communication links, since the RUAV is out of both the DCDS's and the GS's communication range. Hence, the RUAV's buffer remains unchanged during this state. Explicitly, the data stored in the RUAV's buffer and the distance between the RUAV and the DCDS are given respectively by \[T_{d}(t)=T_{d}(t-1),\ t_{S_{3},i}<t\leq t_{S_{3},i}+t_{F,no-data}^{(i)}, \tag{46}\] \[d_{0}^{(d/r)}(t)=d_{0}^{(d/r)}(t-1)+V,\ t_{S_{3},i}<t\leq t_{S_{3},i}+t_{F,no-data}^{(i)}, \tag{47}\] where \(t_{S_{3},i}\) is given as \[t_{S_{3},i}=t_{S_{2},i}+t_{F,data}^{(d/r),(i)},\ i=0,1,2,\cdots. \tag{48}\] The data cumulatively received by the GS remains unchanged as well, which is given by \[T_{r}(t)=T_{r}(t-1),\ t_{S_{3},i}<t\leq t_{S_{3},i}+t_{F,no-data}^{(i)}. \tag{49}\]
Fig. 3: State transition of the mobile relaying.
4. **Flying towards the GS whilst offloading data to the GS, when the RUAV is within the communication range of the GS**. In this state, the RUAV is within the communication range of the GS, but it is out of the communication range of the DCDS, as seen in Fig. 3. So, the RUAV begins to offload its data to the GS. The instantaneous distance between the RUAV and the GS as well as the data stored in the RUAV's buffer are given by \[d_{0}^{(r/g)}(t)=d_{0}^{(r/g)}(t-1)-V,\ t_{S_{4},i}<t\leq t_{S_{4},i}+t_{F,data}^{(r/g),(i)}, \tag{50}\] \[T_{d}(t)=T_{d}(t-1)-R_{\text{total},k}^{(r/g)}\big(d_{0}^{(r/g)}(t)\big),\ t_{S_{4},i}<t\leq t_{S_{4},i}+t_{F,data}^{(r/g),(i)}, \tag{51}\] respectively, where \(d_{0}^{(r/g)}(t_{S_{4},i})=D_{\text{max}}^{(r/g)}\) and \(t_{S_{4},i}\) is given by \[t_{S_{4},i}=t_{S_{3},i}+t_{F,no-data}^{(i)},\ i=0,1,2,\cdots. \tag{52}\] The accumulated data received by the GS is given by \[T_{r}(t)=T_{r}(t-1)+R_{\text{total},k}^{(r/g)}\big(d_{0}^{(r/g)}(t)\big),\ t_{S_{4},i}<t\leq t_{S_{4},i}+t_{F,data}^{(r/g),(i)}. \tag{53}\]
5. **Data offloading in the vicinity of the GS**. When the RUAV arrives at the optimized near-GS point, as seen in Fig. 3, it will hover at this point, while offloading data to the GS. Hence, the distance between the RUAV and the GS remains unchanged.
The distance between the RUAV and the GS, as well as the data stored in the RUAV's buffer, are thus given by \[d_{0}^{(r/g)}(t)=D_{\text{opt}}^{(r/g)},\ t_{S_{5},i}<t\leq t_{S_{5},i}+t_{O_{i}}, \tag{54}\] \[T_{d}(t)=T_{d}(t-1)-R_{\text{total},k}^{(r/g)}\big(D_{\text{opt}}^{(r/g)}\big),\ t_{S_{5},i}<t\leq t_{S_{5},i}+t_{O_{i}}, \tag{55}\] respectively, where \(t_{S_{5},i}\) is given by \[t_{S_{5},i}=t_{S_{4},i}+t_{F,data}^{(r/g),(i)},\ i=0,1,2,\cdots. \tag{56}\] Furthermore, \(t_{O_{i}}\) is formulated as \[t_{O_{i}}=\left\lfloor\frac{T_{d}(t_{S_{5},i})-T_{th}^{low}}{R_{\text{total},k}^{(r/g)}\big(D_{\text{opt}}^{(r/g)}\big)}\right\rfloor, \tag{57}\] where \(T_{th}^{low}=\beta T_{b}\) is the lower threshold for offloading data at the GS, with \(\beta\) being the minimum factor of offloading data3. The data cumulatively received by the GS is given by \[T_{r}(t)=T_{r}(t-1)+R_{\text{total},k}^{(r/g)}\big(D_{\text{opt}}^{(r/g)}\big),\ t_{S_{5},i}<t\leq t_{S_{5},i}+t_{O_{i}}. \tag{58}\]
6. **Flying towards the DCDS, whilst offloading data to the GS, when the RUAV is still within the communication range of the GS**. In this state of Fig. 3, the RUAV flies towards the DCDS, whilst it continues offloading the remaining data to the GS, since it remains within the communication range of the GS. The distance between the RUAV and the GS can be expressed as \[d_{0}^{(r/g)}(t)=d_{0}^{(r/g)}(t-1)+V,\ t_{S_{6},i}<t\leq t_{S_{6},i}+t_{B,data}^{(r/g),(i)}, \tag{59}\] where \(d_{0}^{(r/g)}(t_{S_{6},i})=D_{\text{opt}}^{(r/g)}\) and \(t_{S_{6},i}\) is given by \[t_{S_{6},i}=t_{S_{5},i}+t_{O_{i}},\ i=0,1,2,\cdots. \tag{60}\] The amount of data \(T_{d}(t)\) stored in the RUAV's buffer at instant \(t\) is given by \[T_{d}(t)=T_{d}(t-1)-R_{\text{total},k}^{(r/g)}\big(d_{0}^{(r/g)}(t)\big),\ t_{S_{6},i}<t\leq t_{S_{6},i}+t_{B,data}^{(r/g),(i)}, \tag{61}\] while the data cumulatively received by the GS is given by \[T_{r}(t)=T_{r}(t-1)+R_{\text{total},k}^{(r/g)}\big(d_{0}^{(r/g)}(t)\big),\ t_{S_{6},i}<t\leq t_{S_{6},i}+t_{B,data}^{(r/g),(i)}. \tag{62}\]
7. **The RUAV flies towards the DCDS, but it is out of both the DCDS's and the GS's range**. Similar to state 3 of Fig. 3, because the RUAV is out of both the DCDS's and the GS's communication range in this state, there is neither data transmission nor data reception, and the RUAV's buffer remains unchanged. Explicitly, the data stored in the RUAV's buffer and the distance between the RUAV and the GS are given by \[T_{d}(t)=T_{d}(t-1),\ t_{S_{7},i}<t\leq t_{S_{7},i}+t_{B,no-data}^{(i)}, \tag{63}\] \[d_{0}^{(r/g)}(t)=d_{0}^{(r/g)}(t-1)+V,\ t_{S_{7},i}<t\leq t_{S_{7},i}+t_{B,no-data}^{(i)}, \tag{64}\] respectively, where \(t_{S_{7},i}\) is formulated as \[t_{S_{7},i}=t_{S_{6},i}+t_{B,data}^{(r/g),(i)},\ i=0,1,2,\cdots. \tag{65}\] The data cumulatively received by the GS remains unchanged as well, which is given by \[T_{r}(t)=T_{r}(t-1),\ t_{S_{7},i}<t\leq t_{S_{7},i}+t_{B,no-data}^{(i)}. \tag{66}\]
8. **The RUAV flies towards the DCDS and starts to load data, when the RUAV is within the communication range of the DCDS**. When the RUAV passes the point of \(D_{\text{max}}^{(d/r)}\), it will be within the communication range of the DCDS. Then the RUAV will automatically load data from the DCDS into its buffer.
The instantaneous distance between the RUAV and the DCDS can be formulated as \[d_{0}^{(d/r)}(t)=d_{0}^{(d/r)}(t-1)-V,\ t_{S_{8},i}<t\leq t_{S_{8},i}+t_{B,data}^{(d/r),(i)}, \tag{67}\] where \(d_{0}^{(d/r)}(t_{S_{8},i})=D_{\text{max}}^{(d/r)}\) and \(t_{S_{8},i}\) is given by \[t_{S_{8},i}=t_{S_{7},i}+t_{B,no-data}^{(i)},\ i=0,1,2,\cdots. \tag{68}\] The data \(T_{d}\) stored in the RUAV's buffer at time \(t\) is given by \[T_{d}(t)=T_{d}(t-1)+R_{\text{total},k}^{(d/r)}\big(d_{0}^{(d/r)}(t)\big),\ t_{S_{8},i}<t\leq t_{S_{8},i}+t_{B,data}^{(d/r),(i)}. \tag{69}\] The data accumulated by the GS remains unchanged, as formulated in Eq. (70), since the RUAV is out of the communication range of the GS: \[T_{r}(t)=T_{r}(t-1),\ t_{S_{8},i}<t\leq t_{S_{8},i}+t_{B,data}^{(d/r),(i)}. \tag{70}\] As discussed above, the RUAV periodically loads the data from the DCDS and offloads the data to the GS, when it is flying back and forth between the DCDS and the GS. Let us define the _effective end-to-end connection delay_ as the duration between the commencement of the procedure and the instant when the GS begins to receive the DCDS's data relayed by the RUAV, i.e., when the RUAV flies over the point \(B_{\text{max}}\) in loop \(i=0\), which is given by \[\tau_{0}=t_{L_{0}}+t_{F,data}^{(d/r),(0)}+t_{F,no-data}^{(0)}. \tag{71}\] Let us define the _effective end-to-end average data rate_ \(R_{e}(t)\) at time \(t\) as the ratio of the accumulated transmitted data volume over the period of time considered, i.e., \[R_{e}(t)=\frac{T_{r}(t)}{t}, \tag{72}\] which also represents the end-to-end data rate experienced at time \(t\) by the GS. There is no data received by the GS in states 1, 2, 3, 7 and 8 of Fig. 3, and the minimum values of the _effective end-to-end average data rate_ curve appear at \(t_{S_{4},i}\), \(i=1,2,\cdots\), when the state changes from 3 to 4. Furthermore, we have \(R_{e}(t_{S_{4},1})<R_{e}(t_{S_{4},2})<\cdots\). Let us define \(t^{*}\) as the delay imposed when meeting a required minimum effective end-to-end average data rate \(R_{e}^{*}\). Depending on the particular value of \(R_{e}^{*}\) in comparison to \(R_{e}(t_{S_{4},i})\), \(t^{*}\) can be determined as \[t^{*}=t:\ R_{e}(t)=R_{e}^{*}\ \text{with}\ t_{S_{4},i}<t<t_{S_{4},i+1},\ \text{if}\ R_{e}(t_{S_{4},i})\leq R_{e}^{*}<R_{e}(t_{S_{4},i+1}). \tag{73}\] The optimization problem is to find the optimal near-DCDS loading point \(d_{\text{opt}}^{(d/r)}\) and the optimal near-GS offloading point \(d_{\text{opt}}^{(r/g)}\), as well as the optimal factor of caching data \(\alpha_{\text{opt}}\) and the optimal factor of offloading data \(\beta_{\text{opt}}\), for satisfying a given minimum effective end-to-end average data rate \(R_{e}^{*}\), whilst minimizing the delay \(t^{*}\) imposed. Without loss of generality, let us maximize \(R_{e}^{*}\), while simultaneously minimizing the delay \(t^{*}\) imposed. Explicitly, the resultant multiple-objective optimization problem is formulated as \[\text{Find}\ \big(d_{\text{opt}}^{(d/r)},d_{\text{opt}}^{(r/g)}\big)\ \text{and}\ \big(\alpha_{\text{opt}},\beta_{\text{opt}}\big)\ \begin{cases}\text{to maximize }T_{r}\big(T_{\text{total}}\big),\\ \text{to minimize }t^{*},\end{cases} \tag{74}\] \[\text{subject to: }\left\{\begin{array}{l}D_{\text{min}}^{(d/r)}\leq d_{\text{opt}}^{(d/r)}\leq D_{\text{max}}^{(d/r)},\\ D_{\text{min}}^{(r/g)}\leq d_{\text{opt}}^{(r/g)}\leq D_{\text{max}}^{(r/g)},\end{array}\right. \tag{75}\]
where \(T_{\text{total}}\) is the total time period considered, i.e., the predefined working time.
### _\(\epsilon\)-MOGA assisted Pareto-Optimization_
Intuitively, there are no closed-form solutions for the twin-objective optimization problem (74) and (75), since the pair of objectives in (74) has to be considered at the same time under the specific constraints of (75). Hence, we resort to the multi-objective genetic algorithm \(\epsilon\)-MOGA [39] in order to acquire the optimal Pareto front of all solutions of this multi-objective optimization problem. The \(\epsilon\)-MOGA is an elitist multi-objective evolutionary algorithm based on the concept of \(\epsilon\)-dominance [39], which includes the operations of _Initialization_, _Archive_, _Variant_, _Selection_ and _Update_, as elaborated on below.
1. **Initialization**. At the first generation of \(g=1\), where \(g\) denotes the generation index, the \(\epsilon\)-MOGA initializes its population of \(P_{s}\) 4-element individuals, denoted as \(\mathbf{P}^{(g)}\). Explicitly, the \(p_{s}\)th individual is given by \[\mathbf{d}_{p_{s}}^{(g)}=\left[d_{p_{s},1}^{(g)}\ d_{p_{s},2}^{(g)}\ \alpha_{p_{s}}\ \beta_{p_{s}}\right]^{\mathsf{T}},\ 1\leq p_{s}\leq P_{s}, \tag{76}\] where \(d_{p_{s},1}^{(g)}\) is randomly generated within the range of \([D_{\text{min}}^{(d/r)},D_{\text{max}}^{(d/r)}]\), and \(d_{p_{s},2}^{(g)}\) is randomly generated within the range of \([D_{\text{min}}^{(r/g)},D_{\text{max}}^{(r/g)}]\), while \(\alpha_{p_{s}}\) is randomly picked from the range of \((0,\ 1]\), and \(\beta_{p_{s}}\) from the range of \([0,\ 1)\).
2. **Archive**. By calculating and comparing the objectives of the throughput \(T_{r}(T_{\text{total}})\) and the latency \(t^{*}\) for the population \(\mathbf{P}^{(g)}\), the \(\epsilon\)-Pareto-front solution set \(\widetilde{\mathbf{R}}\) is selected. Explicitly, the individuals in the \(\epsilon\)-Pareto-front solution set \(\widetilde{\mathbf{R}}\) \(\epsilon\)-dominate all the other individuals that are not selected for inclusion into \(\widetilde{\mathbf{R}}\). An individual \(\mathbf{d}_{p_{s}}^{(g)}\) \(\epsilon\)-dominates an individual \(\mathbf{d}_{p_{s}^{\prime}}^{(g)}\) if and only if the objective function values of \(\mathbf{d}_{p_{s}}^{(g)}\) are not worse than those of \(\mathbf{d}_{p_{s}^{\prime}}^{(g)}\), and at least one objective function value of \(\mathbf{d}_{p_{s}}^{(g)}\) is better than the same objective function value of \(\mathbf{d}_{p_{s}^{\prime}}^{(g)}\) [39]. Furthermore, there is also an elite population archive \(\mathbf{A}^{(g)}\). The individuals in \(\widetilde{\mathbf{R}}\) that are not \(\epsilon\)-dominated by the individuals in \(\mathbf{A}^{(g)}\) will be copied into \(\mathbf{A}^{(g)}\). Note that \(\mathbf{A}^{(1)}\) is initialized as an empty set at the first generation.
3. **Variant**. A new variant is generated by the amalgamation of the _'crossover'_ and _'mutation'_ operations, which are typically two separate operations in single-objective GA optimization. Specifically, a pair of individuals, \(\mathbf{r}^{(g,P)}\) and \(\mathbf{r}^{(g,A)}\), are randomly selected, one from the main population \(\mathbf{P}^{(g)}\) and one from the elite population \(\mathbf{A}^{(g)}\), respectively. A randomly generated value \(p_{\text{rand}}\in[0,\ 1]\) is compared to the mutation factor \(p_{c/m}\) to decide which operation should be applied to \(\mathbf{r}^{(g,P)}\) and \(\mathbf{r}^{(g,A)}\). i) **Crossover**.
If \(p_{\text{rand}}>p_{c/m}\), \(\mathbf{r}^{(g,P)}=\left[r_{1}^{(g,P)}\ r_{2}^{(g,P)}\ r_{3}^{(g,P)}\ r_{4}^{(g,P)}\right]^{\mathsf{T}}\) and \(\mathbf{r}^{(g,A)}=\left[r_{1}^{(g,A)}\ r_{2}^{(g,A)}\ r_{3}^{(g,A)}\ r_{4}^{(g,A)}\right]^{\mathsf{T}}\) will cross over using the extended linear recombination, which is formulated as \[\left\{\begin{array}{rcl}\widehat{\mathbf{r}}_{1}^{(g,G)}&=&\omega\mathbf{r}^{(g,P)}+(1-\omega)\mathbf{r}^{(g,A)},\\ \widehat{\mathbf{r}}_{2}^{(g,G)}&=&(1-\omega)\mathbf{r}^{(g,P)}+\omega\mathbf{r}^{(g,A)},\end{array}\right. \tag{77}\] where \(\omega\) is a weighting factor of the extended linear recombination [40]. ii) **Mutation**. If \(p_{\text{rand}}\leq p_{c/m}\), \(\mathbf{r}^{(g,P)}\) and \(\mathbf{r}^{(g,A)}\) will be mutated using the random mutation associated with the Gaussian distribution [39], to yield two new offspring. The crossover or mutation operations are activated \(N_{O}/2\) times, which results in a total of \(N_{O}\) new offspring in the auxiliary population \(\mathbf{G}^{(g)}\).
4. **Selection**. The selection operation of multiple-objective optimization is much more complex than that of single-objective optimization. Explicitly, the \(\epsilon\)-MOGA calculates the multiple objective function values of the individuals in the auxiliary population \(\mathbf{G}^{(g)}\) and decides which specific individual will be selected into the elite population \(\mathbf{A}^{(g)}\) on the basis of its location in the objective space [39].
5. **Update**. An individual \(\widehat{\mathbf{r}}_{i}^{(g,G)}\) from the auxiliary population \(\mathbf{G}^{(g)}\) is compared to an individual \(\mathbf{r}_{j}^{(g,P)}\) that is randomly selected from the main population \(\mathbf{P}^{(g)}\): if \(\widehat{\mathbf{r}}_{i}^{(g,G)}\) \(\epsilon\)-dominates \(\mathbf{r}_{j}^{(g,P)}\), then \(\mathbf{r}_{j}^{(g,P)}\) is replaced by \(\widehat{\mathbf{r}}_{i}^{(g,G)}\) in \(\mathbf{P}^{(g)}\). The updating operation is continued until all the individuals in the auxiliary population \(\mathbf{G}^{(g)}\) have been compared to an individual randomly selected from the main population \(\mathbf{P}^{(g)}\).
6. **Termination**. The ultimate stopping criterion would be that the Pareto-front solutions of the multiple-objective optimization problem have been found. However, we cannot offer any proof that the Pareto-optimal solutions have indeed been found. In order to have a limited and predictable computational complexity, we opt for halting the optimization procedure when the pre-defined maximum affordable number of generations \(g_{\text{max}}\) has been exhausted. The individuals from \(\mathbf{A}^{(g_{\text{max}})}\) then comprise the near-Pareto solutions. Otherwise, we set \(g=g+1\) and go to 2) **Archive**.
### _Implementation and computational complexity_
In our mobile-relaying-assisted drone swarm network architecture, a swarm of drones acts as the DCDS for sensing and collecting data using their mounted cameras and/or sensors, whilst a powerful UAV acting as a mobile relay repeats round-trips between the DCDS and the GS for relaying data from the DCDS to the GS. Small and micro rotor-based drones can be used as the DCDS due to their low cost and sensing capability, and they can be deployed to cover multiple target areas.
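As a minimal illustration of how the buffer dynamics of states 1-8 in Fig. 3, i.e., Eqs. (38)-(70), may be implemented in discrete time, consider the following Python sketch, which reuses the illustrative `rate_bps()` function of the earlier ACM sketch; the distances, thresholds, buffer size and simulation horizon are assumptions, and hovering, acceleration and propulsion power are deliberately ignored.

```python
def ferry_loop(D_dg=20_000.0, D_max_dr=8_000.0, D_max_rg=8_000.0,
               d_load=600.0, d_offload=600.0, alpha=0.8, beta=0.1,
               T_b=8e9, V=50.0, T_total=3000):
    """One-second-step sketch of the data-ferry cycle of Fig. 3.

    d_load/d_offload are the near-DCDS/near-GS points, alpha/beta are the
    caching/offloading factors and T_b is the buffer size in bits (a toy
    1 GB here). Returns T_r, the data cumulatively received by the GS."""
    pos = d_load            # RUAV position, measured as the distance from the DCDS
    buf, T_r, heading_gs = 0.0, 0.0, False
    for _ in range(T_total):
        d_dr, d_rg = pos, D_dg - pos
        # Communication, irrespective of the heading (states 1, 2, 8 / 4, 5, 6):
        if d_dr <= D_max_dr:
            buf = min(buf + rate_bps(d_dr), T_b)     # loading from the DCDS
        if d_rg <= D_max_rg:
            delivered = min(rate_bps(d_rg), buf)     # offloading to the GS
            buf -= delivered
            T_r += delivered
        # Movement and turning decisions:
        if heading_gs:
            if d_rg > d_offload:
                pos = min(pos + V, D_dg - d_offload) # states 2-4: fly towards the GS
            elif buf <= beta * T_b:                  # state 5 done: buffer drained
                heading_gs = False
        else:
            if d_dr > d_load:
                pos = max(pos - V, d_load)           # states 6-8: fly back
            elif buf >= alpha * T_b:                 # state 1 done: buffer filled
                heading_gs = True
    return T_r

print(f"{ferry_loop() / 8e9:.2f} GB delivered to the GS")
```

Since \(D^{(d/g)}=20\,\text{km}\) exceeds \(D_{\text{max}}^{(d/r)}+D_{\text{max}}^{(r/g)}=16\,\text{km}\) in this toy setting, the loading and offloading ranges never overlap, which reproduces the mobile-relay regime of Section IV-B.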
By contrast, powerful fixed-wing UAVs can be used as the RUAV in our mobile-relaying-assisted drone swarm network, since they can fly at a much higher speed and have a much longer operating period between recharges, as well as a large-scale antenna array. Explicitly, drones acting as the DCDS are powered by built-in batteries, which typically last 30 minutes. The professional drone DJI Mavic 3 is capable of lasting up to 46 minutes. The powerful RUAV may rely on a fixed-wing UAV, which uses aerodynamics similar to those of aircraft. It has a much longer flight time, namely between 50 and 300 minutes. Nevertheless, the mobile-relaying-assisted drone swarm network is indeed energy-critical, which may determine whether the mission can be completed. Wireless power transfer and energy harvesting [41] are promising technologies for powering drones and wireless sensors. However, classical energy harvesting and wireless power transfer are critically dependent on the charging distance. Oubbati _et al._[26] conceived a wireless powering strategy by deploying a set of intelligent flying energy sources operating autonomously. Multiagent deep reinforcement learning was employed for optimizing the energy transfer between the flying energy sources and the UAVs. Another potential solution is to use laser-guns for charging [42]. But again, our investigations in this paper do not consider the propulsion power issues, which may be further investigated under the assumption of offloading data to the GS whilst charging the RUAV. Alternatively, a powerful RUAV can be used as a wireless power station for the DCDS, whilst loading data from the DCDS. Pareto optimization of the network lifetime, the data delivered and the delay imposed can be conducted, while considering the buffer size, battery capacity, loading/offloading points and link adaptation. In our proposed optimization scheme, we maximize the data delivered in a given time period, whilst minimizing the delay imposed, along with considering the working time, the communication distance and the buffer size. The computational complexity is bounded by the number of generations \(g_{\text{max}}\) and the population size \(P_{s}\). Some additional complexity is imposed by the crossover and mutation as well as selection operations. Roughly, the computational complexity can be quantified by the number of cost function (CF) evaluations, which is given by \((P_{s}+N_{O})g_{\text{max}}\) CF evaluations. The \(\epsilon\)-MOGA assisted Pareto-optimization detailed in Subsection IV-C can be implemented either online or offline, depending on whether the operating conditions change, such as the total distance between the GS and the DCDS, the buffer size of the RUAV, the number of DCDSs and the number of antennas activated, as well as the working time (network lifetime). Typically, the buffer size of the RUAV, the number of DCDSs and the number of antennas activated will remain unchanged, once the mobile-relaying-assisted drone network has been established. But the total distance between the GS and the DCDS may change, if the DCDS flies to distant areas for sensing and surveillance. The network lifetime is limited by the battery capacity, which typically remains unchanged as well. But some factors may affect the battery capacity, such as the ambient operating temperature, payload, wind and altitude. It would be unsafe to allow a drone to operate until its battery runs out. Backup drones may be deployed to replace the DCDS following a specifically designed handover strategy to avoid service interruption.
Again, Pareto optimization of the network lifetime, the data delivered and the delay imposed, as well as wireless powering [26], can be jointly considered in future investigations.
## V Simulation Experiments
In this section, we investigate the achievable performance of our distance-based ACM assisted RUAV-aided drone swarm communications system consisting of a GS, 4 RUAVs and 32 DCDSs. The GS is serving 4 RUAVs at the same time, whilst each RUAV is capable of simultaneously servicing 8 DCDSs. Specifically, we focus our attention on the achievable performance of the targeted DCDS and RUAV in the presence of realistic interference. Traditional aeronautical communications mainly use the very high frequency band spanning from 118 MHz to 137 MHz, which has been almost fully licensed. Moreover, it is impossible to mount large-scale antenna arrays on UAVs in this frequency range. In order to avoid license restrictions whilst providing high-rate aeronautical communications, it is of prime importance to explore unlicensed frequencies in the millimeter-wave (mmWave) band spanning from 30 GHz to 300 GHz, where the wavelength ranges from 1 mm to 10 mm, resulting in an antenna spacing of 0.5 mm to 5 mm. Hence, the powerful RUAV relying on a fixed-wing UAV is capable of carrying a large-scale mmWave antenna array. Specifically, the wingspan of a fixed-wing UAV is typically 3 metres, which offers enough space for mounting hundreds of antennas, if we use a 60 GHz carrier frequency. Without loss of generality, both the GS and the RUAV are equipped with \(N_{r}=64\) DRAs. Since the size of a drone is much smaller, and it is less powerful in terms of payload weight and flight duration, the DCDS consists of 8 single-TA drones and hence the number of TAs is \(N_{t}=8\). Furthermore, the RUAV will activate \(N_{t}=8\) transmit antennas for forwarding the DCDS's messages. The velocity of the RUAV is \(50\,\text{m/s}\). The network is allocated a bandwidth of \(B_{\text{total}}=6\) MHz at the carrier frequency of 60 GHz. The transmit power per TA is \(P_{t}=78\) mW. Typically, the UAV channel consists of a LoS path and a cluster of reflected/delayed paths [28, 29, 43]. Hence, the drones experience Rician fading, where the Rician factor is set to \(K_{\text{Rice}}=5\) dB. We consider a pair of RUAV relaying assisted FANET scenarios based on either static or mobile relaying. Hence, we design two simulation experiments to investigate the achievable performance of the proposed distance-based ACM and RUAV-aided drone swarm. The minimum and maximum distances between the RUAV and the GS/drones are 0.5 km and 8 km, respectively. The minimum distance is imposed for flight safety. The maximum distance is limited by the maximum communication range, beyond which the throughput is zero, as illustrated in Fig. 4 and Table III. To study the impact of the RUAV's buffer size on the achievable performance, both 32 GB and 64 GB buffers are considered in our simulations. The default distance-based ACM assisted RUAV-aided drone swarm communications system parameters used for our analysis and simulations are summarised in Table II, whilst the distance-based ACM modes used are detailed in Table III.
### _Distance-based ACM_
The theoretically achievable rate per TA as a function of the distance is indicated by the solid curve marked by dots in Fig. 4.
By designing the eight distance thresholds \(d_{q}\) for \(0\leq q\leq 7\) to ensure that the rate of mode \(q\) is lower than the theoretically achievable rate in the distance range \([d_{q},\ d_{q-1}]\), we obtain the corresponding six desired mode-switching thresholds \(d_{1},\dots,d_{6}\) for this ACM, which are indicated in Fig. 3. Note that \(d_{0}\) and \(d_{7}\) represent the near-GS point and the near-DCDS point, respectively, as illustrated in Fig. 1 (b). The seven ACM modes used, along with the associated modulation schemes and coding rates, are shown in Table III. ### _Scenario I: stationary relay is available_ In _Scenario I_, both the RUAV-to-DCDS link and the RUAV-to-GS link exist at the same time. As shown in Fig. 2, there are multiple cases of _Scenario I_ depending on the distance between the DCDS and the GS. As an example of our investigation for _Scenario I_, the distance between the DCDS and the GS is 8.5 km and the RUAV hovers between them, corresponding to Case 3 of Fig. 2. But this investigation is equally applicable to the other cases upon simply changing the related parameter settings. Recalling the analysis of Subsection IV-A, the maximum end-to-end throughput of the RUAV acting as a static relay can be achieved when the RUAV's distance to the GS \(d^{(r/g)}\) is in the range of \([4.0\,\mathrm{km},\ 4.5\,\mathrm{km}]\). We select the middle point between the DCDS and the GS as the location where the RUAV hovers, i.e., we have \(d^{(r/g)}\!=\!4.25\) km and \(d^{(r/d)}\!=\!4.25\) km. The achievable maximum end-to-end throughput is 1.000 bps/Hz per TA, whilst the total throughput of all the \(N_{t}\!=\!8\) TAs is 8.000 bps/Hz. Again, as illustrated in Fig. 2 (b), if the RUAV hovers at the near-DCDS point \(d^{(r/g)}\!=\!8.0\) km or the near-GS point \(d^{(r/g)}\!=\!0.5\) km, it can only achieve a minimum end-to-end throughput of 0.459 bps/Hz per TA, i.e., a total throughput of 3.672 bps/Hz for all the \(N_{t}\!=\!8\) TAs. Naturally, the RUAV is also capable of acting as a mobile relay. We also want to know whether, upon acting as a mobile relay, it can provide a higher end-to-end throughput without imposing extra delay. Fig. 4: An example of the distance-based ACM scheme. When the RUAV acts as a mobile relay, it shuttles back and forth between the near-DCDS point \(d_{\text{opt}}^{(d/r)}\) and the near-GS point \(d_{\text{opt}}^{(r/g)}\). Explicitly, the RUAV hovers at the near-DCDS point \(d_{\text{opt}}^{(r/d)}\) to receive the data collected by the eight DCDSs at the maximum potential throughput. When the data gleaned fills a certain percentage \(\alpha_{\text{opt}}\) of its buffer, it will fly to the near-GS point at \(d_{\text{opt}}^{(r/g)}\) to offload the data to the GS. Note that there is also some additional end-to-end data transmission at the throughput of \(\min\{R_{\text{total}}^{r/g},R_{\text{total}}^{d/r}\}\), since both the RUAV-to-DCDS link and the RUAV-to-GS link exist at the same time. The Pareto optimal multiple-objective solutions \(\left[d_{\text{opt}}^{(d/r)}\ \ d_{\text{opt}}^{(r/g)}\ \alpha_{\text{opt}}\ \beta_{\text{opt}}\right]\), comprising the near-DCDS point, the near-GS point, the maximum factor of caching data, and the minimum factor of offloading data, are also affected by the buffer size. Firstly, in Fig. 5, we investigate the total amount of data transmitted within the given time period of 50 minutes. Explicitly, Fig. 5(a) depicts the performance achieved when the RUAV buffer size is 32 GB, whilst Fig. 5(b) depicts the performance achieved when the RUAV buffer size is 64 GB.
Naturally, the buffer size has no impact on the stationary relay. The stationary relay associated with the minimum end-to-end BE delivers the minimum total amount of data to the GS, where again BE represents bandwidth efficiency. By contrast, the stationary relay having the maximum end-to-end BE is capable of delivering about 124.3 GB more data than the stationary relay having the minimum end-to-end BE. When we exploit the mobility of the RUAV as a mobile relay, there are two Pareto optimal solutions for the RUAV having a 32 GB buffer, which are \(\left[d_{32\text{G,opt1}}^{(d/r)}\ d_{32\text{G,opt1}}^{(r/g)}\ \alpha_{32\text{G,opt1}}\ \beta_{32\text{G,opt1}}\right]=\left[3450.5\,\text{m}\ \ 632.0\,\text{m}\ \ 0.64\ \ 0.11\right]\) and \(\left[d_{32\text{G,opt2}}^{(d/r)}\ d_{32\text{G,opt2}}^{(r/g)}\ \alpha_{32\text{G,opt2}}\ \beta_{32\text{G,opt2}}\right]=\left[505.5\,\text{m}\ \ 576.0\,\text{m}\ \ 0.88\ \ 0.12\right]\), respectively. The multiple-objective Pareto optimal Solution 2 is capable of delivering 58.14 GB more data to the GS than the stationary relay having the maximum end-to-end BE. In normalized terms, it delivers 40.38% more data. The multiple-objective Pareto optimal Solution 1 delivers 15.29 GB more data than the stationary relay having the maximum end-to-end BE, but it imposes a shorter delay than the multiple-objective Pareto optimal Solution 2. The delay imposed is defined as the time at which the effective end-to-end BE becomes higher than that of the stationary relay having the minimum end-to-end BE (see Eq. (73)), as can be observed in Fig. 6.
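For clarity, these two metrics admit the following formalization, which is a plausible reading of Eqs. (72) and (73) based on their verbal definitions, since the equations themselves are not reproduced in this section. Let \(D_{\mathrm{GS}}(t)\) denote the total amount of data delivered to the GS by time \(t\); then \[R_{e}(t)=\frac{D_{\mathrm{GS}}(t)}{t},\qquad t^{*}=\min\left\{t\geq 0:R_{e}(t)\geq R_{e}^{*}\right\},\] where \(R_{e}^{*}\) is the required minimum effective end-to-end average data rate. A stationary relay thus exhibits a constant \(R_{e}(t)\), whilst that of a mobile relay fluctuates with its position and ACM mode.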
When the buffer size of the RUAV is 64 GB, there are also two Pareto-front optimal solutions. For Solution 1, we have \(d_{64\text{G,opt1}}^{(d/r)}\!=\!3496.9\) m, \(d_{64\text{G,opt1}}^{(r/g)}\!=\!586.2\) m and \(\alpha_{64\text{G,opt1}}\!=\!0.50\). The stationary relays having the maximum and the minimum end-to-end rates maintain constant effective end-to-end average data rates, which are given by \(4.80\times 10^{-2}\) GB/s and \(2.20\times 10^{-2}\) GB/s, respectively. However, the effective end-to-end average data rate defined in Eq. (72) fluctuates when the RUAV acts as a mobile relay, which is caused by switching the ACM modes in line with the communication distance in order to maximally exploit the link capacity. Observe from Fig. 6(a) for the buffer size of 32 GB that the effective end-to-end average data rate of the multiple-objective Pareto optimal Solution 2 is always higher than that of the stationary relay having the maximum end-to-end rate once the time passes 500 s. Furthermore, it is higher than the effective end-to-end average data rate of the multiple-objective Pareto optimal Solution 1 for \(t\geq 400\) s. If we consider the rate of the stationary relay having the minimum end-to-end rate as the required minimum effective end-to-end average data rate \(R_{e}^{*}\), the delay as defined in Eq. (73) becomes \(t^{*}=0\) s for the multiple-objective Pareto optimal Solution 1. By contrast, the delay imposed by the multiple-objective Pareto optimal Solution 2 is \(t^{*}=300\) s. Similar trends can be observed in Fig. 6(b) for the buffer size of 64 GB. The amount of data cached in the buffer of the RUAV versus time can be observed from Fig. 7. When the RUAV acts as a stationary relay, no data is cached in the buffer. Hence, we only plot the data cached in the buffer when the RUAV acts as a mobile relay. It can be seen from both Fig. 7(a) and Fig. 7(b) that the multiple-objective Pareto optimal Solution 2 fully exploits the capacity of the buffer and delivers the maximum data from the DCDS to the GS, but it imposes a longer delay in reaching the required minimum effective end-to-end average data rate \(R_{e}^{*}\), as seen in Fig. 6. By contrast, the multiple-objective Pareto optimal Solution 1 does not fully exploit the capacity of the buffer and delivers less data from the DCDS to the GS than the multiple-objective Pareto optimal Solution 2, but it imposes a shorter delay. ### _Scenario II: stationary relay is unavailable_ In _Scenario II_, even the minimum-rate most robust communication link may only exist either for the RUAV-to-DCDS connection or for the RUAV-to-GS connection. Explicitly, when the distance between the DCDS and the GS is longer than 16 000 m, it is impossible to establish both the RUAV-to-DCDS link and the RUAV-to-GS link at the same time. As a specific example, we set the distance between the DCDS and the GS to 25 000 m. The minimum and maximum distances between the RUAV and the GS/drones are 500 m and 24 500 m, respectively. Recall from Fig. 3 that the maximum communication distance is 8 000 m, which means that when the distance between the RUAV and the GS/drones exceeds 8 000 m, there is no communication link. When the buffer size of the RUAV is 32 GB, there are 29 Pareto optimal solutions. Here we only characterize the solution having the minimum delay and the solution having the maximum data delivered, which are \(\left[d_{32\text{G,opt1}}^{(d/r)}\ d_{32\text{G,opt1}}^{(r/g)}\ \alpha_{32\text{G,opt1}}\ \beta_{32\text{G,opt1}}\right]=\left[953.\,\text{m}\ \ 510.2\,\text{m}\ \ 0.50\ \ 0.13\right]\) and \(\left[d_{32\text{G,opt2}}^{(d/r)}\ d_{32\text{G,opt2}}^{(r/g)}\ \alpha_{32\text{G,opt2}}\ \beta_{32\text{G,opt2}}\right]=\left[779.\,\text{m}\ \ 547.2\,\text{m}\ \ 0.60\ \ 0.26\right]\), respectively. As a comparison, we also include two solutions without any optimization as our benchmarks, which are the nearest-loading-point and nearest-offloading-point solution as well as the farthest-loading-point and farthest-offloading-point solution. Explicitly, they are given by \(\left[d_{32\text{G,b1}}^{(d/r)}\ d_{32\text{G,b1}}^{(r/g)}\ \alpha_{32\text{G,b1}}\ \beta_{32\text{G,b1}}\right]=\left[500.0\,\text{m}\ \ 500.0\,\text{m}\ \ 1.0\ \ 0\right]\) and \(\left[d_{32\text{G,b2}}^{(d/r)}\ d_{32\text{G,b2}}^{(r/g)}\ \alpha_{32\text{G,b2}}\ \beta_{32\text{G,b2}}\right]=\left[7999.9\,\text{m}\ \ 7999.9\,\text{m}\ \ 1.0\ \ 0\right]\), respectively. When the buffer size of the RUAV is 64 GB, there are 25 Pareto optimal solutions. We characterize the solution having the minimum delay and the solution having the maximum data delivered, given by \(\left[d_{64\text{G,opt1}}^{(d/r)}\ d_{64\text{G,opt1}}^{(r/g)}\ \alpha_{64\text{G,opt1}}\ \beta_{64\text{G,opt1}}\right]=\left[829.\,\text{m}\ \ 3459.3\,\text{m}\ \ 0.50\ \ 0\right]\) and \(\left[d_{64\text{G,opt2}}^{(d/r)}\ d_{64\text{G,opt2}}^{(r/g)}\ \alpha_{64\text{G,opt2}}\ \beta_{64\text{G,opt2}}\right]=\left[839.3\,\text{m}\ \ 523.2\,\text{m}\ \ 0.85\ \ 0.08\right]\), respectively.
Fig. 6: The effective data rate as a function of time in _Scenario I_. In this case, the nearest-loading-point and nearest-offloading-point solution as well as the farthest-loading-point and farthest-offloading-point solution are given by \(\left[d_{64\text{G,b1}}^{(d/r)}\ d_{64\text{G,b1}}^{(r/g)}\ \alpha_{64\text{G,b1}}\ \beta_{64\text{G,b1}}\right]=\left[500.0\,\text{m}\ \ 500.0\,\text{m}\ \ 1.0\ \ 0\right]\) and \(\left[d_{64\text{G,b2}}^{(d/r)}\ d_{64\text{G,b2}}^{(r/g)}\ \alpha_{64\text{G,b2}}\ \beta_{64\text{G,b2}}\right]=\left[7999.9\,\text{m}\ \ 7999.9\,\text{m}\ \ 1.0\ \ 0\right]\), respectively, which are identical to those of the 32 GB buffer scenario. The total amount of data transmitted from the DCDS to the GS is investigated in Fig. 8. Observe from Fig. 8(a) that both the multiple-objective Pareto optimal solutions are capable of delivering more data than the pair of benchmark solutions when the buffer size is 32 GB. The multiple-objective Pareto optimal Solution 2 delivers the most data from the DCDS to the GS, regardless of the buffer size. Explicitly, it delivers 12.4 GB more data from the DCDS to the GS than the benchmark Solution 1 when the buffer size is 32 GB, and 25.87 GB more data than the benchmark Solution 1 when the buffer size is 64 GB. In other words, our solution is capable of delivering 19.24% and 26.86% extra data compared to the benchmark Solution 1 when the buffer sizes are 32 GB and 64 GB, respectively. The benchmark Solution 2 delivers the minimum data from the DCDS to the GS. In particular, it delivers no data to the GS in the period of 3000 s for the buffer size of 64 GB, because the RUAV has just completed its data loading action at the near-DCDS loading point and is heading to the GS, but has not yet reached the communication range of the GS. The effective end-to-end average data rate is investigated in Fig. 9, which fluctuates up and down as and when the RUAV changes its status, in line with the distance-dependent rates illustrated in Fig. 3. Observe from Fig. 9(a) that although the multiple-objective Pareto optimal solutions do not always have a higher effective end-to-end average data rate than the benchmark Solution 1, they reach a higher effective end-to-end average data rate within 3000 s when the buffer size is 32 GB. Fig. 7: Data cached in the RUAV buffer as a function of time in _Scenario I_. Fig. 8: The total data transmitted in _Scenario II_. By contrast, Fig. 9(b) shows that only the multiple-objective Pareto optimal Solution 2 reaches a higher effective end-to-end average data rate than the benchmark Solution 1 at the end of the given time period, when the buffer size is 64 GB. As expected, when the buffer size is 64 GB, the effective end-to-end average data rate of the benchmark Solution 2 is zero. The amount of data cached in the buffer of the RUAV can be observed from Fig. 10.
It can be seen from both Fig. 10(a) and Fig. 10(b) that for the benchmark Solution 1, a large amount of data cached in the buffer of the RUAV has still not been offloaded to the GS at the end of the time period considered. Additionally, the benchmark Solution 2 has not had a chance to offload the data cached in its buffer to the GS by the end of the time period considered, when the buffer size is 64 GB. By contrast, both the multiple-objective Pareto optimal solutions have offloaded almost all the data to the GS by the end of the time period for both the 32 GB and the 64 GB buffers. ## VI Conclusions An ACM-aided and mobile relaying-assisted drone swarm network architecture, consisting of a DCDS, an RUAV and a GS, was conceived. The DCDS is responsible for collecting data within a target area, whilst the RUAV acts as a mobile relay for hauling data from the DCDS to the GS. Furthermore, we have designed an \(\epsilon\)-MOGA assisted Pareto-optimization scheme associated with the four decision variables of the near-DCDS loading point, the near-GS offloading point, the maximum factor of loading data, and the minimum factor of offloading data, in order to maximize the data delivered from the DCDS to the GS, while imposing a minimum delay. We have investigated a pair of scenarios. In the first case, there are simultaneous communication links for both the DCDS-to-RUAV and the RUAV-to-GS connections, while in the second case, the DCDS-to-RUAV and RUAV-to-GS links do not exist concurrently. Fig. 9: The effective data rate as a function of time in _Scenario II_. Fig. 10: Data cached in the buffer of the RUAV as a function of time in _Scenario II_. Our simulation results have demonstrated that our \(\epsilon\)-MOGA assisted mobile relaying is capable of delivering more data from the DCDS to the GS, while imposing minimum delay. In the scenario where there are simultaneous DCDS-to-RUAV and RUAV-to-GS links, our solution is capable of delivering 40.38% and 45.38% more data than the RUAV acting as a stationary relay in the time period of 50 minutes, when the buffer sizes are 32 GB and 64 GB, respectively. In the scenario where the DCDS-to-RUAV and RUAV-to-GS links do not exist concurrently, our solution is capable of delivering 19.24% and 26.86% more data than a non-optimized benchmark solution in the time period of 50 minutes, when the buffer sizes are 32 GB and 64 GB, respectively.
2306.10620
A Metadata-Based Ecosystem to Improve the FAIRness of Research Software
The reuse of research software is central to research efficiency and academic exchange. The application of software enables researchers with varied backgrounds to reproduce, validate, and expand upon study findings. Furthermore, the analysis of open source code aids in the comprehension, comparison, and integration of approaches. Often, however, no further use occurs because relevant software cannot be found or is incompatible with existing research processes. This results in repetitive software development, which impedes the advancement of individual researchers and entire research communities. In this article, the DataDesc ecosystem is presented, an approach to describing data models of software interfaces with detailed and machine-actionable metadata. In addition to a specialized metadata schema, an exchange format and support tools for easy collection and the automated publishing of software documentation are introduced. This approach practically increases the FAIRness, i.e., findability, accessibility, interoperability, and so the reusability of research software, as well as effectively promotes its impact on research.
Patrick Kuckertz, Jan Göpfert, Oliver Karras, David Neuroth, Julian Schönau, Rodrigo Pueblas, Stephan Ferenz, Felix Engel, Noah Pflugradt, Jann M. Weinand, Astrid Nieße, Sören Auer, Detlef Stolten
2023-06-18T19:01:08Z
http://arxiv.org/abs/2306.10620v1
# A Metadata-Based Ecosystem to Improve the FAIRness of Research Software ###### Abstract The reuse of research software is central to research efficiency and academic exchange. The application of software enables researchers with varied backgrounds to reproduce, validate, and expand upon study findings. Furthermore, the analysis of open source code aids in the comprehension, comparison, and integration of approaches. Often, however, no further use occurs because relevant software cannot be found or is incompatible with existing research processes. This results in repetitive software development, which impedes the advancement of individual researchers and entire research communities. In this article, the _DataDesc_ ecosystem is presented - an approach to describing data models of software interfaces with detailed and machine-actionable metadata. In addition to a specialized metadata schema, an exchange format and support tools for easy collection and the automated publishing of software documentation are introduced. This approach practically increases the FAIRness, i.e., findability, accessibility, interoperability, and so the reusability of research software, as well as effectively promotes its impact on research. **Keywords:** Research Data Management (RDM), FAIR, software metadata, interface description, semantic software description, software publication, software reuse, machine-interpretable, application profile, CodeMeta ## 1 Introduction Research in many academic disciplines relies on computational methods, to the degree that the utilization of software has become integral in numerous fields. Thus, the efficient discovery and reuse of research software is essential for academic progress and communication. Furthermore, the examination of open source code aids in the comprehension, comparison, and integration of methodologies, and the application of software enables users with various academic backgrounds to replicate, validate, and build upon study findings. Scientific software publications are also becoming increasingly important for measuring the research impact and so for the reputation of individual researchers [1, 2]. Finding compatible software that meets researchers' content requirements and integrates seamlessly into existing research workflows remains a significant challenge [3]. Currently available software metadata schemas, such as _CodeMeta_[4], only focus on general information and omit detailed technical descriptions of interfaces, which are important for interoperability and subsequent use [5]. At most, such information can be found on software documentation sites, where it is neither standardized nor machine-actionable. Furthermore, metadata is stored and exchanged in various formats, and no standardized exchange format has yet been established that would allow the broad reuse of metadata once it has been captured [6]. Therefore, in order to make a software known on various platforms and increase its impact, metadata must often be repeatedly collected for each platform separately, which greatly increases the documentation effort that is already perceived to be high. At the same time, the broad dissemination of metadata is essential for the long-term discoverability and subsequent use of software [7]. As a result, researchers must invest considerable effort in both documenting and publishing metadata, as well as finding and integrating research software.
Every time software is not found and reused but instead redundantly developed, significant avoidable programming, documentation, and maintenance efforts are imposed. To address these issues, adaptations of the FAIR Guiding Principles, which aim to increase the findability, accessibility, interoperability, and reusability of research data [8], were recently adopted specifically for research software [9, 10]. Amongst other things, these principles require research software to be registered and indexed in searchable platforms, and annotated with rich metadata. In order to increase the interoperability of software components, the metadata must include interface definitions of modular software architectures, making interoperability the most challenging amongst the four high-level principles. On the one hand, all metadata must comply with domain-relevant community standards in order to be easily understandable for researchers. On the other, the metadata must be machine-actionable for automated software discovery. In practice, however, it is unclear how the postulated abstract principles may be put into action [11]. The DataDesc ecosystem presented in this article is a practical approach to improving the interoperability and findability of research software. It centers around a software metadata schema that describes the data models on which software interfaces are based. In order to capture characteristics that are usually only described in documentation, metadata elements from various established schemas were reused, combined, and supplemented with new ones. In addition, the ecosystem provides an exchange format in which this information is mapped in a machine-actionable manner. The hierarchical data structure of the OpenAPI standard was chosen as its basis to facilitate its reuse in automated processes. Finally, it includes a toolset that makes it easy to capture and publish software metadata from the source code. The remainder of this article is structured as follows: Section 2 presents a review of existing software description schemas, addressing different formats in the tension between metadata and documentation. Furthermore, automated description tools and software publication platforms are compared on the basis of the metadata formats they generate or use. Section 3 explains the different components of the DataDesc ecosystem. First, the DataDesc schema is described along with the typical data flow between individual interface components of research software on the basis of its contents, formats, value ranges, and structures. Then, an explanation of the structure of the exchange format and the individual tools that support metadata generation is given. Finally, pipelines to publication platforms are described with which the metadata can be disseminated in a partially automated way. In Section 4, the presented approach is exemplarily applied to the Framework for Integrated Energy System Assessment (ETHOS.FINE) [12] modeling framework from the energy domain, whereupon the strengths and limitations of the DataDesc schema in particular are discussed. Section 5 concludes with a summary of the key characteristics of the presented approach and provides an outlook on future work. ## 2 Related Work An overview of current software description schemas is provided in Section 2.1. Additionally, software publishing platforms and automated description tools are contrasted in Sections 2.2 and 2.3.
### Software Description Schemas Software Metadata Standards. Metadata schemas (or standards) are sets of metadata elements (or terms) that are compiled to unify the description of artifacts within their scope. Many different metadata schemas exist for a variety of use cases. Whereas _Dublin Core_[13] outlines general metadata terms, the _DataCite Schema_[14] focuses on describing research data. _schema.org_ is intended to describe web pages with structured data markups but it is also widely used for other purposes [15]. With respect to research software, CodeMeta is a popular community-driven metadata standard. It is based on _schema.org_, which it augments with several additional terms. Various crosswalks exist - that is, mappings from one schema to another - between CodeMeta and other metadata schemas. CodeMeta covers many aspects of software metadata, with some terms focusing on technical details such as file size or operating system and others on administrative information like licenses and links to the software repository. It supports the unambiguous assignment of authors, contributors, licenses, and more via Uniform Resource Identifiers (URIs). The purpose of a software can be specified by means of a textual description, application categories, keywords, and a link to a README file or reference publication. Apart from a coarse classification, the declaration of a software's purpose is therefore still far from being readily machine-actionable; that is, without interpreting (or misinterpreting) natural language. Furthermore, CodeMeta does not include terms for specifying the input and output of a software, nor does it include terms for specifying features or methods implemented by a software. Similarly, the _Citation File Format_ schema defines general metadata for the citation of software repositories without describing the software's purpose and interface [16]. In the domain of geoscience, Garijo et al. developed the _Software Description Ontology_[17] by extending their own approach, namely _OntoSoft_[18]. OntoSoft elements are structured in six categories: identify, understand, execute, do research, get support, and update. The ontology captures technical metadata like programming language and dependencies, as well as descriptive data like name, website, and contributors. The authors added a description of the input and output data, also utilizing the _Scientific Variables Ontology_, and aligned _OntoSoft_ with CodeMeta. The metadata are published to an open knowledge graph [19]. Garijo et al. support the linking to other instances in the semantic web, like Wikidata, the _Scientific Variables Ontology_, and others. Additionally, they developed programs to support researchers in metadata creation and the search for software models [20, 21]. In the domain of bioinformatics, Ison et al. developed the metadata schema _biotoolsXSD_ for the software registry _bio.tools_[22, 23, 24]. The metadata is expressed as an XML schema and contains 55 elements, of which ten are mandatory. The use of the EDAM ontology as a value vocabulary is required for elements such as function, input, and output. The metadata schema also contains software-specific elements like programming language, license, and operating system. The use of an ontology is not required for these.
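As an illustration of the general-information focus shared by these schemas, the following sketch assembles a minimal codemeta.json record; the property names are genuine CodeMeta/schema.org terms, whereas all concrete values, including the repository URL and the ORCID, are invented for illustration.

```python
import json

# Minimal CodeMeta record; terms come from schema.org plus the CodeMeta
# extensions, while all values are purely illustrative.
codemeta = {
    "@context": "https://doi.org/10.5063/schema/codemeta-2.0",
    "@type": "SoftwareSourceCode",
    "name": "ExampleSolver",
    "description": "A toy research code used to illustrate CodeMeta.",
    "version": "1.2.0",
    "license": "https://spdx.org/licenses/MIT",
    "codeRepository": "https://github.com/example/example-solver",
    "programmingLanguage": "Python",
    "keywords": ["energy systems", "optimization"],
    "author": [{
        "@type": "Person",
        "givenName": "Jane",
        "familyName": "Doe",
        "@id": "https://orcid.org/0000-0000-0000-0000",
    }],
}

with open("codemeta.json", "w", encoding="utf-8") as f:
    json.dump(codemeta, f, indent=2)
```

Note that such a record names people, licenses, and repositories unambiguously, yet, as discussed above, says nothing machine-actionable about the software's inputs, outputs, or interface functions.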
Interface Description Standards. In order to increase its technical interoperability and reusability, software can be documented by means of interface description languages. The syntax of such a language enables the formal and programming language-agnostic description of interface functions and their parameters. Well-known representatives include the Web Service Description Language (WSDL) [25] and the Web Application Description Language (WADL) [26]. Both are XML-based specification standards that describe the syntactical elements of web services and, primarily, how to access them. They are utilized to simplify the information exchange in Web 2.0 application development. Whereas WSDL is used in conjunction with SOAP, WADL enables the description of HTTP (and in particular REST-conform) web services. Both languages provide machine-processable descriptions but do not support taxonomy or ontology information for semantic classification. The WSDL and WADL standards were last updated in 2007 and 2009, respectively. The OpenAPI specification is an interface description language that focuses on REST APIs [27]. By utilizing YAML and JSON, it is both machine-actionable and human-readable. By default, it is used to define the general properties of APIs, such as the version, contact, and license information or server names and addresses. However, it also defines technical aspects, mainly with respect to REST interface functions, like the paths to endpoints, HTTP verbs, parameters, or response code descriptions. The OpenAPI standard also allows for the annotation of custom properties using a concept called _extensions_ or _x-attributes_. These extensions provide a powerful way of describing additional functionality not covered by the standard specification. As an open and non-proprietary state-of-the-art industry standard, the OpenAPI specification is actively maintained and regularly updated. The Web Ontology Language for Web Services (OWL-S) defines ontologies built on top of the Web Ontology Language (OWL) for describing semantic web services on a technical level, making it more powerful but also more complex than regular description languages (i.a., WSDL and WADL) [28]. It describes the purpose of services, how they are accessed, and how they function. Although more powerful than comparable description languages, OWL-S is not an 'end-all-be-all' solution to service descriptions and requires domain-specific ontologies for describing domain-specific functionality. Furthermore, its focus on semantic web services greatly reduces its legibility and makes it poorly suited to human reading; it was last updated in 2004. The Functional Mock-up Interface (FMI) is an open-source standard for simulation software interfaces [29]. All simulation models whose interfaces have been designed along the standard become so-called functional mock-up units (FMUs). The standard ensures that all FMUs are compatible with one another and can be executed in combination, on the basis of XML and binary files and C code containing functions, variables, and mathematical formulas. FMI comes with its own documentation standard, namely the FMI Description Schema, which only applies to FMI-conform software. It encompasses general information regarding the FMUs, such as name, version, author, and license, as well as technical information like model structures, unit, and type definitions. The schema allows structured extensions to the base standard in order to flexibly meet additional requirements. FMI is still actively maintained today and is used in many industrial companies.
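Returning to the OpenAPI extension mechanism mentioned above, the following fragment is a minimal OpenAPI-conforming document in which a custom x-attribute annotates the info section; the x-research-domain key and the endpoint are invented examples, not part of the specification.

```yaml
openapi: 3.0.3
info:
  title: ExampleSolver API
  version: 1.2.0
  x-research-domain: energy systems analysis  # custom extension attribute
paths:
  /run:
    post:
      summary: Start a model run
      responses:
        "200":
          description: Run accepted
```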
Non-standardized Software Description. Software is also described on web pages, where the use of specific terms is typically enforced, but without adopting a metadata schema, thereby only establishing uniformity on the web page itself. Schwarz and Lehnhoff [30], for example, describe a catalog of energy co-simulation components. They use a semantic media wiki to collect information on simulators and add descriptions to the simulation interfaces. The elements of the catalog, which can be used for a metadata schema, are not described in greater detail. The open energy modeling initiative (openmod) includes a list of energy models in their wiki [31]. For each of these, administrative and descriptive metadata are listed, such as license, link to a code repository, model class, and others. The descriptive elements include detailed information on the models. The elements are not formalized as a metadata schema, and controlled vocabularies are used neither for the elements nor for the values. The Open Energy Platform (OEP) introduces framework and model factsheets to describe frameworks and models [32]. These descriptions have been further developed based on the non-formalized openmod metadata elements. In addition to the description by means of metadata, software is described in documentation and specification websites, providing guidance for both users and developers (e.g., see [33]). The design ideas and specific technical elements of software are typically defined along with their underlying algorithms and procedures. Specifications for the API, user manuals, and examples of applications make it possible to correctly utilize the software. Software documentation is predominantly written in natural language and, therefore, is neither machine-actionable nor easily searchable or comparable. Although such documentation provides rich information, it is not typically considered as part of software metadata. It should be noted that existing software metadata schemas do not include technical documentation about interfaces. And although interface description languages are designed to collect this information, they focus on web services and protocols. As a result, the interface information of software that is not provided as a service is primarily published as non-standardized and non-machine-actionable information on web pages, often without any connection to controlled vocabularies or ontologies. For research software, most of which is not provided as a service, there is not yet a suitable schema that enables semantic interface descriptions. However, the near-code structures of interface description languages and the ability to connect some via extensions to established software metadata schemas offer promising foundations. ### Software Description Tools Documentation is generally regarded as an essential component of software development, and yet it is frequently neglected. This is often due to the fact that considerable effort is involved in writing detailed, well-structured, and version-controlled documentation. A recommended means of alleviating this issue is the use of automated documentation tools [34], which are specifically designed to aid in the process of creating comprehensible and complete documentation for a software project. There are many such tools available, and although the general objective is the same, they differ in their approach, programming language, or input and output formats.
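A concrete impression of the structured code comments most of these tools consume is given by the following Google-style Python docstring, which, for instance, Sphinx's Napoleon extension can parse into formatted parameter documentation; the function itself is invented for illustration.

```python
def annual_cost(capacity_kw: float, price_eur_per_kw: float) -> float:
    """Compute the annualized cost of an installed capacity.

    Args:
        capacity_kw: Installed capacity in kilowatts.
        price_eur_per_kw: Specific annual cost in EUR per kilowatt.

    Returns:
        The total annual cost in EUR.
    """
    return capacity_kw * price_eur_per_kw
```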
Many of these tools, e.g., Javadoc [35] or Perldoc [36], focus on single programming languages and use source code as their main inputs. By parsing the code, they obtain information on defined types and functions and their relationships. Some documentation tools, such as Doxygen [37] or the Sphinx plugin Napoleon [38], are able to extract this kind of information from bare code; other tools, however, rely on code comments in a predetermined format. In either case, additional metadata is typically conveyed via comments. This can comprise, for example, a general description or explanation of a function's parameters. Such information is mostly given as free text and is placed into the final documentation without change. MkDocs [39] and, in some cases, Sphinx [40] constitute an exception by only parsing manually created files, e.g., containing _reStructuredText_. They can, however, both be extended with plugins that automatically generate said text files from code. The output of documentation tools consists of nicely-formatted documentation pages, typically using HTML or LaTeX. These pages are easily readable and comprehensible to humans, but hardly machine-actionable. Roxygen2 [41] also generates intermediate files that are, in theory, machine-actionable, but, due to their custom data format, are limited in their reusability. In this regard, Swagger [42] can be distinguished from other tools. Swagger is used primarily for documenting REST APIs and provides a set of distinct but related tools for that. At its core, Swagger utilizes a YAML file standardized in the OpenAPI Specification. This file is machine-actionable and stores all metadata of an API in a structured, hierarchical way. It can be created manually or generated from code, and, when passed to the appropriate Swagger tools, is used to generate a human-readable documentation web page. Unlike many other tools, Swagger does not require specially-formatted comments within the code in order to extract the information. Furthermore, Sphinx can be extended by a plugin to enable support for OpenAPI specification files, which, as implied, makes it possible for Sphinx to generate interface descriptions from OpenAPI compliant YAML. It should be highlighted that software and, therefore, interface documentation can be parsed automatically from source code and many documentation tools are available. However, most of these rely on code comments that are formulated in natural language and which, therefore, are not directly machine-actionable. In this regard, Swagger is an exception, as it centers around a universal, machine-actionable and standardized metadata file, which is suitable for documentation pages as well as automated reuse. Even though Swagger is intended only for documenting REST interfaces, there is no lock-in to individual programming languages. Because of this inherent flexibility, it offers some potential for the development of generic software documentation workflows. ### Software Publication Platforms Software can be made discoverable and available for reuse by being published on a variety of software-specialized publication platforms. The distinct purposes and objectives of these platforms vary, however. Although some store the source code of a software in versioned repositories (e.g., [43]), in particular to enable its further development, others aim at the distribution and easy integration of mature programs (e.g., [44]).
Some platforms serve as registries, indexing large collections of software and making them searchable using detailed metadata (e.g., [45]). Others are dedicated to the provision of technical documentation and user guides (e.g., [46]). Furthermore, most of the software publication platforms differ in the data formats they accept and in the uploading processes they provide. Even when using similar file formats, the required information or information structures vary. Some platforms, such as Github [43], Gitlab [47], Bitbucket [48], or Sourceforge [49], ingest the source code directly without a specific required structure. Others support the inclusion of metadata configuration files. For example, Anaconda Distribution [44] requires a YAML file that describes the project. Maven Central [50] requires an XML POM file for storing metadata. Whereas PyPi [45] requires a TOML file with information about packages, NPM [51] generates a JSON file based on text prompts. Swaggerhub [52] requires an OpenAPI-conforming interface description file in YAML format, containing function and argument specifications. Like ReadTheDocs [46], some platforms require a software project to have a documentation folder according to a standard. In this specific case, Sphinx or MkDocs can be used in order to generate such a folder. Platforms like Gitbook [53], CRAN [54], or Github Pages [55] require programming language-specific files for the installation. For example, submitting a project to CRAN requires first creating a TAR.GZ file. Github Pages [55] can store project documentation via HTML files. The OEP [32], Open Research Knowledge Graph (ORKG) [56, 57, 58], or _bio.tools_[23], for example, require manually filling forms with project data in order to register it. There is no question that publishing platforms are critical to the dissemination, findability, and reusability of research software within and across academic communities. It is advantageous to employ various platforms in parallel to utilize their distinct strengths to increase the impact and transparency of a software. However, as no uniform format for the exchange and subsequent use of software metadata has yet been identified, metadata must often be collected redundantly and adapted to heterogeneous formats and processes, creating the need for a machine-actionable and programming language-agnostic exchange standard. ## 3 The DataDesc Ecosystem This section introduces the DataDesc ecosystem. As a central component, the DataDesc schema, which enables the thorough description of software interfaces, is explained in Section 3.1. Then, in Section 3.2, an exchange format as well as assistance tools are presented, enabling the gathering, storage, and reuse of machine-actionable metadata. Finally, procedures that can be used to share metadata on publishing platforms are defined in Section 3.3. DataDesc has been released with all of its components presented here under the open MIT license on GitHub [59]. ### DataDesc Schema Metadata schemas often focus on general information provision, which primarily includes the naming of organizations and persons involved in the development process and the technical and licensing conditions under which the software can be obtained and used. By specifying categories and keywords, they also make a valuable contribution to supporting the findability of software. Within these schemas, however, the description of interfaces can only be superficially embedded in general metadata elements. 
Although this information already provides important insights into a software, it is not sufficient to facilitate its interoperability and reusability in a machine-actionable way. Therefore, the DataDesc schema1 compensates for this omission by providing a framework for the detailed annotation of individual software interface components, leading to insights into how and with which data and programs a software can be used. The DataDesc schema promotes the reuse and integration of research software and, thereby, is an ideal extension to existing metadata schemas. Footnote 1: More precisely, DataDesc's schema is a metadata application profile, as it combines term definitions from existing metadata schemas for a particular purpose. However, as this technical difference is not decisive, the more common term _schema_ is used. An interface, as schematically depicted in Figure 1, serves as a connection point for users and programs to interact with a software. It is composed of the functions through which data can be inputted into and retrieved from the software. These functions are distinguished from the inner functions, which form the logic of the software core. The program core can only be addressed indirectly via the interface, whereby the structures and formats of the information flow are defined by the interface functions and internal data models. An interface description performed with the DataDesc schema formally identifies the characteristics of an interface according to a collection of metadata elements, detailed in Figure 2, whose meaning and use are explained in the following. Figure 1: Schematic representation of the generic information flow between software interface and core components. The schema comprises the naming (_Name_) and description (_Description_) of all functions, which are part of an interface (_Is Part Of Interface_) and over which a software can be addressed. In order to enable easy and in particular error-free use of a software, the functions' parameters, as well as their underlying data models, must be described in detail. To adequately characterize variables serving as the input or output parameters of interface functions (_Role_), their intended and allowed data must be described in terms of contents, formats, values, and structures. Data Content Description. In order to digitally process information, it must be stored in the form of variables. In the course of software development, the data content of each variable is defined. This refers explicitly to the referencing of real world concepts, such as the height or weight of a person, and not of data types, which specify whether variables can contain _integers_, _floats_, _strings_, or similar. Figure 2: Structure and content of the DataDesc schema within an OpenAPI-conforming YAML file. While a software is described with _CodeMeta_ (blue) in the info section, its objects are described in the component section with _DataDesc_ (black), which re-uses terms from _OpenAPI_ (brown) and the _Software Description Ontology_ (pink). All objects are arranged in a hierarchical structure, which is indicated by arrows pointing from child to parent objects. A precise understanding of the meaning of the data content a software requires, processes and outputs is essential for its correct and intended use. However, capturing meanings is not a trivial task. Depending on the demand for precision and generality, describing data content involves varying degrees of effort.
The easiest approach is to sensibly name variables during software development (_Name_) and explain them further in docstrings (_Description_). However, these names and free text descriptions almost always leave considerable room for interpretation as to the meaning of the data content. Instead, it is more interoperable to reference concepts from ontologies (with their respective _URIs_), which often provide unambiguous definitions that are agreed upon in the respective research domain [60]. Of course, the open collaborative development of such concepts with the broadest possible participation and agreement within a domain is a labor-intensive process requiring well-organized community infrastructures [61]. If the variable is numerical, documenting the meaning alone is insufficient for fully describing its data content. In this case, additional information about a unit is necessary, so that, for example, a duration of seven hours can be distinguished from one of seven seconds. Just as with the concepts before, a unit can be specified by a name (_Name_) and a description (_Description_), or an ontology reference (_URI_). In the context of software interfaces, specific concepts and units need not always be declared. In order to enable a greater degree of freedom in data entry and so to enable a more flexible application of a software, intended data contents can be more broadly indicated. For example, specifying the general concept _means of transport_ indicates that the software can process data about _bicycles_, _trains_, _cars_, and so forth. Likewise, instead of specific units such as _meters_, _centimeters_ or _miles_, the unit type _length_ can be used (_Unit Type_). In this context, the use of ontologies offers the advantage that they often already include information that specifically relates to more general concepts. Data Format Description. The format of a variable defines how the information it contains is to be encoded into binary data and subsequently interpreted. It provides information about which operations may be applied to the data content. The format is defined by the data type of a variable (_Data Type_). There are primitive and complex data types that can be native to programming languages or which come as custom data types provided by libraries. Primitive data types, such as _strings_, _integers_, or _booleans_, can hold single values. Complex data types, like _lists_, _tables_, _arrays_, or _classes_, can group multiple instances of primitive data types. As complex data types can also recursively contain complex data types, nested structures of arbitrary depth and complexity are possible, although their final level can only contain primitives. Complex structures of this kind are often used to define data models, which summarize the input and output data of research software into single data objects and make them centrally accessible (e.g., [62], [63]). Oftentimes, classes are used at the highest level for the representation of such data models, to which the interface functions (_Functions_) for importing and storing, as well as for reading out and exporting, are also assigned (cf. Figure 1). In order to describe not only the data types of the function parameters but also the formats nested within them, hierarchical data formats are mapped in the DataDesc schema by means of hierarchical parent-child relationships.
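A minimal sketch of how such a parent-child hierarchy might be serialized is given below, using the company example that Figure 3 develops; properties, type, and minimum are genuine OpenAPI schema keywords, whereas x-uri and x-unit-type are invented stand-ins for the corresponding DataDesc elements (_URI_, _Unit Type_) rather than the normative key spellings.

```yaml
components:
  schemas:
    Company:                # reusable description of a data model class
      type: object
      description: Information characterizing a single company
      x-uri: https://example.org/ontology#Company  # illustrative concept URI
      properties:           # child variables listed as the parent's properties
        name:
          type: string
          description: Identifying name of the company
        numberOfEmployees:
          type: integer
          description: Total number of employees of the company
          x-unit-type: count                       # illustrative unit element
          minimum: 0
```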
To avoid redundant descriptions of complex data models with each function parameter based on them, the DataDesc schema also offers the option of creating separate reusable descriptions of data model classes that can also be referenced (_Name_, _Description_, _URI_, _Is Part Of Interface_). Figure 3 (a.-c.) exemplifies how four variables of the primitive data types of _integer_ and _string_ are grouped (e.g., in the format of a _class_). Different instantiations of this class are further grouped (e.g., in a _dictionary_). Files indirectly represent another complex data type, as they can also contain and group data of arbitrary types. As is shown in Figure 1, reading in files is a widely used method of transferring data to a research software, which is why an interface description must also inform regarding permitted file formats that can be processed without errors. For each variable containing a file reference, regardless of the data type of the variable itself, e.g., _string_ or _file object_, DataDesc gives the option of specifying the format of the referenced file (_Has File Format_), e.g., text formats like _XML_, _HTML_, or _TEXT_, or binary formats like _PDF_, _XLS_, or _JPG_. Beyond that, the character encoding of text formats, e.g., _UTF-8_, _ASCII_, or _ISO 8859-1_, can be specified to support the correct interpretation of text data (_Character Encoding_). Data Value Description. When creating software, a permissible value range must be defined for each variable, guaranteeing the technically error-free processing and consistency of content for all values from within this range. Technically, the value range is defined in many programming languages by the choice of a variable type. In Java, for example, a variable of type _float_ allows all floating point values from \(-3.4*10^{38}\) to \(3.4*10^{38}\), whereas a _boolean_ can only accept the values of _true_ and _false_. In addition, a value range can be further limited based on content considerations. For example, if a longitude is to be stored in a _float_ variable, only floating point values from \(-180\) to \(+180\) degrees should be considered valid (_Minimum Value_, _Exclusive Minimum_, _Maximum Value_, and _Exclusive Maximum_). If text variables are only allowed to contain certain patterns, this can be defined through _regular expressions_ (_Regular Expression_). For example, if a filename is to consist of only letters, numbers, and underscores, this can be defined using the expression ^[A-Za-z0-9_]+$. If the allowed value range of a variable should be fully restricted to predefined values, e.g., _North_, _East_, _South_, and _West_, the DataDesc schema offers the possibility of documenting them in the form of value sets (_Value Set_). It is also part of the value description to specify whether a variable is an optional parameter (_Required_). If this is the case, it does not need to be set when the respective function is called upon. In this context, a default value can also be specified, and is used if the variable is not explicitly set (_Has Default Value_). Data Structure Description. For variables of complex data types, the internal data structures must be described at both a technical and contextual level so as to enable the correct accessing of individual values and the determination of their respective meanings. The processing procedures of software programs are designed on the basis of specific data structures whose declaration is the task of interface descriptions.
The correct function of a software is not ensured if the structure of the passed data differs from the intended data structure. Figure 3 (b.) shows four independent variables which, as they represent information characterizing the same single object, are combined in a grouping variable _company_, which itself must be described. The technical structuring of the data thereby maps their context and relates the four variables to each other and to the grouping variable: _Number of employees_ becomes _Number of employees of the company AlphaArc_. In order to capture this kind of context within an interface description, the DataDesc schema utilizes hierarchical parent-child structures to map relationships between variables. For this purpose, variables that are the child objects of data model classes or grouping variables are listed as their properties (_Properties_). In addition to grouping, the dimensional resolution of information represents a significant structural pattern. Figure 3 (a., d., and e.) shows the increasing resolution of the initially non-dimensional information: _the AlphaArc company has a total of 73 employees_. Figure 3: Comparison of widely-used data structures based on an example of information about companies. Variables are shown in gray and their values in white, whereas dimensions are displayed in dark blue with their indices in light blue. This information does not change subsequently, but the single value is broken down into individual values per store and then per store and department. Each dimension along which the information is resolved is listed as part of the variable (_Dimensions_). The DataDesc schema allows individual specification of the meaning (_Name_, _Description_, _URI_), index type (_Data Type_), and index range (_Has Minimum Value_, _Has Maximum Value_, _Value Set_, and _Value Increment_) of the dimensions. In this context, the combination of dimension indexes is unique, which is why it acts as a key and enables the unique identification and retrieval of each individual value. At the same time, an individual context is defined for each value: _15 employees is the team size of the sales department in the London store_, for example. The structural description provides not only information about the context of values but also about data access mechanisms that might be expected by interface functions (see Figure 3 (c.)). For the structure of the _company list_, which can be, e.g., in the form of a _Python dictionary_, _pandas DataFrame_, _Java HashMap_, or _SQL table_, the variable _Name_ was determined as a key index due to the identifying character of its values. As a dimension of the _company list_, the variable _Name_ allows access to individual datasets. Figure 3 (f.) shows another structure that combines grouping and resolution by adding the third dimension _year_ to the resolution of the _number of employees_ based on the two dimensions of _store_ and _department_. Here, the total number of 73 employees is not broken down further, but put in the context of a specific year, e.g., _15 employees is the team size of the sales department in the London store in the year 2010_. Together with uniform information for, e.g., the years 2015 and 2020, this third dimension turns the dataset into a time series. ### DataDesc Exchange Format and Utilities In addition to the schema for describing interfaces, the exchange format for the integrated storage and flexible subsequent use of software metadata represents the second core component of the DataDesc ecosystem.
The OpenAPI specification was chosen as its foundation, as it allows a programming language-agnostic description of software that is usable by both humans and computers. A software is described in a single OpenAPI-conforming YAML file. Its basic structure consists of a hierarchical object tree that is subdivided into the two sections of _info_ and _components_ (compare Figure 2). In the _info_ section, all general information, for example along a general software metadata schema such as CodeMeta, can be accommodated. If metadata elements are required for this that are not provided for in the OpenAPI specification, they can be added by means of _x-attribute_ extensions without violating the standard. In the _components_ section, the technical interface metadata, as described by the DataDesc schema, for example, can be specified. The _x-attributes_ again provide the opportunity to compensate for missing OpenAPI metadata elements. In addition, they form the basis for using the standard not only to describe REST-compliant interfaces; they can also be used to arrange and annotate code elements such as classes, functions, and parameters in the hierarchical object tree according to individual software interface designs. In order to support the description of software based on its general properties and the transfer of this information into the exchange format chosen in the DataDesc approach, a _browser-based input form_ was added to the ecosystem. The metadata fields of the form thereby map to the CodeMeta standard, as this is already widely used and can be applied across research domains. Thus, the uncomplicated input tool is an alternative to the JSON-based CodeMeta generator [64] and fits seamlessly into the DataDesc approach. Unlike the general metadata, the technical documentation is produced directly in the source code of a program, which is why the definition of a machine-actionable formatting of this information, as well as its automated parsing and transfer into the exchange format, must be made individually for each programming language. In the context of Python software, code components related to the interface are supplemented by decorators and individually marked up by means of DataDesc schema elements. A _Python parser_ specifically developed for this schema extracts both the relevant code structures and their metadata and automatically generates the hierarchical object tree from them, which is then stored in an exchange format-compliant file. The DataDesc utilities are complemented by a _tool for merging_ the DataDesc files, so that the entire description of a software can be represented in a single concise documentation file that is easily exchangeable. Its OpenAPI conformity also ensures high interoperability, as a multitude of publicly-available tools can be applied to it [42, 65]. ### DataDesc Publication Pipelines Making it possible for developers to create software metadata and documentation only once and then flexibly reuse it is one of the main objectives of the DataDesc approach. Against this background, technical processes are defined and, where necessary, supported with scripts that enable the collected information to be disseminated on software publication platforms. These publication pipelines are unique to each platform and subject to automation. To upload data to any of the publication platforms mentioned below, a free user account must first be created.
### DataDesc Publication Pipelines

Making it possible for developers to create software metadata and documentation only once and then flexibly reuse it is one of the main objectives of the DataDesc approach. Against this background, technical processes are defined and, where necessary, supported with scripts that enable the collected information to be disseminated on software publication platforms. These publication pipelines are unique to each platform and subject to automation. To upload data to any of the publication platforms mentioned below, a free user account must first be created. The OpenAPI-conforming YAML file can be uploaded and published on SwaggerHub [52] via its GUI or API, without any need for modifications. To publish the description on GitHub [43], it is sufficient to add the file to the software's versioned repository and reference it in the central README file. To make the documentation more visually appealing, a link to a SwaggerHub-hosted documentation page can be included. The registration of Python-based software and its metadata in the Python Package Index (PyPI) [45] has been fully automated. With a DataDesc script utilizing restructuring and conversion tools, the YAML file and corresponding software source code are reformatted, uploaded, and published on the platform. The ReadTheDocs [46] publication pipeline can also ingest information based on a DataDesc exchange file. In order to upload the documentation in the appropriate format, it must first be created using, for example, Sphinx with its extension for the parsing of OpenAPI specifications. Then, a GitHub repository comprising the generated documentation can be imported. The ORKG [56, 57, 58] provides a GUI and an API for uploading software metadata, which can be entered manually into a form. In addition, a script was added to the DataDesc ecosystem to automate the translation of the exchange format into the ORKG template structures [66]. Currently, this mapping must still be performed individually by each user. However, work is underway to include this functionality in the ORKG. As part of the development of the Open Energy Research Graph [67], efforts are being made to ensure that the exchange format can also be processed directly within the OEP [32].
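As a minimal sketch of the automatable part of such a pipeline, the snippet below loads a DataDesc exchange file and checks its top-level structure before any upload. The file name is hypothetical and the check is our own illustration, not the shipped DataDesc publication script.

```python
# Minimal pre-upload sanity check (our sketch, not the shipped DataDesc
# script); the file name "datadesc.yaml" is hypothetical.
import yaml  # PyYAML

with open("datadesc.yaml", encoding="utf-8") as fh:
    doc = yaml.safe_load(fh)

# An OpenAPI-conforming DataDesc file carries the general metadata under
# `info` and the technical interface description under `components`.
for section in ("openapi", "info", "components"):
    if section not in doc:
        raise ValueError(f"missing top-level section: {section!r}")

print(doc["info"]["title"], doc["info"].get("version"))
# From here the file can be pushed to SwaggerHub via its API, committed to
# the GitHub repository, or handed to the PyPI/ReadTheDocs pipelines.
```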
## 4 Application Case

In this section, the application of the DataDesc approach to a research software is discussed. For this, the open source FINE framework [12] from the _Energy Transformation Pathway Optimization Suite_ (ETHOS) was chosen to illustrate a use case that is both realistic in terms of complexity and intriguing with respect to the interfaces provided. Using selected excerpts from the created interface documentation shown in Figure 4, the OpenAPI-compliant syntax of the YAML file generated using the DataDesc utilities is presented and the semantic expression capabilities of the DataDesc schema are assessed. To allow for a more in-depth review of the entire DataDesc approach, all code and documentation files created as part of this example application are published in the DataDesc repository [59]. FINE is a Python package with a five-year development history originating in the research domain of energy systems analysis [12]. It enables the modeling, optimization, and evaluation of energy systems. In addition to accounting for technical and environmental constraints, its optimization also seeks to minimize total annual energy system costs. It supports the creation and computation of spatially, temporally, and technologically highly-resolved models while integrating complexity-reduction techniques to shorten computation times. In its current version, 2.2.2 from 2022, the framework includes around 20,000 lines of code (excluding blank lines) and 10,000 lines of code documentation. Although the source code of the software project is hosted on GitHub [68], the user documentation is published on ReadTheDocs [69]. The documentation pages are based on the inline docstrings in the source code and were automatically generated using the Sphinx package. In addition, a short entry in the OEP's software framework list was written for the framework [70]. The FINE software is based on a central data model, namely the _Energy System Model_ (ESM), which is represented by the ESM and component classes. It holds multidimensional information pertaining to the energy system under investigation and comprises all characteristics of its components, e.g., for the provision, transmission, and storage of energy. As input, besides basic parameters for calculation control, the software requires the general conditions of the energy system and the techno-economic parameters of its components. As output, it provides information for the design and operation of a minimum-cost energy system. As the ESM incorporates all of this data, it simultaneously serves as the software's input and output data model. As noted in the general schematic in Figure 1, the FINE interface offers the possibility of reading the input data from files or having them transferred by preceding software. While in the second case the information can be gathered step by step in the data model classes, for file-based information transfer a single complex file containing all parameters must be provided. Both Excel and NetCDF file formats are accepted for this purpose. On the output side of the interface, the result data can also be saved in Excel or NetCDF form or visually depicted using a range of plotting functions. The FINE interfaces for reading Excel and NetCDF files are implemented by one function each, which mainly obtain a path to the respective input file. Here, DataDesc offers the possibility, in addition to the superficial description of the string variables, of going into depth and also describing the necessary internal structures of the input files (cf. Figure 4, lines 39-50). Thus, for the NetCDF interface, the control parameters and input variables were documented in the clearly structured hierarchical data format, which arranges the information according to entity types and in each case lists their attributes in accordance with their different dimensional characteristics. The documentation of the Excel data structure required more effort, as it does not group the data by entities, but distributes them to different spreadsheets depending on their dimensional resolutions. The resulting tables, in which the multi-dimensional attribute characteristics of different entities are mapped by means of different index columns, required precise documentation to define the boundaries between individual datasets. In both cases, documentation could be created to help users understand the given file structures and arrange their own input data accordingly. The documentation effort depended on the straightforwardness and non-ambiguity of the data model structures. For the documentation of the programmatic interface, the constructors of the ESM and the component classes were described using DataDesc. Here, the use of Python's dataclass decorators and minor code adjustments to the constructors enabled the transformation of previously free-text docstrings into unambiguous key-value pairs. Variable names, comments, types, roles, and default values could easily be mapped to the DataDesc annotation syntax (line 7 ff. in Figure 4). For Python native types, the variable types were annotated directly into the code using type hints. For custom types, such as pandas DataFrames, these annotations, including more detailed structural information, were written in the decorators.
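A simplified, hypothetical excerpt makes this concrete: type hints carry the native types, a dataclass-style constructor yields unambiguous key-value pairs, and a value-range check mirrors the schema entries shown in Figure 4. The parameter names follow the examples discussed in the text, but the exact FINE signatures are not reproduced here.

```python
# Simplified, hypothetical excerpt (not the actual FINE source code).
from dataclasses import dataclass

@dataclass
class EnergySystemModel:
    numberOfTimeSteps: int = 8760   # x-DefaultValue in the YAML export
    verboseLogLevel: int = 0        # control parameter
    costUnit: str = "1e9 Euro"      # sets the units of all monetary variables

    def __post_init__(self):
        # Mirrors the documented constraint x-MinimumValue: 0 together with
        # x-ExclusiveMinimum: true (Figure 4, lines 24-25).
        if self.numberOfTimeSteps <= 0:
            raise ValueError("numberOfTimeSteps must be > 0")

esm = EnergySystemModel()
print(esm.numberOfTimeSteps)
```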
In addition, the value range constraints that some variables are subject to could also be integrated into the documentation with the metadata elements contained in the DataDesc schema (lines 24-25). Furthermore, the permitted value sets and also complex data structures of the FINE variables could be described in detail (lines 12-18, 32-34). To minimize the documentation effort, value sets and data structures that apply to several variables were documented only once and then referenced repeatedly (lines 12 and 38). Content dependencies, like the _costUnit_ parameter of the ESM class, which determines the currency and units of all monetary variables, can also be expressed through referencing. The description of procedural dependencies, in which the value of a variable influences the software-internal calculation processes, must so far be represented as free-text comments. An example of this is the component class, which contains the Boolean parameter _hasCapacityVariable_ that, if set to _true_, causes the _capacityVariableDomain_ and _capacityPerPlantUnit_ variables to be ignored in the calculations. Work is currently underway to formally integrate this form of dependencies into the schema. Another dependency type results from the fact that FINE integrates the _tsam_ [71, 72] library for the purpose of temporal data aggregation and partially maps the external interface within its own interface. In a function of the ESM class, for example, the parameter aggregation method can be selected (lines 31-34). The permitted value set, which includes options like _averaging_ or _k-means_, is specified by the external library and manually included in the FINE documentation. In the future, the documentation of independent programs could be integrated and reused automatically if they are also made machine-actionable by means of the DataDesc schema.

Figure 4: Compilation of selected lines of code from the YAML file generated to document the FINE interfaces, representing the OpenAPI-compliant syntax and its semantics within the DataDesc schema.

```yaml
 1  openapi: 3.0.0
 2  info:                                 ### Info section
 3    title: FINE - A Framework for Integrated Energy System Assessment
 4    version: 2.2.2
 5    x-first-release: '2018-11-12'
 6    x-programming-lang: Python
 7  components:                           ### Components section
 8    Component:                          ### FINE Component class
 9      description: The Component class includes...
10      properties:                       ### Class variables
11        capacityMax:
12          x-dimensions: &id001          ### Referencable inner pandas DataFrame structure
13            location:
14              ItemMinimumValue: 0
15              UnitType: spatial identifier
16            time:
17              ItemMinimumValue: 0
18              UnitType: temporal identifier
19    EnergySystemModel:                  ### FINE EnergySystemModel class
20      properties:                       ### Class variables
21        numberOfTimeSteps:
22          type: integer
23          x-DefaultValue: 8760
24          x-MinimumValue: 0             ### Allowed value range
25          x-ExclusiveMinimum: true
26      required:                         ### List of required parameters
27        - numberOfTimeSteps
28      x-functions:                      ### Class functions
29        aggregateTemporally:
30          properties:                   ### Function parameters
31            clusterMethod:
32              x-ValueSet:               ### Allowed value set
33                - averaging
34                - k_means
35              x-VariableRole: input     ### Input parameter
36        removeComponent:
37          return:                       ### Return value
38            $ref: '#/components/schemas/Component'  ### Referencing data structure
39        readNetCDFtoEnergySystemModel:
40          properties:                   ### Function parameters
41            filePath:                   ### Path to external file
42              type: string
43              x-FileFormat: NetCDF
44              x-NetCDFFolders:          ### Inner structure of referenced NetCDF file
45                InputData:              ### Data folders
46                  - Conversion
47                  - Storage
48                Parameters:             ### Control parameters
49                  - numberOfTimeSteps
50                  - verboseLogLevel
```
## 5 Summary and Conclusions

The FAIR principles and their adaptations to research software have received much attention and support. To effectively reuse a software, the software itself and its interfaces must be clearly defined and made understandable, ideally in a machine-actionable manner. However, most research software today is not documented or published in a way that provides detailed and machine-actionable interface descriptions. Instead, software metadata is often focused on the compact provision of general information, whereas documentation pages, including detailed, technical information, are primarily in natural language and not machine-actionable. Therefore, the DataDesc ecosystem was presented in this article as an approach to describing the data models of software interfaces using detailed and machine-actionable metadata and as an extension to existing research software metadata. In pointing out that there must be a differentiation between data structures and data formats, it was shown how to consistently describe data structures and thereby support the interoperability of software with other programs and data files. In addition to a specialized metadata schema, an exchange format and support tools for the easy collection and automated publishing of software documentation were introduced. Using the FINE framework as an example, the practical applicability of DataDesc and its limitations were shown. It is hoped that DataDesc will make the description of software interfaces with detailed and machine-actionable metadata easy enough to become common practice, leading to increased interoperability, findability, and, therefore, reusability of research software. In future research, DataDesc is to be enhanced and applied to the description of datasets. Furthermore, extending DataDesc to programming languages beyond Python would be an interesting research direction. With both software interfaces and data being described with DataDesc, the foundation for automatically composing and executing computational workflows has been laid, which will hopefully increase the reuse of research software and the reproducibility of computational research in the future.

## 6 Credit statement

Conceptualization: P.K., J.G., O.K., and S.F.; methodology: P.K., J.G., O.K., S.F., D.N., J.S., R.P., and F.E.; software: D.N., J.S., and O.K.; validation: R.P., P.K.; investigation: P.K., J.G., O.K., D.N., J.S., R.P., and S.F.; data curation: J.S.; writing - original draft: P.K., J.G., O.K., D.N., J.S., R.P. and S.F.; writing - review & editing: P.K., J.G., O.K., D.N., J.S., R.P., S.F., F.E., N.P., J.W., A.N., S.A., and D.S.; visualization: P.K., J.G.; supervision: D.S., S.A., and A.N.; project administration: P.K., O.K.; funding acquisition: D.S., S.A., and A.N.

## 7 Acknowledgements

The authors would like to thank the Federal Ministry for Economic Affairs and Energy of Germany (BMWi) for supporting this work with a grant for the project LOD-GEOSS (03EI1005B). Furthermore, the authors are grateful to the German Federal Government, the German State Governments, and the Joint Science Conference (GWK) for their funding and support as part of the NFDI4Ing and the NFDI4Energy consortia. Funded by the German Research Foundation (DFG) - 442146713; 501865131. In addition, the work was supported by the Lower Saxony Ministry of Science and Culture within the Lower Saxony "Vorab" of the Volkswagen Foundation under Grant 11-76251-13-3/19-ZN3488 (ZLE), and by the Center for Digital Innovation (ZDIN).
This work was also supported by the Helmholtz Association as part of the program "Energy System Design".
2308.09805
Signal Processing Based Antenna Pattern Characterization for MIMO Systems
Sophisticated antenna technologies are constantly evolving to meet the escalating data demands projected for 6G and future networks. The characterization of these emerging antenna systems poses challenges that necessitate a reevaluation of conventional techniques, which rely solely on simple measurements conducted in advanced anechoic chambers. In this study, our objective is to introduce a novel endeavour for antenna pattern characterization (APC) in next-generation multiple-input-multiple-output (MIMO) systems by utilizing the potential of signal processing tools. In contrast to traditional methods that struggle with multi-path scenarios and require specialized equipment for measurements, we endeavour to estimate the antenna pattern by exploiting information from both line-of-sight (LoS) and non-LoS contributions. This approach enables antenna pattern characterization in complex environments without the need for anechoic chambers, resulting in substantial cost savings. Furthermore, it grants a much wider research community the ability to independently perform APC for emerging complex 6G antenna systems, without relying on anechoic chambers. Simulation results demonstrate the efficacy of the proposed novel approach in accurately estimating the true antenna pattern.
Chandan Kumar Sheemar, Jorge Querol, Symeon Chatzinotas
2023-08-18T20:18:55Z
http://arxiv.org/abs/2308.09805v1
# Signal Processing Based Antenna Pattern Characterization for MIMO Systems ###### Abstract Sophisticated antenna technologies are constantly evolving to meet the escalating data demands projected for 6G and future networks. The characterization of these emerging antenna systems poses challenges that necessitate a reevaluation of conventional techniques, which rely solely on simple measurements conducted in advanced anechoic chambers. In this study, our objective is to introduce a novel endeavour for antenna pattern characterization (APC) in next-generation multiple-input-multiple-output (MIMO) systems by utilizing the potential of signal processing tools. In contrast to traditional methods that struggle with multi-path scenarios and require specialized equipment for measurements, we endeavour to estimate the antenna pattern by exploiting information from both line-of-sight (LoS) and non-LoS contributions. This approach enables antenna pattern characterization in complex environments without the need for anechoic chambers, resulting in substantial cost savings. Furthermore, it grants a much wider research community the ability to independently perform APC for emerging complex 6G antenna systems, without relying on anechoic chambers. Simulation results demonstrate the efficacy of the proposed novel approach in accurately estimating the true antenna pattern. Multi-antenna Systems, Antenna Pattern, Characterization Methods, Signal Processing ## I Introduction Emerging wireless networks are revolutionizing the way we connect and communicate in the digital age. With the rapid advancement of technology, these networks are pushing the boundaries of connectivity, speed, and reliability. From 5G networks that offer lightning fast data transfer speeds and low latency [1, 2] to the exciting potential of 6G networks on the horizon, emerging wireless networks promise to deliver seamless connectivity to a growing range of devices, including smart homes, autonomous vehicles, and the Internet of Things (IoT) [3, 4]. These networks also aim to enhance the overall user experience, enabling immersive virtual reality (VR) [5] and augmented reality (AR) applications [6], as well as supporting advanced industrial applications such as remote surgery and autonomous manufacturing. As the world becomes increasingly connected, emerging wireless networks are paving the way for a truly interconnected and intelligent future [7, 8]. Sophisticated antenna systems are an essential component for emerging 6G wireless systems, enabling network services across various frequencies and ranges [9, 10]. Over the years, antenna technologies have evolved significantly, driven by the growing demand for high-speed, high-bandwidth communication systems with improved performance, efficiency, and reliability [11, 12, 13, 14, 15]. From traditional fixed antennas to adaptive and reconfigurable antennas [16, 17, 18], the evolution of antenna technologies has led to significant advancements in wireless communication systems. With the emergence of new applications foreseen for 6G, there is a need for novel antennas that can meet the ever-increasing demands of these systems. This has led to the development of new antenna technologies, including metamaterials [19], and millimeter wave antennas [15], among others. To fully characterize the performance of the communication systems, it is first essential to investigate the radiation properties of the deployed antenna systems, i.e. 
perform antenna pattern characterization (APC), which captures information about several parameters, such as directivity, antenna gain, and beamwidth [20]. Traditionally, a measurement campaign must be conducted by taking measurements at different angles in the range of \([0,2\pi]\) to achieve accurate APC. However, the characterization of such a crucial parameter is not an easy task, due to the significant challenges arising in the measurement campaign. Besides, for the evolving complex antenna technologies for 6G, which are expected to create very narrow beams to serve the users, naive measurement-only strategies turn out to be very time-consuming and expensive, as they necessitate a dedicated anechoic chamber [21] to nullify the effects of multi-path (MP) propagation and electromagnetic interference before an accurate measurement campaign becomes possible. Furthermore, in complex environments outside the anechoic chambers, the traditional APC techniques are prone to fail, as they are incapable of separating the contributions to the total received signal, i.e., those coming from reflections or interference. This motivates the design of new APC methods which take into account the reflections and/or interference in complex environments and intelligently process the measured data to extract information about the complex antenna patterns, which will be inherent to 6G communications. In this paper, we propose to exploit signal-processing tools [22, 23] to refine the measurements taken in the MIMO system in a challenging environment, leading to accurate APC. The antenna pattern (AP) dictates the effective power irradiation in different directions. Consequently, in the direction where measurements are being conducted, the effective MIMO channel response is affected by a scale factor capturing the potential of a multi-antenna system to radiate different amounts of power in different directions. For an isotropic antenna, the scale factor is one, as the same amount of power is irradiated in each direction. To yield accurate APC for our antenna system, we first propose a minimum mean squared error (MMSE) estimator [24] for the scaled channel response, which captures the power irradiation efficiency of the multi-antenna system in the measurement direction. Then the relationship between the effective scaled channel response and the line-of-sight (LoS) and non-LoS channel responses is exploited to refine the measurements and jointly estimate the MP and the antenna pattern. The proposed approach is a joint and adaptive approach which must be executed for each position where the measurements are taken. Simulation results show that the proposed design achieves significant performance improvement in terms of APC accuracy. The performance improves significantly as the transmit power at the base station (BS) increases. The rest of the paper is organized as follows: We first present the system model and problem formulation in Section II. The joint APC and MP characterization approach is proposed in Section III. Finally, Sections IV and V present the simulation results and conclusions, respectively.

## II System Model

In what follows, we consider the case of a single MIMO BS consisting of \(N_{tx}\) transmit antennas, deployed at height \(h\) from the ground in an outdoor environment, as shown in Fig. 1.
We assume that an unmanned aerial vehicle (UAV) with \(N_{rx}\) receive antennas is flying at the same height \(h\) on a circular trajectory of radius \(d\), aiming at estimating the antenna pattern of the multi-antenna array deployed at the MIMO BS. The outdoor environment is assumed to contain reflectors which contribute to the MP. To simplify the analysis, we will disregard the takeoff and landing phases of the UAV. Additionally, we assume that the objective of the UAV is to complete a full circle of \(360^{\circ}\), meaning that the initial position \(\mathbf{q}_{s}=(x_{s},y_{s})^{T}\) and the destination position \(\mathbf{q}_{d}=(x_{d},y_{d})^{T}\) at height \(h\) are identical, i.e., \(\mathbf{q}_{s}=\mathbf{q}_{d}\). The trajectory of the UAV at height \(h\) is predetermined based on the surrounding environment of the BS. In the presence of obstacles, the radial distance \(d\) can be adjusted to ensure that a line-of-sight (LoS) path between the UAV and the BS is always maintained. Let us consider the BS positioned at the center of a three-dimensional coordinate system, and denote by \(\theta_{i}\) the angle between the BS and the UAV. The complete trajectory, which consists of a single circle, is divided into \(\Theta\) evenly spaced points where the UAV pauses briefly to gather measurement data. Hence, we have \(\theta_{1}\leq\theta_{i}\leq\theta_{\Theta}\), and we can denote the position of the UAV in the two-dimensional coordinate system as \(\mathbf{q}(\theta_{i})=(x(\theta_{i}),y(\theta_{i}))^{T}\). Let us consider a flat block-fading MIMO system. We assume the radiation pattern to be constant during the time for which the UAV takes the measurements. We adopt a pilot-based approach in which the UAV is aware of the signal being transmitted from the BS. Assume that, for antenna pattern characterization, for each \(\theta_{i}\) the BS transmits a sequence of \(Q\) training samples with \(Q>N_{tx}\), collected in the matrix \(\mathbf{P}=(\mathbf{p}_{1},\ldots,\mathbf{p}_{Q})\), with \(\mathbf{p}_{i}\in\mathcal{C}^{N_{tx}\times 1}\). Moreover, \(\mathbf{P}\) is supposed to be the same \(\forall\ \theta_{i}\). In the case of an isotropic antenna system, the power radiated is equal in all directions when the BS transmits the training samples \(\mathbf{P}\). However, the evolving complex antenna systems for 6G and beyond will depend on beamforming techniques, enabling the allocation of a substantial power level in multiple desired directions using highly focused beams precisely aimed at the locations of the users. Consequently, there can be a substantial discrepancy in the power radiated across various directions. Consider the scalar \(a(\theta_{i})\), which represents the efficiency of power irradiation in the direction \(\theta_{i}\). It satisfies \(0\leq a(\theta_{i})\leq 1\), where \(a(\theta_{i})=0\) and \(a(\theta_{i})=1\) denote the direction of a radiation null or of the main beam, respectively. Note that \(a\), evaluated over the interval \([0,2\pi]\), also represents the antenna pattern that we wish to estimate. Consequently, \(\mathbf{P}(\theta_{i})=(\mathbf{p}_{1}(\theta_{i}),\ldots,\mathbf{p}_{Q}(\theta_{i}))\) denotes the effective power irradiated in the direction \(\theta_{i}\), where \(\mathbf{p}_{j}(\theta_{i})=\sqrt{a(\theta_{i})}\mathbf{p}_{j},\ \forall j\). While \(\mathbf{P}\) is the same for each \(\theta_{i}\), the effective irradiated power \(\mathbf{P}(\theta_{i})\) is different due to the non-isotropic antenna array.
Let \(\mathbf{Y}(\theta_{i})\in\mathcal{C}^{N_{rx}\times Q}\) denote the effective received signal matrix at the UAV, which can be written as

\[\mathbf{Y}(\theta_{i})=\mathbf{H}(\theta_{i})\sqrt{a(\theta_{i})}\mathbf{P}+\mathbf{V}(\theta_{i}), \tag{1}\]

where \(\mathbf{V}\in\mathcal{C}^{N_{rx}\times Q}\) denotes the noise.

### _On the MIMO Channel Model for Antenna Pattern Characterization_

In a typical scenario, the MIMO channel \(\mathbf{H}\) consists of a line-of-sight (LoS) component and the MP components, denoted as \(\mathbf{H}_{\text{LoS}}\) and \(\mathbf{H}_{\text{MP}}\), respectively. For an isotropic antenna array, the LoS channel \(\mathbf{H}_{\text{LoS}}\) is independent of \(\theta_{i}\), as the path loss is the same for all points lying on the circular trajectory of the UAV of radius \(d\). However, \(\mathbf{H}_{\text{MP}}\) still depends on the angle \(\theta_{i}\), as the reflective paths can add up constructively or destructively depending on the positions of the UAV and the reflectors. Hence, we can write

\[\mathbf{H}(\theta_{i})=\mathbf{H}_{\text{LoS}}+\mathbf{H}_{\text{MP}}(\theta_{i}). \tag{2}\]

Given the aforementioned motivation, we can model the LoS channel as

\[\mathbf{H}_{\text{LoS}}(\theta_{i})=\sqrt{\alpha_{\text{LoS}}}\mathbf{a}_{r}(\theta_{i})\mathbf{a}_{t}^{H}(\theta_{i}), \tag{3}\]

where \(\mathbf{a}_{r}(\theta_{i})\) and \(\mathbf{a}_{t}^{H}(\theta_{i})\) denote the receive and transmit antenna array responses, respectively, and \(\alpha_{\text{LoS}}\) denotes the path loss, which is the same for the UAV flying on the circular trajectory of radius \(d\). The matrix \(\mathbf{H}_{\text{MP}}\) can be modelled as

\[\mathbf{H}_{\text{MP}}=\sum_{k=1}^{L}\sqrt{\alpha_{k}(\theta_{i})}\mathbf{a}_{r}(\theta_{i},k)\mathbf{a}_{t}^{H}(\theta_{i},k), \tag{4}\]

where \(L\) denotes the total number of paths. By decomposing the effective channel into its LoS and MP components, (1) can be written as

\[\mathbf{Y}(\theta_{i})=\sqrt{a(\theta_{i})}\mathbf{H}_{\text{LoS}}(\theta_{i})\mathbf{P}+\sqrt{a(\theta_{i})}\mathbf{H}_{\text{MP}}(\theta_{i})\mathbf{P}+\mathbf{V}(\theta_{i}). \tag{5}\]

When the UAV performs measurements at the position \(\theta_{i}\) for APC, the total measured power from the multi-antenna system is given by \(||\mathbf{Y}(\theta_{i})||_{F}^{2}\), which is also affected by the power received through the multi-path. Ideally, to verify whether the irradiated power satisfies the total power constraint \(\gamma\), the MP contributions should be cancelled so as to measure the effective received power due to direct irradiation, which is

\[p(\theta_{i})=||\mathbf{Y}(\theta_{i})-\sqrt{a(\theta_{i})}\mathbf{H}_{\text{MP}}(\theta_{i})\mathbf{P}||_{F}^{2}. \tag{6}\]

Adopting this approach is of extreme interest at higher frequencies such as millimeter wave, where the power from reflections becomes comparable to the LoS component.

Figure 1: The measurement setup consisting of a MIMO BS and a flying UAV in the presence of reflections.
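To make the measurement model concrete, the following numpy sketch generates synthetic observations according to (1)-(5). The half-wavelength uniform linear arrays, the path gains (1.0 for the LoS path, 0.1 per reflected path) and all variable names are our own illustrative assumptions, not specifications from the paper.

```python
# Illustrative numpy sketch of the measurement model (1)-(5); array
# geometry, path gains and names are assumptions made for demonstration.
import numpy as np

rng = np.random.default_rng(0)
N_tx, N_rx, Q, L = 10, 8, 100, 3          # array sizes, pilots, MP paths

def steer(n_ant, theta):
    """Half-wavelength ULA steering vector (assumed geometry)."""
    return np.exp(1j * np.pi * np.arange(n_ant) * np.sin(theta)) / np.sqrt(n_ant)

# Pilot matrix P with independent rows (Q > N_tx), cf. Section IV.
P = (rng.standard_normal((N_tx, Q)) + 1j * rng.standard_normal((N_tx, Q))) / np.sqrt(2)

def measure(theta, a, sigma2=0.01):
    """Return Y(theta) = sqrt(a) (H_LoS + H_MP) P + V and the LoS channel."""
    H_los = np.outer(steer(N_rx, theta), steer(N_tx, theta).conj())      # eq. (3)
    H_mp = sum(np.sqrt(0.1) * np.outer(steer(N_rx, rng.uniform(-np.pi, np.pi)),
                                       steer(N_tx, rng.uniform(-np.pi, np.pi)).conj())
               for _ in range(L))                                        # eq. (4)
    V = np.sqrt(sigma2 / 2) * (rng.standard_normal((N_rx, Q))
                               + 1j * rng.standard_normal((N_rx, Q)))
    return np.sqrt(a) * (H_los + H_mp) @ P + V, H_los                    # eq. (5)
```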
## III Problem Formulation and Solution

To accurately estimate the antenna pattern, we propose to first estimate the scaled channel response in the direction \(\theta_{i}\), denoted in the following as \(\mathbf{H}_{a}(\theta_{i})=\sqrt{a(\theta_{i})}\mathbf{H}(\theta_{i})\). To estimate the scaled channel response, we aim at finding its minimum MSE estimator, for which the optimization problem can be formulated as

\[\underset{\mathbf{H}_{a}(\theta_{i})}{\text{min}}\quad||\mathbf{Y}(\theta_{i})-\mathbf{H}_{a}(\theta_{i})\mathbf{P}||_{F}^{2}. \tag{7}\]

By solving the problem above, we get the following optimal MMSE estimator for the scaled channel response

\[\mathbf{\hat{H}}_{a}(\theta_{i})=\mathbf{Y}(\theta_{i})\mathbf{P}^{H}(\mathbf{P}\mathbf{P}^{H})^{-1}. \tag{8}\]

Given the scaled channel estimate, which depends on the antenna array factor, the following equation holds

\[\mathbf{\hat{H}}_{a}(\theta_{i})=\sqrt{\hat{a}(\theta_{i})}\mathbf{\hat{H}}_{\text{LoS}}+\sqrt{\hat{a}(\theta_{i})}\mathbf{\hat{H}}_{\text{MP}}(\theta_{i}), \tag{9}\]

where \(\mathbf{\hat{H}}_{\text{LoS}}\) and \(\mathbf{\hat{H}}_{\text{MP}}\) denote the estimates of the LoS and multi-path components and \(\sqrt{\hat{a}(\theta_{i})}\) denotes the estimated component of the antenna pattern at position \(\theta_{i}\). It is noteworthy that, since the position of the BS with respect to the UAV, which lies on a circle of radius \(d\), is always known, \(\mathbf{\hat{H}}_{\text{LoS}}\) can be easily obtained; it does not vary during the whole trajectory of the UAV. Given such information, we must jointly find \(\mathbf{\hat{H}}_{\text{MP}}(\theta_{i})\) and \(a(\theta_{i})\) from the scaled MMSE channel estimate. To do so, we consider minimizing the error between \(\mathbf{\hat{H}}_{a}(\theta_{i})\) and its decomposition in terms of \(a(\theta_{i})\) and \(\mathbf{H}_{\text{MP}}(\theta_{i})\), for which the MSE optimization problem can be stated as

\[\underset{a(\theta_{i}),\mathbf{H}_{\text{MP}}(\theta_{i})}{\text{min}}\quad||\mathbf{\hat{H}}_{a}(\theta_{i})-\sqrt{a(\theta_{i})}\mathbf{\hat{H}}_{\text{LoS}}-\sqrt{a(\theta_{i})}\mathbf{H}_{\text{MP}}(\theta_{i})||_{F}^{2}. \tag{10}\]

We adopt an alternating optimization approach to iteratively optimize the values of \(a(\theta_{i})\) and \(\mathbf{H}_{\text{MP}}(\theta_{i})\). Note that the values of the scale factor satisfy \(0\leq a(\theta_{i})\leq 1\). At the first position \(\theta_{1}\) where the UAV takes the measurements, consider selecting a starting value of the scale factor \(a(\theta_{1})^{(0)}\in[0,1]\). For the positions with \(i\neq 1\), the starting value \(a(\theta_{i})^{(0)}\) can be chosen as \(a(\theta_{i})^{(0)}=\hat{a}(\theta_{i-1})\), i.e., the one estimated at the previous position. Alternatively, the MP component \(\mathbf{H}_{\text{MP}}(\theta_{i})\) can be initialized first. Given \(a(\theta_{1})^{(0)}\), we consider optimizing the estimate of \(\mathbf{H}_{\text{MP}}(\theta_{1})\) at the first iteration, denoted as \(\mathbf{\hat{H}}_{\text{MP}}(\theta_{1})^{(1)}\). For such a purpose, we take the derivative of the objective function (10) with respect to the conjugate of \(\mathbf{H}_{\text{MP}}(\theta_{1})^{(1)}\), which leads to the following closed-form solution

\[\mathbf{\hat{H}}_{\text{MP}}(\theta_{1})^{(1)}=\frac{1}{\sqrt{a(\theta_{1})^{(0)}}}\mathbf{\hat{H}}_{a}(\theta_{1})-\mathbf{\hat{H}}_{\text{LoS}}. \tag{11}\]

Given the recently computed estimate \(\mathbf{\hat{H}}_{\text{MP}}(\theta_{1})^{(1)}\), we aim at finding the optimal antenna pattern value at position \(\theta_{1}\) by solving the following optimization problem

\[\underset{a(\theta_{1})^{(1)}}{\text{min}}\quad||\mathbf{\hat{H}}_{a}(\theta_{1})-\sqrt{a(\theta_{1})^{(1)}}\mathbf{\hat{H}}_{\text{LoS}}-\sqrt{a(\theta_{1})^{(1)}}\mathbf{\hat{H}}_{\text{MP}}(\theta_{1})^{(1)}||_{F}^{2}. \tag{12}\]

We consider solving this problem by performing a linear search for \(a(\theta_{1})^{(1)}\), restricted to the interval \([0,1]\), aiming at finding the scalar leading to the minimum MSE. This leads to a simplified search-based approach which does not rely on heavy computations. Once the optimal \(\hat{a}(\theta_{1})^{(1)}\) has been found, the process can be repeated iteratively for both variables until convergence. The process must then be repeated at each \(\theta_{i}\) by first collecting the measurement data \(\mathbf{Y}(\theta_{i})\); the previously estimated value can be used as the initial estimate \(a(\theta_{i})^{(0)}\). The overall procedure to find the optimal antenna pattern over the circular trajectory of the UAV is formally stated in Algorithm 1.
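The sketch below illustrates the first stage of this procedure in numpy: the least-squares/MMSE estimate of the scaled channel response in (8), followed by a deliberately simplified recovery of \(a(\theta_{i})\) by projection onto the known LoS response under a dominant-LoS assumption. This is a reduced illustration, not the paper's full Algorithm 1, which instead alternates the updates (11) and (12) and carries the estimate across positions.

```python
# Reduced illustration (not the full Algorithm 1): scaled channel estimate
# via the least-squares/MMSE solution (8), then recovery of a(theta_i) by
# projection onto the known LoS response, assuming the LoS term dominates.
import numpy as np

def estimate_pattern_value(Y, P, H_los):
    # Eq. (8): H_a = Y P^H (P P^H)^{-1}.
    H_a = Y @ P.conj().T @ np.linalg.inv(P @ P.conj().T)
    # Best-fitting sqrt(a) for H_a ~ sqrt(a) * H_los in the Frobenius norm.
    s = np.real(np.vdot(H_los, H_a)) / np.linalg.norm(H_los) ** 2
    return float(np.clip(s, 0.0, 1.0)) ** 2

# Usage with the `measure` sketch above (hypothetical ground truth a = 0.6):
# Y, H_los = measure(theta=0.3, a=0.6)
# print(estimate_pattern_value(Y, P, H_los))
```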
## IV Simulation Results

In this section, we present simulation results to evaluate the performance of the proposed signal processing-based MIMO antenna pattern characterization technique. We consider an outdoor environment and assume that the BS and the UAV deploy phased antenna arrays with numbers of transmit and receive antennas \(N_{tx}=10\) and \(N_{rx}=8\), respectively. A pilot sequence of length \(Q=100\) is assumed to be transmitted at each \(\theta_{i}\). We assume that the BS transmits at a rate of \(R=25\) symbols/s, which requires the UAV to collect measurements at each position for 4 s. The UAV is assumed to take measurements at \(\Theta=50\) points equally distributed on the circular trajectory. Assuming the flying time between two consecutive points to be negligible, the total flight time of the UAV is \(\sim 200\) s (\(3.33\) min). The pilot sequence is designed with independent rows, which has been shown in the literature to perform optimally. We define the signal-to-noise ratio (SNR) of our system as the transmit SNR, i.e.,

\[\text{SNR}=\frac{\gamma}{\sigma^{2}}, \tag{13}\]

with \(\gamma\) and \(\sigma^{2}\) denoting the transmit power and the noise variance, respectively. We select \(\gamma=1\) and choose the noise variance to meet the transmit SNR requirement. The radius of the trajectory is \(d=50\) m. We consider that the measurements take place for a BS deployed in the sub-6 GHz band, for which the LoS component is assumed to be dominant compared to the MP contributions. The MIMO channel response is modelled as a Rician fading channel with Rician factor \(\kappa=5\) dB. In Figure 2, we present the performance analysis of the proposed approach for estimating the AP, specifically focusing on an SNR of 10 dB. The obtained results clearly demonstrate the effectiveness and efficiency of our proposed approach in accurately estimating the AP. However, it is essential to note that there is a discernible mismatch observed at the estimated peaks of the AP curve, indicating a relatively poorer estimation in those instances. This mismatch can be attributed to the deliberate selection of a higher noise variance during the reporting of the results. The inclusion of these intentionally designed conditions enables a comprehensive evaluation of the performance of our proposed approach. Continuing our investigation, in Figure 3 we further explore the performance of the proposed scheme in estimating the AP, this time at a higher SNR of 20 dB. The graphical representation clearly showcases the significant gains achieved by our proposed scheme in terms of accurately estimating the AP.
Comparing this scenario to the previous case, we can observe a noteworthy reduction in the occurrence of erroneous estimates at the peaks, leading to a more precise estimation of the AP. To provide a quantitative analysis, we examine the MSE in dB between the estimated AP and the ideal AP and its behaviour as a function of the SNR. In Figure 4, the plot depicts how the MSE decays as the SNR increases. It is evident that the MSE starts relatively high at low transmit SNR and rapidly decreases as the noise variance reduces. This indicates that our proposed approach exhibits improved accuracy in estimating the AP with reduced noise levels. Furthermore, to explore potential enhancements in scenarios with limited transmit SNR, we consider the impact of increasing the length of the pilot sequence \(Q\). Figure 5 demonstrates the effect of varying pilot sequence lengths on reducing the MSE at an SNR of 10 dB. The results illustrate that a larger pilot sequence can compensate for the lower irradiating power, effectively reducing the MSE between the estimated and ideal AP.

Figure 2: Estimated antenna pattern at SNR = 10 dB.

Figure 3: Estimated antenna pattern at SNR = 20 dB.

Figure 4: MSE as a function of the transmit SNR.

In conclusion, our investigation highlights the effectiveness and efficiency of the proposed approach in accurately estimating the AP in wireless communication systems. While some challenges arise in specific conditions with higher noise levels at the peaks of the AP, overall our algorithm still performs well. Overall, we can conclude that our proposed scheme exhibits significant gains in AP estimation accuracy at higher SNRs, and the MSE analysis confirms its improved performance as the noise variance reduces. Furthermore, we learned that the use of longer pilot sequences can further enhance the accuracy of AP estimation, particularly in scenarios with limited transmit SNR.

## V Conclusions

In this paper, to achieve accurate APC, we introduce a novel signal processing approach. By modelling the effect of the antenna irradiation efficiency, we map the problem to a scaled MIMO channel estimation which captures the AP coefficient. Then, an MMSE estimator for the scaled channel response is proposed. By utilizing this estimator, we establish a relationship between the effective scaled channel response and both the LoS and non-LoS channel responses. Building on this relationship, we refine the measurements and jointly estimate the MP and the antenna pattern. Our approach is characterized by its joint and adaptive nature, requiring execution for each position where measurements are taken. This adaptability allows us to account for variations in the environment and optimize the estimation process accordingly. Simulation results demonstrate the efficacy of our proposed design, showcasing significant improvements in terms of APC accuracy.
2305.06512
Discriminating coherent states superpositions by line shapes
This article investigates the effect of near non-resonant levels on the spectral lines of atoms interacting with an electromagnetic field. Specifically, we examine the AC Stark effect that occurs when the field frequency matches the transition frequency between two lower levels and the field has a small average number of photons ($|\alpha|^2 <4$). Our research demonstrates that the changes in spectral line shape can be used to distinguish between Schr\"odinger cat states with opposite phases in $\pi$, namely, the states $|\alpha\rangle + |-\alpha\rangle$ and $|\alpha\rangle - |-\alpha\rangle$.
L. Hernández-Sánchez, I. Ramos-Prieto, F. Soto-Eguibar, H. M. Moya-Cessa
2023-05-11T01:25:14Z
http://arxiv.org/abs/2305.06512v1
# Discriminating coherent states superpositions by line shapes

###### Abstract

This article investigates the effect of near non-resonant levels on the spectral lines of atoms interacting with an electromagnetic field. Specifically, we examine the AC Stark effect that occurs when the field frequency matches the transition frequency between the two lower levels and the field has a small average number of photons (\(\left|\alpha\right|^{2}<4\)). Our research demonstrates that the changes in the spectral line shape can be used to distinguish between Schrödinger cat states with opposite phases in \(\pi\), namely, the states \(\left|\alpha\right\rangle+\left|-\alpha\right\rangle\) and \(\left|\alpha\right\rangle-\left|-\alpha\right\rangle\).

## I Introduction

Over the last few decades there has been considerable interest in the properties of superposition states of light, especially the superposition of two coherent states [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13], because they exhibit properties very different from those of their constituent states. These properties include quadrature squeezing [3], the type of statistics (sub-Poissonian or super-Poissonian) [2; 9], and the ability to characterize nonclassical states of light [1; 2; 3; 4], among many others [5; 6; 7; 8; 9; 10; 11; 12; 13]. In the Jaynes-Cummings model [14], the problem of the interaction between a single mode of the electromagnetic field, prepared in a superposition of coherent states, and a two-level atom has been addressed in previous studies [3; 7]. These studies have shown that the collapse and revival time of the atomic inversion is reduced by half compared to a coherent state [15]. It is worth mentioning that generalizations of the Jaynes-Cummings model have been developed to address specific cases, such as the interaction with a field initially prepared in a squeezed coherent state [16; 17], the interaction with a Kerr-type nonlinear medium [18], two-excitation interactions [19], optomechanical-type interactions [20], and the coupling of two Jaynes-Cummings Hamiltonians using Lie algebras [21]. On the other hand, in experimental micromaser work [22], asymmetries and changes in the line shapes have been observed; that is, a variation of the average atomic inversion as a function of the detuning has been observed, which has been attributed to the effects of near non-resonant levels [22]. However, in order to visualize the statistical signature of the field in the atomic dynamics, an AC Stark-type term that generates virtual levels close to resonance is added phenomenologically [23]. Although various strategies have been devised to discriminate between two or more quantum states in superposition, only in some cases has it been possible to establish a mechanism to differentiate them [24; 25; 26; 27; 28]. The possibility of discerning between superpositions of coherent states with an even or odd photon distribution by means of line shapes is the motivation of this article. In this work, we study the Jaynes-Cummings model with the AC Stark term that accounts for the near non-resonant levels. In Section II we solve the Schrödinger equation, and we consider the particular cases in which the atom is initially in its first excited state, and in which the atom is initially in the ground state.
Taking a coherent state as the initial condition of the field, in Section III we analyze the effects of the near non-resonant levels on the atomic inversion and show how the line shapes broaden as the average number of photons increases. In Section IV, we extend the analysis to a superposition of even and odd coherent states with the same atomic conditions as in Section III. We show that, when the near non-resonant levels are taken into account and the average number of photons is sufficiently small, the line shapes make it possible to discern between an even and an odd Schrödinger cat state. Finally, in Section V we present our conclusions.

## II Jaynes-Cummings Model and Non-Resonant Virtual States

Let us consider an atom with a ground state \(\left|g\right\rangle\), an excited state \(\left|e\right\rangle\), and upper states denoted by \(\left|j\right\rangle\), with \(j=0,1,2,...,\infty\). The atom interacts with a single-mode field, as shown in Figure 1. We assume that the field is approximately in tune with the transition frequency between the levels \(\left|g\right\rangle\) and \(\left|e\right\rangle\) of the atom, but out of tune with the nearby levels \(\left|j\right\rangle\) (AC Stark effect). The Hamiltonian describing this system is expressed as [22; 23; 29; 30]

\[\hat{H}=\frac{\omega_{eg}}{2}\hat{\sigma}_{z}+\omega_{c}\hat{a}^{\dagger}\hat{a}+\chi\hat{a}^{\dagger}\hat{a}\hat{\sigma}_{z}+g\left(\hat{\sigma}_{+}\hat{a}+\hat{\sigma}_{-}\hat{a}^{\dagger}\right), \tag{1}\]

where \(g\) is the coupling constant between the two-level system and the field (in the dipole approximation), while \(\chi\) is the parameter that quantifies the strength of the interaction in the AC Stark effect, due to the presence of nearby non-resonant virtual levels. We use the creation and annihilation operators, \(\hat{a}^{\dagger}\) and \(\hat{a}\), which satisfy the commutation relation \(\left[\hat{a},\hat{a}^{\dagger}\right]=1\). In addition, to describe the atomic part of the system, we use the operators \(\hat{\sigma}_{+}=\left|e\right\rangle\left\langle g\right|\), \(\hat{\sigma}_{-}=\left|g\right\rangle\left\langle e\right|\) and \(\hat{\sigma}_{z}=\left|e\right\rangle\left\langle e\right|-\left|g\right\rangle\left\langle g\right|\), which obey the commutation relations \(\left[\hat{\sigma}_{+},\hat{\sigma}_{-}\right]=\hat{\sigma}_{z}\) and \(\left[\hat{\sigma}_{z},\hat{\sigma}_{\pm}\right]=\pm 2\hat{\sigma}_{\pm}\). To solve the Schrödinger equation of this system, we perform the time-dependent unitary transformation \(\hat{\mathcal{R}}=\exp\left[i\omega_{c}t(\hat{n}+\hat{\sigma}_{z}/2)\right]\), which takes us to the interaction picture, whose Hamiltonian is

\[\begin{split}\hat{\mathcal{H}}&=\hat{\mathcal{R}}\hat{H}\hat{\mathcal{R}}^{\dagger}-\mathrm{i}\hat{\mathcal{R}}\partial_{t}\hat{\mathcal{R}}^{\dagger},\\ &=\left(\frac{\Delta}{2}+\chi\hat{n}\right)\hat{\sigma}_{z}+g\left(\hat{\sigma}_{+}\hat{a}+\hat{\sigma}_{-}\hat{a}^{\dagger}\right),\end{split} \tag{2}\]

where \(\Delta=\omega_{eg}-\omega_{c}\) is the detuning between the field frequency and the atomic transition frequency.
To solve the Schrödinger equation in the interaction picture, we use the traditional method [31; 32], which consists of expanding the atom-field state vector at time \(t\) as a linear combination, or superposition, of Fock states \(\left\{\left|n\right\rangle\right\}\). This superposition can be written as

\[\left|\Psi(t)\right\rangle=\sum_{n=0}^{\infty}\left[C_{n}(t)\left|n\right\rangle\left|e\right\rangle+D_{n}(t)\left|n+1\right\rangle\left|g\right\rangle\right], \tag{3}\]

and the problem reduces to solving the following system of coupled ordinary differential equations

\[\mathrm{i}\frac{d}{dt}\begin{bmatrix}C_{n}(t)\\ D_{n}(t)\end{bmatrix}=\begin{bmatrix}\chi n+\frac{\Delta}{2}&g\sqrt{n+1}\\ g\sqrt{n+1}&-\chi(n+1)-\frac{\Delta}{2}\end{bmatrix}\begin{bmatrix}C_{n}(t)\\ D_{n}(t)\end{bmatrix},\qquad n=0,1,2,\ldots. \tag{4}\]

The general solution of these differential equations is

\[\begin{bmatrix}C_{n}(t)\\ D_{n}(t)\end{bmatrix}=\exp\left(\mathrm{i}\frac{\chi t}{2}\right)\begin{bmatrix}M_{11}(t)&M_{12}(t)\\ M_{21}(t)&M_{22}(t)\end{bmatrix}\begin{bmatrix}C_{n}(0)\\ D_{n}(0)\end{bmatrix},\qquad n=0,1,2,\ldots, \tag{5}\]

where

\[\begin{split} M_{11}(t)&=\cos\left(\frac{\beta_{n}t}{2}\right)-\mathrm{i}\frac{\Delta+\chi(2n+1)}{\beta_{n}}\sin\left(\frac{\beta_{n}t}{2}\right),\\ M_{12}(t)&=-\mathrm{i}\frac{2g\sqrt{n+1}}{\beta_{n}}\sin\left(\frac{\beta_{n}t}{2}\right),\\ M_{22}(t)&=M_{11}^{*}(t),\quad M_{21}(t)=M_{12}(t),\quad n=0,1,2,\ldots.\end{split} \tag{6}\]

The quantities \(\left|C_{n}(0)\right|^{2}\) and \(\left|D_{n}(0)\right|^{2}\) determine the initial photon distribution of the field with the atom in the excited and ground state, respectively, while \(\beta_{n}\),

\[\beta_{n}=\sqrt{\left[\Delta+\chi(2n+1)\right]^{2}+4g^{2}(n+1)}, \tag{7}\]

is the generalized Rabi frequency due to the AC Stark shifts, which are the variations in the energy of an atom due to the presence of a non-resonant electric field. Once the initial atom-field condition \(\left|\Psi(0)\right\rangle\) is given, it is possible to obtain the time evolution of any observable of the system. In this case, we focus on the atomic inversion, \(W(t)=\left\langle\Psi(t)|\hat{\sigma}_{z}|\Psi(t)\right\rangle\), which determines the atomic population changes and contains the statistical signature of the field. Thus, the probability that the atom is in its excited state minus the probability that it is in the ground state is given in general by the following expression

\[W(t)=\sum_{n=0}^{\infty}\left(\left|C_{n}(t)\right|^{2}-\left|D_{n}(t)\right|^{2}\right). \tag{8}\]

Figure 1: Level scheme indicating the pair of nearly resonant atomic states with transition frequency \(\omega_{eg}\), the field frequency \(\omega_{c}\), and a set of non-resonant levels that participate only virtually in the excitation and are responsible for the AC Stark shifts of the transition frequency \(\omega_{eg}\).
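As a numerical companion to the closed-form solution (5)-(7), the short Python sketch below propagates the coefficients \(C_{n}(t)\) and \(D_{n}(t)\) for a single \(n\); the function name and the default parameter values are ours, chosen only for illustration.

```python
# Sketch of the closed-form evolution (5)-(7) for a single photon number n:
# given C_n(0), D_n(0), return C_n(t), D_n(t). Names and defaults are ours.
import numpy as np

def evolve(n, t, C0, D0, g=1.0, chi=0.5, delta=0.0):
    # Generalized Rabi frequency, eq. (7).
    beta = np.sqrt((delta + chi * (2 * n + 1)) ** 2 + 4.0 * g ** 2 * (n + 1))
    c, s = np.cos(beta * t / 2.0), np.sin(beta * t / 2.0)
    M11 = c - 1j * (delta + chi * (2 * n + 1)) / beta * s      # eq. (6)
    M12 = -1j * 2.0 * g * np.sqrt(n + 1) / beta * s
    phase = np.exp(1j * chi * t / 2.0)                         # global factor in eq. (5)
    return phase * (M11 * C0 + M12 * D0), phase * (M12 * C0 + np.conj(M11) * D0)

# |C_n|^2 + |D_n|^2 is conserved, as required for unitary evolution:
Cn, Dn = evolve(n=3, t=2.0, C0=1.0, D0=0.0)
print(abs(Cn) ** 2 + abs(Dn) ** 2)  # ~1.0
```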
Using the solution given in (5), it is straightforward to write

\[W(t)=\sum_{n=0}^{\infty}\Big[\Big(\big|M_{11}(t)\big|^{2}-\big|M_{12}(t)\big|^{2}\Big)\Big(\left|C_{n}(0)\right|^{2}-\big|D_{n}(0)\big|^{2}\Big)+2M_{12}(t)\left(C_{n}(0)^{*}D_{n}(0)M_{11}(t)^{*}-C_{n}(0)D_{n}(0)^{*}M_{11}(t)\right)\Big]; \tag{9}\]

and if we now substitute the values of the coefficients given in (6), we obtain

\[W(t)=\sum_{n=0}^{\infty}\frac{1}{\beta_{n}^{2}}\left\{\left[\Delta+(2n+1)\chi\right]^{2}+4g^{2}(n+1)\cos\left(\beta_{n}t\right)\right\}\Big(\left|C_{n}(0)\right|^{2}-\big|D_{n}(0)\big|^{2}\Big)-\sum_{n=0}^{\infty}\frac{4g\sqrt{n+1}}{\beta_{n}^{2}}\left[\Delta+(2n+1)\chi\right]\left(\cos\left(\beta_{n}t\right)-1\right)C_{n}(0)D_{n}(0). \tag{10}\]

If we assume that the atom is initially in its excited state, that is, \(\left|\Psi(0)\right\rangle=\sum_{n=0}^{\infty}C_{n}(0)\left|n\right\rangle\left|e\right\rangle\) (\(D_{n}(0)=0\) for \(n=0,1,2,\dots\)), the atomic inversion is

\[W_{\rm e}(t)=\sum_{n=0}^{\infty}\frac{P_{n}}{\beta_{n}^{2}}\bigg\{\left[\Delta+(2n+1)\chi\right]^{2}+4g^{2}(n+1)\cos(\beta_{n}t)\bigg\}, \tag{11}\]

where we make the identification \(\left|C_{n}(0)\right|^{2}=P_{n}\) for \(n=0,1,2,\dots\), with \(P_{n}\) the photon probability distribution. If we now assume that the atom is initially in its ground state, i.e., \(\left|\Psi(0)\right\rangle=\sum_{n=0}^{\infty}D_{n}(0)\left|n+1\right\rangle\left|g\right\rangle\) (\(C_{n}(0)=0\) for \(n=0,1,2,\dots\)), the atomic inversion is

\[W_{\rm b}(t)=-\sum_{n=0}^{\infty}\frac{P_{n+1}}{\beta_{n}^{2}}\bigg\{\left[\Delta+(2n+1)\chi\right]^{2}+4g^{2}(n+1)\cos(\beta_{n}t)\bigg\}, \tag{12}\]

where we must now identify \(\left|D_{n}(0)\right|^{2}=P_{n+1}\) for \(n=0,1,2,\dots\). This last identification makes physical sense: if we analyze expression (3), which gives the wave function of the complete system, we realize that from the beginning we have assumed that there is one quantum of energy, and therefore, if the atom is in the ground state, the probability that the field contains zero photons is zero. One way to analyze the possible variations of the transition probabilities between the ground level and the first excited level as a function of the detuning is to use the line shapes, which do not depend on the duration of the interaction time \(t\). We focus on the average atomic inversion \(\overline{W}(\Delta)\) [23]

\[\overline{W}(\Delta)=\lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}W(t)\,dt. \tag{13}\]

Since

\[\lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}\cos(\beta_{n}t)\,dt=0, \tag{14}\]

and using (10), we have that

\[\overline{W}(\Delta)=\sum_{n=0}^{\infty}\left[\frac{\Delta+(2n+1)\chi}{\beta_{n}}\right]^{2}\left(\left|C_{n}(0)\right|^{2}-\big|D_{n}(0)\big|^{2}\right)+\sum_{n=0}^{\infty}4g\sqrt{n+1}\left[\frac{\Delta+(2n+1)\chi}{\beta_{n}^{2}}\right]C_{n}(0)D_{n}(0). \tag{15}\]

In the case in which the atom is initially in the excited state, we obtain

\[\overline{W}_{\rm e}(\Delta)=\sum_{n=0}^{\infty}P_{n}\left[\frac{\Delta+(2n+1)\chi}{\beta_{n}}\right]^{2}, \tag{16}\]

while when the atom is initially in the ground state we use equation (12) and arrive at the expression

\[\overline{W}_{\rm b}(\Delta)=-\sum_{n=0}^{\infty}P_{n+1}\left[\frac{\Delta+(2n+1)\chi}{\beta_{n}}\right]^{2}. \tag{17}\]
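Before specializing to particular field states, note that (16) and (17) can be evaluated numerically for any truncated photon distribution \(P_{n}\). The sketch below does this; the function name and default parameter values are ours.

```python
# Sketch of the line shapes (16)-(17): time-averaged atomic inversion as a
# function of the detuning for a given (truncated) photon distribution P_n.
import numpy as np

def line_shape(P, delta, chi=0.5, g=1.0, excited=True):
    P = np.asarray(P, dtype=float)
    def ratio(n):
        num = (delta + (2 * n + 1) * chi) ** 2
        return num / (num + 4.0 * g ** 2 * (n + 1))   # [(Δ+(2n+1)χ)/β_n]^2
    if excited:                                       # eq. (16), atom in |e>
        n = np.arange(len(P))
        return float(np.sum(P * ratio(n)))
    n = np.arange(len(P) - 1)                         # eq. (17), uses P_{n+1}
    return -float(np.sum(P[1:] * ratio(n)))
```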
## III Coherent states

We now consider the field to be initially in a coherent state \(|\alpha\rangle\); therefore, the photon distribution is

\[P_{n}=e^{-\bar{n}}\frac{\bar{n}^{n}}{n!},\qquad n=0,1,2,\dots, \tag{18}\]

where \(\bar{n}=|\alpha|^{2}\) is the average number of photons. In Figure 2 we show the atomic inversion \(W(t)\) for \(\alpha=4\), considering two cases for the atom: initially in its excited state, and initially in its ground state. First, in panel 2a we show the photon distribution. In panel 2b we show the atomic inversion \(W(t)\) when \(\chi=0\); we observe the conventional collapse and revival of the Jaynes-Cummings model [15]. However, when the near non-resonant levels are taken into account with \(\chi=0.5\) (panel 2c), the atomic inversion stays on average close to its initial value, because the off-resonance levels suppress the effectiveness of the field in stimulating transitions out of its initial state. Moreover, it can be observed that the time for the first revival to appear becomes shorter as the value of \(\chi\) increases, for both initial conditions of the atom [23]. In Figure 3 we plot the average atomic inversion \(\overline{W}(\Delta)\) for the atom initially in the excited state (orange surface) and initially in its ground state (blue surface); the plots show the average atomic inversion as a function of \(\Delta\), the detuning, and of \(\bar{n}\), the average number of photons. We observe that the line shapes broaden as the average number of photons \(\bar{n}=|\alpha|^{2}\) increases; however, they keep their shape and their symmetry with respect to the origin. In the case in which the atom is initially in its ground state (blue surface), the presence of an additional excitation between the ground state and the excited state causes the maximum peak at \(\Delta=0\) to grow as the average number of photons increases, reaching its maximum around \(\bar{n}\approx 4\).

## IV Schrödinger Cat States

We now consider the initial state of the field to be a Schrödinger cat state. The Schrödinger cat states are defined as

\[|\psi\rangle=\frac{1}{\mathcal{N}}\left(|\alpha\rangle+e^{\mathrm{i}\phi}|-\alpha\rangle\right), \tag{19}\]

where \(\mathcal{N}\) is the normalization constant given by

\[\mathcal{N}=\sqrt{2\left[1+e^{-2|\alpha|^{2}}\cos(\phi)\right]}; \tag{20}\]

that is, they are a superposition of two coherent states with the same amplitude but different phases [33]; the name of these states was coined by Schrödinger himself in 1935 [34]. The photon probability distribution is given by

\[P_{n}=|\langle n|\psi\rangle|^{2}=\frac{2}{\mathcal{N}^{2}}\frac{e^{-|\alpha|^{2}}|\alpha|^{2n}}{n!}\left[1+(-1)^{n}\cos(\phi)\right],\quad n=0,1,2,\ldots. \tag{21}\]

Note that when \(\phi=0\), the probability of an odd photon number is zero (\(P_{2n+1}=0,\;n=0,1,2,3,\ldots\)), while for \(\phi=\pi\), the probability of an even photon number is zero (\(P_{2n}=0,\;n=0,1,2,3,\ldots\)). For this reason these two states are often called even and odd, respectively; thus the even and odd states are a superposition of two coherent states with the same amplitude but opposite phases [33].
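Combining the `line_shape` sketch above with the distribution (21) gives a quick numerical look at the even-odd discrimination discussed below; all numbers are illustrative, and the distribution is truncated and renormalized.

```python
# Comparing the line shapes of even (phi = 0) and odd (phi = pi) cat states
# using the line_shape sketch above and the distribution (21).
import numpy as np
from math import factorial

def cat_distribution(alpha, phi, nmax=40):
    norm_sq = 2.0 * (1.0 + np.exp(-2.0 * abs(alpha) ** 2) * np.cos(phi))  # eq. (20)
    n = np.arange(nmax)
    fact = np.array([float(factorial(k)) for k in n])
    P = (2.0 / norm_sq) * np.exp(-abs(alpha) ** 2) * abs(alpha) ** (2 * n) / fact \
        * (1.0 + (-1.0) ** n * np.cos(phi))                               # eq. (21)
    return P / P.sum()   # renormalize the truncated distribution

alpha = 1.0
P_even = cat_distribution(alpha, phi=0.0)
P_odd = cat_distribution(alpha, phi=np.pi)
diff = line_shape(P_even, delta=0.0) - line_shape(P_odd, delta=0.0)
print(f"even-odd line-shape difference at Delta = 0: {diff:.3f}")
```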
In Figure 4 we present the photon distribution for \(\alpha=4\) for the even and odd states. We now study the average atomic inversion, i.e., the line shapes, when the initial state of the field is a Schrödinger cat state; specifically, we consider the even and odd states. In Figure 5 we show the difference in the line shapes between the even and odd Schrödinger cat states, varying \(\alpha\) from 0 to 2, when the atom is initially in the excited state; in that plot \(\chi=0.5\), a value within the range of validity of the approximation assumed for Hamiltonian (1) to hold, and we have set \(g=1.0\). We observe that for an average number of photons \(\bar{n}=|\alpha|^{2}<4\), it is possible to discern between an even and an odd Schrödinger cat state, owing to the statistical signature of the vacuum state of the electromagnetic field in \(P_{2n}\). The effect is lost as the average number of photons increases, since the probability of finding \(n\) photons in a coherent state concentrates around \(|\alpha|^{2}\).

Figure 4: Photon probability distribution for the Schrödinger cat states.

Figure 5: Difference of the line shapes when the initial state of the atom is the excited one and the field is in the even and odd Schrödinger cat states, with \(\alpha\) varying from 0 to 2. The parameter values are \(\chi=0.5\) and \(g=1.0\).

Figure 3: Average atomic inversion \(\overline{W}(\Delta)\) for the atom initially in the excited state (orange surface) and initially in its ground state (blue surface). The plots show the average atomic inversion as a function of \(\Delta\) and \(\overline{n}\). The parameter values are \(\chi=0\) and \(g=1.0\).

Let us now analyze what happens when the atom is initially in the ground state. In Figure 6 we show the difference in the line shapes of the even and odd cat states in the case where the atom is originally in the ground state; again we consider a variation of \(\alpha\) between 0 and 2 and, as in the previous case, \(\chi=0.5\) and \(g=1.0\). We observe that the difference in the line shapes between the even and odd states is more noticeable than in the previous case, owing to the existence of an additional excitation between the ground state and the excited state, as mentioned in the previous section. However, this difference in the line shapes (whether the atom is in the ground or in the excited state) does not allow one to discern between even and odd states when the average number of photons is \(\bar{n}>4\). The reason for this impossibility is that the range of validity of Hamiltonian (1) is limited to values of \(\chi\) smaller than 1 [30], and that, for a sufficiently large average number of photons, the probability distributions of the even and odd coherent states become approximately indistinguishable.

## V Conclusions

In this work we considered an atom with a ground state, a first excited state, and upper states. The atom interacts with a single-mode electromagnetic field. We assume that the field is approximately in tune with the transition frequency between the two lowest levels of the atom, but out of tune with the nearby levels.
We showed that the line shapes, i.e. the atomic transition profiles measured through the average atomic inversion \(\overline{W}(\Delta)\) as a function of the detuning, allow one to distinguish between even and odd Schrödinger cat states. Moreover, initializing the atom in the ground state produces an additional excitation between the ground state and the excited state, which enhances the differences between the atomic transition profiles and facilitates this distinction. However, the range of validity of Hamiltonian (1) is restricted to values of \(\chi\) smaller than 1 [30], which limits the ability to distinguish between even and odd states when the average photon number is larger than 4 (\(\bar{n}=|\alpha|^{2}>4\)).

## Acknowledgements

L. Hernandez Sanchez thanks the Instituto Nacional de Astrofísica, Óptica y Electrónica (INAOE) and the Consejo Nacional de Ciencia y Tecnología (CONACyT) for the doctoral fellowship awarded (No. CVU: 736710).
2304.09508
POLAR -- I: linking the 21-cm signal from the epoch of reionization to galaxy formation
To self-consistently model galactic properties, reionization of the intergalactic medium, and the associated 21-cm signal, we have developed the algorithm polar by integrating the one-dimensional radiative transfer code grizzly with the semi-analytical galaxy formation code L-Galaxies 2020. Our proof-of-concept results are consistent with observations of the star formation rate history, UV luminosity function and the CMB Thomson scattering optical depth. We then investigate how different galaxy formation models affect UV luminosity functions and 21-cm power spectra, and find that while the former are most sensitive to the parameters describing the merger of halos, the latter have a stronger dependence on the supernovae feedback parameters, and both are affected by the escape fraction model.
Qing-Bo Ma, Raghunath Ghara, Benedetta Ciardi, Ilian T. Iliev, Léon V. E. Koopmans, Garrelt Mellema, Rajesh Mondal, Saleem Zaroubi
2023-04-19T08:56:04Z
http://arxiv.org/abs/2304.09508v1
# POLAR - I: linking the 21-cm signal from the epoch of reionization to galaxy formation

###### Abstract

To self-consistently model galactic properties, reionization of the intergalactic medium, and the associated 21-cm signal, we have developed the algorithm polar by integrating the one-dimensional radiative transfer code grizzly with the semi-analytical galaxy formation code L-Galaxies 2020. Our proof-of-concept results are consistent with observations of the star formation rate history, UV luminosity function and the CMB Thomson scattering optical depth. We then investigate how different galaxy formation models affect UV luminosity functions and 21-cm power spectra, and find that while the former are most sensitive to the parameters describing the merger of halos, the latter have a stronger dependence on the supernovae feedback parameters, and both are affected by the escape fraction model.

keywords: dark ages, reionization, first stars - galaxies: formation - methods: numerical

## 1 Introduction

The Epoch of Reionization (EoR) refers to the period when the Universe transitioned from a nearly fully neutral to a highly ionized phase, following the formation of the first galaxies and stars (Furlanetto et al., 2006; Dayal and Ferrara, 2018). Observations of the Gunn-Peterson (GP) absorption trough in the spectra of high-\(z\) QSOs suggest that the EoR ended at \(z\sim 6\) (e.g. Fan et al., 2006), although the long GP troughs detected in the Ly\(\alpha\) forest at \(z\sim 6\) (e.g. Becker et al., 2015; Bosman et al., 2022) indicate a later ending. The most recent observations of the Cosmic Microwave Background (CMB), e.g. with the _Planck_ satellite (Planck Collaboration et al., 2020), have measured a Thomson scattering optical depth \(\tau=0.054\pm 0.007\), which implies a mid-point redshift of the EoR (i.e. a global ionization fraction \(\bar{x}_{\rm HII}=0.5\)) at \(z=7.68\pm 0.79\). The initial phases of the EoR are still poorly known, although the Experiment to Detect the Global EoR Signature (EDGES) project reported an absorption profile of the global 21-cm signal at 78 MHz (i.e. \(z\sim 17\)) (Bowman et al., 2018), which can be used to put some constraints on the first sources of ionizing radiation. Note that this result is still strongly debated (e.g. Hills et al., 2018; Singh et al., 2022), and has not been confirmed by the SARAS 3 project (Bevins et al., 2022). The galaxies that formed during the EoR are expected to be the main sources of ionization of neutral hydrogen (HI). The properties of these \(z>6\) galaxies have been studied with hydrodynamical and/or radiative transfer simulations, such as THESAN (Kannan et al., 2022), ASTRID (Bird et al., 2022), CROC (Esmerian and Gnedin, 2021), SPHINX (Rosdahl et al., 2018) and FIRE (Ma et al., 2018), as well as with more efficient semi-analytical/numerical approaches such as the ReionYuga (Mondal et al., 2017), the ASTRAEUS (Hutter et al., 2021) and the MERAXES (Mutch et al., 2016; Balu et al., 2023) models. All these simulations predict galactic properties that are generally consistent with high-\(z\) observations, e.g. in terms of galaxy stellar mass functions, UV luminosity functions, and star formation history (see the comparisons by e.g. Kannan et al., 2022). With the development of new observational facilities, an increasing number of high-\(z\) galaxies have been detected (Stark, 2016).
For example, the Hubble Space Telescope (HST) and the Spitzer telescope have already provided abundant data to build rest-frame UV luminosity functions and stellar mass functions of galaxies at \(z>6\) (e.g. Bouwens et al., 2021; Stefanon et al., 2021). The Atacama Large Millimeter/sub-millimeter Array (ALMA) telescope has also identified several high-\(z\) galaxies through e.g. the [CII] line (Bouwens et al., 2020). Despite having collected data for less than one year, the James Webb Space Telescope (JWST) has already found many new high-\(z\) galaxies (e.g. Donnan et al., 2023; Harikane et al., 2023), possibly as high as \(z\sim 17\) (Harikane et al., 2023). JWST is expected to observe many more such galaxies in the near future (Steinhardt et al., 2021), thus offering the possibility to massively improve our knowledge of primeval objects. The UV and X-ray radiation emitted in high-\(z\) galaxies, e.g. from stellar sources, X-ray binaries and accreting massive black holes, is expected to change the ionization and temperature state of the HI within the intergalactic medium (IGM) (Islam et al., 2019; Eide et al., 2020). The radiation emitted through the hyperfine structure transition of high-\(z\) HI (with a rest-frame wavelength of \(\sim\)21-cm) can be measured by modern low-frequency radio facilities (Furlanetto et al., 2006). Some early results from 21-cm telescopes have put upper limits on the 21-cm power spectra \(\Delta^{2}_{\rm 21cm}\) from the EoR, e.g. a 2-\(\sigma\) upper limit of \(\Delta^{2}_{\rm 21cm}<(73)^{2}\,{\rm mK}^{2}\) at \(k=0.075\,h\,{\rm cMpc}^{-1}\) and \(z\approx 9.1\) from the Low-Frequency Array (LOFAR) (Mertens et al., 2020), of \(\Delta^{2}_{\rm 21cm}\leq(43)^{2}\,{\rm mK}^{2}\) at \(k=0.14\,h\,{\rm cMpc}^{-1}\) and \(z=6.5\) from the Murchison Widefield Array (MWA) (Trott et al., 2020), and of \(\Delta^{2}_{\rm 21cm}\leq(30.76)^{2}\,{\rm mK}^{2}\) at \(k=0.192\,h\,{\rm cMpc}^{-1}\) and \(z=7.9\) from the Hydrogen Epoch of Reionization Array (HERA) (Abdurashidova et al., 2022). These results have already been used to rule out some extreme EoR models (Ghara et al., 2020; Mondal et al., 2020; Ghara et al., 2021; Greig et al., 2021, 2022). While the analysis of more data from such facilities will set increasingly tighter upper limits on (and possibly also yield a measurement of) the 21-cm power spectrum, the planned Square Kilometre Array (SKA) is expected to also provide 3-D tomographic images of the 21-cm signal (Koopmans et al., 2015; Mellema et al., 2015; Ghara et al., 2017). Since both the infrared to sub-mm radiation from high-\(z\) galaxies and the 21-cm signal are produced during the EoR, the combination of observations in different frequency bands would provide a deeper understanding of the physical processes at play during the EoR. With this idea in mind, some codes have been developed to constrain EoR models with Markov Chain Monte Carlo (MCMC) techniques used in combination with multi-frequency observations, e.g. the semi-numerical model by Park et al. (2019, 2020) based on 21CMMC (Greig and Mesinger, 2015) and 21cmFAST (Mesinger et al., 2011), as well as analytical models for 21-cm power spectra and galaxy luminosity functions (e.g. Zhang et al., 2022). While these approaches take advantage of both observations of high-\(z\) galaxies (e.g. the UV luminosity functions) and 21-cm power spectra, they do not physically model the properties of galaxies, but estimate the UV luminosity functions and the budget of ionizing photons based on the halo mass function model.
In this paper, we describe polar, a novel semi-numerical model designed to obtain both the high-\(z\) galaxy properties and the 21-cm signal in a fast and robust way, by including the semi-analytical galaxy formation model L-Galaxies 2020 (Henriques et al., 2020) within the one-dimensional radiative transfer code grizzly (Ghara et al., 2018), which is an updated version of bears (Thomas et al., 2009). Since polar is fast and thus able to produce a large number of different galaxy and reionization models, we will use it in combination with MCMC techniques and observations of e.g. UV luminosity functions and 21-cm power spectra to provide tighter constraints on both the galaxy and IGM properties. In this paper, we introduce the new algorithm and show how some selected observables are affected by different choices of the parameters used to describe the formation and evolution of galaxies, as well as the escape of ionizing radiation, while in a companion paper we will extend the formalism to include an MCMC analysis and to constrain the parameters. The paper is organized as follows: we describe L-Galaxies 2020 and grizzly in Section 2, the resulting galaxy properties and EoR signal are presented in Section 3, while a discussion and the conclusions are found in Section 4. The cosmological parameters adopted in this paper are the final results of the _Planck_ project (Planck Collaboration et al., 2020), i.e. \(\Omega_{\Lambda}=0.685\), \(\Omega_{m}=0.315\), \(\Omega_{b}=0.0493\), \(h=0.674\), \(\sigma_{8}=0.811\) and \(n_{s}=0.965\).

## 2 Methods

To follow the formation and evolution of galaxies, we combine merger trees from _N_-body dark-matter simulations with the semi-analytic model (SAM) L-Galaxies 2020 (abbreviated as LG20 in the following, Henriques et al., 2020), while the 1-D radiative transfer (RT) code grizzly (Ghara et al., 2015, 2018) is used to model the gas ionization and 21-cm signal. While we refer the readers to the original papers for the details about these tools, in the following section we describe the key aspects that are relevant to this work.

### Dark-matter simulations

The _N_-body dark-matter simulations are run with the gadget-4 code (Springel et al., 2021), with a box length of \(100\,h^{-1}\)cMpc and a particle number of \(1024^{3}\), i.e. a particle mass of \(1.2\times 10^{8}\,{\rm M}_{\odot}\). In the following, we will refer to these simulations as L100. The dark-matter halos are identified with a Friends-of-Friends (FoF) algorithm (Springel et al., 2001), while Subfind is used to identify gravitationally bound sub-halos within halos. The merger trees are constructed by following Springel et al. (2005). Note that the sub-halos are chosen to have at least 20 dark-matter particles, i.e. the minimum mass is \(\sim 2.4\times 10^{9}\,{\rm M}_{\odot}\). We employ a total of 56 snapshots equally spaced in time in the redshift range \(z=6-20\). To resolve the effects of fainter galaxies during the EoR, we also run a smaller simulation with the same \(1024^{3}\) particles but a box length of \(35\,h^{-1}\)cMpc (abbreviated as L35), able to resolve sub-halos with a minimum mass of \(\sim 1.0\times 10^{8}\,{\rm M}_{\odot}\).

Figure 1: Mass functions of halos (\(\Phi_{\rm halo}\)) at \(z=7\) from simulations L100 (cyan thick line) and L35 (magenta thin line). As a reference, the Sheth-Tormen (ST) halo mass function is shown as a dashed black line.

As a reference, Fig. 1 shows the halo mass functions (\(\Phi_{\rm halo}\)) at \(z=7\) from the two
simulations, where \(\Phi_{\rm halo}={\rm d}\,n_{\rm halo}/{\rm d}\,{\rm log}_{10}(M_{\rm halo})\), with \(n_{\rm halo}\) the number density of halos (in units of \({\rm cMpc}^{-3}\)) and \(M_{\rm halo}\) the halo mass. As a reference, we also show the Sheth-Tormen (ST) halo mass function at \(z=7\), which is computed with the COLIBRI library. L35 covers a halo mass range of \((1.7\times 10^{8}-3.6\times 10^{11})\) M\({}_{\odot}\), while L100 has halos with mass \((4.2\times 10^{9}-10^{12})\) M\({}_{\odot}\). Within the range \((4.2\times 10^{9}-3.6\times 10^{11})\) M\({}_{\odot}\), the halo mass functions of these two simulations are broadly consistent, and both of them are roughly consistent with the ST halo mass function. Footnote 1: [https://github.com/GabrieleParimhelli/COLIBRI](https://github.com/GabrieleParimhelli/COLIBRI)

### The semi-analytic code L-Galaxies 2020

LG20 includes almost all the known physical processes related to galaxy formation (Henriques et al., 2020), e.g. gas cooling, star formation, galaxy mergers, supernovae feedback, black hole growth and AGN feedback. Compared to the previous version (i.e. Henriques et al., 2015), LG20 adds molecular hydrogen formation, chemical enrichment and spatial tracking of the gas and stellar disc in galaxies, models of stellar population synthesis, dust, tidal effects, and reincorporation of ejected gas. Specifically, the star formation rate (SFR) is proportional to the H\({}_{2}\) surface density (Fu et al., 2013), i.e. \({\rm SFR}=\alpha_{\rm H_{2}}\,M_{\rm H_{2}}/t_{\rm dyn}\), where the star formation efficiency \(\alpha_{\rm H_{2}}\) is a free parameter, \(M_{\rm H_{2}}\) is the molecular gas mass, and \(t_{\rm dyn}\) is the galactic dynamical timescale as a function of halo mass. The H\({}_{2}\) surface density \(\Sigma_{\rm H_{2}}\) is modeled through the cold gas mass, the H\({}_{2}\) fraction within the H surface density, and the metallicity. A burst of star formation happens after a halo falls into a larger system, i.e. a halo merger, with a time delay \(t_{\rm friction}\) due to dynamical friction. In LG20, \(t_{\rm friction}\) is computed with the formulation of Binney and Tremaine (1987), which depends on the mass and radius of the two merging halos: \[t_{\rm friction}=\alpha_{\rm friction}\frac{V_{200c}r_{\rm sat}^{2}}{GM_{\rm sat,tot}{\rm ln}\Lambda}, \tag{1}\] where the efficiency factor \(\alpha_{\rm friction}\) is a free parameter, \(G\) is the gravitational constant, \(r_{\rm sat}\) is the radius of the satellite galaxy, \(M_{\rm sat,tot}\) is the sum of the dark-matter and baryonic mass of the satellite galaxy, \({\rm ln}\Lambda={\rm ln}(1+M_{200c}/M_{\rm sat,tot})\) is the Coulomb logarithm, and \(M_{200c}\) and \(V_{200c}\) are the virial mass and velocity of the major halo with over-density larger than 200 times the critical value of the cosmic density. The SFR triggered by mergers is modeled through the "collisional starburst" formulation (Somerville et al., 2001): \[{\rm SFR_{burst}}=\alpha_{\rm SF,\ burst}\left(\frac{M_{1}}{M_{2}}\right)^{\beta_{\rm SF,\ burst}}M_{\rm cold,\ tot}, \tag{2}\] where \(\alpha_{\rm SF,\ burst}\) and \(\beta_{\rm SF,\ burst}\) are two free parameters describing the star formation efficiency of a burst, \(M_{1}\) and \(M_{2}\) are the baryonic masses of the two merging galaxies with \(M_{1}<M_{2}\), and \(M_{\rm cold,tot}\) is their total cold gas mass.
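For concreteness, Eqs. (1) and (2) are simple enough to transcribe directly; the Python sketch below is our illustration, with placeholder default parameter values rather than the LG20 best-fit ones, and with our own unit conventions.

```python
import numpy as np

G = 4.301e-9  # gravitational constant in Mpc (km/s)^2 / Msun

def t_friction(v200c, r_sat, m_sat_tot, m200c, alpha_friction=2.5):
    """Merger time delay of Eq. (1); v200c in km/s, r_sat in Mpc, masses
    in Msun. The result is in Mpc/(km/s); multiply by ~978 to get Gyr."""
    coulomb_log = np.log(1.0 + m200c / m_sat_tot)
    return alpha_friction * v200c * r_sat**2 / (G * m_sat_tot * coulomb_log)

def sfr_burst(m1, m2, m_cold_tot, alpha_burst=0.5, beta_burst=0.4):
    """'Collisional starburst' of Eq. (2); m1 <= m2 are the baryonic
    masses of the merging galaxies, m_cold_tot their total cold gas."""
    return alpha_burst * (m1 / m2) ** beta_burst * m_cold_tot
```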
Supernovae explosions happen at the end of the stellar lifetime, reheating the cold gas and enriching the ISM with metals. In LG20, the mass reheated by supernovae is proportional to the stellar mass returned into the ISM (\(\Delta M_{\star}\)), i.e. \(\Delta M_{\rm reheat}=\epsilon_{\rm disk}\Delta M_{\star}\), where \(\epsilon_{\rm disk}\) is the efficiency factor given by (Henriques et al., 2020): \[\epsilon_{\rm disk}=\epsilon_{\rm reheat}\times\left[0.5+\left(\frac{V_{\rm max }}{V_{\rm reheat}}\right)^{-\beta_{\rm reheat}}\right], \tag{3}\] where \(\epsilon_{\rm reheat}\), \(V_{\rm reheat}\) and \(\beta_{\rm reheat}\) are three free parameters. \(V_{\rm max}\) is the maximum circular velocity of the dark-matter halo, which is related to the halo mass. Note that the energy required to heat such mass, i.e. \(\Delta E_{\rm reheat}=\frac{1}{2}\Delta M_{\rm reheat}V_{200c}^{2}\), should be lower than the energy \(\Delta E_{\rm SN}\) released by supernovae that is effectively available to the gas components. Since halos at \(z>6\) are generally not very massive, we assume that the condition \(\Delta E_{\rm reheat}<\Delta E_{\rm SN}\) is always satisfied. There are two channels to grow the mass of massive black holes within galaxies (Croton et al., 2006). The main channel is the halo merger, that can trigger a strong accretion of the central black holes (i.e. quasar mode). The accreted gas mass of the merger between two neighbouring snapshots (with time difference \(t_{\rm diff}\)) depends on the properties of the two galaxies (Henriques et al., 2015): \[\Delta M_{\rm BH,Q}=\frac{f_{\rm BH}M_{\rm cold,tot}\times(M_{\rm sat}/M_{\rm cen })}{1+(V_{\rm BH}/V_{200c})^{2}}, \tag{4}\] where \(M_{\rm cen}\) and \(M_{\rm sat}\) are the baryon masses of the central galaxy and satellite galaxy, the fraction of accreted cold gas into black hole \(f_{\rm BH}\) and the virial velocity \(V_{\rm BH}\) at which the accretion saturates are two free parameters. The accretion rate can be simply estimated as \(\Delta M_{\rm BH,Q}/t_{\rm diff}\), while the actual accretion rate might be higher, as \(t_{\rm diff}\) might be larger than the real lifetime of the quasar. The other channel is the accretion of hot gas (i.e. radio mode), which is also the main source of the AGN feedback on star formation. Its accretion rate is computed with a modified version of the model proposed by Croton et al. (2006): \[\dot{M}_{\rm BH}=k_{\rm AGN}\left(\frac{M_{\rm hot}}{10^{11}{\rm M}_{\odot}} \right)\left(\frac{M_{\rm BH}}{10^{8}{\rm M}_{\odot}}\right), \tag{5}\] where the accretion efficiency \(k_{\rm AGN}\) is a free parameter, \(M_{\rm hot}\) is the mass of hot gas within the host galaxy, and \(M_{\rm BH}\) is the black hole mass. In this work, we only focus on the star formation efficiency, halo merger, supernovae feedback and AGN feedback, and keep the default models for other processes, e.g. the gas cooling, the chemical enrichment, the reincorporation of ejected gas, the tidal and ram-pressure stripping and the tidal disruption. We do not apply the dust model of LG20, but assume the escape fraction to compute the UV luminosity function and the budget of ionization photons following Park et al. 
(2019): \[f_{\rm es,\lambda}=f_{0,\lambda}\left(\frac{M_{\rm star}}{10^{8}{\rm\ M}_{ \odot}}\right)^{\beta_{\rm es}}, \tag{6}\] where \(f_{0,\lambda}\) is a function of the photon wavelength \(\lambda\) in rest-frame, \(M_{\rm star}\) is the stellar mass within the galaxy, and the index factor \(\beta_{\rm es}\) is a free parameter to describe the dependence of \(f_{\rm es,\lambda}\) on the stellar mass of the galaxy. Note that \(f_{0,\lambda}\) includes the dependence of absorption of dust and neutral gas on the frequency of the emitted photons, so that its value should be lower for H ionizing photons than for non-ionizing photons, as in the latter case only absorption from dust is effective. To simplify the discussion, we use only two values for \(f_{0,\lambda}\), i.e. \(f_{0,\lambda}=0.25\) at \(\lambda=1600\) A (to match the UV luminosity function of our fiducial model with observations at \(z=7\); see Fig. 5), and \(f_{0,\lambda}=0.1\) for ionizing photons (in order for our fiducial reionization history to be consistent with the Thomson scattering optical depth measured by CMB experiments; see discussion in Sec. 3.2). In the following, therefore, only \(\beta_{\rm es}\) is a free parameter. In summary, in the models considered here, we have 11 free parameters, which are summarized in Table 1. In Section 3, we will investigate how these parameters affect the global SFR history, the UV luminosity function, the reionization history and the 21-cm power spectrum during the EoR. ### The radiative transfer code _grizzly_ Since the above simulations and formalism do not include radiative transfer, which is crucial to properly model the EoR, we use the results of the \(N\)-body dark-matter simulations and LG20 as input for the 1-D radiative transfer code _grizzly_ to describe the HI ionization and heating. _grizzly_ is very efficient in evaluating the ionization and heating processes and the differential brightness temperature of the 21-cm signal (\(\delta T_{\rm 21cm}\)). The algorithm is based on pre-computed ionization and temperature profiles of gas for different source and density properties at various redshifts. During the later stages of the EoR, when the ionized bubbles merge into bigger ones, _grizzly_ also corrects for the effects of overlap by conserving the ionizing budget. We use the gridded density fields derived from the \(N\)-body simulations and the galactic properties (i.e. stellar mass and stellar age, see below) computed from LG20, as inputs for _grizzly_. Note that the gas density is assumed to scale constantly with the dark-matter. The matter density and galactic properties from the simulations L100 and L35 are gridded with \(100^{3}\) and \(35^{3}\) cells, respectively, ensuring the same cell resolution of 1 \(h^{-1}\)cMpc for the RT calculation. The spectral energy distributions (SEDs) of stellar sources are calculated using the Binary Population and Spectral Synthesis (BPASS) code (Stanway and Eldridge, 2018). To take into account the history of star formation in the evaluation of the physical properties of the ionized regions, we integrate the SED over the stellar age, i.e. the time from the birth of stars to the output redshifts. We refer to this as an integrated SED (iSED). Although LG20 can output iSEDs for each galaxy, for convenience, we adopt the one obtained by averaging the iSEDs normalized by the stellar mass of galaxies. 
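Concretely, the averaging just described amounts to the following (a schematic Python fragment with invented array names, where `ised` has shape \((N_{\rm gal},N_{\lambda})\) and `mstar` shape \((N_{\rm gal},)\)):

```python
import numpy as np

def mean_normalized_ised(ised, mstar):
    """Average of the galaxy iSEDs after normalization by stellar mass;
    returns the mean template and its rms scatter per wavelength bin."""
    norm = ised / mstar[:, None]
    return norm.mean(axis=0), norm.std(axis=0)
```

A source's iSED is then approximated by this mean template multiplied by its stellar mass.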
In this case, the outputs from LG20 required to run _grizzly_ are only the stellar mass and the stellar age of galaxies, rather than the full SED of each galaxy, which saves computing time for both LG20 and _grizzly_. As a reference, in Fig. 2 we present the average iSED after normalization by the stellar mass of the galaxies, together with the 1-\(\sigma\) region, for the \(\sim 2.8\times 10^{5}\) galaxies at \(z=7\) obtained from the simulation L100 with fiducial parameter values. This shows that the stellar mass normalized iSEDs have a very small scatter (e.g. the root mean square \(\sigma\) value is \(<10\%\) of the mean value). We also check that the stellar mass normalized iSEDs are sensitive neither to the galaxy models (i.e. the parameters listed in Table 1 except \(\beta_{\rm es}\)) nor to the output redshifts (see the discussion in Appendix A). This is due to the fact that, although the UV emission of stellar sources is dominated by massive young stars, both the stellar mass and the iSEDs are integrated over the whole star formation history, i.e. the iSEDs are proportional to the stellar mass of galaxies. Finally, _grizzly_ computes \(\delta T_{\rm 21cm}\) and the associated power spectrum, where \(\delta T_{\rm 21cm}\) is defined as:

\[\delta T_{\rm 21cm}=27\,{\rm mK}\,\frac{\Omega_{b}h^{2}}{0.023}\sqrt{\frac{0.15}{\Omega_{m}h^{2}}\,\frac{1+z}{10}}\;x_{\rm HI}\left(1+\delta\right)\left(1-\frac{T_{\rm CMB}}{T_{\rm S}}\right), \tag{7}\]

where \(x_{\rm HI}\) is the neutral hydrogen fraction, \(\delta\) the matter over-density, \(T_{\rm CMB}\) the CMB temperature and \(T_{\rm S}\) the spin temperature of HI; throughout this paper we assume \(T_{\rm S}\gg T_{\rm CMB}\).

## 3 Results

### Galaxy properties

Fig. 3 shows the evolution of the SFR density \(\rho_{\rm SFR}\) as a function of redshift for different values of the model parameters. In Appendix B we also present the evolution of the stellar mass density \(\rho_{M_{*}}\), which presents features similar to those of the \(\rho_{\rm SFR}\) shown here. With a smaller (larger) star formation efficiency \(\alpha_{\rm H_{2}}\), the \(\rho_{\rm SFR}\) is, as expected, lower (higher) at \(z>10\), while the curves converge at \(z<8\) due to supernova feedback effects, i.e. higher star formation results in more supernova feedback that in turn reduces the star formation (see the discussion of Fig. 4 in the following). The two series of simulations show similar evolution features, but L35 has a higher \(\rho_{\rm SFR}\) at \(z>12\), as the simulation with higher resolution can resolve more small halos, which dominate star formation at such high \(z\). Changing the merger and starburst parameters (i.e. \(\alpha_{\rm friction}\), \(\alpha_{\rm SF,\,burst}\) and \(\beta_{\rm SF,\,burst}\)) has negligible effects on the \(\rho_{\rm SFR}\) of L100. Differently, it changes the results of the higher resolution simulation L35, e.g. the SFR becomes lower by increasing \(\alpha_{\rm friction}\) (i.e. a longer time delay of mergers), while it increases by increasing the starburst efficiency \(\alpha_{\rm SF,\,burst}\). Since in Eq. 2 \(M_{1}<M_{2}\), an increase of \(\beta_{\rm SF,\,burst}\) results in a lower SFR. As discussed in the following Fig.
4, this is because the high resolution \(N\)-body simulation resolves more neighboring halos that can more easily merge. The supernova feedback parameters (i.e. \(\epsilon_{\rm reheat}\), \(V_{\rm reheat}\) and \(\beta_{\rm reheat}\)) affect \(\rho_{\rm SFR}\) in both series of simulations. A smaller (larger) feedback efficiency (i.e. \(\epsilon_{\rm reheat}\)) leads to a higher (lower) \(\rho_{\rm SFR}\) in both simulations. While in L100 the impact is more significant at \(z<10\), in L35 \(\rho_{\rm SFR}\) converges at \(z\sim 6\), as star formation here is dominated by merger induced starbursts, thus reducing the effect of supernova feedback on the total SFR. The parameters \(V_{\rm reheat}\) and \(\beta_{\rm reheat}\) regulate the dependence of supernovae feedback on the halo mass (see the Eq. 3 and Fig. 4), which results in different evolution features of \(\rho_{\rm SFR}\) in simulations L100 and L35, specifically the latter shows much larger differences than the former at \(z>10\). The AGN parameters (i.e. \(f_{\rm BH}\), \(V_{\rm BH}\) and \(k_{\rm AGN}\)) do not have large effects on \(\rho_{\rm SFR}\) in both series of simulations. It is because the energy of AGN is proportional to the black hole mass and the hot gas mass in the galaxies (see Eq. 5), i.e. the AGN feedback is more significant in the most massive halos, which are very rare during the EoR. As a reference, we also display observational data as summarized in Ma et al. (2017). These are roughly consistent with our predicted \(\rho_{\rm SFR}\) from both series of simulations, although the observations still have large error-bars. To better understand some of the features emerging in Fig. 3, in Fig. 4 we present the SFR distributions at \(z=7\) as functions of the halo virial mass \(M_{\rm vir}\). They are computed as \(\Delta_{\rm SFR}/\Delta_{\rm log_{10}(M_{\rm vir})}/\Sigma_{\rm SFR}\), where \(\Delta_{\rm log_{10}(M_{\rm vir})}\) is the bin-width of \(M_{\rm vir}\), \(\Delta_{\rm SFR}\) is the sum of the SFRs of halos within the bin, and \(\Sigma_{\rm SFR}\) denotes the total SFR. Note that, to have a consistent comparison, the \(\Sigma_{\rm SFR}\) used here is the same for all lines, i.e. from the L100 simulation with the fiducial parameter values. A smaller (larger) \(\alpha_{\rm H_{2}}\) results in a lower (higher) SFR in massive halos (i.e. \(M_{\rm vir}>10^{10}\,{\rm M}_{\odot}\)), while the trend is reversed in the less massive ones, especially in the L35 simulation. This is due to the effects of supernovae feedback on star formation, i.e. supernovae formed at early times can reduce the SFR in the less massive halos by reheating the cold gas, while this effect is weaker in the massive halos that have higher cooling rates and more cold gas. Note that supernovae explosions happen with a time delay after the formation of stars, which is very short for massive stars. The merger and starburst parameters do not affect the SFR distributions as a function of \(M_{\rm vir}\) in simulation L100, while they significantly change the SFR of halos with \(M_{\rm vir}<10^{10}\,{\rm M}_{\odot}\) in simulation L35, e.g. 
the distribution amplitudes with the smaller and larger \(\alpha_{\rm SF,\ burst}\) values have \(\sim 1\) dex difference at \(M_{\rm vir}\sim 10^{8.8}\) M\({}_{\odot}\). Since more small halos are resolved in the high resolution simulation (i.e. L35), and they are close to each other, mergers happen more often than in simulation L100. However, the SFR within halos with \(M_{\rm vir}>10^{10}\) M\({}_{\odot}\) is not very sensitive to the merger models for either simulation. A smaller (larger) supernovae feedback efficiency factor \(\epsilon_{\rm reheat}\) increases (decreases) the SFR within halos with \(M_{\rm vir}<10^{10}\) M\({}_{\odot}\) in simulation L100, while it has no such obvious effect in L35. One reason is that the star formation of less massive halos in L35 is dominated by mergers, which reduces the impact of supernovae feedback on the SFR. As mentioned before, \(V_{\rm reheat}\) and \(\beta_{\rm reheat}\) relate the supernovae feedback to the halo mass, and thus shape the dependence of the SFR on \(M_{\rm vir}\). For example, in L35, with smaller \(V_{\rm reheat}\) and \(\beta_{\rm reheat}\) values the SFR of less massive halos (\(M_{\rm vir}<10^{9}\) M\({}_{\odot}\)) is much higher than the corresponding one with fiducial and larger values, and the latter two cases show similar SFRs. This is because with smaller \(V_{\rm reheat}\) and \(\beta_{\rm reheat}\) values the supernovae feedback within less massive halos is much weaker (see Eq. 3), with the consequence of significantly increasing the SFR of these halos. Since the SFR of less massive halos in L35 is dominated by mergers, these overcome the effect of supernovae explosions, so that a stronger supernovae feedback (i.e. larger \(V_{\rm reheat}\) and \(\beta_{\rm reheat}\)) does not visibly reduce the SFR. At \(M_{\rm vir}>10^{9}\) M\({}_{\odot}\) the SFR is similar for all values of \(V_{\rm reheat}\) and \(\beta_{\rm reheat}\), i.e. the effect of supernovae feedback on the SFR within massive halos is weak. In L100, the SFRs with smaller \(V_{\rm reheat}\) and \(\beta_{\rm reheat}\) are higher than those with fiducial values at \(M_{\rm vir}<10^{10}\) M\({}_{\odot}\), while they are similar within more massive halos. With larger parameter values, the SFRs are lower than those with fiducial values at \(M_{\rm vir}<10^{10}\) M\({}_{\odot}\), while they also increase the SFRs of some more massive halos. Consistent with earlier results, the effects of the AGN model are only important for the very massive halos, which are rare in our simulations. Changing all three related parameters does not obviously affect the SFR distributions in the halos of either series of simulations.

Figure 3: Evolution of SFR density \(\rho_{\rm SFR}\) as a function of redshift \(z\), for different values of the galaxy formation parameters \(\alpha_{\rm H_{2}}\), \(\alpha_{\rm friction}\), \(\alpha_{\rm SF,\,burst}\), \(\beta_{\rm SF,\,burst}\), \(\epsilon_{\rm reheat}\), \(V_{\rm reheat}\), \(\beta_{\rm reheat}\), \(f_{\rm BH}\), \(V_{\rm BH}\) and \(k_{\rm AGN}\), from left to right and from top to bottom. Each panel shows the results of the corresponding parameter with smaller value (dashed line), fiducial value (solid line) and larger value (dash-dotted line) in simulation L100 (cyan thick lines) and L35 (magenta thin lines). The black data points with error-bars refer to the observational data as summarized in Ma et al. (2017).

Fig.
5 shows the UV luminosity function \(\phi\) at the rest-frame wavelength \(\lambda=1600\) Å for different galaxy formation and escape fraction model parameters. The absolute magnitude of the galaxy luminosity at \(\lambda=1600\) Å is computed as:

\[M_{1600,\,{\rm AB}}=-\frac{5}{2}\log_{10}\left(f_{\rm es,\lambda}\,\frac{F_{1600}}{4\pi R^{2}}\right)-48.6, \tag{8}\]

where \(F_{1600}\) is the galaxy brightness at \(\lambda=1600\) Å, and \(R=10\) pc. As a comparison, we also show the observations of \(\phi\) at \(z=7\) (Bouwens et al., 2021) (see Appendix C for more comparisons at different redshifts). Note that, as mentioned earlier, we fix the parameter \(f_{0,\lambda}=0.25\) at \(\lambda=1600\) Å to match our results with observations. The shape and amplitude of the measured \(\phi\) are consistent with our fiducial model in simulation L35, while they are slightly lower than the \(\phi\) at \(M_{1600,\,{\rm AB}}<-22\) from simulation L100. This may be caused by the bias associated with the small field of view of surveys (see e.g. Bouwens et al., 2021), which might not cover enough bright objects. In general, the differences induced on \(\phi\) by the different parameters are much smaller than those on the SFR distributions shown in Fig. 4. Specifically, the three star formation efficiency \(\alpha_{\rm H_{2}}\) values result in similar \(\phi\) for both simulations, except that the smaller (larger) \(\alpha_{\rm H_{2}}\) produces a lower (higher) \(\phi\) at \(M_{1600,\,{\rm AB}}>-15\) in L100. We note that, limited by the resolution of the \(N\)-body simulations, the luminosity functions are not robust at the faint end, due to the lack of low mass halos (see Fig. 1).

Figure 4: SFR distributions at \(z=7\) as functions of the halo virial mass \(M_{\rm vir}\), for different values of the galaxy formation parameters \(\alpha_{\rm H_{2}}\), \(\alpha_{\rm friction}\), \(\alpha_{\rm SF,\,burst}\), \(\beta_{\rm SF,\,burst}\), \(\epsilon_{\rm reheat}\), \(V_{\rm reheat}\), \(\beta_{\rm reheat}\), \(f_{\rm BH}\), \(V_{\rm BH}\) and \(k_{\rm AGN}\), from left to right and from top to bottom. Each panel shows the results of the corresponding parameter with smaller value (dashed line), fiducial value (solid line) and larger value (dash-dotted line) in simulation L100 (cyan thick lines) and L35 (magenta thin lines).

Although the merger parameters \(\alpha_{\rm friction}\), \(\alpha_{\rm SF,\,burst}\) and \(\beta_{\rm SF,\,burst}\) obviously affect the SFR density (Fig. 3) and the SFR distribution (Fig. 4) of simulation L35, their effects on \(\phi\) are not very significant. The slight differences are mostly at the bright end (i.e. \(M_{\rm 1600,\,AB}<-18\)). Similarly, some small differences appear in simulation L100 at the very bright end (e.g. \(M_{\rm 1600,\,AB}<-21\)). The reason for this is that mergers can trigger very strong star formation, thus leading to very high UV radiation. The effects of supernovae feedback on \(\phi\) are only visible at the faint end (e.g. \(M_{\rm 1600,\,AB}>-18\) for simulation L100 and \(M_{\rm 1600,\,AB}>-16\) for simulation L35), as supernovae explosions mainly affect star formation in the less massive halos (see Fig. 4), which have low SFR and thus low UV luminosity. Since the fainter galaxies are very hard to detect even for JWST, the impact on \(\phi\) caused by supernovae feedback might be hard to confirm with observations of UV luminosity functions. Consistently with Fig. 3 and Fig.
4, the UV luminosity function \(\phi\) is not sensitive to the AGN model. The escape fraction parameter (i.e. \(\beta_{\rm es}\)) dramatically affects the shape of \(\phi\): with \(\beta_{\rm es}=-0.5\) both simulations present more faint UV luminosities but fewer bright ones, while with a positive \(\beta_{\rm es}\) the UV luminosities of massive galaxies are increased, so that the simulations show more bright UV luminosities but a reduced number of faint ones. Compared to the observational results, it seems that the data are consistent with \(\beta_{\rm es}=0\). In summary, the UV objects observed during the EoR are mostly bright ones, and their luminosity functions are not very sensitive to changes in many of the galaxy formation parameters; thus the UV luminosity function by itself is not enough to constrain the galaxy formation model. However, some parameters, e.g. the starburst and the escape fraction ones, could possibly be constrained by the UV luminosity functions.

### Reionization and 21-cm signal

Fig. 6 shows the history of the volume averaged ionization fraction (\(\bar{x}_{\rm HII}\)) for different galaxy formation and escape fraction model parameters. Because the number of ionizing photons is related to the stellar mass (see the discussion in Sec. 2.3), the behaviour of \(\bar{x}_{\rm HII}\) looks similar to that of the stellar mass density (see Fig. 11). In the fiducial model, \(\bar{x}_{\rm HII}\) from L100 is lower than that of L35 at \(z>11\) due to the lower SFR density of L100 (see Fig. 3), while it becomes similar at lower \(z\), with \(\bar{x}_{\rm HII}=0.5\) at \(z\approx 7.8\). Assuming \(\bar{x}_{\rm HeII}=\bar{x}_{\rm HII}\) and that helium is fully ionized at \(z=3\), the corresponding CMB optical depth is \(\tau\approx 0.059\). As a reference, the one measured by the _Planck_ satellite is \(0.054\pm 0.007\) (Planck Collaboration et al., 2020). Similarly to the evolution of the stellar mass density in Fig. 11, with a smaller (larger) star formation efficiency \(\alpha_{\rm H_{2}}\), \(\bar{x}_{\rm HII}\) is lower (higher) at \(z>12\) in both L100 and L35, while the curves converge towards the end of the EoR. The merger and starburst models affect \(\bar{x}_{\rm HII}\) only in L35, with visible differences at \(z<10\), when mergers are more frequent. For example, a smaller (larger) \(\alpha_{\rm friction}\) and \(\beta_{\rm SF,\,burst}\) leads to higher (lower) \(\bar{x}_{\rm HII}\), while a smaller (larger) \(\alpha_{\rm SF,\,burst}\) results in lower (higher) \(\bar{x}_{\rm HII}\). As supernovae feedback can reduce the SFR and stellar mass density (see Fig. 3 and Fig. 11), it also affects the evolution of \(\bar{x}_{\rm HII}\). For example, with smaller (larger) \(\epsilon_{\rm reheat}\), \(V_{\rm reheat}\) and \(\beta_{\rm reheat}\), \(\bar{x}_{\rm HII}\) throughout the whole EoR period becomes much higher (lower) in both simulations. The AGN models do not appreciably affect the ionization process.

Figure 5: UV luminosity function \(\phi\) at the rest-frame wavelength \(\lambda=1600\) Å and \(z=7\) for different values of the galaxy formation parameters \(\alpha_{\rm H_{2}}\), \(\alpha_{\rm friction}\), \(\alpha_{\rm SF,\,burst}\), \(\beta_{\rm SF,\,burst}\), \(\epsilon_{\rm reheat}\), \(V_{\rm reheat}\), \(\beta_{\rm reheat}\), \(f_{\rm BH}\), \(V_{\rm BH}\), \(k_{\rm AGN}\) and \(\beta_{\rm es}\), from left to right and from top to bottom. Each panel shows the results of the corresponding parameter with smaller value (dashed line), fiducial value (solid line) and larger value (dash-dotted line) in the simulation L100 (cyan thick lines) and L35 (magenta thin lines). The blue triangles with error-bars are observations at \(z=7\) from Bouwens et al. (2021).

As a negative \(\beta_{\rm es}\) (i.e. \(-0.5\)) increases the budget of ionizing photons from low
mass galaxies (stellar mass \(M_{\rm s}<10^{8}\,{\rm M}_{\odot}\)), it dramatically speeds up the ionization process in both simulations. Instead, a positive \(\beta_{\rm es}\) (i.e. 0.5) reduces the ionizing photon output of low mass galaxies, while it increases that of massive galaxies. However, since there is a paucity of massive galaxies, the net effect is that a positive \(\beta_{\rm es}\) delays the ionization process, especially in simulation L35.

#### 3.2.1 21-cm power spectra at the halfway point of the EoR

Fig. 7 shows the power spectra \(\Delta^{2}_{\rm 21cm}(k)\) of the \(\delta T_{\rm 21cm}\) at \(\bar{x}_{\rm HII}=0.5\) for simulation L100. The results from L35 are not shown, as its small cell number (i.e. \(35^{3}\)) leads to very large sample variance in the power spectra. Different values of some parameters, e.g. the star formation efficiency \(\alpha_{\rm H_{2}}\), the time delay factor of mergers \(\alpha_{\rm friction}\), the supernovae feedback models (\(\epsilon_{\rm reheat}\), \(V_{\rm reheat}\) and \(\beta_{\rm reheat}\)) and the escape fraction index factor \(\beta_{\rm es}\), result in obviously different \(\Delta^{2}_{\rm 21cm}\) at \(k>0.15\) \({\rm cMpc}^{-1}\), while the other parameters - i.e. the starburst model (\(\alpha_{\rm SF,\,burst}\) and \(\beta_{\rm SF,\,burst}\)) and the AGN model (\(f_{\rm BH}\), \(V_{\rm BH}\) and \(k_{\rm AGN}\)) - have no significant effects on \(\Delta^{2}_{\rm 21cm}\). Specifically, a higher (lower) star formation rate speeds up (delays) the ionization process, so that the \(\bar{x}_{\rm HII}=0.5\) value is reached at different redshifts (see Fig. 6). When \(\bar{x}_{\rm HII}=0.5\) happens at higher (lower) redshifts, the \(\delta T_{\rm 21cm}\) presents larger (smaller) amplitudes of \(\Delta^{2}_{\rm 21cm}\), especially at small scales. For example, with a smaller (larger) \(\epsilon_{\rm reheat}\) the SFR and stellar mass densities are much higher (lower), thus the ionizing process is faster (slower), with the consequence that \(\Delta^{2}_{\rm 21cm}\) at \(k>0.15\) \({\rm cMpc}^{-1}\) is \(\sim\) 10% higher (lower) than in the case with the fiducial \(\epsilon_{\rm reheat}\) value. A similar effect is associated with the parameters \(\alpha_{\rm H_{2}}\) and \(\alpha_{\rm friction}\), although the differences induced on \(\Delta^{2}_{\rm 21cm}\) are only a few per cent. Differently, both the smaller and the larger \(V_{\rm reheat}\) and \(\beta_{\rm reheat}\) values result in an amplitude of \(\Delta^{2}_{\rm 21cm}\) higher than the fiducial one, due to their complicated relation to the star formation of halos (see Fig. 4). Although larger \(V_{\rm reheat}\) and \(\beta_{\rm reheat}\) reduce the global SFR and thus delay the ionization process (see Fig. 3 and Fig. 6), they also increase the SFR of some massive halos (see Fig. 4), leading to larger ionized bubbles around these halos, and thus to higher fluctuations of \(\delta T_{\rm 21cm}\) and a higher \(\Delta^{2}_{\rm 21cm}\). With \(\beta_{\rm es}=-0.5\), \(\bar{x}_{\rm HII}=0.5\) is obtained at very high \(z\) (i.e. 9.3), which results in a \(\Delta^{2}_{\rm 21cm}\) much higher than the fiducial one.
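As a concrete illustration of how these quantities are obtained from gridded simulation outputs, the following Python sketch evaluates Eq. (7) in the \(T_{\rm S}\gg T_{\rm CMB}\) limit and bins the 3-D Fourier transform of a \(\delta T_{\rm 21cm}\) cube into the dimensionless power spectrum \(\Delta^{2}_{\rm 21cm}(k)=k^{3}P(k)/2\pi^{2}\). This is a generic estimator written for this text, not the internal grizzly implementation; the binning choices are illustrative, and empty \(k\)-bins return NaN.

```python
import numpy as np

def delta_tb(x_hi, delta, z, omega_b=0.0493, omega_m=0.315, h=0.674):
    """Eq. (7) in the T_S >> T_CMB limit; returns delta T_21cm in mK."""
    return (27.0 * x_hi * (1.0 + delta) * (omega_b * h**2 / 0.023)
            * np.sqrt(0.15 / (omega_m * h**2) * (1.0 + z) / 10.0))

def power_spectrum(cube, box_len, nbins=15):
    """Spherically averaged Delta^2(k) [mK^2] of an (N,N,N) cube in mK.

    box_len is the comoving side length in cMpc; k is in cMpc^-1."""
    n = cube.shape[0]
    ft = np.fft.fftn(cube) * (box_len / n) ** 3        # discrete -> continuous FT
    pk3d = np.abs(ft) ** 2 / box_len**3                # P(k) in mK^2 cMpc^3
    kfreq = 2.0 * np.pi * np.fft.fftfreq(n, d=box_len / n)
    kx, ky, kz = np.meshgrid(kfreq, kfreq, kfreq, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2).ravel()
    bins = np.logspace(np.log10(2 * np.pi / box_len), np.log10(kmag.max()), nbins)
    idx = np.digitize(kmag, bins)
    pk = np.array([pk3d.ravel()[idx == i].mean() for i in range(1, nbins)])
    kc = 0.5 * (bins[1:] + bins[:-1])
    return kc, kc**3 * pk / (2.0 * np.pi**2)
```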
With \(\beta_{\rm es}=0.5\), the \(\Delta^{2}_{\rm 21cm}\) is similar to the fiducial one at \(k>0.2\) \({\rm cMpc}^{-1}\) (i.e. small scales), while much higher than the latter at \(k<0.1\) \({\rm cMpc}^{-1}\) (large scales). This is because the positive \(\beta_{\rm es}\) significantly increases the size of the ionized bubbles surrounding very massive galaxies, which in turn changes the fluctuations of \(\delta T_{\rm 21cm}\).

#### 3.2.2 Evolution of 21-cm power spectra

Fig. 8 shows the evolution of \(\Delta^{2}_{\rm 21cm}\) of simulation L100 at \(k=0.29\) \({\rm cMpc}^{-1}\), as a function of redshift. We do not show \(\Delta^{2}_{\rm 21cm}\) at \(k=0.1\) \({\rm cMpc}^{-1}\) as it is dominated by sample variance, and thus it is not robust. Note that the assumption of \(T_{\rm S}\gg T_{\rm CMB}\) only works after heating from X-ray sources; thus the results for \(\Delta^{2}_{\rm 21cm}\) are valid only below a certain \(z\), which depends on the X-ray source model adopted (Ma et al., 2021). Since ionization is very weak at the beginning of the EoR, the fluctuations of \(\delta T_{\rm 21cm}\) are dominated by the matter density, thus all models present a similar \(\Delta^{2}_{\rm 21cm}\) at \(z>13\). With decreasing redshift, the fluctuations of the ionization fraction \(x_{\rm HII}\) start to dominate the amplitude of \(\Delta^{2}_{\rm 21cm}\), which peaks at \(z\approx 8\) (\(\bar{x}_{\rm HII}\approx 0.45\)) in the fiducial model.

Figure 6: Evolution of the volume averaged ionization fraction (\(\bar{x}_{\rm HII}\)) for different values of the parameters \(\alpha_{\rm H_{2}}\), \(\alpha_{\rm friction}\), \(\alpha_{\rm SF,\,burst}\), \(\beta_{\rm SF,\,burst}\), \(\epsilon_{\rm reheat}\), \(V_{\rm reheat}\), \(\beta_{\rm reheat}\), \(f_{\rm BH}\), \(V_{\rm BH}\), \(k_{\rm AGN}\) and \(\beta_{\rm es}\), from left to right and from top to bottom. Each panel shows the results of the corresponding parameter with smaller value (dashed line), fiducial value (solid line) and larger value (dash-dotted line) in simulation L100 (cyan thick lines) and L35 (magenta thin lines).

Figure 7: Power spectra \(\Delta_{\rm 21cm}^{2}\) at \(\bar{x}_{\rm HII}=0.5\) for the parameters \(\alpha_{\rm H_{2}}\), \(\alpha_{\rm friction}\), \(\alpha_{\rm SF,\,burst}\), \(\beta_{\rm SF,\,burst}\), \(\epsilon_{\rm reheat}\), \(V_{\rm reheat}\), \(\beta_{\rm reheat}\), \(f_{\rm BH}\), \(V_{\rm BH}\), \(k_{\rm AGN}\) and \(\beta_{\rm es}\), from left to right and from top to bottom. Each panel shows the results of the corresponding parameter with smaller value (dashed line), fiducial value (solid line) and larger value (dash-dotted line) in simulation L100.

Figure 8: Evolution of the 21-cm power spectra \(\Delta_{\rm 21cm}^{2}\) at \(k=0.29\,\)cMpc\({}^{-1}\) for the parameters \(\alpha_{\rm H_{2}}\), \(\alpha_{\rm friction}\), \(\alpha_{\rm SF,\,burst}\), \(\beta_{\rm SF,\,burst}\), \(\epsilon_{\rm reheat}\), \(V_{\rm reheat}\), \(\beta_{\rm reheat}\), \(f_{\rm BH}\), \(V_{\rm BH}\), \(k_{\rm AGN}\) and \(\beta_{\rm es}\), from left to right and from top to bottom. Each panel shows the results of the corresponding parameter with smaller value (dashed line), fiducial value (solid line) and larger value (dash-dotted line) in the simulation L100.

The differences in \(\Delta^{2}_{\rm 21cm}\) caused by the different parameter values are mostly at \(z<10\). Specifically, the supernovae feedback models (i.e.
parameters \(\epsilon_{\rm reheat}\), \(V_{\rm reheat}\) and \(\beta_{\rm reheat}\)) show the most pronounced differences, as supernovae feedback strongly affects the star formation during the EoR (see Fig. 3) and thus the ionization history (see Fig. 6). Instead, only slight differences are visible for different values of the star formation efficiency \(\alpha_{\rm H_{2}}\) and the merger time-delay parameter \(\alpha_{\rm friction}\). Typically, the higher the redshift at which the peak in \(\Delta^{2}_{\rm 21cm}\) happens, the larger its amplitude, e.g. for the parameters \(\alpha_{\rm H_{2}}\) and \(\epsilon_{\rm reheat}\). Differently, both the smaller and the larger values of \(V_{\rm reheat}\) and \(\beta_{\rm reheat}\) have peak amplitudes of \(\Delta^{2}_{\rm 21cm}\) higher than the fiducial ones, although their ionization histories are clearly different from each other (see Fig. 6). As mentioned earlier, this is due to the complicated dependence of the galactic star formation on \(V_{\rm reheat}\) and \(\beta_{\rm reheat}\) for different halo masses. A negative \(\beta_{\rm es}\) (i.e. \(-0.5\)) leads to very early ionization, and thus to a clearly different \(\Delta^{2}_{\rm 21cm}\) evolution history, while the \(\Delta^{2}_{\rm 21cm}\) with positive \(\beta_{\rm es}\) (i.e. 0.5) is roughly consistent with the fiducial one. Changing the starburst parameters \(\alpha_{\rm SF,\ burst}\) and \(\beta_{\rm SF,\ burst}\), as well as the AGN models (i.e. \(f_{\rm BH}\), \(V_{\rm BH}\) and \(k_{\rm AGN}\)), does not visibly affect the evolution of \(\Delta^{2}_{\rm 21cm}\).

## 4 Discussion and Conclusions

Ongoing and upcoming observations, e.g. with the JWST and SKA telescopes, respectively, will enable us to measure both the galaxy properties and the 21-cm signal during the EoR. In order to optimally exploit these forthcoming data, we have designed polar, a novel semi-numeric algorithm obtained by including the semi-analytical model for galaxy formation L-Galaxies 2020 (Henriques et al., 2020) within the 1-D radiative transfer code grizzly (Ghara et al., 2018). polar is then able to describe consistently both the galaxy formation and the reionization process. Compared to previous works (e.g. Park et al., 2020; Zhang et al., 2022), our framework is based on a well established and widely used semi-analytic model of galaxy formation, which allows for the inclusion of an extensive network of physical processes. polar is similar to the semi-numerical models ASTRAEUS (Hutter et al., 2021) and MERAXES (Mutch et al., 2016), but with different modelling for galaxy formation and radiative transfer. More specifically, while polar is based on a 1-D radiative transfer approach, which also allows for a more accurate modeling of the source spectra and their effect on the temperature and ionization state of the gas, in ASTRAEUS and MERAXES the evolution of the ionized regions is followed by essentially comparing the number of emitted photons to the number of absorptions. While in this paper we only introduce polar and explore the effect of a few selected parameters on the galaxy and reionization process, in the future we will use it to perform a parameter fitting based on MCMC techniques, which is possible due to the low computational requirements of polar.
With the newly published gadget-4 code (Springel et al., 2021), we ran two \(N\)-body simulations with limited box lengths of \(100\,h^{-1}\)cMpc (named L100) and \(35\,h^{-1}\)cMpc (named L35), which resolve minimum halo masses of \(\sim 4.2\times 10^{9}\,\rm{M}_{\odot}\) and \(\sim 1.7\times 10^{8}\,\rm{M}_{\odot}\), respectively. These simulations have a consistent halo mass function within the range \((4.2\times 10^{9}-3.6\times 10^{11})\,\rm{M}_{\odot}\). Using the merger trees and dark-matter density fields as inputs, and adopting the best-fit values for the galaxy formation parameters from Henriques et al. (2020), with polar we obtain a star formation history, UV luminosity function and CMB Thomson scattering optical depth consistent with observations in the literature. As this first paper is meant as a proof of concept of our new method, the \(N\)-body simulations do not reach the sizes necessary for 21-cm studies (i.e. several hundreds of cMpc), nor do they resolve small mass halos which could be relevant during the earlier stages of the reionization process. We note that, although polar has so far proven to be very efficient, the computation time required to run it on larger or higher resolution simulations will necessarily increase and possibly render an MCMC approach inefficient. In this case, we expect to rely on the additional use of specifically designed emulators, similarly to what was done in Ghara et al. (2020); Mondal et al. (2022). We also note that the inclusion of smaller halos should be accompanied by a modeling of radiative feedback effects, which are expected to affect their star formation (see e.g. Hutter et al., 2021; Legrand et al., 2023). We investigate how the galaxy formation and escape fraction models affect the results in terms of star formation history, UV luminosity function, ionization history and 21-cm power spectrum. We find that the star formation and the ionization history are very sensitive to the supernovae feedback models, as supernovae explosions can efficiently reduce star formation within low mass halos. They are also significantly affected by the star formation efficiency during the early stages of the EoR, while towards the end of the EoR supernovae feedback can offset the effects of the star formation efficiency. The starburst triggered by mergers is important in our high resolution simulation L35, while its effects on star formation and ionization are negligible in L100. The ionization history is very sensitive to the escape fraction model, as it can significantly affect the budget of ionizing photons. On the contrary, the AGN feedback model does not significantly affect any of the results. The UV luminosity function is very sensitive to the escape fraction model (e.g. the slope of the UV luminosity function), and indeed not all our models are consistent with observations (e.g. Bouwens et al., 2021). The parameters describing supernovae feedback and star formation efficiency may be difficult to constrain with observations of the UV luminosity function, as they have an effect only on its faint end, and these faint galaxies are hardly observed. Differently, since galaxy mergers can trigger very strong star formation and consequently high UV radiation, the merger and starburst model can affect the bright end of the UV luminosity function. As the 21-cm power spectra from simulation L35 are dominated by sample variance, in this paper we have only discussed those from L100.
We find that they are very sensitive to the supernovae feedback and the escape fraction model, while only weakly sensitive to the star formation efficiency and the galaxy merger model. Usually, an earlier ionization results in higher amplitudes of the 21-cm power spectra, while we find that both the smaller and the larger values of the parameters describing supernovae feedback give a 21-cm power spectrum larger than the one obtained with the fiducial parameters. This is because of their complex dependence on the halo mass. polar, the new tool introduced in this paper, provides an efficient way to build a consistent and realistic galaxy formation and reionization process. In this framework, the different dependence of e.g. UV luminosity functions and 21-cm power spectra on the galaxy formation and escape fraction models would help to reduce the degeneracy between parameters and to make the best use of state-of-the-art multi-wavelength observations of the high redshift universe, as offered by e.g. HST, JWST, ALMA, LOFAR and the planned SKA and EELT (European Extremely Large Telescope).

## Acknowledgements

The authors would like to thank Rob Yates for his helpful insight into L-Galaxies 2020, and an anonymous referee for her/his comments. QM is supported by the National SKA Program of China (grant No. 2020SKA0110402), the National Natural Science Foundation of China (Grant No. 12263002, 11903010), the Science and Technology Fund of Guizhou Province (Grant No. [2020]1Y020), and the GZNU 2019 Special projects of training new academics and innovation exploration. RG and SZ acknowledge support from grant no. 255/18 of the Israel Science Foundation. LVEK acknowledges financial support from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 884760, "CoDEX"). GM acknowledges support by Swedish Research Council grant 2020-04691. RM is supported by the Israel Academy of Sciences and Humanities & Council for Higher Education Excellence Fellowship Program for International Postdoctoral Researchers. The tools for bibliographic research are offered by the NASA Astrophysics Data Systems and by the JSTOR archive.

## Data Availability

The code polar, the simulation data, and also the post-analysis scripts underlying this article will be shared on reasonable request to the corresponding author.
2308.07828
A Genetic Algorithm Meta-Heuristic for a Generalized Quadratic Assignment Problem
The generalized quadratic assignment problem (GQAP) is one of the hardest problems to solve in the operations research area. The GQAP addressed in this work is defined as the task of minimizing the assignment and transportation costs of assigning a set of facilities to a set of locations. The facilities have different space requirements, and the locations have different space capacities. Multiple facilities can be assigned to each location if the space capacity is not violated. In this work, three instances of GQAP in different situations are presented. Then, a genetic algorithm is developed to solve the GQAP instances. Finally, the local neighborhood search with the steepest descend strategy is constructed and applied to the final solution obtained by the GA, and the final solution is compared with the best solution found by MPL/CPLEX software and reference papers. The results show that the developed GA heuristic is effective for solving the GQAP.
Mojtaba A. Farahani, Alan McKendall
2023-08-15T15:13:26Z
http://arxiv.org/abs/2308.07828v2
# A Genetic Algorithm Meta-Heuristic for a Generalized Quadratic Assignment Problem ###### Abstract The generalized quadratic assignment problem (GQAP) is one of the hardest problems to solve in the operations research area. The GQAP addressed in this work is defined as the task of minimizing the assignment and transportation costs of assigning a set of facilities to a set of locations. The facilities have different space requirements, and the locations have different space capacities. Multiple facilities can be assigned to each location if the space capacity is not violated. In this work, three instances of GQAP in different situations are presented. Then, a genetic algorithm is developed to solve the GQAP instances. Finally, the local neighborhood search with the steepest descent strategy is constructed and applied to the final solution obtained by the GA, and the final solution is compared with the best solution found by MPL/CPLEX software and reference papers. The results show that the developed GA heuristic is effective for solving the GQAP. Generalized Quadratic Assignment Problem, Genetic Algorithms, Metaheuristics, Neighborhood Search Techniques The main parts of this work are adopted from IENG 554 lecture notes, [1], and [2]. The supplementary files, MATLAB codes and problem files of this work are available at [https://github.com/tamoraji/GA_for_GQAP](https://github.com/tamoraji/GA_for_GQAP). ## 1 Problem Definition & Mathematical Model The generalized quadratic assignment problem (GQAP) assigns a set of machines (M machines) to a set of locations (N locations), where M \(>\) N, such that more than one machine can be assigned to a location based on the machines' requirements and the capacities of the locations, while minimizing the sum of the assignment and transportation costs [3]. ### Non-linear Problem Definition The number of units of materials transported between machines \(\mathbf{i}\) and \(\mathbf{j}\) (\(\mathbf{f_{ij}}\)), the distances between locations \(\mathbf{k}\) and \(\mathbf{l}\) (\(\mathbf{d_{kl}}\)), the space requirement of each machine \(\mathbf{i}\) (\(\mathbf{r_{i}}\)), the capacity of each location k (\(\mathbf{c_{k}}\)), the costs of assigning each machine i to each location k (\(\mathbf{a_{ik}}\)), and the unit cost per distance unit of transporting materials between each pair of machines i at location k and machine j at location l (\(c_{ijkl}\)) are deterministic and known.
The non-linear mathematical formulation of the problem is as follows: \(x_{ik}=1\) if machine \(i\) is assigned to location \(k\), and \(0\) otherwise. \[Min\ Z\ =\ \sum_{i=1}^{M}\sum_{k=1}^{N}a_{ik}\,x_{ik}+\sum_{i=1}^{M}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{M}\sum_{k=1}^{N}\sum_{\begin{subarray}{c}l=1\\ l\neq k\end{subarray}}^{N}c_{ijkl}\,f_{ij}\,d_{kl}\,x_{ik}\,x_{jl} \tag{1}\] subject to \[\sum_{k=1}^{N}x_{ik}=1\qquad\forall\ i=1,\ldots,M \tag{2}\] \[\sum_{i=1}^{M}r_{i}\,x_{ik}\leq c_{k}\qquad\forall\ k=1,\ldots,N \tag{3}\] \[x_{ik}\in\{0,1\}\qquad\forall\ i,\ \forall\ k \tag{4}\] Constraint set (2) assigns each machine to exactly one location, and constraint set (3) enforces the space capacity of each location. ### Linearized Problem Definition The quadratic terms \(x_{ik}x_{jl}\) in the objective function can be linearized by introducing the auxiliary variables \(w_{ijkl}=x_{ik}x_{jl}\), so that the objective becomes \[Min\ Z\ =\ \sum_{i=1}^{M}\sum_{k=1}^{N}a_{ik}\,x_{ik}+\sum_{i=1}^{M}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{M}\sum_{k=1}^{N}\sum_{\begin{subarray}{c}l=1\\ l\neq k\end{subarray}}^{N}c_{ijkl}\,f_{ij}\,d_{kl}\,w_{ijkl} \tag{5}\] subject to constraints (2)-(4) together with \[w_{ijkl}\leq x_{ik},\qquad w_{ijkl}\leq x_{jl}\qquad\forall\ i,\ j\neq i\ \text{and}\ \forall\ k,\ l\neq k\]
\[x_{ik}+x_{jl}-1\leq w_{ijkl}\qquad\forall\ i,\ j\neq i\ \text{and}\ \forall\ k,\ l\neq k\] \[w_{ijkl}\geq 0\qquad\forall\ i,\ j\neq i\ \text{and}\ \forall\ k,\ l\neq k \tag{6}\] The linearized model is a mixed integer linear programming (MILP) model for the GQAP. This model will be used in the next section to solve a small GQAP instance using MPL/CPLEX software. ### MPL/CPLEX Formulation Figure 1 shows the MPL formulation that has been used in this project to solve three different problem instances mathematically using linear programming techniques. M and N are the number of machines and locations, respectively, and the DATA and transportation cost parts in the formulation should be substituted based on each problem instance accordingly. Figure 1: MPL formulation used in this project ## 2 Solving GQAP instances with LP and MPL/CPLEX results Three different GQAP instances have been used in this project: a small instance, a medium instance, and a large instance. The small instance is the task of assigning six machines to four locations. The medium instance is the task of assigning 20 machines to 15 locations, and the large instance is the task of assigning 50 machines to 10 locations. The details of each problem instance are given in the project assignment and will not be discussed here. Here, we present the results achieved with the CPLEX solver via MPL/CPLEX software and the linearized mathematical formulation for the GQAP presented before. It should be noted that the software was run on the WVU virtual machine with 16 GB of RAM and an Intel Xeon Platinum 8272CL at 2.6 GHz. For the small problem instance, the MILP has 370 constraints and 384 variables, which include 24 integers. The optimal solution was found by the software in under a second. The optimal objective function value for this problem is \(Z=17165\) (cf. Table 3), and the optimal solution is: x13 = x21 = x34 = x42 = x51 = x61 = 1, and all other decision variables are zero. More specifically, machines 2, 5, and 6 are assigned to location 1. Machines 4, 1, and 3 are assigned to locations 2, 3, and 4, respectively. Figure 2 is the screenshot of the problem solution. For the medium problem instance, the MILP has 79835 constraints and 80100 variables, which include 300 integers. The software ran out of memory after about three hours and 18 minutes and could not reach the optimal solution and objective function value. The best-found objective function value for this problem is \(Z=1714264\). This value will be used as the benchmark for the best-found solution by MPL/CPLEX. Figure 3 is the screenshot of the problem solution. Figure 2: MPL/CPLEX result for the small instance problem For the large problem instance, the MILP has 220560 constraints and 221000 variables, which include 500 integers. The virtual machine logged out after about five hours and could not reach the optimal solution and objective function value. The best-found objective function value for this problem is \(Z=12878101\). This value will be used as the benchmark for the best-found solution by MPL/CPLEX. Figure 4 is the screenshot of the problem solution. Figure 3: MPL/CPLEX result for the medium instance problem ## 3 Solving GQAP instances using Genetic Algorithm Metaheuristics The general framework adopted in this project is presented in Figure 5.
The detail of each step will be discussed in subsequent subsections. ### Solution representation The mathematical model that was defined in Section 1 can only be used to solve small problems in a reasonable amount of time. Thus, heuristic and meta-heuristic algorithms are developed for the GQAP. Figure 4: MPL/CPLEX result for the large instance problem Figure 5: The framework to address the GQAP As shown in [1], it is much more efficient to use the COP model, as opposed to the mathematical model, to solve the GQAP. The combinatorial optimization problem (COP) model and the solution representation for our problem are as follows: \(S=(S(1),S(2),\ldots,S(M))\), where S(i) = k means machine i is assigned to location k. \[Min\;TC(S)=\sum_{i=1}^{M}a_{is(i)}+\sum_{i=1}^{M}\sum_{\begin{subarray}{c}1\leq j\leq M\\ j\neq i\end{subarray}}c_{ijs(i)s(j)}f_{ij}d_{s(i)s(j)}\] \[subject\;to\;\sum_{\forall i\;s.t.\;s(i)=k}r_{i}\leq c_{k}\;for\;k=1,\ldots,N\] For example, the optimal solution for the small problem instance presented before is represented as S = (3, 1, 4, 2, 1, 1). That is, s(1) = 3, s(2) = 1, s(3) = 4, s(4) = 2, s(5) = 1, and s(6) = 1. More specifically, machines 2, 5, and 6 are assigned to location 1, machine 4 to location 2, machine 1 to location 3, and machine 3 to location 4. ### Genetic Algorithm A genetic algorithm (GA) is an intelligent probabilistic search algorithm that simulates the process of evolution by constructing a population of solutions and applying genetic operators (i.e., crossover and mutation) in each reproduction. Each solution in the population is evaluated according to the objective function and the fitness of the solution. Highly fit solutions in the population are given opportunities to reproduce and generate offspring. New offspring solutions are generated, and unfit solutions in the population are replaced. This evaluation-selection-reproduction cycle is repeated until a satisfactory solution is found or a stopping criterion has been met [2]. In this project, I adopted the genetic algorithm proposed by Chu and Beasley [2] with a few minor modifications for the GQAP. First, the general steps of the algorithm will be presented, and then each step will be discussed in more detail along with a numerical example from the small problem instance of the project. Figure 6 is the general GA algorithm that has been used in this project. The details of each step will be discussed subsequently. The MATLAB code snippet of each step and the results of the code for the small instance will be presented. **Step 0:** In this step, we generate an initial population of n randomly constructed solutions. Each of the initial solutions is generated by randomly assigning each machine to a random location. Note that the initial solutions may violate the capacity constraint and be infeasible. The number of chromosomes in the initial population is defined by the user. Figure 6: The GA algorithm flowchart **Step 1:** In this step, we calculate the fitness and unfitness values of each chromosome in the population. F, D, A, C, and R are the problem input data matrices. The notation is the same as the notation that was defined in the COP model. "costcalc" is the function to calculate the fitness value based on the defined COP model, and "unfitness_calc" is the function to calculate the unfitness of the solutions. The unfitness of a solution is a measure of infeasibility (in relative terms) as calculated by the formula in [2].
It should be noted that the unfitness value is equal to 0 if the solution is feasible. Figure 7 illustrates the code snippets to calculate these two values. Figure 8 is the code snippet for step 0 and step 1. In Figure 8, "n_pop" is the number of chromosomes in the population (e.g., 5), "N" is the number of locations, and "M" is the number of machines. Figure 7: costcalc and unfitness_calc functions % initialization tic %rng('default') P0 = randi(N,n_pop,M); % Generate 'n_pop' random solutions fitness_P0 = costcalc(P0,F,D,A); [extra_cap_P0,unfitness_P0] = unfitness_calc(P0,C,R); final_P0 = [P0 int16(fitness_P0) unfitness_P0]; disp(final_P0) Table 1 shows an example of the initial population constructed by the algorithm in steps 0 and 1. **Step 2:** The next step is to find the population's best solution and determine its fitness. Since there can be a case where an unfit solution has a lower fitness value than a fit solution, the best solution is chosen only from fit solutions, and the fit solution that has the minimum fitness value is considered the best solution. If there is no solution with unfitness equal to zero in the population, the solution with the least amount of unfitness is chosen as the best solution. The fitness value for this solution is set to a large number (e.g., 999,999,999). Figure 9 shows the code snippet for this step. \begin{table} \begin{tabular}{c c c c c c c c} \hline \multicolumn{6}{c}{Solution} & Fitness & Unfitness \\ \hline 4 & 4 & 3 & 4 & 4 & 3 & 13030 & 400 \\ 2 & 2 & 1 & 1 & 2 & 3 & 18438 & 120 \\ 2 & 4 & 4 & 2 & 1 & 4 & 16340 & 190 \\ 3 & 2 & 3 & 1 & 3 & 2 & 20130 & 200 \\ 4 & 1 & 3 & 4 & 4 & 4 & 18785 & 280 \\ \hline \end{tabular} \end{table} Table 1: Initial population example for small instance problem Figure 8: step 0 & step 1: generate the initial population **Stopping criterion:** The stopping criterion should be checked before moving on to the next step of the algorithm. We defined a parameter "K_iter" here, which is the number of iterations in which a new non-duplicate child is generated but the best solution is not improved. The stopping criterion is for the algorithm to run for a predefined number of iterations (e.g., maxiter = 100000) or for K_iter to reach a predefined value (e.g., max_k = 100). The algorithm goes to step 3 if neither of those two criteria is met. If either of them is met, the algorithm stops and reports the best-found solution and the respective fitness value. **Step 3:** Select two parent solutions for reproduction. A tournament selection scheme is used for this step. Two individuals are chosen randomly from the population. The more fit individual is then allowed into the mating pool. To produce a child, two tournaments are held, each of which produces one parent. Note that the selection criteria do not involve the unfitness value of an individual [2]. Duplicate parents are also not acceptable in the mating pool. Figure 10 shows the code snippet and an example of it for the small problem instance.
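Since the bodies of the Figure 7 listings are not reproduced in the text above, the following is a minimal MATLAB sketch of what "costcalc" and "unfitness_calc" may look like, written directly from the COP model of the previous subsection. It is an illustration rather than the authors' actual code: the loop layout and the use of accumarray are our choices, the unit transportation costs \(c_{ijkl}\) are assumed equal to 1, and the unfitness is computed here as the absolute capacity overflow rather than the relative measure of [2].

```matlab
function TC = costcalc(P, F, D, A)
% COP objective for each row of P (P(s,i) = location of machine i).
% A is M-by-N assignment costs, F is M-by-M flows, D is N-by-N distances.
[n_pop, M] = size(P);
TC = zeros(n_pop, 1);
for s = 1:n_pop
    loc = P(s,:);
    for i = 1:M
        TC(s) = TC(s) + A(i, loc(i));                    % assignment cost
        for j = [1:i-1, i+1:M]
            TC(s) = TC(s) + F(i,j) * D(loc(i), loc(j));  % transport cost (c_ijkl = 1 assumed)
        end
    end
end
end

function [extra_cap, unfit] = unfitness_calc(P, C, R)
% extra_cap(s,k) = space used at location k minus its capacity c_k;
% unfit(s) = total capacity violation (0 for a feasible chromosome).
[n_pop, ~] = size(P);
N = numel(C);
extra_cap = zeros(n_pop, N);
for s = 1:n_pop
    used = accumarray(P(s,:)', R(:), [N 1])';  % space used per location
    extra_cap(s,:) = used - C(:)';
end
unfit = sum(max(extra_cap, 0), 2);
end
```

With these two functions, the fitness and unfitness columns of Table 1 follow from a single call each on the population matrix P0.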
Figure 9: step 2: find the best solution in current population % Generate the mating pool with the tournament scheme M_pool = zeros(2,M); choose = randperm(size(P0,1),2); % Choose 2 random solutions if fitness_P0(choose(1)) <= fitness_P0(choose(2)) % compare the fitness values M_pool(1,:) = P0(choose(1),:); else M_pool(1,:) = P0(choose(2),:); end ii=1; while ii < 2 choose = randperm(size(P0,1),2); % choose 2 random solutions if fitness_P0(choose(1)) <= fitness_P0(choose(2)) % compare the fitness values if P0(choose(1),:) == M_pool(1,:) continue end M_pool(2,:) = P0(choose(1),:); ii = 2; else if P0(choose(2),:) == M_pool(1,:) continue end M_pool(2,:) = P0(choose(2),:); ii = 2; end if ~isempty(find(M_pool(2,:), 1)) break end new generation is: 1 2 3 1 4 1 2273 10 3 1 1 2 4 1 19373 10 2 3 4 1 1 2 19610 20 4 4 2 1 1 3 20400 110 1 1 4 2 4 1 18878 110 Iteration Number: 6 The mating pool is: 3 1 1 2 4 1 2 3 4 1 1 2 In this example, the second and third solutions were randomly chosen for the mating pool. **Step 4:** Generate a child solution by applying a crossover operator to the selected parents in the mating pool. A simple one-point crossover operator is used for this step, in which a crossover point is randomly selected, and the child solution will consist of the first j genes taken from the first parent and the remaining (M-j) genes taken from the second parent, or vice versa, with equal probabilities [2]. Figure 11 shows the code snippet and an example of it for the small problem instance. Figure 10: step 3: generating mating pool from the population In this example, the second position and the first child are randomly chosen as the crossover point and the final child. **Step 5:** The crossover procedure is followed by a mutation procedure. This mutation procedure involves exchanging the elements in two randomly selected genes. It should be noted that mutation will only happen if the two exchanged elements are not the same. There might be a case where no mutation happens. The authors in [2] showed that the GA with only the crossover and mutation operators is effective in producing good-quality solutions. Figure 12 shows the code snippet and an example of it for the small problem instance. Figure 11: step 4: one-point crossover operation function mutated=mutation_beasly(child) mutated=child; nvar=length(child); j1=randi([1 nvar-1]); j2=randi([j1+1 nvar]); nj1=mutated(j1); nj2=mutated(j2); mutated(j1)=nj2; mutated(j2)=nj1; if child==mutated disp('No Mutation happened') else disp('mutation happened') end end child = 1x6: 3 1 4 1 1 2 In this example, mutation happened and positions 3 and 5 were exchanged. **Step 6:** In this step, we will handle the unfitness of the generated child using the method introduced in [2]. Please note that this step will only happen for unfit children and will be skipped for those with unfitness equal to zero (i.e., no unfitness). For each location in the solution, if the resource capacity of the location is exceeded (i.e., an overused location), then a single randomly selected machine is reassigned from the overused location to the next underused location (in order) that has adequate remaining capacity (if one can be found). The result will be a new child with less or no unfitness. Figure 13 shows the code snippet and an example of it for the small problem instance.
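The body of the one-point crossover listing did not survive in the text above, so here is a minimal sketch consistent with the description in Step 4. The function name and the use of rand for the 50/50 choice of parent ordering are our assumptions, not the authors' code.

```matlab
function child = crossover_onepoint(M_pool)
% One-point crossover (Step 4): pick a random cut point j, take the
% first j genes from one parent and the remaining genes from the other,
% choosing the parent order with equal probability.
nvar = size(M_pool, 2);
j = randi(nvar - 1);            % crossover point
if rand < 0.5
    child = [M_pool(1, 1:j), M_pool(2, j+1:nvar)];
else
    child = [M_pool(2, 1:j), M_pool(1, j+1:nvar)];
end
end
```

For the mating pool of the example above, a cut at j = 2 with the first ordering yields child = (3, 1, 4, 1, 1, 2), which matches the child fed into the mutation example.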
Figure 12: step 5: pairwise exchange mutation function [child, fit_child,unfitness_child] = beasly_unfit(child,C,R,F,D,A,T) [extra_cap_child,unfitness_child] = unfitness_calc(child,C,R); if unfitness_child > 0 disp('there are unfitness in solutions') over_used_loc = find(extra_cap_child(:)>0); under_used_loc = find(extra_cap_child(:)<0); num_overused = size(over_used_loc,1); for k=1:num_overused % find all the m/c's that are assigned to the overused loc mc=find(child(:)==over_used_loc(k)); choose=mc(randperm(numel(mc),1)); % Randomly choose one of them if ~isempty(under_used_loc) extra=min(extra_cap_child(under_used_loc)); if -extra<R(choose) continue else inde=find(extra_cap_child==extra,1); child(choose)=inde; % assign the m/c to an underused loc end end [extra_cap_child,~]=unfitness_calc(child,C,R); under_used_loc = find(extra_cap_child(:)<0); In this example, there was unfitness in the solution, so the algorithm changed the assignments accordingly and the new child's unfitness is zero. **Step 7:** The child solution will take the place of one chromosome in the population and the new population is generated. In our population replacement strategy, the child replaces the population solution that has the highest unfitness value (i.e., the most unfit solution). If the population consists of all feasible solutions, the solution with the maximum fitness is removed. This replacement plan aids in eliminating infeasible solutions in the population. It should be noted that a duplicate child, which is defined as a solution whose structure is identical to any of the solution structures already in the population, is not permitted to enter the new population because, in that case, the population might end up being made up entirely of identical solutions, which would severely restrict the GA's ability to produce new solutions. Figure 14 shows the code snippet and an example of it for the small problem instance. In this example, the newly generated child with unfitness equal to 20 substitutes for the solution with unfitness equal to 110, thus reducing the overall unfitness of the population. Finally, the best-found solution and best-found objective function value are updated, and the algorithm goes back to step 2. Steps 2 to 7 are repeated until one of the previously mentioned stopping criteria is met. The MATLAB output for a few complete iterations is printed in the appendix to check how the algorithm works. The GA algorithm is followed by applying a local neighborhood search technique with a steepest descent improvement strategy. This makes sure that the result of the GA is at a local (or, hopefully, global) optimum. If the solution is not at a local optimum, the local neighborhood search algorithm will generate a new solution at a local optimum. Figure 15 shows the code snippet. Figure 14: step 7: update the population improvement = 1; iter = 0; while improvement iter = iter + 1; i = 0; NBH_child = steep_desc(Best_sol,C,R); cost_NBH = costcalc(NBH_child,F,D,A,T); [~,unfit_NBH] = unfitness_calc(NBH_child,C,R); final = [NBH_child int64(cost_NBH) unfit_NBH]; found = find(unfit_NBH==0); fitted = final(found,:); [mincost, minind] = min(fitted(:,M+1)); if mincost < Z_best disp('a better sol found') Best_sol = fitted(minind,1:M); disp(Best_sol) Z_best = mincost; disp(Z_best) else improvement = 0; end end disp(Best_sol); disp(Z_best); ## 4 Computational Results In this part, the results of implementing this algorithm on three different problem instances (i.e., small, medium, and large) are presented.
The results are compared with the results from solving the same problems with MPL/CPLEX software and the results presented in [1]. A MacBook Pro 2018 with a 2.3 GHz quad-core Intel Core i5 CPU and 8 GB of RAM was used as the hardware, along with MATLAB R2022b software. In this project's algorithm, there are two main parameters that can affect the quality of the solution and the time to reach that solution. The first parameter is the number of populations to generate and update throughout the algorithm (i.e., the \(n\_pop\) variable in the codes), and the second parameter is the number of iterations in which a new non-duplicate child is generated but the best solution is not improved (i.e., the \(max\_k\) variable in the codes). Figure 15: local neighborhood search with steepest descent A design of experiments approach with two factors and three levels for each factor (low, medium, high) is adopted to determine the best parameters for each problem instance. Thus, each experiment includes nine runs of the algorithm with different sets of parameters. Table 2 shows the different parameter settings that have been tested for each problem instance. Due to the random nature of many steps in the algorithm, each experiment was repeated three times for diversification purposes. The best-performing result in terms of quality (i.e., percent deviation from the best-found solution) and computational time is chosen for the respective parameter setup. In total, 27 experiments have been done for each problem instance. The summary of the results is in Table 3 and the details are in the appendix section. In Table 3, n_pop and Max_k are the algorithm parameters that gave the best solution in terms of solution quality and computational time. Time is the amount of elapsed time for the algorithm to complete the experiment run in seconds. Z_best_GA is the objective function value result of our algorithm after finishing the GA and local neighborhood search. \begin{table} \begin{tabular}{l l l l l l l} \hline Problem & n\_pop & n\_pop & n\_pop & Max\_k (low) & Max\_k & Max\_k \\ Instance & (low) & (medium) & (high) & & (medium) & (high) \\ \hline Small & 5 & 10 & 15 & 10 & 40 & 70 \\ Medium & 35 & 50 & 75 & 250 & 500 & 700 \\ Large & 50 & 75 & 100 & 300 & 500 & 700 \\ \hline \end{tabular} \end{table} Table 2: algorithm parameters \begin{table} \begin{tabular}{l l l l l l l l} \hline Problem & n\_pop & Max\_k & Time (s) & Z\_best\_GA & Z\_best\_C & Z\_best\_found & \%D \\ Instance & & & & & & & \\ \hline Small & 5 & 70 & 12.48 & 17165 & 17165 & n/a & 0\% \\ Medium & 50 & 250 & 578.91 & 1471896 & 1714264 & 1471896 & 0\% \\ Large & 100 & 500 & 2148.0701 & 11261034 & 12878101 & 11217503 & 0.39\% \\ \hline \end{tabular} \end{table} Table 3: Computational result summary Z_best_C is the best-found objective function value using MPL/CPLEX software. Z_best_found is the best-found objective function value from [1]. And finally, %D is the percent deviation between our solution and Z_best_found. The computational result in this project shows that increasing the number of chromosomes in the population doesn't necessarily guarantee a better solution. It also shows that the initial randomly generated population may influence the final solution quality. ## 5 Conclusions In this project, a modified version of the genetic algorithm developed by [2] was presented and applied to three different instances of the GQAP problem.
The results, best-found solutions, and objective function values of the GA are compared with the best-found solutions calculated by solving the problem mathematically with MPL/CPLEX software and with other metaheuristics. The results show that the developed GA can produce better results more efficiently and in a fraction of the time compared to the MPL/CPLEX software results. Moreover, the results are interpretable and can be used to solve real-life problems. It should be noted that the main objective of this project was to implement the GA and obtain acceptable results. There is a lot of room for improving the algorithm to perform faster and more efficiently; this can be considered in future work.
2308.01281
Boundary conditions and infrared divergences
We review the procedure to construct quasi-free ground states, for real scalar fields whose dynamics is dictated by the Klein-Gordon equation, on standard static Lorentzian manifolds with a time-like boundary. We observe that, depending on the assigned boundary condition of Robin type, this procedure does not always lead to the existence of a suitable bi-distribution $w_2\in \mathcal{D}'(M\times M)$ due to the presence of infrared divergences. As a concrete example we consider a Bertotti-Robinson spacetime in two different coordinate patches. In one case we show that infrared divergences do not occur only for Dirichlet boundary conditions as one might expect a priori, while, in the other case, we prove that they occur only when Neumann boundary conditions are imposed at the time-like boundary.
Lissa de Souza Campos, Claudio Dappiaggi, Luca Sinibaldi
2023-08-02T17:07:30Z
http://arxiv.org/abs/2308.01281v1
# Boundary conditions and infrared divergences ###### Abstract We review the procedure to construct quasi-free ground states, for real scalar fields whose dynamics is dictated by the Klein-Gordon equation, on standard static Lorentzian manifolds with a time-like boundary. We observe that, depending on the assigned boundary condition of Robin type, this procedure does not always lead to the existence of a suitable bi-distribution \(w_{2}\in\mathcal{D}^{\prime}(M\times M)\) due to the presence of infrared divergences. As a concrete example we consider a Bertotti-Robinson spacetime in two different coordinate patches. In one case we show that infrared divergences do not occur only for Dirichlet boundary conditions as one might expect a priori, while, in the other case, we prove that they occur only when Neumann boundary conditions are imposed at the time-like boundary. keywords: quantum field theory on curved spacetimes, infrared divergences, Bertotti-Robinson spacetime ## 1 Introduction In the past decade, we have witnessed a steadily growing interest towards the interplay between boundary conditions and the construction of free and interacting quantum field theories living on a large class of backgrounds possessing a (conformal) timelike boundary. From a physical viewpoint the reasons are several ranging from the desire of a better understanding of the structural aspects of a quantum field theory on asymptotically AdS spacetimes to the necessity of a thorough investigation of newly discovered phenomena, among which remarkable are the anti-Unruh and the anti-Hawking effect, see _e.g._[1; 2; 3; 4]. In this endeavor, even when focusing on the simple scenario of a free, scalar field theory, one realizes from the very beginning that the presence of a timelike boundary entails already at a classical level a sharp difference from the standard formulation when the underlying background, say \((M,g)\), is globally hyperbolic. As a matter of fact, if the dynamics is ruled by a Klein-Gordon operator \(P=\Box_{g}-m^{2}-\xi R\), see Equation (1), the associated initial value problem has to be supplemented with the assignment of a suitable boundary condition at \(\partial M\). For several years, it has been customary to focus the attention only on that of Dirichlet type, since it has the distinguished property of being always admissible, in the sense that it entails not only that the underlying Cauchy problem is well-posed, but also that \(P\) admits unique advanced and retarded fundamental solutions, which are in turn the building blocks to implement the canonical commutation relations at a quantum level. Yet, in the past few years it has been highlighted that under suitable constraints on the parameters of the theory, in particular \(m\) and \(\xi\), one has the freedom of assigning to a scalar field theory a much larger class of admissible boundary conditions, among which noteworthy are those of Robin type, see _e.g._[5; 6; 7; 8; 9; 10; 11; 12; 13]. This realization has prompted the beginning of several research projects aimed at scrutinizing both the structural aspects and the physical properties of the underlying quantum counterpart. In this respect one of the main questions which needs to be addressed concerns the identification of a class of physically admissible quantum states. When the underlying spacetime is globally hyperbolic, it is widely accepted that one needs to restrict the attention to the so-called _Hadamard states_[14]. 
Once more denoting by \((M,g)\) the underlying background, this entails that one needs to identify a bi-distribution \(\omega_{2}\in\mathcal{D}^{\prime}(M\times M)\), which is a solution to the underlying equation of motion, whose antisymmetric part coincides up to a multiplicative constant with the difference between the advanced and the retarded fundamental solutions, and which abides by a suitable constraint on its singular structure. At a physical level this establishes a condition on the ultraviolet behaviour of the quantum state which has far-reaching consequences, among which it is worth recalling the existence of a covariant notion of Wick polynomials and the finiteness of the quantum fluctuations of all observables, see _e.g._[15]. In view of these remarks, in the past few years, much effort has been devoted to finding a suitable generalization of the Hadamard condition whenever the underlying background \((M,g)\) admits a timelike boundary. Thanks to the effort of several research groups, it is nowadays understood what a good notion of Hadamard state is in this framework and several explicit examples have been constructed, see _e.g._[16; 17; 5; 18]. Yet, in all these works, the focus has always been on the one hand on the control of the ultraviolet behaviour of the underlying two-point correlation function. On the other hand, barring some very specific examples, it has always been assumed that \((M,g)\) is stationary, since the existence of a timelike Killing field entails that one can focus the attention either on ground or on thermal/KMS states as natural candidates for being of Hadamard form. In almost all scenarios considered, a great deal of attention has always been given to controlling the underlying ultraviolet behaviour. Yet almost no attention has been given to the possibility that infrared divergences can occur and that they are connected to the choice of an underlying boundary condition. The main goal of this paper is to warn about this possibility which, from a technical viewpoint, is tantamount to the existence of an obstruction in finding in the first place a suitable bi-distribution \(\omega_{2}\in\mathcal{D}^{\prime}(M\times M)\) out of which one can define a desired quantum state. More precisely, we shall proceed as follows. In the next section we shall recall succinctly and in great generality how one can construct on a globally hyperbolic spacetime with a timelike boundary \((M,g)\) a two-point correlation function associated to a ground state for a real, massive scalar field. This part of the paper is based on existing literature and its main goal is to highlight where a potential infrared divergence can occur in the most general scenario, so as to emphasize that this obstruction is always a potential threat. Subsequently in Section 3 we focus the attention on a specific scenario, namely a real, massive scalar field on the four-dimensional Bertotti-Robinson spacetime [19; 20], here presented in two different coordinate patches. On the one hand this background is relevant at a physical level since it is both an exact solution of the Einstein-Maxwell equations and it represents a good approximation of the near horizon geometry of a Reissner-Nordstrom black hole, see _e.g._[21]. On the other hand, the background isometries are such as to allow us to give a detailed, analytic construction of the underlying fundamental solutions with Robin boundary conditions.
Subsequently we shall turn our attention to the construction of the two-point correlation functions of an underlying ground state and we shall show explicitly that, in one of the coordinate patches considered, no infrared singularity is present only when Dirichlet boundary conditions are imposed. On the contrary, in the second coordinate patch, this pathology occurs only if we consider Neumann boundary conditions, hence allowing in this case also those of Robin type. ## 2 General Construction In this section we review succinctly the construction of the two-point correlation function associated to the ground state of a real, massive scalar field, so as to highlight at which stage a potential infrared singularity might occur and the relation between its emergence and the underlying boundary condition. Henceforth with \((M,g)\) we refer to a globally hyperbolic, oriented and time-oriented Lorentzian manifold with timelike boundary of dimension \(\dim M=n\geq 2\), as formalized in [23]. In addition, we assume that \((M,g)\) is standard static, namely it is isometric to \(\mathbb{R}\times\Sigma\) with line element \[ds^{2}=-\beta dt^{2}+h,\] where \(t\in\mathbb{R}\) plays the role of the time coordinate along the \(\mathbb{R}\)-direction, while \(\beta\) is a positive, time-independent, smooth function. In addition \(h\) is a time-independent Riemannian metric on \(\Sigma\). As a consequence the boundary \(\partial M\simeq\mathbb{R}\times\partial\Sigma\) is also naturally endowed with a static metric. On top of this class of backgrounds we consider a real scalar field \(\Phi:M\to\mathbb{R}\) whose dynamics is ruled by \[P\Phi=\left(\Box_{g}-V\right)\Phi=0, \tag{1}\] where \(\Box_{g}\) is the d'Alembert wave operator built out of the underlying metric \(g\), while \(V\) is a time-independent potential, which we assume to be smooth in the interior of \(M\). A rather common choice for such a potential is \(V=m^{2}+\xi R\), where \(m\geq 0\) plays the role of a mass parameter while \(\xi\in\mathbb{R}\) denotes an arbitrary coupling to the scalar curvature \(R\). Since \(M\) possesses a timelike boundary, the solutions to Equation (1) cannot be constructed only by assigning initial data on a Cauchy surface \(\Sigma_{t_{0}}\simeq\{t_{0}\}\times\Sigma\), \(t_{0}\in\mathbb{R}\), but it is necessary to supplement them with suitable boundary conditions on \(\partial M\). Among the plethora of possible choices, we are mainly interested in those that entail the existence of unique advanced and retarded fundamental solutions associated to \(P\) as well as of a two-point correlation function \(\omega_{2}\in\mathcal{D}^{\prime}(M\times M)\), which allows one to define a quantum state for the field \(\Phi\) whose ultraviolet singular behaviour is of Hadamard form. From a mathematical viewpoint the first problem has been addressed in [6], while the second one has been mainly discussed in [24, 25], although only on asymptotically anti-de Sitter spacetimes. It is important to stress that, in these works, the main task has always been the control of the ultraviolet behaviour of \(\omega_{2}\), while no attention has been given to the possibility that infrared divergences might occur. This translates into the presence of an obstruction to the existence of \(\omega_{2}\) as a well-defined distribution and, although this is abstractly known as a lurking possibility, to the best of our knowledge, it has not been sufficiently stressed that this might be directly connected to the choice of an underlying boundary condition.
As mentioned in Section 1 this is the main goal of this paper and, to this end, we start by recalling succinctly the procedure leading to the construction of ground states in the case in hand. In order to make our message crystal clear, we shall focus the attention only on boundary conditions of Robin type, although our conclusions can be drawn also for more general choices. Hence, starting from Equation (1), we consider the Fourier transform along the time-direction, namely \[\widehat{\Phi}(\omega,x)\doteq\int\limits_{\mathbb{R}}dt\,e^{i\omega t}\Phi(t,x),\] where \(x\) stands for a choice of coordinates on \(\Sigma\). Therefore Equation (1) can be equivalently written as an eigenvalue problem \[K\widehat{\Phi}=\omega^{2}\widehat{\Phi}, \tag{2}\] where, denoting by \(\Delta_{h}\) the Laplace-Beltrami operator built out of the Riemannian metric \(h\), the operator \(K\) reads \[K=\beta\Delta_{h}-\frac{1}{2}h^{ij}(\partial_{i}\beta)\partial_{j}+\beta V. \tag{3}\] In studying Equation (1), the first step consists of establishing whether the Klein-Gordon operator \(P\) admits unique advanced and retarded fundamental solutions \(E_{adv}/E_{ret}\in\mathcal{D}^{\prime}(M\times M)\). As already mentioned, this is not the case unless one assigns suitable boundary conditions for the field \(\Phi\) on \(\partial M\simeq\mathbb{R}\times\partial\Sigma\). To this end, as discussed thoroughly in [6], it is convenient to read \(K\) as a second order partial differential operator on \(L^{2}(\Sigma)\), the space of square-integrable functions on a Cauchy surface \(\Sigma\) with respect to the metric-induced volume form \(\beta^{-1}d\mu_{h}\). In this way, it is possible to prove that admissible boundary conditions are in 1:1 correspondence with the self-adjoint extensions of \(K\) on \(L^{2}(\Sigma)\). To make our case clear, among the plethora of possible choices, we restrict our attention only to those scenarios for which a boundary condition that is _static_ and of _Robin type_ is admissible, namely there exists \(\alpha\neq\alpha(t)\in C^{\infty}(\partial M)\) such that \[\left.\Phi\right|_{\partial M}=\alpha\,\partial_{n}\Phi|_{\partial M}, \tag{4}\] where \(\partial_{n}\) denotes the derivative along the direction normal to the boundary. Observe that in Equation (4), the symbol \(|_{\partial M}\) denotes the restriction both of \(\Phi\) and of \(\partial_{n}\Phi\) on the boundary. This operation is legitimate only if both \(\Phi\) and \(\partial_{n}\Phi\) are continuous on \(\partial M\), a feature which is strongly dependent both on the choice of \(V\) and of \(g\). When the solution \(\Phi\) to Equation (1) is not regular enough, all restrictions need to be replaced by suitable trace maps, which we refrain from defining here so as to avoid unnecessary technical details. Hence we refer an interested reader to [6] for the analysis of the general scenario. As a consequence of [6, Prop. 19], for each boundary condition as in Equation (4) there exists a corresponding self-adjoint extension on \(L^{2}(\Sigma)\) of the operator \(K\) as in Equation (3), which we denote by \(K_{\alpha}\). In turn, this result combined with [6, Thm.
30], entails that the operator \(P\) admits unique advanced and retarded Green's operators \(E_{\alpha}^{\pm}\), whose associated integral kernels read \(\mathcal{E}_{\alpha}^{-}(x,x^{\prime})=\Theta(t-t^{\prime})\mathcal{E}_{\alpha}(x,x^{\prime})\) and \(\mathcal{E}_{\alpha}^{+}(x,x^{\prime})=-\Theta(t^{\prime}-t)\mathcal{E}_{\alpha}(x,x^{\prime})\), where \(\mathcal{E}_{\alpha}\in\mathcal{D}^{\prime}(M\times M)\) is such that, for all \(f,f^{\prime}\in C_{0}^{\infty}(M)\), \[\mathcal{E}_{\alpha}(f,f^{\prime})=\int\limits_{\mathbb{R}^{2}}dtdt^{\prime}\,\left(f(t),K_{\alpha}^{-\frac{1}{2}}\sin[K_{\alpha}^{\frac{1}{2}}(t-t^{\prime})]f^{\prime}(t^{\prime})\right)_{L^{2}(\Sigma)}, \tag{5}\] where \((,)_{L^{2}(\Sigma)}\) denotes the inner product of \(L^{2}(\Sigma)\), while \(K_{\alpha}^{-\frac{1}{2}}\sin[K_{\alpha}^{\frac{1}{2}}(t-t^{\prime})]\) is defined in terms of the functional calculus for \(K_{\alpha}\). Having established the existence of the advanced and retarded Green's operators associated to \(P\), the next step in the construction of a quantum field theory consists of identifying \(\omega_{2}\in\mathcal{D}^{\prime}(M\times M)\), the two-point correlation function of a quantum state. In the case in hand, and considering a boundary condition as per Equation (4), this amounts to solving the system \[\left\{\begin{array}{l}(P\otimes\mathbb{I})\,\omega_{2,\alpha}=0,\quad(\mathbb{I}\otimes P)\,\omega_{2,\alpha}=0,\\ \omega_{2,\alpha}(f,f^{\prime})-\omega_{2,\alpha}(f^{\prime},f)=i\mathcal{E}_{\alpha}(f,f^{\prime})\quad\forall f,f^{\prime}\in\mathcal{D}(M),\end{array}\right. \tag{6}\] where \(\mathcal{E}_{\alpha}\) is as per Equation (5) and where the subscript \(\alpha\) in \(\omega_{2,\alpha}\) highlights the dependence of the two-point function on the boundary condition. Still exploiting the functional calculus for \(K_{\alpha}\), Proposition 5.5 in [24] entails that a solution to Equation (6), corresponding to the two-point function of a ground state of Hadamard form, exists if in turn there exists \(\omega_{2,\alpha}\in\mathcal{D}^{\prime}(M\times M)\) which reads \[\omega_{2,\alpha}(f,f^{\prime})=\int\limits_{\mathbb{R}^{2}}dtdt^{\prime}\,\left(f(t),K_{\alpha}^{-\frac{1}{2}}\exp[iK_{\alpha}^{\frac{1}{2}}(t-t^{\prime})]f^{\prime}(t^{\prime})\right)_{L^{2}(\Sigma)}. \tag{7}\] While such an expression appears innocuous, observe that the symmetric part of \(\omega_{2,\alpha}\) reads formally \[\omega_{2,\alpha}^{S}(f,f^{\prime})=\int\limits_{\mathbb{R}^{2}}dtdt^{\prime}\,\left(f(t),K_{\alpha}^{-\frac{1}{2}}\cos\left[K_{\alpha}^{\frac{1}{2}}(t-t^{\prime})\right]f^{\prime}(t^{\prime})\right)_{L^{2}(\Sigma)}. \tag{8}\] Yet, contrary to the antisymmetric part of \(\omega_{2}\), which coincides up to multiplicative constants with Equation (5), Equation (8) is potentially ill-defined due to the 0-modes of the operator \(K_{\alpha}\), which entail that the integrand might fail to be integrable on account of the action of \(K_{\alpha}^{-\frac{1}{2}}\cos\left[K_{\alpha}^{\frac{1}{2}}(t-t^{\prime})\right]\). This potential singularity is of infrared type since it involves the low-energy behaviour of the operator \(P\) as in Equation (1) and, in turn, it is directly dependent on the spectrum of \(K_{\alpha}\), whose form depends explicitly on the underlying boundary condition as per Equation (4).
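To make the source of the obstruction more explicit, the following schematic rewriting of Equation (8) at equal test functions is ours and suppresses the on-shell evaluation of the time Fourier transforms: denoting by \(E_{\alpha}\) the projection-valued spectral measure of \(K_{\alpha}\),

\[\omega_{2,\alpha}^{S}(f,f)\;\sim\;\int_{0}^{\infty}\frac{1}{\sqrt{\mu}}\,d\big(\widehat{f},E_{\alpha}(\mu)\widehat{f}\big)_{L^{2}(\Sigma)},\]

so that the integral is well-defined only if the spectral measure of \(K_{\alpha}\) assigns sufficiently little weight to a neighbourhood of \(\mu=0\): an atom at \(\mu=0\) (a genuine 0-mode), or spectral weight accumulating at \(\mu=0\) too fast, renders the right-hand side divergent. Since the spectral measure changes with \(\alpha\), so does the verdict.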
Having established at a generic and abstract level where the source of a possible infrared singularity lies, in the next section we discuss an explicit example that highlights it more concretely, making particularly evident the interplay with the underlying boundary condition. ## 3 Infrared Divergences on Bertotti-Robinson Spacetime In order to give an example of the feature outlined in the previous section, we consider as background \((M,g)\) a Bertotti-Robinson spacetime, which is a solution to the Einstein-Maxwell equations and approximates the near horizon geometry of an extremal black hole with unit charge, see e.g. [19; 20; 21; 22]. As a manifold, \(M\) is globally diffeomorphic to \(\mathrm{CAdS}_{2}\times\mathbb{S}^{2}\), where \(\mathrm{CAdS}_{2}\) stands for the universal cover of the two-dimensional anti-de Sitter spacetime. In the following we start by considering a distinguished patch \((\widetilde{M},g)\) of \((M,g)\) which makes manifest the connection with a black hole spacetime, see [21]. Herein the line element reads \[ds^{2}=-(\rho^{2}-1)dt^{2}+(\rho^{2}-1)^{-1}d\rho^{2}+d\Omega^{2}(\theta,\varphi), \tag{9}\] where \(t\in\mathbb{R}\), \(\rho\in(1,\infty)\), while \(d\Omega^{2}(\theta,\varphi)=d\theta^{2}+\sin^{2}\theta\,d\varphi^{2}\) is the line element of the unit two-sphere. On top of \((\widetilde{M},g)\) we consider a massive, real scalar field \(\Psi:\widetilde{M}\rightarrow\mathbb{R}\), whose dynamics is ruled by \[P_{0}\Psi=\left(\square_{g}-m^{2}\right)\Psi=0, \tag{10}\] where \(\square_{g}\) is the d'Alembert wave operator built out of \(g\) and where we are discarding any coupling to the underlying scalar curvature since it is vanishing. Since \((\widetilde{M},g)\) is a static spacetime with a (conformal) timelike boundary at \(\rho\rightarrow\infty\), _cf._ Equation (9), it is legitimate to look for a bi-distribution \(\omega_{2}\in\mathcal{D}^{\prime}(\widetilde{M}\times\widetilde{M})\) which represents the two-point correlation function of a ground state. In the following we follow a procedure based on a mode decomposition, which ultimately yields a counterpart both of Equation (5) and of Equation (7) in the case in hand. Let us therefore consider a solution \(\Psi\) of Equation (10) which admits the following decomposition \[\Psi(t,\rho,\theta,\phi)=\sum\limits_{l=0}^{\infty}\sum\limits_{p=-l}^{l}\int\limits_{\mathbb{R}}e^{-i\omega t}\psi_{\omega lp}(\rho)Y_{l}^{p}(\theta,\phi), \tag{11}\] where \(Y_{l}^{p}(\theta,\varphi)\) is the standard spherical harmonic, eigenfunction of the Laplacian on the unit 2-sphere, namely \(\Delta_{\mathrm{S}^{2}}Y_{l}^{p}(\theta,\varphi)=-l(l+1)Y_{l}^{p}(\theta,\varphi)\). On account of Equation (10), \(\psi_{\omega lp}(\rho)\) satisfies the radial equation \[(\mathbf{L}+\omega^{2})\psi(\rho):=\left\{(\rho^{2}-1)\bigg{[}\frac{d}{d\rho}\bigg{(}(\rho^{2}-1)\frac{d}{d\rho}\bigg{)}-l(l+1)-m^{2}\bigg{]}+\omega^{2}\right\}\psi(\rho)=0, \tag{12}\] which is a Sturm-Liouville problem with eigenvalue \(-\omega^{2}\), see [27]. Observe that we have suppressed the subscripts highlighting the dependence of \(\mathbf{L}\), as well as of the solutions of the radial equation, on the spectral parameters \(\omega,l,p\) so as to avoid an unnecessarily heavy notation. A basis of solutions of Equation (12) is given by the associated Legendre functions \[\psi_{1}(\rho)=P_{\nu}^{i\omega}(\rho), \tag{13a}\] \[\psi_{2}(\rho)=Q_{\nu}^{i\omega}(\rho), \tag{13b}\] with \(\nu=\frac{1}{2}(\sqrt{1+4l(l+1)+4m^{2}}-1)\), see [26, §14.2].
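As a consistency check (our own one-line derivation), the stated value of \(\nu\) follows from matching Equation (12) with the associated Legendre equation: dividing Equation (12) by \(\rho^{2}-1\) and changing the overall sign yields

\[\frac{d}{d\rho}\Big[(1-\rho^{2})\frac{d\psi}{d\rho}\Big]+\Big[\nu(\nu+1)-\frac{\mu^{2}}{1-\rho^{2}}\Big]\psi=0,\qquad \nu(\nu+1)=l(l+1)+m^{2},\quad \mu=\pm i\omega,\]

and the positive root of \(\nu^{2}+\nu-l(l+1)-m^{2}=0\) is precisely \(\nu=\frac{1}{2}(\sqrt{1+4l(l+1)+4m^{2}}-1)\).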
The next step consists of constructing a fundamental solution associated to the differential operator \(\mathbf{L}\). Following the theory of Sturm-Liouville operators, see [27], first of all we allow \(\omega\) to take any complex value and we look for solutions of Equation (12) that belong to \(\mathcal{H}_{1}:=L^{2}((1,c),d\mu)\) or to \(\mathcal{H}_{\infty}:=L^{2}((c,\infty),d\mu)\), with \(d\mu=(\rho^{2}-1)^{-1}d\rho\), while \(c\in(1,\infty)\) is an arbitrary, but fixed number. Since both \(\psi_{1}\) and \(\psi_{2}\) are smooth functions, it suffices to look at their asymptotic expansions in a neighborhood of \(\rho=1\) and as \(\rho\to\infty\). Using [26, §14.8], it turns out that \(Q_{\nu}^{\pm i\omega}\), \(P_{\nu}^{\pm i\omega}\in\mathcal{H}_{1}\) if \(\pm\mathrm{Im}(\omega)>0\), while they both lie in \(\mathcal{H}_{\infty}\) if and only if \(0\leq\nu<\frac{1}{2}\), which is equivalent to \(l=0\) and \(m^{2}\in[-\frac{1}{4},\frac{3}{4})\). Whenever \(\nu>\frac{1}{2}\), only \(Q_{\nu}^{\pm i\omega}\) belongs to \(\mathcal{H}_{\infty}\). Hence, using the standard terminology proper of Sturm-Liouville problems summarized in [5], at \(\rho=\infty\) we say that \(\mathcal{Q}_{\nu}^{\omega}(\rho):=e^{\pi\omega}Q_{\nu}^{i\omega}(\rho)\) is the _principal solution_, since, as \(\rho\to\infty\), it falls off to \(0\) faster than any other solution of Equation (12). At the same time \(\mathcal{P}_{\nu}^{\omega}(\rho):=\frac{\pi}{2i}\sinh^{-1}(\pi\omega)P_{\nu}^{i\omega}(\rho)\) is referred to as _one_ choice of secondary solution, see [28, Def. 4.2] as well as [24] for a discussion of the physical significance. We highlight that the \(\omega\)-dependent coefficients have also been chosen for later computational efficiency. Using Weyl's classification of the behaviour of a Sturm-Liouville problem at the endpoints, we can infer that both \(\rho=1\) and \(\rho\to\infty\) are _limit circle_ if \(0<\nu<\frac{1}{2}\), while only \(\rho\to\infty\) falls in this class if \(\nu\geq\frac{1}{2}\). This entails that, in order to solve Equation (12), in addition to initial data we need to assign boundary conditions at the endpoints whenever they are limit circle. In the following, since we are more concerned about the role of the time-like boundary in the analysis of the underlying quantum theory, we consider only Dirichlet boundary conditions on the horizon (\(\rho=1\)), while, at \(\rho\to\infty\), we assign Robin boundary conditions. Observe that, if we switch back to a fully-covariant description of the underlying field, Equation (9) entails that the locus \(\rho=1\) is an event horizon, hence a light-like hypersurface. This translates into the choice of the following solutions of Equation (12) whenever \(\mathrm{Im}(\omega)\neq 0\): \[u_{1}(\rho):=\Theta(\mathrm{Im}(\omega))P_{\nu}^{i\omega}(\rho)+\Theta(-\mathrm{Im}(\omega))P_{\nu}^{-i\omega}(\rho), \tag{14a}\] \[u_{\infty}(\rho):=\cos\gamma\,e^{\pi\omega}Q_{\nu}^{i\omega}(\rho)+\sin\gamma\,\frac{\pi}{2i}\sinh^{-1}(\pi\omega)P_{\nu}^{i\omega}(\rho),\qquad\gamma\in[0,\pi/2], \tag{14b}\] where the Heaviside step function in \(u_{1}\) allows us to take into account both positive and negative \(\mathrm{Im}\,\omega\) in a single expression, while the exponential coefficients appearing in \(u_{\infty}\) are chosen for future computational convenience.
As a consequence of the choices above, it turns out that \(u_{\infty}\) abides by Robin boundary conditions, namely \[\lim_{\rho\to\infty}(\cos\gamma\{u_{\infty}(\rho),\mathcal{P}_{\nu}^{\omega}(\rho)\}+\sin\gamma\{u_{\infty}(\rho),\mathcal{Q}_{\nu}^{\omega}(\rho)\})=0, \tag{15}\] where \(\{\cdot,\cdot\}:=(\rho^{2}-1)W[\cdot,\cdot]\), \(W\) being the Wronskian. ### The radial Green function As a next step, denoting by \(I\) the interval \((1,\infty)\), we look for \(G_{\gamma}\in\mathcal{D}^{\prime}(I\times I)\), \(\gamma\in[0,\frac{\pi}{2}]\), a distribution whose integral kernel \(G_{\gamma}(\omega,\rho,\rho^{\prime})\equiv G_{\gamma}(\rho,\rho^{\prime})\) abides by \[(\mathbf{L}\otimes\mathbb{I})G_{\gamma}(\rho,\rho^{\prime})=(\mathbb{I}\otimes\mathbf{L})G_{\gamma}(\rho,\rho^{\prime})=\delta(\rho-\rho^{\prime}), \tag{16}\] where \(\mathbf{L}\) is the operator in Equation (12) and where the subscript \(\gamma\) highlights that we are looking for a fundamental solution which encodes Robin boundary conditions as per Equation (15). Observe that, once more, we are suppressing for convenience of notation any explicit reference to the dependence of \(G_{\gamma}\) on \(\omega\) as well as on \(l,p\). Using Equation (14) and standard results of the theory of second-order ODEs, it holds that \[G_{\gamma}(\rho,\rho^{\prime})=\frac{u_{1}(\rho_{<})u_{\infty}(\rho^{\prime}_{>})}{(\rho^{2}-1)W[u_{\infty},u_{1}](\rho)}, \tag{17}\] where \[u_{1}(\rho_{<})u_{\infty}(\rho^{\prime}_{>})=\Theta(\rho-\rho^{\prime})u_{1}(\rho^{\prime})u_{\infty}(\rho)+\Theta(\rho^{\prime}-\rho)u_{1}(\rho)u_{\infty}(\rho^{\prime}),\] while \(W[\cdot,\cdot]\) is the Wronskian between \(u_{1}\) and \(u_{\infty}\). As an example, for Dirichlet (\(\gamma=0\)) and Neumann boundary conditions (\(\gamma=\frac{\pi}{2}\)), Equation (17) boils down to \[G_{0}(\rho,\rho^{\prime})=e^{\pi\omega}P_{\nu}^{-i\omega}(\rho_{<})Q_{\nu}^{i\omega}(\rho_{>}), \tag{18a}\] \[G_{\frac{\pi}{2}}(\rho,\rho^{\prime})=\frac{-i\pi}{2\sinh\pi\omega}P_{\nu}^{-i\omega}(\rho_{<})P_{\nu}^{i\omega}(\rho_{>}), \tag{18b}\] while, for \(\gamma\in(0,\pi/2)\), _i.e._ for Robin boundary conditions, it holds that \[G_{\gamma}(\rho,\rho^{\prime})=\frac{\cos\gamma\,G_{0}(\rho,\rho^{\prime})+\sin\gamma\,G_{\frac{\pi}{2}}(\rho,\rho^{\prime})}{\cos\gamma+\sin\gamma}. \tag{19}\] Observe that if we reinstate the \(\omega\)-dependence in the radial Green function, it holds that, for all \(\gamma\in[0,\frac{\pi}{2}]\), \(G_{\gamma}(-\omega,\rho,\rho^{\prime})=\overline{G_{\gamma}(\omega,\rho,\rho^{\prime})}\) whenever \(\omega\in\mathbb{R}\). We remark that, given \(G_{\gamma}\), there is always the freedom to add a bi-solution of the homogeneous radial equation \(\mathbf{L}G=0\) which abides by the Robin boundary condition parameterized by \(\gamma\in[0,\frac{\pi}{2}]\). In our construction this freedom amounts to the possibility of making different choices of the secondary solution in Equation (13b). The consequences of this leeway have been thoroughly investigated in [28; 29] and we shall not delve further into them. ### Resolution of the Identity Having established an expression for the radial Green function, in order to construct the two-point correlation function for the ground state of the Klein-Gordon field as in Equation (10) endowed with Robin boundary conditions, we need to identify a resolution of the identity written in terms of the Green function \(G_{\gamma}\) as in Equation (19). Following [30, Ch.
7], reinstating the explicit dependence on \(\omega\) and recalling that we are allowing \(\omega\in\mathbb{C}\), it holds that \[-(\rho^{2}-1)\delta(\rho-\rho^{\prime})=\frac{1}{2\pi i}\int_{C_{\infty}}G_{ \gamma}(\lambda,\rho,\rho^{\prime})d\lambda,\quad\lambda=\omega^{2}, \tag{20}\] where \(C_{\infty}\) is a suitable "Pac-Man" contour in the complex \(\lambda\)-plane. Using Jordan's lemma, the right hand side of Equation (20) can be shown to abide by the following chain of identities: \[\frac{1}{2\pi i}\int_{C_{\infty}}G_{\gamma}(\lambda,\rho,\rho^{ \prime})d\lambda\overset{\lambda=\omega^{2}}{=}\frac{1}{\pi i}\int_{\mathbb{R }}d\omega\,\omega\,G_{\gamma}(\omega,\rho,\rho^{\prime}) =\frac{1}{\pi i}\int_{0}^{\infty}d\omega\,\omega\,[G_{\gamma}( \omega,\rho,\rho^{\prime})-G_{\gamma}(-\omega,\rho,\rho^{\prime})]= \tag{21}\] \[=\frac{2}{\pi}\int_{0}^{\infty}d\omega\,\omega\,\operatorname{Im }[G_{\gamma}(\omega,\rho,\rho^{\prime})]. \tag{22}\] Introducing the function \(R_{\gamma}(\omega,\rho,\rho^{\prime}):=\frac{2}{\pi}\operatorname{Im}[G_{ \gamma}(\omega,\rho,\rho^{\prime})]\), which is a real-valued odd function of \(\omega\), _cf._ Equation (21), by comparison with Equation (20) it holds that \[\int_{\mathbb{R}}d\omega\,\omega\,R_{\gamma}(\omega,\rho,\rho^{\prime})=(\rho ^{2}-1)\delta(\rho-\rho^{\prime}). \tag{23}\] Using Equation (19), one can infer that \[R_{0}(\omega,\rho,\rho^{\prime}) =\frac{i}{\pi}\big{(}e^{\pi\omega}P_{\nu}^{-i\omega}(\rho)Q_{\nu} ^{i\omega}(\rho^{\prime})-e^{-\pi\omega}P_{\nu}^{i\omega}(\rho)Q_{\nu}^{-i \omega}(\rho^{\prime})\big{)}, \tag{24a}\] \[R_{\frac{\pi}{2}}(\omega,\rho,\rho^{\prime}) =\frac{1}{2\sinh\pi\omega}(P_{\nu}^{-i\omega}(\rho)P_{\nu}^{i \omega}(\rho^{\prime})+P_{\nu}^{i\omega}(\rho)P_{\nu}^{-i\omega}(\rho^{\prime})),\] (24b) \[R_{\gamma}(\omega,\rho,\rho^{\prime}) =\frac{\cos\gamma R_{0}(\omega,\rho,\rho^{\prime})+\sin\gamma R_{ \frac{\pi}{2}}(\omega,\rho,\rho^{\prime})}{\cos\gamma+\sin\gamma},\qquad\gamma \in\big{(}0,\frac{\pi}{2}\big{)}. \tag{24c}\] ### Two-point correlation function of the ground state Having individuated the necessary building blocks, we are in a position to address the construction of a two-point correlation function for the ground state of a Klein-Gordon field as per Equation (10) with Robin boundary conditions. In other words, denoting by \((M,g)\) the underlying Bertotti-Robinson spacetime as per Equation (9), we are looking in the first place for \(w_{2,\gamma}\in\mathcal{D}^{\prime}(M\times M)\), the subscript \(\gamma\) highlighting the dependence on the Robin boundary condition, such that \[(P\otimes\mathbb{I})w_{2,\gamma}=(\mathbb{I}\otimes P)w_{2,\gamma}=0,\] while its antisymmetric part is such that, working at the level of integral kernels, \[w_{2,\gamma}(x,x^{\prime})-w_{2,\gamma}(x^{\prime},x)=i\mathcal{E}_ {\gamma}(x,x^{\prime}), \tag{25}\] where \(\mathcal{E}_{\gamma}\in\mathcal{D}^{\prime}(M\times M)\) is the causal propagator, namely the difference between the retarded and the advanced fundamental solutions of the operator \(P\) as in Equation (10) supplemented with Robin boundary conditions - see also Section 2.
In turn \(\mathcal{E}_{\gamma}\) abides by the following initial value problem: \[(P\otimes\mathbb{I})\mathcal{E}_{\gamma}=(\mathbb{I}\otimes P) \mathcal{E}_{\gamma}=0, \tag{26}\] \[\mathcal{E}_{\gamma}|_{\Sigma_{t}\times\Sigma_{t}}=0,\qquad \partial_{t}\mathcal{E}_{\gamma}|_{\Sigma_{t}\times\Sigma_{t}}=\delta_{ \Sigma_{t}}, \tag{27}\] where \(\Sigma_{t}\) is a generic Cauchy surface at constant time \(t\in\mathbb{R}\), while \(\delta_{\Sigma_{t}}\) is the Dirac delta on \(\Sigma_{t}\). Following [4, Ch. 2.3], a solution to this initial value problem can be written as \[\mathcal{E}_{\gamma}(x,x^{\prime})=\lim_{\varepsilon\to 0^{+}}\sum_{l=0}^{ \infty}\sum_{p=-l}^{+l}\int_{\mathbb{R}}d\omega\sin\left(\omega(t-t^{\prime}-i \varepsilon)\right)Y_{l}^{p}(\theta,\phi)\overline{Y_{l}^{p}(\theta^{\prime},\phi^{\prime})}R_{\gamma}(\omega,\rho,\rho^{\prime}), \tag{28}\] where \(R_{\gamma}(\omega,\rho,\rho^{\prime})\) ought to satisfy the integral identity \[\int_{\mathbb{R}}d\omega\,\omega\,R_{\gamma}(\omega,\rho,\rho^{\prime})=(\rho ^{2}-1)\delta(\rho-\rho^{\prime}). \tag{29}\] This is nothing but Equation (23), which justifies why we have used the symbol \(R_{\gamma}\). Keeping in mind Equation (25) as well as Equation (28), we have all the ingredients to write the formal expression of the two-point correlation function of the ground state associated to a Klein-Gordon field with Robin boundary conditions on a Bertotti-Robinson spacetime, namely, using [4, Th. 2.22]: \[w_{2,\gamma}(x,x^{\prime})=\lim_{\varepsilon\to 0^{+}}\sum_{l=0}^{\infty} \sum_{p=-l}^{+l}\int_{\mathbb{R}}d\omega\,\Theta(\omega)e^{-i\omega(t-t^{ \prime}-i\varepsilon)}Y_{l}^{p}(\theta,\phi)\overline{Y_{l}^{p}(\theta^{ \prime},\phi^{\prime})}R_{\gamma}(\omega,\rho,\rho^{\prime}), \tag{30}\] which can be rewritten in a more compact form, summing over \(p\) and using [26, §14.30.9], as \[w_{2,\gamma}(x,x^{\prime})=\lim_{\varepsilon\to 0^{+}}\sum_{l=0}^{\infty} \int_{0}^{\infty}d\omega e^{-i\omega(t-t^{\prime}-i\varepsilon)}\frac{2l+1}{4 \pi}P_{l}(\cos\Gamma(\theta,\theta^{\prime},\phi,\phi^{\prime}))R_{\gamma}( \omega,\rho,\rho^{\prime}), \tag{31}\] where \(P_{l}\) is the Ferrers function of the first kind while \(\Gamma:\mathcal{S}^{2}\times\mathcal{S}^{2}\rightarrow\mathbb{R}\) is the geodesic distance on the unit 2-sphere. We highlight that we have called \(w_{2,\gamma}\) a formal expression since, if we expand \(R_{\gamma}\) near \(\omega=0\), for both \(\gamma=0\) and \(\gamma\in(0,\frac{\pi}{2}]\), the asymptotic behaviour of \(R_{\gamma}\) reads - see Figure 1 \[|R_{0}(\omega,\rho,\rho^{\prime})|\stackrel{{\omega\to 0}}{{ \sim}}\omega,\qquad|R_{\gamma}(\omega,\rho,\rho^{\prime})|\stackrel{{ \omega\to 0}}{{\sim}}\omega^{-1}. \tag{32}\] Hence the expression in Equation (31) identifies a well-defined distribution only for \(\gamma=0\), namely Dirichlet boundary conditions, while for \(\gamma\in(0,\frac{\pi}{2}]\) an infrared divergence occurs; therefore Equation (31) cannot identify the two-point correlation function of a ground state. ### A different scenario: the Poincaré patch In the previous section we have shown that infrared divergences can occur in a Bertotti-Robinson spacetime when the underlying Klein-Gordon field is not endowed with Dirichlet boundary conditions. In the following, we highlight that the correspondence between the existence of a singular behaviour at large distances and the choice of a boundary condition is highly dependent on the underlying geometry.
Figure 1: The radial function \(R_{\gamma}\) as in Equation (24a) where we set \(l=0\). To highlight the behaviour near \(\omega=0\), we have considered the product \(\omega R_{\gamma}\) as a function of \(\omega\), the other parameters being fixed. The plot shows that only for \(\gamma=0\) does \(\omega R_{\gamma}\) tend to \(0\) as \(\omega\to 0\), all other cases therefore displaying a singular behaviour.

To achieve this goal, we consider a different patch of the Bertotti-Robinson spacetime, denoted by \((\widetilde{M}_{1},g_{1})\). It has already been analyzed in [29] and it can be realized starting from Equation (9) and switching from the coordinates \((t,\rho)\) to \((\tau,r)\) by means of the transformation [22] \[\tau=\frac{(\rho^{2}-1)^{1/2}\sinh t}{\rho+(\rho^{2}-1)^{1/2}\cosh t},\qquad r =\frac{1}{\rho+(\rho^{2}-1)^{1/2}\cosh t}. \tag{33}\] The line element in Equation (9) becomes \[ds^{2}=\frac{1}{r^{2}}(-d\tau^{2}+dr^{2}+r^{2}d\Omega^{2}), \tag{34}\] where the new coordinates can be assigned the following domain: \(\tau\in\mathbb{R}\) while \(r\in(0,\infty)\). Subsequently, on top of \(\widetilde{M}_{1}\), we consider a massive, real Klein-Gordon field \(\widetilde{\Psi}:\widetilde{M}_{1}\rightarrow\mathbb{R}\) whose dynamics is ruled by Equation (10), where the D'Alembert wave operator is built out of the metric in Equation (34). In particular we are interested in constructing the two-point correlation function of the ground state associated to \(\widetilde{\Psi}\), and we can follow the same step-by-step construction outlined in the previous section. Here we shall not dwell on the details, which have already been accounted for in [29], and we report only the main ingredients and results. We observe that in the last-cited paper the main interest lies in the analysis of the response function of an Unruh-DeWitt detector rather than in the identification of infrared singularities. As in the previous section we consider a mode decomposition \[\widetilde{\Psi}(\tau,r,\theta,\phi)=\sum_{l=0}^{\infty}\sum_{p=-l}^{l}\int \limits_{\mathbb{R}}e^{-i\omega\tau}\widetilde{\psi}_{\omega lp}(r)Y_{l}^{p}( \theta,\phi), \tag{35}\] where \(\widetilde{\psi}_{\omega lp}\) satisfies the radial equation \[(\mathbf{\tilde{L}}+\omega^{2})\widetilde{\psi}_{\omega lp}(r):=\bigg{[}\frac {d^{2}}{dr^{2}}-\frac{l(l+1)+m^{2}}{r^{2}}+\omega^{2}\bigg{]}\widetilde{\psi} _{\omega lp}(r). \tag{36}\] A basis of solutions of this equation can be written in terms of Bessel functions of the first kind and, using once more the language of Sturm-Liouville problems, the principal and secondary solutions read \[\tilde{\mathcal{P}}(r)=\sqrt{r}J_{\eta}(\omega r),\qquad\tilde{\mathcal{S}}( r)=-\omega^{2\eta}\sqrt{r}J_{-\eta}(\omega r), \tag{37}\] where \(\eta=\frac{1}{2}\sqrt{1+4l(l+1)+4m^{2}}\) and where, for simplicity of the notation, we have dropped the subscript highlighting the dependence on \(\omega,l,p\). As discussed in [29, Sec. 2], Robin boundary conditions, parameterized by \(\gamma\in[0,\frac{\pi}{2}]\), can be imposed whenever \(l=0\) and \(m^{2}\in[-\frac{1}{4},\frac{3}{4})\), exactly as in the previous section.
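Although the closed form of \(\tilde{R}_{\gamma}\) only appears in Equation (39) below, its infrared scaling can be checked numerically. The following is a small sketch of ours (not from the paper) evaluating that radial function with SciPy's Bessel functions; the radial points and mass value are arbitrary choices of ours.

```python
# Our own numerical check of the scaling quoted in Eq. (40): the radial
# function of Eq. (39) behaves as w^{2 eta} for gamma in [0, pi/2) and as
# w^{-2 eta} for gamma = pi/2 (Neumann) as w -> 0.
import numpy as np
from scipy.special import jv

l, m2 = 0, 0.1                        # l = 0 so that Robin conditions apply
eta = 0.5 * np.sqrt(1 + 4 * l * (l + 1) + 4 * m2)
r, rp = 1.0, 2.0                      # arbitrary radial points

def R_tilde(w, g):
    u = np.cos(g) * jv(eta, w * r) + np.sin(g) * w**(2 * eta) * jv(-eta, w * r)
    up = np.cos(g) * jv(eta, w * rp) + np.sin(g) * w**(2 * eta) * jv(-eta, w * rp)
    den = (np.sin(g)**2 * w**(4 * eta)
           + np.sin(2 * g) * np.cos(np.pi * eta) * w**(2 * eta)
           + np.cos(g)**2)
    return np.sqrt(r * rp) * u * up / (2 * den)

for w in [1e-1, 1e-2, 1e-3]:
    # Dirichlet stays finite (vanishes), Neumann blows up as w -> 0
    print(w, abs(R_tilde(w, 0.0)), abs(R_tilde(w, np.pi / 2)))
```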
Using Equation (37), it turns out that the two-point correlation function for the ground state in this specific patch reads formally \[\widetilde{w}_{2,\gamma}(x,x^{\prime})=\lim_{\varepsilon\to 0^{+}}\sum_{l=0}^{ \infty}\int_{0}^{\infty}d\omega e^{-i\omega(\tau-\tau^{\prime}-i\varepsilon)} \frac{2l+1}{4\pi}P_{l}(\cos\Gamma(\theta,\theta^{\prime},\phi,\phi^{\prime})) \tilde{R}_{\gamma}(\omega,r,r^{\prime}), \tag{38}\] where, similarly to Equation (7), we have taken the sum over all admissible values of \(p\), and where \[\tilde{R}_{\gamma}(\omega,r,r^{\prime})=\frac{\sqrt{rr^{\prime}}(\cos(\gamma) J_{\eta}(\omega r)+\sin(\gamma)\omega^{2\eta}J_{-\eta}(\omega r))(\cos( \gamma)J_{\eta}(\omega r^{\prime})+\sin(\gamma)\omega^{2\eta}J_{-\eta}(\omega r ^{\prime}))}{2(\sin^{2}(\gamma)\omega^{4\eta}+\sin(2\gamma)\cos(\pi\eta) \omega^{2\eta}+\cos^{2}(\gamma))}. \tag{39}\] If we expand \(\tilde{R}_{\gamma}\) near \(\omega=0\), for both \(\gamma\in[0,\frac{\pi}{2})\) and \(\gamma=\frac{\pi}{2}\), the asymptotic behaviour of \(\tilde{R}_{\gamma}\) reads \[|\tilde{R}_{\gamma}(\omega,r,r^{\prime})|\stackrel{{\omega \to 0}}{{\sim}}\omega^{2\eta},\qquad|\tilde{R}_{\frac{\pi}{2}}(\omega,r,r^{ \prime})|\stackrel{{\omega\to 0}}{{\sim}}\omega^{-2\eta}. \tag{40}\] Since we are considering mass values in the range \(m^{2}\in[0,\frac{3}{4})\), the parameter \(\eta\) lies in \([\frac{1}{2},1)\), which entails that the right hand side of Equation (38) identifies a well-defined distribution whenever \(\gamma\in[0,\frac{\pi}{2})\), while for \(\gamma=\frac{\pi}{2}\), namely only for Neumann boundary conditions, an infrared divergence occurs in the integrand, see Figure 2. Therefore one cannot conclude that Equation (38) identifies the two-point correlation function of a ground state when \(\gamma=\frac{\pi}{2}\).

Figure 2: The radial function \(\tilde{R}_{\gamma}\) as in Equation (39) where we set \(l=0\). To highlight the behaviour near \(\omega=0\), we have considered the product \(\omega\tilde{R}_{\gamma}\) as a function of \(\omega\), the other parameters being fixed. The plot shows that the function tends to \(0\) as \(\omega\to 0\) for all boundary conditions except \(\gamma=\frac{\pi}{2}\).

## 4 Conclusions

In this short paper, we have highlighted that, when constructing the two-point correlation function of the ground state of a free Klein-Gordon field on globally hyperbolic manifolds with a timelike boundary, one might face the occurrence of infrared singularities even when considering simple boundary conditions such as those of Robin type. Although this pathology is a well-known feature, especially in two-dimensional globally hyperbolic spacetimes with an empty boundary, we have shown that it can occur even in higher dimensions, _e.g._ in a Bertotti-Robinson spacetime, and that it is strictly linked to the choice of boundary conditions for the underlying field theory. It is interesting to remark that, since this phenomenon occurs for ground states, it is also present for thermal/KMS states, whose infrared behaviour is more singular due to the contribution of the Bose factor in the mode decomposition. Hence, one might envisage scenarios, behaving similarly to free Bosonic quantum field theories in three-dimensional globally hyperbolic spacetimes with an empty boundary, for which the ground state exists while the thermal counterpart displays infrared singularities.
Finally, our work sets the ground for multiple future analyses, among which we reckon that the following are especially worth mentioning: * investigating the occurrence of infrared singularities when we consider a more general class of boundary conditions, such as those of Wentzell type [31; 32] and those associated with a dynamic wall [33; 34], * a comparison of the phenomenon highlighted in this work with the freedom of choosing the secondary solution in a Sturm-Liouville problem, as discussed in [28]. ## 5 Acknowledgements The work of L.C. is supported by a postdoctoral fellowship of the Department of Physics of the University of Pavia, while that of L.S. by a PhD fellowship of the University of Pavia.
2304.08616
Exploring exotic configurations with anomalous features using deep learning: Application of classical and quantum-classical hybrid anomaly detection
In this article we present the application of classical and quantum-classical hybrid anomaly detection schemes to explore exotic configurations with anomalous features. We consider the Anderson model as a prototype, where we define two types of anomalies - a high conductance in the presence of strong impurities and a low conductance in the presence of weak impurities - as a function of the random impurity distribution. Such anomalous outcomes constitute an imperceptible fraction of the data set and are not a part of the training process. These exotic configurations, which can be a source of rich new physics, usually remain elusive to conventional classification or regression methods and can be tracked only with a suitable anomaly detection scheme. We also present a systematic study of the performance of the classical and the quantum-classical hybrid anomaly detection methods and show that the inclusion of a quantum circuit significantly enhances the performance of anomaly detection, which we quantify with suitable performance metrics. Our approach is quite generic in nature and can be used for any system that relies on a large number of parameters to find new configurations which can hold exotic new features.
Kumar J. B. Ghosh, Sumit Ghosh
2023-04-17T21:10:04Z
http://arxiv.org/abs/2304.08616v2
# Exploring exotic configurations with anomalous features using deep learning: Application of classical and quantum-classical hybrid anomaly detection

###### Abstract

In this article we present the application of classical and quantum-classical hybrid anomaly detection schemes to explore exotic configurations with anomalous features. We consider the Anderson model as a prototype, where we define two types of anomalies - a high conductance in the presence of strong impurities and a low conductance in the presence of weak impurities - as a function of the random impurity distribution. Such anomalous outcomes constitute an imperceptible fraction of the data set and are not a part of the training process. These exotic configurations, which can be a source of rich new physics, usually remain elusive to conventional classification or regression methods and can be tracked only with a suitable anomaly detection scheme. We also present a systematic study of the performance of the classical and the quantum-classical hybrid anomaly detection methods and show that the inclusion of a quantum circuit significantly enhances the performance of anomaly detection, which we quantify with suitable performance metrics. Our approach is quite generic in nature and can be used for any system that relies on a large number of parameters to find new configurations which can hold exotic new features.

## I Introduction

In recent years machine learning has become an integral part of different branches of condensed matter physics. It has shown impeccable performance in dealing with problems with large degrees of freedom, where extracting an effective model is practically impossible. It has been adopted as a viable alternative for exploring electronic properties [1; 2; 3; 4] as well as transport properties [5; 6; 7; 8]. Its inherent ability to deal with a high level of nonlinearity makes it quite successful in highly non-trivial physical problems such as predicting different phases of matter [9; 10] and their topological characterisation [11]. In addition to playing a crucial role in discovering new materials as well as mapping their quantum features [12], it has been instrumental in designing new experiments to unravel their quantum nature [13]. Although such automatization makes it possible to scan through a huge configuration space, it also carries the risk of missing exotic configurations containing significantly new physics. The occurrence of such configurations is statistically insignificant, and they can easily be overlooked in a learning process. Identifying these rare configurations can therefore hold the key to discovering new physics. In this article, we present a new paradigm, namely _anomaly detection_ [14; 15; 16], which is particularly suitable for detecting such special configurations. The main advantage of anomaly detection with respect to conventional classification schemes is that one does not need a priori knowledge of the data points that are uncharacteristic of a specific data set, namely the _anomalies_. The training is done with normal data. The anomalies are heterogeneous and remain unknown until their occurrence. For example, consider the ECG of a regular heartbeat, which shows a periodic pattern. An anomaly detection algorithm trained with normal heartbeats can identify irregularities that have not been observed before and can predict signatures of heart problems [17; 18]. Due to the rarity of anomalous events, anomaly-detection data sets are heavily imbalanced. It is, therefore, a highly complex task to formally describe an anomaly [19].
In this work, we demonstrate how anomaly detection can be exploited to reveal subtle features of a condensed matter system which can remain hidden from any conventional regression or classification scheme. We consider the Anderson model, where the distribution of the random impurities constitutes the input parameter space. The output is the conductance of the system, which drops significantly for strong impurity strengths. However, for certain distributions the system might achieve a significantly larger transmission, which we mark as an _anomaly_. Such an occurrence is statistically insignificant and therefore almost impossible to anticipate beforehand. However, a configuration which can provide a large conductance in the presence of strong impurities can be quite useful in device design. For example, disorder can enhance the damping-like spin-orbit torque, which is responsible for the electrical switching of magnetisation [20], or it can enhance the superconducting nature [21] as well. On the other hand, for a weak impurity strength, when the system is expected to have a high transmission, the anomaly is defined as a configuration which suppresses the conductance significantly. Such anomalies can pose a hurdle in quantum optimisation even in the presence of weak impurities [22]. The main objective of the present work is to systematically identify these anomalies with a machine learning algorithm, which can be utilised to understand the nature of such unusual configurations. In this paper we demonstrate the application of both classical [19] and quantum-classical hybrid anomaly detection schemes [23; 24; 25; 26] for physical problems that manifest anomalous behaviour as a complex function of a large number of variables. Taking the Anderson model as a prototype, we systematically show how an anomaly detection scheme can identify the outliers without any prior knowledge of their existence. We consider three methods, namely isolation forest [27], autoencoder [28; 29; 30] and hybrid quantum-classical autoencoder [26], and compare their performances in terms of suitable performance metrics. Our analysis shows that the quantum anomaly detection scheme performs better than its classical counterpart due to its inherent ability to deal with the complex feature mapping in the latent dimension. Note that, although the present work is focused on the anomalous behaviour of the conductance due to impurity scattering, the framework is applicable for detecting anomalies in any physical observable as a function of an arbitrarily large number of parameters, and would therefore play an instrumental role in discovering new exotic configurations of different physical systems.

## II Model and method

For our study we consider the Anderson model given by \[H=\epsilon\sum_{i}c_{i}^{\dagger}c_{i}+t\sum_{\langle i,j\rangle}c_{i}^{ \dagger}c_{j}+\sum_{i}V_{i}c_{i}^{\dagger}c_{i} \tag{1}\] where \(c^{\dagger}\) and \(c\) are the creation and annihilation operators, \(t\) is the hopping parameter, which we choose to be \(-1\), \(\epsilon\) is the onsite energy, which we choose as \(-4t\), and \(V_{i}\) is the onsite random potential. For this study we consider a \(240\times 240\) scattering region and use it in a two-terminal device configuration (Fig. 1). We choose a total of 80 impurities with the same strength, distributed within a \(200\times 200\) region in the centre. We assign a constant negative value \(-V_{0}\) to all the impurities. The Fermi level is kept at \(0.0005t\), which gives a conductance of 1 in the clean limit.
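To make the setup concrete, the following is a minimal KWANT sketch of ours (not the authors' code) that builds this two-terminal geometry and evaluates the zero-bias conductance of Eq. (2) below; the lead width spanning the full system and the random-seed handling are our own assumptions.

```python
# Our own sketch of the device described above: a 240 x 240 square-lattice
# scattering region, 80 identical impurities of strength -V0 confined to the
# central 200 x 200 area, and two electrodes.
import numpy as np
import kwant

t, V0, L, n_imp = -1.0, 0.9, 240, 80
eps = -4 * t                 # onsite energy
E_F = 0.0005 * t             # Fermi level (clean-limit conductance of 1)
lat = kwant.lattice.square(a=1)

def make_system(impurity_sites):
    syst = kwant.Builder()
    for x in range(L):
        for y in range(L):
            syst[lat(x, y)] = eps + (-V0 if (x, y) in impurity_sites else 0.0)
    syst[lat.neighbors()] = t
    lead = kwant.Builder(kwant.TranslationalSymmetry((-1, 0)))
    lead[(lat(0, y) for y in range(L))] = eps
    lead[lat.neighbors()] = t
    syst.attach_lead(lead)               # left electrode
    syst.attach_lead(lead.reversed())    # right electrode
    return syst.finalized()

rng = np.random.default_rng(0)
lo = (L - 200) // 2                      # impurities confined to the centre
sites = set()
while len(sites) < n_imp:
    sites.add(tuple(rng.integers(lo, lo + 200, size=2)))
T = kwant.smatrix(make_system(sites), E_F).transmission(1, 0)
print(T)                                 # zero-bias conductance for this draw
```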
The zero-bias conductance of the system is given by \[T=Tr\left[\Gamma_{1}G^{R}\Gamma_{2}G^{A}\right], \tag{2}\] where \(G^{R,A}=\left[E-H-\Sigma_{1}^{R,A}-\Sigma_{2}^{R,A}\right]^{-1}\) is the retarded/advanced Green's function of the scattering region, \(\Gamma_{1,2}=i\left[\Sigma_{1,2}^{R}-\Sigma_{1,2}^{A}\right]\), and \(\Sigma_{1,2}^{R,A}\) is the retarded/advanced self-energy of the left/right electrode. For our calculation we use the tight-binding code KWANT [31], which uses the scattering wave function formalism to obtain these quantities. The local density of states can be obtained directly from the scattering wave function. From Fig. 1a one can see that for a strong impurity strength, the conductance of the system is most likely close to zero. However, some configurations can also give rise to a high value of the conductance, although the probability of such an outcome is quite small. For strong impurities we label such outcomes as _anomalies_. Similar behaviour can be observed with a weak impurity. The _anomaly_ in this case is a configuration that can completely suppress the current flow, resulting in insulating behaviour. From Fig. 1 one can see that the impurity configurations corresponding to a high (anomaly) and a low (normal) value of the conductance don't have any characteristic difference. It is therefore impossible to detect such anomalous behaviour with any conventional method from the knowledge of the distribution alone. In the following, we show how an anomaly detection algorithm can detect such anomalies without any a priori knowledge of such outcomes.

Figure 1: Data distribution and schematic of the device configuration. (a) Distribution of the conductance for \(V_{0}\)=0.9 (red) and \(V_{0}\)=0.3 (blue). The inset shows the variation of the conductance (solid black line; the grey region shows the rms deviation), with vertical red lines denoting \(V_{0}\)=0.3, 0.9. (b) and (c) show two configurations for \(V_{0}\)=0.9 with the local dos (in gray scale) and the conductance (in legend). The green regions show the electrodes. The red dots denote the impurities, which are confined within the central region marked by the black dashed line.

### Classical anomaly detection

Here, we summarise the two classical machine learning methods we use for anomaly detection. The first method is called isolation forest (IF) [27], an unsupervised anomaly detection algorithm that uses a random forest algorithm under the hood to detect outliers in the data set. The algorithm tries to isolate the data points using decision trees such that each observation gets isolated from the others. The second method is called _autoencoder_ (AE) [28; 29; 30], which is a deep neural network architecture (Fig. 2). It aims to learn a compressed representation of an input by minimizing its reconstruction error [32; 33]. It consists of two parts - an _encoder_ (\(e\)) and a _decoder_ (\(d\)). The encoder learns a non-linear transformation \(e:\mathcal{X}\rightarrow\mathcal{Z}\) that projects the data from the original high-dimensional input space \(\mathcal{X}\equiv\{x\}\) to a lower-dimensional latent space \(\mathcal{Z}\equiv\{z\}\). For our study we consider a latent space with 4 nodes. A decoder learns a non-linear transformation \(d:\mathcal{Z}\rightarrow\mathcal{X}\) that projects the latent vectors \(z=e(x)\) back into the original high-dimensional input space \(\mathcal{X}\).
This transformation reconstructs the original input data as \(\hat{x}=d(z)=d\left(e(x)\right)\), where \(\hat{x}\) is the output corresponding to an input \(x\). One can obtain a more robust decoding of latent vectors with a _variational autoencoder_ (VAE) [34], a neural network that unifies variational inference approaches with autoencoders. For our study we focus only on IF and AE.

Figure 2: Schematic representation of an autoencoder. The input data is compressed by the encoder and the decoder expands the compressed data to its original size. The intermediate space with compressed dimension is called the latent space.

### Hybrid Quantum-Classical Autoencoder (HAE)

The quantum-classical hybrid anomaly detection scheme [23; 24; 25; 26] is the state-of-the-art approach which utilises quantum machine learning [35; 36; 37; 38; 39] along with its classical counterpart. For our study we use the Hybrid Classical-Quantum Autoencoder (HAE) introduced by Sakhnenko et al. in 2022 [26], which significantly enhances the performance metrics of anomaly detection compared to its fully classical counterpart. The HAE consists of a classical encoder, a parameterized quantum circuit (PQC) [40], and a classical decoder (Fig. 3). The input goes to the PQC via the encoder. The PQC consists of quantum circuits containing different rotation gates. After the blocks of quantum circuits there are measurements followed by the post-measurement processing block. After post-processing, the information is fed into the classical decoder. It is worth mentioning that a PQC performs much better than a classical circuit of equal dimension [26]. Schuld [41; 42] showed a connection between quantum neural networks and kernel methods, where the quantum networks encode and process the data in a high-dimensional Hilbert space through a highly non-linear feature mapping. This is classically intractable and can only be revealed through the inner products and measurements of the quantum states. In our case the PQC in the HAE also expands the latent space into a higher-dimensional Hilbert space. Its internal degrees of freedom therefore increase, resulting in a performance boost. For the HAE we consider the same encoder and decoder as the classical AE, combined with a 4-qubit PQC (Fig. 3). The final quantum state is measured in the Pauli \(Z\)-basis, and the corresponding expectation value for each qubit constructs the latent space for the anomaly detection. This information is fed to the decoder via the post-processing module, which expands the compressed data to its original size. The model is implemented using Qiskit [43] for our analysis.

Figure 3: A schematic diagram of a hybrid classical-quantum autoencoder (HAE) architecture [26]. The data coming from the classical encoder are embedded into the PQC. In the PQC, \(U_{1}(\theta_{ij})\) and \(U_{2}(\phi_{ij})\) are blocks of quantum circuits containing different rotation gates with rotation parameters \(\theta,\phi\). After the blocks of quantum circuits there are measurements followed by the post-measurement processing block (denoted as Post Process). After post-processing, the information is fed into the classical decoder.

### Training and testing of IF, AE and HAE for the anomaly detection

For the isolation forest (IF), we choose a training data set with nominal data points only. After training the model we apply the trained IF model to the testing data and obtain the predicted labels. For the AE and HAE, we first train the networks with the above training data for 50 epochs with a batch size of 16 and learning rate 0.001, and then compute the losses during training. We compute the mean squared error (MSE) loss, defined as \[\text{MSE}=\frac{1}{n}\sum_{i=1}^{n}\left(x_{i}-\hat{x}_{i}\right)^{2}, \tag{3}\] where \(x\) and \(\hat{x}\) are the original and the predicted values of the data points respectively. By minimizing the above losses for the training data, we search for a suitable threshold with extensive empirical trials. Then we predict the outputs with respect to this threshold from the training data and compute the MSEs with respect to the actual training data. The threshold is then defined by the ratio between the mean and standard deviation of the MSEs. After defining the threshold we apply the trained model to the testing data and predict the outputs. We compute the MSEs between the predicted data and the actual test data. For each prediction, if the MSE is greater than the threshold then we label it as an anomaly/outlier, otherwise it is labelled as a nominal/normal data point. With the above training and testing procedure we finally compute the precision, recall, and F1 scores by comparing the predicted labels and the actual labels of the testing data.

### Performance measures of the model: precision-recall and F1 score

A reliable metric is necessary for measuring the performance of an anomaly-detection model, which should describe the fraction of uncovered anomalies in a mixture of nominal data and outliers. This is usually described by _precision_ (fraction of true anomalies among all discovered instances), _recall_ (fraction of true anomalies that were discovered), and their harmonic mean, the _F1 score_ [44; 45], which are computed based on the counts of true positives (\(TP\)), false positives (\(FP\)), and false negatives (\(FN\)), as follows: \[precision=\frac{TP}{TP+FP},\ recall=\frac{TP}{TP+FN},\] \[F1\ score=\frac{2\times precision\times recall}{precision+ recall}. \tag{4}\] An outcome with high _recall_ and low _precision_ contains more results, but most of them would be wrong (FP). A low _recall_ and high _precision_, on the other hand, corresponds to fewer results, but most of them would be right (TP). The most desirable outcome is one with both high _recall_ and high _precision_, which in turn gives a high _F1 score_.

## III Results

For our study we consider a two-terminal device configuration with 80 randomly distributed impurities (Fig. 1). We use their coordinates (\(x_{1},y_{1},x_{2},y_{2},\cdots\), 160 features in total) as the input and the resulting conductance (\(T\)) as the output. We consider two magnitudes of the impurity (\(V_{0}\)=0.3, 0.9). Considering the distribution of the conductance in these two cases (Fig. 1), for \(V_{0}\)=0.9, \(T>\)0.5 is considered an anomaly, whereas for \(V_{0}\)=0.3, \(T<\)0.5 is considered an anomaly. We consider two data sets, one for \(V_{0}\)=0.3 and one for \(V_{0}\)=0.9, each with 5000 different random configurations. In both cases the total number of anomalies is less than 10% of the entire data set. From each data set we prepare four different train-test samples by randomly choosing 900 nominal and 100 anomalous data points. After training and testing with the four samples we compute the individual performance metrics (precision, recall, F1 score) and present their respective mean values in Table 1.
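As a concrete illustration of the AE training-and-thresholding procedure described above, here is a minimal PyTorch sketch of ours (not the authors' code); the layer sizes follow the architecture reported in the next paragraph, and the threshold is implemented literally as the stated ratio between the mean and the standard deviation of the training MSEs.

```python
# Minimal AE anomaly-detection sketch (assumptions ours): encoder
# 160 -> 106 -> 56 -> 4, mirrored decoder, MSE loss, and
# threshold = mean(MSE_train) / std(MSE_train) as described in the text.
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(160, 106), nn.ReLU(),
                    nn.Linear(106, 56), nn.ReLU(),
                    nn.Linear(56, 4))
dec = nn.Sequential(nn.Linear(4, 56), nn.ReLU(),
                    nn.Linear(56, 106), nn.ReLU(),
                    nn.Linear(106, 160))
model = nn.Sequential(enc, dec)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def train(x_nominal, epochs=50, batch=16):
    loader = torch.utils.data.DataLoader(x_nominal, batch_size=batch, shuffle=True)
    for _ in range(epochs):
        for xb in loader:
            loss = ((model(xb) - xb) ** 2).mean()
            opt.zero_grad(); loss.backward(); opt.step()
    # Per-sample training MSEs define the anomaly threshold
    with torch.no_grad():
        mse = ((model(x_nominal) - x_nominal) ** 2).mean(dim=1)
    return mse.mean() / mse.std()

def predict(x_test, threshold):
    with torch.no_grad():
        mse = ((model(x_test) - x_test) ** 2).mean(dim=1)
    return mse > threshold   # True = anomaly/outlier
```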
For the AE, the input size is 160 (the original data size), followed by an encoder containing three layers with 106, 56, and 4 nodes respectively. The decoder is the mirror image of the encoder, followed by an output layer of size 160 (Fig. 2). The latent size of our AE model is thus equal to 4 (Fig. 3). The same classical encoder and decoder are used for the HAE with a 4-qubit PQC. A bigger latent dimension could improve the outcome; however, due to the limitation of computational resources, we cannot consider more than 4 qubits, which also restricts the latent dimension of the classical AE. Fig. 4 shows the two data sets with respect to two arbitrary features. Fig. 4a,e show the original data, where the normal data (light blue) and anomalies (red) are uniformly distributed over the whole space. Fig. 4b,f show the output from PCA, which shows some clustering. This clustering is enhanced with a classical AE (Fig. 4c,g) and even more with a HAE (Fig. 4d,h). In a higher-dimensional space such clustering leads to a better isolation of the anomalies from the normal data, which is also reflected in the individual performance metrics (Table 1). Note that the data set corresponding to \(V_{0}=0.9\) shows better performance compared to the data set corresponding to \(V_{0}=0.3\). This can be understood from the physical nature of the problem as well. For \(V_{0}=0.9\) most of the configurations lead to a localization reducing the conductance. This is reflected in the large occurrence of zero conductance in Fig. 1a. Consequently the anomaly is very well defined, since the majority of the data have the same characteristics. Compared to that, the distribution for \(V_{0}=0.3\) is relatively flat, which makes the boundary between the nominal and anomalous data more smudged. Nevertheless, both AE and HAE show exceptional performance in both cases. To benchmark our results we compare them with the results obtained with other data sets. We find that the performance metrics obtained in Table 1 are comparable to what is observed with standard publicly available data sets, and even better than for the data set used to detect anomalies in a gas turbine using the same models [26]. From Table 1, we see that the HAE performs better in terms of the recall and F1 scores while keeping the precision comparable to the other models. This is expected, due to the inclusion of the PQC, as discussed previously. Due to the limitation of computational resources, we are limited to a 4-qubit PQC, for which the distinction between the performance of AE and HAE is not very prominent.

\begin{table}
\begin{tabular}{|c|c|c|c|c|}
\hline
**Set** & **Metric** & **IF** & **AE** & **HAE** \\
\hline
 & Precision & 0.474 & 0.494 & 0.484 \\
\(V_{0}\)=0.9 & Recall & 0.675 & 0.715 & 0.748 \\
 & F1 score & 0.556 & 0.583 & 0.588 \\
\hline
 & Precision & 0.431 & 0.436 & 0.432 \\
\(V_{0}\)=0.3 & Recall & 0.497 & 0.595 & 0.618 \\
 & F1 score & 0.461 & 0.503 & 0.509 \\
\hline
\end{tabular}
\end{table}
Table 1: Performance metrics for anomaly detection with IF, AE, and HAE using the two data sets (\(V_{0}\)=0.3 and \(V_{0}\)=0.9). Each value shows the mean of the corresponding metric.

Figure 4: Visualization of anomalies (red) and nominal data (light blue). Top (a,b,c,d) and bottom (e,f,g,h) panels present the \(V_{0}\)=0.9 and \(V_{0}\)=0.3 data sets respectively, with respect to two arbitrary input dimensions. (a,e) show the original input data. (b,f) show the PCA of the original data. (c,g) show the output from the classical encoder and (d,h) show the output from the PQC.
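Returning to the 4-qubit PQC, the following is a small Qiskit sketch of ours of a circuit of the kind used in the HAE; the specific gate layout (angle-encoded latent values, one trainable \(R_y\) and one trainable \(R_z\) layer, and a ring of CNOTs) is our assumption, not the exact circuit of [26].

```python
# Our own sketch: the 4 latent values from the classical encoder are
# angle-encoded, processed by trainable rotations plus an entangling ring,
# and the Pauli-Z expectation of each qubit forms the new latent vector.
import numpy as np
from qiskit import QuantumCircuit
from qiskit.circuit import ParameterVector
from qiskit.quantum_info import Statevector, Pauli

n = 4
x = ParameterVector("x", n)              # latent inputs
theta = ParameterVector("theta", 2 * n)  # trainable parameters

qc = QuantumCircuit(n)
for q in range(n):
    qc.ry(x[q], q)                       # data encoding
for q in range(n):
    qc.ry(theta[q], q)                   # trainable rotation layer
for q in range(n):
    qc.cx(q, (q + 1) % n)                # entangling ring
for q in range(n):
    qc.rz(theta[n + q], q)               # second trainable layer

def pqc_latent(latent, params):
    bound = qc.assign_parameters({**dict(zip(x, latent)),
                                  **dict(zip(theta, params))})
    sv = Statevector.from_instruction(bound)
    # <Z_q> per qubit (Qiskit Pauli strings are little-endian)
    return np.array([sv.expectation_value(
        Pauli("I" * (n - 1 - q) + "Z" + "I" * q)).real for q in range(n)])

print(pqc_latent(np.random.rand(n), np.random.rand(2 * n)))
```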
By increasing the dimension of the PQC and the latent space one can further enhance the performance of the HAE.

## IV Conclusion

In this paper we demonstrate the application of anomaly detection to reveal exotic features of a condensed matter system. We consider an Anderson model which shows two kinds of anomalies, i.e. a high transmission at strong impurity strength and a low conductance at weak impurity strength. Here we focus on three different approaches, namely the isolation forest (IF), which is based on the classification scheme random forest, the autoencoder (AE), which is based on a classical neural network, and the hybrid classical-quantum autoencoder (HAE), which is a combination of a classical neural network and a parametric quantum circuit. Unlike classification schemes, here the training is done only on the normal data and the learning algorithm detects the anomalous outcome without any prior knowledge of the anomaly class. The performance of these algorithms is quantified with three different scores, namely precision, recall, and F1 score. Predicting such a highly non-linear outcome is only possible via a neural network, which is also reflected in the individual scores. We also demonstrate that the HAE performs better compared to its classical counterpart (AE) due to its inherent ability to deal with highly non-linear feature mappings [41; 42], which cannot be achieved with a classical circuit of the same dimension. In the context of quantum transport, these anomaly detection schemes can be instrumental in understanding the behaviour of Anderson localisation as well as the formation of solitons in disordered systems. The method we present here is quite generic and can be extended to other systems. For example, in the case of optical lattices, this formalism can be exploited to investigate Anderson localisation of light [46]. For an abstract higher-dimensional phase space, where the input parameters are made of different physical observables such as electronic or chemical properties of a system, this approach can reveal new exotic configurations which cannot be explored with any conventional methods, and would thus be instrumental in uncovering new physics hidden in remote corners of complex phase space.
2310.12909
Collaborative Adaptation: Learning to Recover from Unforeseen Malfunctions in Multi-Robot Teams
Cooperative multi-agent reinforcement learning (MARL) approaches tackle the challenge of finding effective multi-agent cooperation strategies for accomplishing individual or shared objectives in multi-agent teams. In real-world scenarios, however, agents may encounter unforeseen failures due to constraints like battery depletion or mechanical issues. Existing state-of-the-art methods in MARL often recover slowly -- if at all -- from such malfunctions once agents have already converged on a cooperation strategy. To address this gap, we present the Collaborative Adaptation (CA) framework. CA introduces a mechanism that guides collaboration and accelerates adaptation from unforeseen failures by leveraging inter-agent relationships. Our findings demonstrate that CA enables agents to act on the knowledge of inter-agent relations, recovering from unforeseen agent failures and selecting appropriate cooperative strategies.
Yasin Findik, Paul Robinette, Kshitij Jerath, S. Reza Ahmadzadeh
2023-10-19T17:00:09Z
http://arxiv.org/abs/2310.12909v1
# Collaborative Adaptation: Learning to Recover from Unforeseen Malfunctions in Multi-Robot Teams

###### Abstract

Cooperative multi-agent reinforcement learning (MARL) approaches tackle the challenge of finding effective multi-agent cooperation strategies for accomplishing individual or shared objectives in multi-agent teams. In real-world scenarios, however, agents may encounter unforeseen failures due to constraints like battery depletion or mechanical issues. Existing state-of-the-art methods in MARL often recover slowly - if at all - from such malfunctions once agents have already converged on a cooperation strategy. To address this gap, we present the Collaborative Adaptation (CA) framework. CA introduces a mechanism that guides collaboration and accelerates adaptation from unforeseen failures by leveraging inter-agent relationships. Our findings demonstrate that CA enables agents to act on the knowledge of inter-agent relations, recovering from unforeseen agent failures and selecting appropriate cooperative strategies.

## I Introduction

Multi-robot\({}^{*}\) scenarios are commonly encountered in various domains, including search & rescue operations [1], autonomous driving [2, 3], and logistics & transportation [4]. The coordination and cooperation between agents are essential in these scenarios, enabling them to achieve shared or individual goals [5]. They become particularly crucial when addressing unexpected malfunctions that robots may experience, such as battery failure leading to immobilization or rotation failure restricting movement to a single direction. It is imperative for agents to cooperate effectively with each other, autonomously recover from such failures promptly, and adapt their strategies as a team to overcome the challenges arising from agent malfunction(s).

Footnote \({}^{*}\): The terms 'robot' and 'agent' are used interchangeably throughout this paper.

Footnote \({}^{1}\): PeARL Lab, Richard Miner School of Computer and Information Sciences, University of Massachusetts Lowell, MA, USA. [email protected], [email protected]

Footnote \({}^{2}\): Department of Electrical and Computer Engineering, University of Massachusetts Lowell, MA, USA. [email protected]

Footnote \({}^{3}\): Department of Mechanical Engineering, University of Massachusetts Lowell, MA, USA. [email protected]

Within the field of Multi-Agent Reinforcement Learning (MARL) for cooperative tasks, the Centralized Training with Decentralized Execution (CTDE) paradigm has emerged as a prominent approach. It effectively addresses a range of cooperative challenges, including the curse of dimensionality [6, 7], non-stationarity [5], and global exploration [8]. Despite their impressive performance in coordination tasks, CTDE-based approaches suffer from a notable drawback: slow adaptation to unexpected agent failures. This issue arises from two primary factors. Firstly, these approaches lack explicit mechanisms to handle such unpredictable failure cases. Secondly, they do not incorporate features that promote enhanced collaboration between agents, resulting in a slower adaptation process where the model must independently discover which collaboration strategies to pursue after learning new ones. In this paper, we introduce a novel algorithm that extends the CTDE paradigm. Our algorithm leverages a relational network [9] to capture the relative importance assigned by agents to one another, enabling faster adaptation in the face of unexpected robot failures.
To evaluate the effectiveness of our method, we experimented in a multi-robot environment, focusing on a cooperative task with simulated random malfunctions. We compared our approach to the state-of-the-art, Value Decomposition Networks (VDN) [10]. The findings of our study demonstrate that our proposed approach facilitates effective cooperation within a multi-robot team, enabling faster adaptation to unforeseen malfunctions through the utilization of relational networks. ## II Related Work In recent years, Multi-Agent Reinforcement Learning (MARL) has emerged as a prominent research area, particularly in cooperative settings. Numerous approaches have been explored to enable effective collaboration among agents in pursuit of a common objective. One widely studied approach is fully centralized learning, where a single controller is shared among all agents, allowing them to learn a joint policy or value function collectively [11]. Despite its potential advantages, fully centralized learning can be computationally demanding and face intractability challenges due to the exponential growth of the observation and action space as the number of agents increases. An alternative strategy in MARL is fully decentralized learning, where each agent independently learns its own policy. The cooperative behavior then emerges from the application of these learned policies within the environment. For example, Independent Q-Learning (IQL) [12] employs separate action-value tables for each agent, utilizing Q-learning as the underlying learning mechanism. To address the limitations of tabular Q-learning in high-dimensional state and action spaces, the IQL framework was later extended to incorporate function approximation techniques [13]. However, independent learning approaches in multi-agent settings are prone to non-stationarity issues, which arise from the changing actions of other agents as perceived by a given agent. Due to the violation of the Markov property in non-stationary environments, the convergence of decentralized algorithms based on Q-learning cannot be guaranteed [14]. In cooperative MARL scenarios, the limitations of fully centralized and fully decentralized learning approaches have led to the development of a novel paradigm known as Centralized Training with Decentralized Execution (CTDE) [15]. CTDE enables individual agents to execute their actions autonomously while leveraging a centralized mechanism to integrate their strategies, thereby facilitating effective coordination and alignment towards a common objective. By employing centralized training, CTDE effectively addresses the challenge of non-stationarity in decentralized learning, while also overcoming the scalability challenges associated with centralized learning through decentralized execution. This paradigm has been implemented using two main approaches: policy-based and value-based methods. Policy-based methods such as Multi-Agent Deep Deterministic Policy Gradient (MADDPG) [16] and Multi-Agent Proximal Policy Optimization (MAPPO) [17] incorporate a critic that takes into account the global observations of all agents. On the other hand, value-based techniques including Value Decomposition Networks [10], QMIX [18], and QTRAN [19] enhance Q-Learning by incorporating a centralized function that calculates the joint Q-value based on the individual action-values of each agent. 
These approaches have demonstrated effectiveness in addressing challenges related to multi-agent coordination and have shown superior performance across a range of scenarios. Existing research in cooperative MARL has primarily focused on achieving optimal solutions, ranging from fully centralized learning to the CTDE paradigm. However, when unforeseen failures occur during the execution of learned behaviors, these approaches may not promptly adapt the agents' policies. One possible approach to recover from robot malfunctions is to predict the malfunctioning robot and its timing by enabling agents to estimate the actions of other agents. The concept of LOLA [20] can be leveraged to improve performance in such cases. However, when malfunctions or failures of the agents are not predictable, the challenge lies in enhancing the agents' adaptation capability. One approach to address this challenge is to guide the agents on how to cooperate under the current environmental circumstances, enabling them to make faster policy changes. In this study, we propose a novel framework to enhance collaborative adaptation by steering the agents' behavior [21] in scenarios where unexpected agent malfunction(s) occur. Our framework focuses on considering the inter-agent relationships, represented as a relational network, which captures the importance agents place on each other. By leveraging this relational network, agents can quickly adapt their learned behaviors to overcome unpredictable failures of their teammates. We specifically explore this concept using the VDN approach, a fast and powerful CTDE method for learning cooperative behaviors. Yet, it is crucial to emphasize that our framework of utilizing relationships to address unforeseen malfunctions can also be extended to other CTDE methods.

## III Background

### _Markov Decision Process_

We characterize a decentralized Markov decision process as a tuple \(\langle\mathcal{S},\mathcal{A},\mathcal{R},\mathcal{T},\gamma\rangle\), where \(s\in\mathcal{S}\) indicates the true state of the environment, the joint sets of individual actions and rewards are represented by \(\mathcal{A}\coloneqq\{a_{1},a_{2},\ldots,a_{n}\}\) and \(\mathcal{R}\coloneqq\{r_{1},r_{2},\ldots,r_{n}\}\), respectively, \(\mathcal{T}(s,A,s^{\prime})\colon\mathcal{S}\times\mathcal{A}\times\mathcal{ S}\mapsto[0,1]\) is the dynamics function defining the transition probability, \(n\) is the number of agents, and \(\gamma\in[0,1)\) is the discount factor.

### _Value Function Factorization_

Value function factorization methods, which our proposed method builds upon, adhere to the CTDE paradigm. These methods successfully tackle the non-stationarity issue in decentralized learning by employing centralized training, and effectively address the scalability problem in centralized learning by adopting decentralized execution. Notably, QMIX [18] and VDN [10] serve as exemplary approaches to factorizing value functions. QMIX and VDN both maintain a separate action-value for each agent \(i\in\{1,...,n\}\), defined as \(Q_{i}(s,a_{i})=\mathbb{E}[G|\mathcal{S}=s,\mathcal{A}=a_{i}]\), where \(G\) denotes the return. They merge these individual \(Q_{i}\) values to obtain the central action value \(Q_{\text{total}}\) using monotonicity and additivity.
Specifically, VDN sums the \(Q_{i}\)s to obtain \(Q_{\text{total}}\), as \[Q_{\text{total}}=\sum_{i=1}^{n}Q_{i}(s,a_{i}),\] while QMIX combines them using a state-dependent continuous monotonic function, as follows: \[Q_{\text{total}}=f_{s}(Q_{1}(s,a_{1}),...,Q_{n}(s,a_{n})),\] where \(\frac{\partial f_{s}}{\partial Q_{i}}\geq 0,\forall i\in\{1,...,n\}\). These value function factorization methods rely on a Deep Q-Network (DQN) [22] to approximate the action-value function \(\hat{Q}_{i}(s,a_{i},\theta_{i})\), where \(\theta_{i}\) is the weight vector. DQN is advantageous compared to tabular Q-learning as it can effectively handle high-dimensional state and action spaces by utilizing deep learning techniques. However, training a DQN presents significant challenges due to instability and divergence resulting from updating the Q-network parameters in each step, violating the assumption of independently and identically distributed (i.i.d.) data points. To tackle these challenges, Mnih et al. [22] introduced techniques such as experience replay and fixed Q-target networks, which have now become standard in various deep reinforcement learning algorithms. In brief, these value function factorization methods commonly utilize two deep Q-networks for each Q-function (i.e., for each agent), namely the Prediction Neural Network (P-NN) and the fixed Target Neural Network (T-NN), which is essentially a copy of the P-NN from a previous iteration. Additionally, a replay memory is employed to store a large number of transitions experienced by the agent during its interactions with the environment. Each transition consists of a tuple \(\langle s,a,r,s^{\prime}\rangle\). To train the P-NN, a batch of transitions of size \(b\) is sampled from memory, and the Temporal Difference (TD) error is calculated between \(\hat{Q}_{\text{total}}^{\text{target}}\) and \(\hat{Q}_{\text{total}}^{\text{prediction}}\), as follows: \[e_{\text{TD}}=\sum_{i=1}^{b}[r_{\text{team}}+\gamma\max_{u^{\prime}}(\hat{Q}_{ \text{total}}(s^{\prime},u^{\prime},\theta_{t}))-\hat{Q}_{\text{total}}(s,u, \theta_{p})], \tag{1}\] where \(r_{\text{team}}\) is defined as the sum of the rewards obtained by the agents, each with equal weight, \(u\) denotes the joint action of the agents, \(\theta_{p}\) represents the weights of the P-NN, and \(\theta_{t}\) indicates the weights of the T-NN, which are regularly updated with \(\theta_{p}\). The weights \(\theta_{p}\) are updated using an optimizer to minimize \(e_{\text{TD}}\). This process facilitates the coordination of agent actions towards maximizing the team reward. As a result, the key aspect of the CTDE paradigm becomes evident: the agent networks are trained using a centralized \(Q_{\text{total}}\), while each agent's actions are determined by its own neural network, resulting in decentralized execution.

## IV Proposed Method

In cooperative MARL, different team structures often result in multiple solutions of varying optimality. Value factorization methods and similar approaches aim to maximize team rewards and converge towards one of several solutions, potentially achieving the global optimum. The stochastic nature of agents' exploration can influence convergence towards a specific team behavior, particularly when multiple cooperation strategies exist with the same maximum total reward. However, in real-world scenarios, individual robots may encounter unexpected malfunctions (e.g., battery failure, rotation failure, etc.)
after their policies have converged to a particular cooperative strategy, posing challenges for learning and adapting to new strategies without a deep understanding of the team structure. To overcome these challenges, it would be beneficial to have a mechanism that considers inter-agent relationships and prioritizes assisting malfunctioning agents. This mechanism could improve team performance or accelerate adaptation by guiding the agents' behavior towards either helping the malfunctioning agent solve its task or completing the task on its behalf. Unfortunately, current cooperative MARL algorithms lack such a mechanism, making it more difficult and time-consuming to adapt to unforeseen malfunctions. To address this issue, we propose a novel framework called Collaborative Adaptation (CA). The CA framework enables agents to comprehend inter-agent relationships and select a cooperative strategy accordingly, allowing them to handle the adaptation to new environmental settings collaboratively. In our research, we explore and study this framework using the VDN algorithm, referred to as CA-VDN, due to its simplicity and effectiveness as a cooperative behavior learning approach. The proposed framework employs a relational network in the form of a directed graph \(\mathcal{G}=(\mathcal{V},\mathcal{E},\mathcal{W})\) to represent the relationships between agents. In this graph, each agent \(i\in\{1,...,n\}\) is represented as a vertex \(v_{i}\), \(\mathcal{E}\) denotes the set of directed edges \(e_{ij}\) directed from \(v_{i}\) to \(v_{j}\), and the weights of these edges are captured in the matrix \(\mathcal{W}\), with elements \(w_{ij}\in[0,1]\) assigned to each edge. The direction and weight of the edges in the graph signify the importance or vested interest that agent \(i\) places on the outcomes for agent \(j\). Moreover, the framework extends the MDP to \(\langle\mathcal{S},\mathcal{A},\mathcal{R},\mathcal{T},\mathcal{G},\gamma\rangle\) in order to incorporate \(\mathcal{G}\), and the team reward \(r_{\text{team}}\) used in (1) is calculated based on the relational network, as follows: \[r_{\text{team}}=\sum_{i\in\mathcal{V}}\sum_{j\in\mathcal{E}_{i}}w_{ij}r_{j}, \tag{2}\] where \(\mathcal{E}_{i}\) denotes the set of vertex indices that have an edge directed from \(v_{i}\), and \(r_{j}\) is the reward of the agent represented by \(v_{j}\). This allows the agents to follow a cooperative strategy that assists the malfunctioning agent, since they place extra importance on its reward. The pseudo-code for the CA framework can be found in Algorithm 1, after the short sketch below.
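As a concrete illustration of Equation (2), here is a minimal sketch of ours (not the authors' code) computing the relational team reward with NumPy; the weight values are hypothetical.

```python
# Our own sketch of the relational team reward in Eq. (2):
# W[i][j] is the weight agent i places on agent j's outcome.
import numpy as np

def team_reward(W, r):
    """r_team = sum_i sum_{j in E_i} w_ij * r_j, with w_ij = 0 for absent edges."""
    W, r = np.asarray(W), np.asarray(r)
    return float((W @ r).sum())

# Self-interested network (identity): reduces to the plain VDN team reward.
r = np.array([1.0, -1.0, 0.5, 2.0])
print(team_reward(np.eye(4), r))   # 2.5
# After a detected malfunction of agent 0, the others add weight on its reward.
W = np.eye(4); W[1:, 0] = 1.0
print(team_reward(W, r))           # 2.5 + 3 * r[0] = 5.5
```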
```
input : P-NN, \(\hat{Q}^{\text{prediction}}\); T-NN, \(\hat{Q}^{\text{target}}\); relational network, \(G\); batch size, \(b\);
        number of iterations for updates, \(m\); update frequency of T-NN, \(k\)
1   foreach episode do
2       Initialize \(s\)
        foreach step of episode do
3           Choose \(a\) from \(s\) using policy derived from \(\hat{Q}^{\text{prediction}}\) (with \(\varepsilon\)-greedy)
4           Take action \(a\), observe \(r\), \(s^{\prime}\)
5           Store \(s\), \(a\), \(r\), \(s^{\prime}\) in memory
6           \(s\gets s^{\prime}\)
7       for \(i=1,\ldots,m\) do
8           \(S\), \(A\), \(R\), \(S^{\prime}\leftarrow\) sample chunk, size of \(b\), from memory
9           \(Q^{\text{prediction}}_{\text{values}}\leftarrow\hat{Q}^{\text{prediction}}(S)\)
10          \(Q^{\text{prediction}}\leftarrow\) action \(A\) of \(Q^{\text{prediction}}_{\text{values}}\) of every agent in every sample
11          \(Q^{\text{prediction}}_{\text{total}}\leftarrow\) sum \(Q^{\text{prediction}}\) per sample
12          \(Q^{\text{target}}_{\text{values}}\leftarrow\hat{Q}^{\text{target}}(S^{\prime})\)
13          \(Q^{\text{target}}\leftarrow\) max of \(Q^{\text{target}}_{\text{values}}\) of every agent in every sample
14          \(Q^{\text{target}}_{\text{total}}\leftarrow\) sum \(Q^{\text{target}}\) per sample
15          \(R^{\text{team}}\leftarrow\) use (2) with \(G\) and \(R\)
16          \(loss\leftarrow\) use (1) with \(R^{\text{team}}\), \(Q^{\text{target}}_{\text{total}}\), \(Q^{\text{prediction}}_{\text{total}}\)
17          Backpropagate the loss to the parameters of \(\hat{Q}^{\text{prediction}}\)
18      Update the parameters of \(\hat{Q}^{\text{target}}\) with the parameters of \(\hat{Q}^{\text{prediction}}\) every \(k^{\text{th}}\) episode
```
**Algorithm 1** Collaborative Adaptation

To identify malfunctioning agents, determine when these malfunctions occur, and facilitate changes in the inter-agent relations to support these agents, a mechanism - the malfunction trigger - has been implemented to track individual agents' rewards. The underlying assumption is that agents may experience malfunctions after converging on a specific behavior, highlighting the challenges faced by existing cooperative MARL algorithms in altering already converged behaviors in response to unpredictable failures. When the malfunction trigger observes a significant decrease in an individual agent's reward over a certain number of episodes, it signals this information to the framework, indicating the presence of malfunctioning agent(s). Upon receiving this information, the framework dynamically adjusts the relational network, leading the other agents to assign importance to the malfunctioning agent(s) by modifying the weights of the corresponding edges. Additionally, the framework resets the exploration process to allow the agents to discover new cooperative strategies based on the updated relationships. It is important to note that, when comparing results with other methods, the exploration parameter of those methods is also reset to enable them to explore anew.

## V Experiments

### _Environment_

To evaluate the effectiveness of the proposed approach in influencing agents' behaviors and enhancing their adaptation to unforeseen failures of the agent(s), we conducted experiments using the CA-VDN and VDN algorithms in a multi-agent grid-world environment. The environment is represented as a 4x4 grid with four agents and four un-dedicated resources, as illustrated in Fig. 1(a). In this environment, the objective of each episode is for the agents to consume all the resources by visiting their respective locations.
To achieve this, the agents have five possible actions: move up, down, left, right, or stay idle. Additionally, they can engage in a special action called _push_, which allows them to push adjacent agents, provided that the pushing agent takes a non-idle action towards the pushed agent, who must be idle. As a result of a _push_, the pushing agent remains in place while the other agent moves one space in the direction of the push. Upon successfully consuming a resource, the consumer agent receives a reward of \(+10\), and each resource can only be consumed once. Nevertheless, each agent incurs an individual penalty of \(-1\) for every time-step per unconsumed resource, except when it occupies a resource location, which serves as a safe spot. The episode terminates either when all the resources are consumed or when the maximum number of time steps is reached. We intentionally designed this environment to be solvable by VDN while also highlighting the challenges that unexpected malfunctions can bring, even in a seemingly simple setting. Furthermore, our goal is to showcase how the integration of relationships between agents into the learning process can effectively overcome these challenges. ### _Models and Hyperparameters_ In our experimental setup, we utilized a Multi-Layer Perceptron (MLP) with two hidden layers, each containing \(128\) neurons and using the ReLU activation function. To train each agent's prediction model, we conducted \(m=10\) iterations per episode, using batches of size \(b=32\) randomly sampled from a replay memory with a capacity of \(50\)k time-steps. The optimization was performed using the _Adam_ optimizer with a learning rate of \(0.001\), and the squared TD-error served as the loss function. To maintain stability during training, we updated the weights of the target network with the prediction network's weights every \(k=200\) episodes. For exploration, we employed the \(\varepsilon\)-greedy method, with \(\varepsilon\) linearly decreasing over time. Lastly, we set the discount factor (\(\gamma\)) to 0.99 to account for future rewards in the reinforcement learning process. ### _Results & Discussion_ The experimental results, presented in Fig. 2 and Fig. 3, show the average training reward over \(10\) runs, represented by the shaded regions, as well as the average test rewards of the agents, indicated by the solid lines. The test rewards are evaluated based on a greedy strategy, interrupting the training process every \(50\) episodes to assess individual agent rewards. During each run, at the five thousandth episode, we simulate a malfunction that prevents the green agent (refer to Fig. 1(a)) from moving.

\begin{table} \begin{tabular}{c|c c|c c} \hline \hline & \multicolumn{2}{c|}{Before Malfunction} & \multicolumn{2}{c}{After Malfunction} \\ \cline{2-5} & VDN & CA-VDN & VDN & CA-VDN \\ \hline Blue Agent & 5.80±0.25 & 5.50±0.64 & -74.20±18.14 & **6.90±0.19** \\ Red Agent & 5.50±0.50 & 5.70±0.40 & -63.60±02.26 & **9.90±0.19** \\ Orange Agent & 5.20±0.67 & 5.40±0.79 & -66.60±21.47 & **9.70±0.40** \\ Green Agent & 5.70±0.40 & 5.60±0.74 & -35.70±26.60 & **10.50±0.93** \\ \hline \hline \end{tabular} \end{table} TABLE I: Average reward with 95% confidence intervals for ten runs after training is completed.

Fig. 1: (a) The multi-agent grid-world environment with four agents. (b-c) Relational networks employed in CA-VDN.

Fig. 2: Results before malfunction. (a) VDN, (b) CA-VDN with the relational network in Fig. 1(b).
It is crucial to highlight that this malfunction was not anticipated by the algorithms. During the initial phase, when all agents are fully functional, both VDN and CA-VDN, the latter incorporating the relational network illustrated in Fig. 1(b), demonstrate comparable performance. This similarity is evident in the agents' average rewards, as depicted in Fig. 2(a) and Fig. 2(b), respectively, and they eventually converge to the same behavior. The resemblance in performance arises from the fact that in VDN, each agent contributes its reward equally to the team's overall reward, aligning with the utilization of the self-interested relational network. Essentially, the agents share the resources among themselves based on their proximity to those resources and subsequently consume them. The similarity in the individual rewards of both algorithms after the initial phase concludes is demonstrated in Table I. At the five thousandth episode, our framework's malfunction trigger detects the occurrence of a malfunction. As VDN lacks a support mechanism that considers inter-agent relationships, the only available option is to reset the \(\varepsilon\) value to increase exploration. Despite this attempt, as shown in Fig. 3(a), VDN still faces challenges in recovering from malfunction scenarios. On the other hand, during the reset of \(\varepsilon\), we also change the applied relational network from Fig. 1(b) to Fig. 1(c). This alteration ensures that the other agents place importance on the malfunctioning agent. As a result, the agents can adapt faster to the new condition, as indicated in Fig. 3(b). To evaluate the effectiveness, we present the numeric results of each agent's individual reward in Table I, both before and after the malfunction. Overall, it is essential to highlight that agents trained with CA-VDN can learn together to recover from unforeseen malfunctions, a capability that VDN lacks even after 20k episodes following the occurrence of a malfunction. ## VI Conclusion and Future Work We propose a novel framework that incorporates inter-agent relationships into agents' learning, enabling agents to recover from unforeseen malfunctions as a team. Our experiments validated the effectiveness of our approach in adapting faster to the environment in the face of unexpected robot failures. As a next step, we aim to conduct additional experiments in more complex environments that involve multiple agents with different malfunctions and to compare the performance of our algorithm with other state-of-the-art methods. ## Acknowledgement This work is supported in part by NSF (IIS-2112633) and the Army Research Lab (W911NF20-2-0089).
2303.12831
Observation of non-Hermitian edge burst in quantum dynamics
The non-Hermitian skin effect, by which the eigenstates of the Hamiltonian are predominantly localized at the boundary, has revealed a strong sensitivity of non-Hermitian systems to the boundary condition. Here we experimentally observe a striking boundary-induced dynamical phenomenon known as the non-Hermitian edge burst, which is characterized by a sharp boundary accumulation of loss in non-Hermitian time evolutions. In contrast to the eigenstate localization, the edge burst represents a generic non-Hermitian dynamical phenomenon that occurs in real time. Our experiment, based on photonic quantum walks, not only confirms the prediction of the phenomenon, but also unveils its complete space-time dynamics. Our observation of edge burst paves the way for studying the rich real-time dynamics in non-Hermitian topological systems.
Lei Xiao, Wen-Tan Xue, Fei Song, Yu-Min Hu, Wei Yi, Zhong Wang, Peng Xue
2023-03-22T18:00:02Z
http://arxiv.org/abs/2303.12831v1
# Observation of non-Hermitian edge burst in quantum dynamics ###### Abstract The non-Hermitian skin effect, by which the eigenstates of the Hamiltonian are predominantly localized at the boundary, has revealed a strong sensitivity of non-Hermitian systems to the boundary condition. Here we experimentally observe a striking boundary-induced dynamical phenomenon known as the non-Hermitian edge burst, which is characterized by a sharp boundary accumulation of loss in non-Hermitian time evolutions. In contrast to the eigenstate localization, the edge burst represents a generic non-Hermitian dynamical phenomenon that occurs in real time. Our experiment, based on photonic quantum walks, not only confirms the prediction of the phenomenon, but also unveils its complete space-time dynamics. Our observation of edge burst paves the way for studying the rich real-time dynamics in non-Hermitian topological systems. Non-Hermitian physics has attracted increasing attention in a vast variety of contexts ranging from classical waves to open quantum systems [1; 2]. Intriguingly, the spatial boundary plays a much more dramatic role in non-Hermitian systems than in Hermitian ones. In particular, for certain non-Hermitian systems, the eigenstates concentrate predominantly at the boundary, which is known as the non-Hermitian skin effect (NHSE) [3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14]. Among many other consequences, it implies a fundamental revision of the principle of bulk-boundary correspondence [11; 12]. Whereas the NHSE has revealed intriguing static properties such as novel behaviors of eigenstates and energy spectra, in this work we unveil a striking dynamic boundary effect in non-Hermitian systems. We experimentally observe that in a class of lossy quantum walks of single photons, the loss rate is drastically enhanced at the boundary. Specifically, for a lossy particle initially located at a position far from the boundary of a lattice system, the space-resolved loss has a surprisingly high boundary peak, in sharp contrast to the common expectation that the particle loss should decay away from the initial position. Remarkably, the relative height of the edge peak even grows as the distance between the initial position and boundary increases. This striking phenomenon, dubbed non-Hermitian edge burst, has been predicted in recent theories [15; 16]. Since both the NHSE and edge burst involve boundary localization, it is tempting to attribute the latter to the former. However, it turns out that NHSE does not guarantee the emergence of edge burst. The closing of the gap of the imaginary part of the energy spectrum (i.e., the imaginary gap or dissipative gap) is the other necessary condition, which highlights the rich implication of spectral profile and topology in non-Hermitian systems [9; 10]. At a deeper level, a novel dynamic bulk-edge scaling relation has been suggested as the origin of edge burst [15]. Thus, the edge burst signifies an unprecedented interplay between non-Hermitian topological physics and non-Hermitian dynamical phenomena. _Lossy quantum walk.--_To study the non-Hermitian edge burst, we design a one-dimensional quantum walk [17; 18; 19; 20] with the Floquet operator \[U=R\left(\frac{\theta_{2}}{2}\right)SR\left(\frac{\theta_{1}}{2}\right)L(\gamma).
\tag{1}\] The shift operator \(S=\sum_{x}|x-1\rangle\langle x|\otimes|0\rangle\langle 0|+|x+1\rangle\langle x|\otimes|1\rangle\langle 1|\), so that the walker's position is shifted from the site \(x\) to \(x-1\) or \(x+1\) according to the coin state \(|0\rangle\) or \(|1\rangle\). The coin state is rotated along the \(y\) axis by \(R(\theta)=\mathds{1}_{\mathrm{w}}\otimes e^{-i\theta\sigma_{y}}\), where \(\mathds{1}_{\mathrm{w}}=\sum_{x}|x\rangle\langle x|\) is the identity operator. The operator \(L(\gamma)=\mathds{1}_{\mathrm{w}}\otimes\begin{pmatrix}1&0\\ 0&e^{-2\gamma}\end{pmatrix}\) generates a state-selective loss. For our photonic platform, it is more convenient to create a domain wall instead of an open boundary [see Fig. 1(a)]. The left (L) and right (R) regions are characterized by coin parameters \(\theta_{1,2}^{L}\) and \(\theta_{1,2}^{R}\), respectively. The dynamics of the non-Hermitian quantum walk follows \[|\psi(t)\rangle=U^{t}|\psi(0)\rangle, \tag{2}\] where \(|\psi(0)\rangle\) is the initial state and \(t\) is the integer discrete time. One can also define an effective non-Hermitian Hamiltonian \(H_{\mathrm{eff}}\) by \(U=\exp(-iH_{\mathrm{eff}})\), which shares the same eigenstates as \(U\). The Floquet operator \(U\) defined in Eq. (1) and the associated \(H_{\mathrm{eff}}\) exhibit the NHSE, which originates from the state-dependent directional hoppings built in the model (akin to Refs. [11; 21]). In the presence of a domain wall [Fig. 1(a)], all the eigenstates of \(U\) exhibit localization at the domain wall when the non-Hermiticity is nonzero, i.e., \(\gamma\neq 0\). Accordingly, the generalized Brillouin zone (GBZ) deviates from the unit circle [see Figs. 2(a) and (b)] [3; 22; 23]. Here, we focus on two sets of parameters, \(\theta_{2}^{R}=0.12\pi\) and \(\theta_{2}^{R}=0.48\pi\), with other parameters fixed as \(\theta_{1,2}^{L}=0.85\pi\), \(\theta_{1}^{R}=0.12\pi\), and \(\gamma=0.8\). In Figs. 2(c) and (d), we show the energy spectrum of \(H_{\mathrm{eff}}\), which clearly indicates that the imaginary gap (the gap between \(0\) and the maximum imaginary part of the spectrum) is zero for \(\theta_{2}^{R}=0.12\pi\) but nonzero for \(\theta_{2}^{R}=0.48\pi\). In fact, the imaginary gap vanishes along the lines \(\theta_{1}=2\pi n\pm\theta_{2}\) (\(n\in\mathbb{Z}\)) (see Supplementary Information). _Observation of edge burst.--_In our experiment, a walker is initialized at a site \(x_{0}\), which evolves under Eq. (2) in discrete time steps. The key quantity for edge burst is the probability \(P(x)\) that the walker escapes from the position \(x\). In practice, one can measure the space-time-resolved loss \(p(x,t)\) from \(t=1\) to \(t=T\), with \(T\) being a large integer so that the loss is almost complete. The sum over \(t\) then gives \[P(x)=\sum_{t=1}^{T}p(x,t). \tag{3}\] According to the specific form of loss adopted here, we have \[p(x,t)=(1-e^{-4\gamma})\left|\langle 1|\otimes\langle x|\psi(t-1)\rangle\right|^{2}. \tag{4}\] It may also be written as \(p(x,t)=\left|\langle 1|\otimes\langle x|M|\psi(t-1)\rangle\right|^{2}\) with \(M=\mathds{1}_{\rm w}\otimes\begin{pmatrix}0&0\\ 0&\sqrt{1-e^{-4\gamma}}\end{pmatrix}\), which can be implemented by a partial measurement via the PPBS [see Fig. 1(a)] at the time step \(t\). We also define a time-dependent total loss probability \[P(t)=\sum_{t^{\prime}=1}^{t}\sum_{x}p(x,t^{\prime}), \tag{5}\] so that the survival probability after a \(t\)-step evolution is \(1-P(t)\).
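As a numerical companion to Eqs. (1)-(5), the following sketch simulates the lossy walk on a finite lattice with the paper's parameters. The assignment of the left-region coin angles to sites \(x\le-1\) is an assumed convention, and the lattice is simply taken wide enough that its edges are never reached within \(T\) steps.

```python
import numpy as np

# Lattice x = -12..28; wide enough that a T=14-step walk from x0=10 never
# reaches the edges, so the open ends of the finite array are immaterial.
xs = np.arange(-12, 29)
N, T, gamma, x0 = len(xs), 14, 0.8, 10
thL1 = thL2 = 0.85 * np.pi               # left-region coin parameters
thR1, thR2 = 0.12 * np.pi, 0.12 * np.pi  # right region; theta_2^R = 0.12*pi (edge-burst case)

def coin(theta):
    # exp(-i*theta*sigma_y) is the real rotation [[cos, -sin], [sin, cos]]
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def apply_R(psi, thL, thR):
    out = np.empty_like(psi)
    for i, x in enumerate(xs):           # assumed wall convention: left region is x <= -1
        out[i] = coin((thL if x <= -1 else thR) / 2) @ psi[i]
    return out

def apply_S(psi):
    out = np.zeros_like(psi)
    out[:-1, 0] = psi[1:, 0]             # coin |0>: x -> x - 1
    out[1:, 1] = psi[:-1, 1]             # coin |1>: x -> x + 1
    return out

psi = np.zeros((N, 2), dtype=complex)    # psi[i, c]: amplitude at site xs[i], coin c
psi[np.where(xs == x0)[0][0], 0] = 1.0   # initial state |x0> (x) |0>

p = np.zeros((T, N))
for t in range(T):
    p[t] = (1 - np.exp(-4 * gamma)) * np.abs(psi[:, 1]) ** 2  # p(x, t+1), Eq. (4)
    psi[:, 1] *= np.exp(-2 * gamma)                           # loss L(gamma)
    psi = apply_R(apply_S(apply_R(psi, thL1, thR1)), thL2, thR2)  # remainder of U, Eq. (1)

P_x = p.sum(axis=0)                      # P(x), Eq. (3); a sharp peak appears at the wall, x = -1
P_t = np.cumsum(p.sum(axis=1))           # P(t), Eq. (5)
print("P(x=-1) =", P_x[xs == -1][0], " total loss P(T) =", P_t[-1])
```

Setting \(\theta_{2}^{R}=0.48\pi\) instead should suppress the wall peak in \(P(x)\), mirroring the contrast between Figs. 2(e) and (f).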
In our quantum-walk platform, \(p(x,t)\) can be readily extracted from photon-number measurements (see Methods), and \(P(x)\), \(P(t)\) can be obtained from Eqs. (3) and (5). We implement a 14-step (\(T=14\)) quantum walk with initial walker location \(x_{0}=10\). The space-resolved loss probability \(P(x)\) is shown in Figs. 2(e) and (f) for the aforementioned two sets of parameters. In both (e) (\(\theta_{2}^{R}=0.12\pi\)) and (f) (\(\theta_{2}^{R}=0.48\pi\)), we observe that the loss probability initially decays away from \(x_{0}\). Moreover, the \(P(x)\) profile is asymmetric around \(x_{0}\), which can be naturally attributed to the NHSE. The surprising feature is an exceptionally high peak emerging at the domain wall in Fig. 2(e). Intuitively, one may resort to the NHSE to explain this edge burst. However, the NHSE is also strong for the parameters of Fig. 2(f), yet the edge burst is not seen there. Therefore, the origin of edge burst cannot be explained by the NHSE alone. In fact, the imaginary gap plays an essential role here [15]. The corresponding imaginary gap, shown in Figs. 2(c) and (d), is zero and nonzero for Figs. 2(e) and (f), respectively. To unveil the space-time profile of the walker's loss, we plot \(p(x,t)\) for the above two sets of parameters. Figs. 2(g) and (h) show that the walker propagates almost ballistically with concurrent loss along the trajectory. In the case of edge burst, a large loss peak in \(p(x,t)\) emerges when the walker hits the domain wall. It also indicates that the burst occurs around a particular time, before which it is indiscernible.

Figure 1: Experimental implementation. (a) The domain-wall geometry of the non-Hermitian quantum walk. The operations of \(S,R,L\) contained in \(U\) are pictorially shown. (b) Experimental setup. Photon pairs are created by the spontaneous parametric down conversion process in a type-II cut PPKTP crystal. One of the photons is injected into the quantum-walk interferometric network, and the other is used as the trigger. The walker photon passes the polarizing beam splitter (PBS) and the half-wave plate (HWP), so that its polarization is prepared in the coin state \(|0\rangle\). It then undertakes the quantum walk through the network containing partially polarizing beam splitters (PPBSs), HWPs, and beam displacers (BDs). Finally, avalanche photodiodes (APDs) are used to detect the walker photons that coincide with the trigger photons.

Furthermore, we vary the initial position \(x_{0}=5,6,7,8,9,10\) and measure the time-dependent loss probability \(P(t)\). As shown in Fig. 3(a), for \(\theta_{2}^{R}=0.12\pi\) (with edge burst), \(P(t)\) suddenly increases near the domain wall. In contrast, in Fig. 3(b), for \(\theta_{2}^{R}=0.48\pi\) (without edge burst), \(P(t)\) increases steadily with \(t\) without sudden change. Similarly, the space-resolved survival probability \(|\psi(x=-1,t)|^{2}\) at the domain wall at each step \(t\) behaves differently with and without the edge burst [see Fig. 3(c)]. The value of \(|\psi(x=-1,t)|^{2}\) is significantly larger in the presence of edge burst. In Fig. 3(d), we show that the edge burst remains robust when the starting position varies. In contrast, when the edge burst is absent, \(P(x)\) decays rapidly as \(x_{0}\) moves away from the domain wall [see Fig. 3(e)].
Figure 2: Edge burst in non-Hermitian quantum walks. The fixed parameters are \(\theta_{1,2}^{L}=0.85\pi\), \(\theta_{1}^{R}=0.12\pi\) and \(\gamma=0.8\). (a)(b) Brillouin zone (BZ) and generalized Brillouin zone (GBZ) for \(\theta_{2}^{R}=0.12\pi\) and \(\theta_{2}^{R}=0.48\pi\). (c)(d) Energy spectra (for the right region in which the walker is initialized) under the periodic boundary condition (PBC) for two indicated values of \(\theta_{2}^{R}\). (e)(f) Experimentally measured \(P(x)\) of a 14-step non-Hermitian quantum walk with the initial state \(|x_{0}=10\rangle\otimes|0\rangle\). (g)(h) The space-time-resolved loss probability \(p(x,t)\) for the two values of \(\theta_{2}^{R}\). Error bars represent the statistical uncertainty under the assumption of Poissonian statistics.

To further characterize the edge burst, we measure the relative height \(P_{\text{domain}}/P_{\text{min}}\), where \(P_{\text{domain}}\equiv P(x=-1)\) is the probability that the photon escapes from the domain wall \(x=-1\), and \(P_{\text{min}}\equiv\min_{x=-1,\cdots,x_{0}}\{P(x)\}\) is the minimum of \(P(x)\) in the interval between the initial location \(x_{0}\) and the domain wall location \(x=-1\). The edge burst is characterized by \(P_{\text{domain}}/P_{\text{min}}\gg 1\), while its absence means that \(P_{\text{domain}}/P_{\text{min}}\) is on the order of unity. As shown in Fig. 3(f), for \(\theta_{2}^{R}=0.48\pi\), the measured relative height remains close to \(1\) as \(x_{0}\) increases. In stark contrast, for \(\theta_{2}^{R}=0.12\pi\), the relative height increases with \(x_{0}\) and fits well with a linear relation \(P_{\text{domain}}/P_{\text{min}}\sim x_{0}\). Thus, the relative height grows as the initial walker position moves away from the domain wall. While counterintuitive, this behavior is a consequence of a novel bulk-edge scaling relation [15]. _Discussions.--_We present the first experimental observation of the non-Hermitian edge burst by using discrete-time non-Hermitian quantum walks of photons. Our experiment not only demonstrates that edge burst originates from the intriguing interplay between two unique non-Hermitian concepts, the NHSE and imaginary gap, but also unveils the real-time dynamics of this phenomenon. The observation of non-Hermitian edge burst paves the way for investigating the real-time dynamics in non-Hermitian topological systems, which remains largely unexplored. From a practical perspective, the edge burst may offer a promising non-Hermitian approach for the on-demand harvesting of light or particles at a prescribed position. **Methods** _Implementation.--_For the experimental implementation, we adopt the scheme of single-photon discrete-time quantum walks illustrated in Fig. 1(b). Photon pairs are created by spontaneous parametric down conversion, where a 20 mm type-II periodically poled potassium titanyl phosphate (PPKTP) crystal is pumped by a 405 nm continuous wave diode laser with a power of 1 mW. One photon serves as a trigger, and the other as a heralded single photon undertaking the quantum walk. The photon polarizations are adopted as the coin state. The walker photon is initialized in the spatial mode \(|x_{0}\rangle\) with the internal state \(|0\rangle\), i.e. \(|\psi(0)\rangle=|x_{0}\rangle\otimes|0\rangle\). The localized initial state is prepared by passing the walker photons through a half-wave plate (HWP) and a polarizing beam splitter (PBS).
For the quantum-walk dynamics, the shift operator \(S\) is implemented by a beam displacer (BD) whose optical axis is cut such that the vertically polarized photons are directly transmitted and the horizontally polarized photons are laterally displaced into a neighboring mode. The coin rotation \(R(\frac{\theta_{1(2)}}{2})\) is realized by two HWPs at \(0\) and \(\frac{\theta_{1(2)}}{4}\), respectively. The loss operator \(L(\gamma)\) is realized by a partially polarizing beam splitter (PPBS), which completely transmits the coin state \(|0\rangle\) but reflects the coin state \(|1\rangle\) with a probability \(e^{-4\gamma}\). Avalanche photodiodes (APDs) are then used to detect the walker photons coinciding with the trigger photons. The total number of coincidences is approximately \(23000\). The measurements are based on photon-number counting. The space-time-resolved probability \(p(x,t)\) can be calculated from the photon numbers through \[p(x,t)=\frac{N(x,t)}{\sum_{x^{\prime}}N^{\prime}(x^{\prime},t)+\sum_{t^{\prime}=1}^{t}\sum_{x^{\prime}}N(x^{\prime},t^{\prime})}, \tag{6}\] where \(N(x,t)\) is the number of photons escaping from the position \(x\) at the time step \(t\), and \(N^{\prime}(x,t)\) is the number of remaining photons at \(x\) after a \(t\)-step evolution. Finally, the space-resolved survival probability at \(x\) can be calculated as \[|\psi(x,t)|^{2}=\frac{N^{\prime}(x,t)}{\sum_{x^{\prime}}N^{\prime}(x^{\prime},t)+\sum_{t^{\prime}=1}^{t}\sum_{x^{\prime}}N(x^{\prime},t^{\prime})}. \tag{7}\] **Note.** After completing this work, we learned of a related experiment by a team at Southern University of Science and Technology. **Acknowledgments** This work has been supported by the National Natural Science Foundation of China (Grant Nos. 92265209, 12025401, 12125405, 11974331, and 12104036).
2307.16178
On Updating Static Output Feedback Controllers Under State-Space Perturbation
In this paper, we propose a novel update of a nominal stabilizing static output feedback (SOF) controller for a perturbed linear system. In almost every classical feedback controller design problem, a stabilizing feedback controller is designed given a stabilizable unstable system. In realistic scenarios, the system model is usually imperfect and subject to perturbations. A typical approach to attenuate the impacts of such perturbations on the system stability is repeating the whole controller design procedure to find an updated stabilizing SOF controller. Such an approach can be inefficient and occasionally infeasible. Using the notion of minimum destabilizing real perturbation (MDRP), we construct a simple norm minimization problem (a least-squares problem) to propose an efficient update of a nominal stabilizing SOF controller that can be applied to various control engineering applications in the case of perturbed scenarios like abrupt changes or inaccurate system models. In particular, considering norm-bounded known or unknown perturbations, this paper presents updated stabilizing SOF controllers and derives sufficient stability conditions. Geometric metrics to quantitatively measure the approach's robustness are defined. Moreover, we characterize the corresponding guaranteed stability regions, and specifically, for the case of norm-bounded unknown perturbations, we propose non-fragility-based robust updated stabilizing SOF controllers. Through extensive numerical simulations, we assess the effectiveness of the theoretical results.
MirSaleh Bahavarnia, Ahmad F. Taha
2023-07-30T09:16:20Z
http://arxiv.org/abs/2307.16178v3
# On Updating Static Output Feedback Controllers Under State-Space Perturbation ###### Abstract In this paper, we propose a novel update of a nominal stabilizing static output feedback (SOF) controller for a perturbed linear system. In almost every classical feedback controller design problem, a stabilizing feedback controller is designed given a stabilizable unstable system. In realistic scenarios, the system model is usually imperfect and subject to _perturbations_. A typical approach to attenuate the impacts of such perturbations on the system stability is repeating the whole controller design procedure to find an updated stabilizing SOF controller. Such an approach can be inefficient and occasionally infeasible. Using the notion of _minimum destabilizing real perturbation_ (MDRP), we construct a simple norm minimization problem (a least-squares problem) to propose an efficient update of a nominal stabilizing SOF controller that can be applied to various control engineering applications in the case of perturbed scenarios like abrupt changes or inaccurate system models. In particular, considering norm-bounded known or unknown perturbations, this paper presents updated stabilizing SOF controllers and derives sufficient stability conditions. Geometric metrics to quantitatively measure the approach's robustness are defined. Moreover, we characterize the corresponding guaranteed stability regions, and specifically, for the case of norm-bounded unknown perturbations, we propose non-fragility-based robust updated stabilizing SOF controllers. Through extensive numerical simulations, we assess the effectiveness of the theoretical results. Stability of linear systems, robust control, output feedback control, uncertain linear systems. ## I Introduction Stability robustness is a significant classical notion in robust control theory [1, 2, 3, 4, 5, 6, 7, 8]. Stability robustness simply means how sensitive the stability of the control system is to perturbations/uncertainties. The varying nature of engineering systems' models necessitates the thorough analysis of stability robustness and its potential applications to develop robustly stable engineering systems. Several studies have quantitatively investigated the impacts of perturbations on the stability robustness of control systems. In [1, 3], a class of non-destabilizing linear constant perturbations is characterized for the linear-quadratic state feedback (LQSF) designs. The authors in [2] propose a guaranteed cost LQSF for which the closed-loop system is stable for any variation of a vector-valued parameter. In [4], for the LQSF designs, the stability robustness bounds are derived based on the algebraic Riccati equation and Lyapunov stability theory. In [5], bounds on the non-destabilizing time-varying nonlinear perturbations are obtained for asymptotically stable linear systems to provide computationally efficient quantitative robustness measures. Various stability robustness tests are investigated in [6] to highlight the trade-off between the stability robustness conservatism and the information about the perturbation. In [7], utilizing the Lyapunov stability theory, the author has proposed an improved non-destabilizing perturbation bound over that of [5]. Taking advantage of appropriately chosen coordinate transformations, the authors in [8] have reduced the conservatism of the non-destabilizing perturbation bounds proposed by [5, 7].
In this paper, in contrast to the aforementioned studies, we do not go through the derivation of non-destabilizing perturbation bounds. Instead, we mainly focus on attenuating the impacts of perturbations on the system stability via updating a nominal stabilizing static output feedback (SOF) controller. With that in mind, the control problem considered in this paper is an SOF controller _update_ problem. To put this into perspective, it is noteworthy that our considered problem slightly differs from the robust feedback controller design problems for uncertain linear systems (with norm-bounded unknown perturbation) [9, 10, 11, 12] in the sense that the robust feedback controller in those problems is robustly stabilizing for all perturbations \(\Delta\) satisfying \(0<\|\Delta\|_{F}\leq\rho\), while in our case, the robust updated stabilizing SOF controller is robustly stabilizing for a subset of perturbations \(\Delta\) satisfying \(0<\|\Delta\|_{F}\leq\rho\) that will mathematically be characterized. Specifically, the more accurate an estimate \(\hat{\Delta}\) of a norm-bounded unknown perturbation we have, the more robustly stabilizing an updated stabilizing SOF controller we propose. In general, the SOF controller stabilization problem is known to be an NP-hard problem as it is intrinsically equivalent to solving a bi-linear matrix inequality (BMI) [13]. Hence, the typical approach of repeating the whole controller design procedure can become computationally cumbersome. Also, we avoid utilizing any Lyapunov-based approach as it enforces an extra computational burden (mostly in the case of BMI or linear matrix inequality (LMI) formulations in semi-definite programs (SDPs) [14]), which is not desired in terms of computational efficiency. It is remarkable that Lyapunov-based SOF controller synthesis hinges on approximately solving BMIs [15, 16] or incorporating sufficient LMI conditions [12, 17], which induces conservatism. The alternative non-Lyapunov approach that we take is built upon the notion of _minimum destabilizing real perturbation_ [18], which has inspired [19, 20] to synthesize sparse feedback controllers for large-scale systems. Throughout the paper, we utilize the fundamental linear algebraic results from [21] where needed. The main contributions of this paper can be itemized as follows: * Built upon the notion of minimum destabilizing real perturbation [18], we construct a simple norm minimization problem (a least-squares problem) to propose a novel update of a nominal stabilizing SOF controller that can be applied to various control engineering applications in the case of perturbed scenarios like abrupt changes or inaccurate system models. * Considering known perturbations and unknown perturbations with a known upper bound on their norm, we propose novel updates of nominal stabilizing SOF controllers and derive sufficient stability conditions. * We define geometric metrics to quantitatively measure the stability robustness of the proposed updates of nominal stabilizing SOF controllers, characterize the corresponding guaranteed stability regions, and, specifically for the case of unknown perturbations with a known upper bound on their norm, propose non-fragility-based robust updated stabilizing SOF controllers. * Through extensive numerical simulations, we validate the effectiveness of the theoretical results and present a thorough analysis of the empirical visualizations.
The remainder of the paper is structured as follows: Section II states the main objective of the paper by raising a question to be answered throughout the following sections. Section III presents a novel updated stabilizing SOF controller via updating a nominal stabilizing SOF controller built upon a simple norm minimization problem (a least-squares problem). Section IV contains the main results of the paper, detailing the stability regions for the corresponding updated stabilizing SOF controllers. Through various numerical simulations, Section V empirically verifies the effectiveness of the theoretical results. Finally, the paper is concluded by drawing a few concluding remarks in Section VI. **Paper's Notation.** We denote the vectors and matrices by lowercase and uppercase letters, respectively. To represent the set of real numbers, \(n\)-dimensional real-valued vectors, and \(m\times n\)-dimensional real-valued matrices, we respectively use \(\mathbb{R}\), \(\mathbb{R}^{n}\), and \(\mathbb{R}^{m\times n}\). We show the set of positive real numbers with \(\mathbb{R}_{++}\). We denote the identity matrix of dimension \(n\) with \(I_{n}\). For a square matrix \(M\), \(\alpha(M)\) represents the spectral abscissa (i.e., the maximum real part of the eigenvalues) of \(M\). We say a square matrix \(M\) is stable (Hurwitz) if \(\alpha(M)<0\) holds. For a matrix \(M\), symbols \(M^{T}\), \(\|M\|_{F}\), \(\textbf{vec}(M)\), and \(U_{M}\Sigma_{M}V_{M}^{T}\) denote its transpose, Frobenius norm, vectorization, and singular value decomposition (SVD), respectively. Given a full-column rank matrix \(M\), \(M^{+}:=(M^{T}M)^{-1}M^{T}\) denotes the Moore-Penrose inverse of \(M\). We represent the Kronecker product with the symbol \(\otimes\). For a vector \(v\), we respectively denote its Euclidean norm and vectorization inverse with \(\|v\|\) and \(\textbf{vec}^{-1}(v)\), where \(\textbf{vec}^{-1}(v)\) is a matrix that satisfies \(\textbf{vec}(\textbf{vec}^{-1}(v))=v\). We represent the set union with \(\cup\). Given two real numbers \(a<b\), we denote the open, closed, and half-open intervals with \(]a,b[\), \([a,b]\), \([a,b[\), and \(]a,b]\), respectively. We represent the logical or and the logical and with \(\vee\) and \(\wedge\), respectively. We show the computation complexity with the big O notation, i.e., \(\mathcal{O}()\). We denote the Gamma function with \(\Gamma(.)\). Symbols \(\mathcal{U}(0,1)\) and \(\mathcal{N}(0,I)\) respectively represent the uniform distribution on \([0,1]\) and the normal distribution with zero mean and identity covariance. ## II Problem Statement We consider the following linear state-space model: \[\dot{x}(t)=(A+BFC)x(t), \tag{1}\] where \(x(t)\in\mathbb{R}^{n}\), \(A\in\mathbb{R}^{n\times n}\), \(B\in\mathbb{R}^{n\times m}\), \(C\in\mathbb{R}^{p\times n}\), and \(F\in\mathbb{R}^{m\times p}\) denote the state vector, state matrix, input matrix, output matrix, and a nominal stabilizing SOF controller matrix (i.e., \(\alpha(A+BFC)<0\) holds), respectively. Suppose that a norm-bounded perturbation \(\Delta\in\mathbb{R}^{n\times n}\) with an upper bound \(\rho>0\) on its Frobenius norm (i.e., \(0<\|\Delta\|_{F}\leq\rho\)) hits the state-space model (1) as follows: \[\dot{x}(t)=(A+BFC+\Delta)x(t). \tag{2}\] On one hand, for non-destabilizing perturbations (e.g., sufficiently small perturbations), although \(A+BFC+\Delta\) in (2) is still a stable matrix, the stability robustness can be degraded.
On the other hand, for destabilizing perturbations (e.g., more severe perturbations), \(A+BFC+\Delta\) in (2) can become unstable. To attenuate the impacts of such perturbations on the stability robustness and the stability, a typical approach can be repeating the whole controller design procedure to find a new SOF controller, namely \(F^{\rm typical}\), to stabilize \(A+\Delta\) and get a stable \(A+\Delta+BF^{\rm typical}C\). Such a typical approach can be inefficient in terms of scalability and even infeasible in some cases. Motivated by such an issue and utilizing a simple norm minimization problem (a least-squares problem) built upon the notion of _minimum destabilizing real perturbation_ [18], we propose a novel update of a nominal stabilizing SOF controller that can be applied to various control engineering applications in the case of perturbed scenarios like abrupt changes or inaccurate system models. In a nutshell, the main objective of this paper is to find an answer to the following question: _Q1: Given the perturbed state-space model (2), how can we update a nominal stabilizing SOF controller \(F\) such that the closed-loop system remains stable?_ ## III A Novel Update of a Nominal Stabilizing SOF Controller This section is twofold: _(i)_ motivation and _(ii)_ main idea. First, we present what motivates us to propose a novel update of a nominal stabilizing SOF controller. Second, we detail the main idea behind the proposed updated stabilizing SOF controller. ### _Motivation_ In order to improve the stability robustness of the perturbed state-space model (2), let us consider the updated stabilizing SOF controller, \(F+G\), with the following state-space model: \[\dot{x}(t)=(A+\Delta+B(F+G)C)x(t). \tag{3}\] For instance, for the special case of the typical approach, \(G^{\mathrm{typical}}=F^{\mathrm{typical}}-F\) holds. Defining the notion of _minimum destabilizing real perturbation_ (MDRP) of a given stable matrix \(\mathcal{A}\in\mathbb{R}^{n\times n}\), namely \(\beta_{\mathbb{R}}(\mathcal{A})\), as follows ((\(3.2\)) in [18]): \[\beta_{\mathbb{R}}(\mathcal{A}):=\min\{\|\mathcal{X}\|_{F}:\alpha(\mathcal{A}+\mathcal{X})=0,\mathcal{X}\in\mathbb{R}^{n\times n}\},\] and choosing \(\mathcal{A}=A+BFC\) and \(\mathcal{X}=BGC+\Delta\) based on the updated perturbed state-space model (3), we see that if \[\|BGC+\Delta\|_{F}<\beta_{\mathbb{R}}(A+BFC), \tag{4}\] holds, then \(A+\Delta+B(F+G)C\) is stable, i.e., \(F+G\) is an updated stabilizing SOF controller for \(A+\Delta\). Inequality (4) motivates us to search for an efficient update \(F+G\) via minimizing \(\|BGC+\Delta\|_{F}\). In the sequel, we present the lower and upper bounds on the MDRP of \(A+BFC\), followed by a brief description of the computation of its exact value. #### III-A1 Lower bound Considering the fact that \(\alpha(X)\) is a continuous function with respect to \(X\), we have by definition \[\forall\epsilon>0,\exists\delta(\epsilon)>0,\;\mathrm{s.t.}\;\mathrm{if}\;\|\mathcal{X}\|_{F}<\delta(\epsilon)\;\mathrm{holds},\;\mathrm{then}\] \[\alpha(\mathcal{A})-\epsilon<\alpha(\mathcal{A}+\mathcal{X})<\alpha(\mathcal{A})+\epsilon\;\mathrm{holds}.\] Then, choosing \(\mathcal{A}=A+BFC\) and \(\mathcal{X}=BGC+\Delta\), we realize that for any \(\epsilon\) satisfying \(\epsilon<-\alpha(A+BFC)\), if \(\|BGC+\Delta\|_{F}<\delta(\epsilon)\) holds, then \(A+\Delta+B(F+G)C\) is stable.
That suggests the following lower bound on MDRP of \(A+BFC\): \[0<\delta_{\sup}\leq\beta_{\mathbb{R}}(A+BFC), \tag{5a}\] \[\delta_{\sup}:=\sup\{\delta(\epsilon):\epsilon\in]0,-\alpha(A+BFC)[\}. \tag{5b}\] #### III-A2 Upper bound On one hand, since \(\alpha(\mathcal{A}+\mathcal{X})=0\) holds for the choice of \(\mathcal{X}=-\alpha(\mathcal{A})I_{n}\), choosing \(\mathcal{A}=A+BFC\) and \(\mathcal{X}=-\alpha(A+BFC)I_{n}\), we get the following upper bound on MDRP of \(A+BFC\) [18]: \[\beta_{\mathbb{R}}(A+BFC)\leq-\sqrt{n}\alpha(A+BFC). \tag{6}\] On the other hand, given \(\mathcal{A}=U_{\mathcal{A}}\Sigma_{\mathcal{A}}V_{\mathcal{A}}^{T}\) as the singular value decomposition (SVD) of \(\mathcal{A}\) and choosing \(\mathcal{X}=-\sigma_{\mathcal{A}}^{\min}u_{\mathcal{A}}^{\min}v_{\mathcal{A}}^{\min\,T}\) (the superscript \(\min\) denotes the corresponding minimum singular value and vectors), it can be verified that \(\alpha(\mathcal{A}+\mathcal{X})=0\) holds. Then, choosing \(\mathcal{A}=A+BFC\) and combining with (6), we get the following upper bound on MDRP of \(A+BFC\) [18]: \[\beta_{\mathbb{R}}(A+BFC)\leq\beta_{\mathbb{R}}^{u}, \tag{7a}\] \[\beta_{\mathbb{R}}^{u}=\min\{\sigma^{\min}(A+BFC),-\sqrt{n}\alpha(A+BFC)\}. \tag{7b}\] For the special case of a diagonalizable \(A+BFC\) with the eigendecomposition \(A+BFC=V\Lambda V^{-1}\), since \(\sigma^{\min}(A+BFC)=-\alpha(A+BFC)\) holds, (7) reduces to \[\beta_{\mathbb{R}}(A+BFC)\leq-\alpha(A+BFC), \tag{8}\] which is a tighter bound compared to the upper bound in (6). Specifically, for the case of a symmetric \(A+BFC\), according to Corollary 3.5 in [22], the equality in (8) holds. #### III-A3 Exact value Unfortunately, computing the exact value of \(\beta_{\mathbb{R}}(A+BFC)\) is not theoretically possible [18]. Also, there is no systematic tractable way to compute the exact value of the lower bound \(\delta_{\sup}\) in (5) due to the fact that we only know about the existence of \(\delta(\epsilon)\) and nothing more. However, taking advantage of the upper bounds on \(\beta_{\mathbb{R}}(A+BFC)\) (derived in (7) and (8)), we may utilize heuristics to obtain an appropriate approximate value of \(\beta_{\mathbb{R}}(A+BFC)\) in a reasonable computational time. It is remarkable that if the equality in (7) becomes active (i.e., the case of a tight upper bound), then the proposed updated stabilizing SOF controller in this paper becomes efficient as it only requires the value of \(\beta_{\mathbb{R}}^{u}\), which can efficiently be computed (e.g., the case of a symmetric \(A+BFC\) for which \(\beta_{\mathbb{R}}(A+BFC)=-\alpha(A+BFC)\) holds). For the special case of a structured perturbation, i.e., \(\Delta=BMC\) for a matrix \(M\in\mathbb{R}^{m\times p}\), one may compute the MDRP via frequency-domain-based algorithms detailed in [23]. ### _Main idea_ Since (4) provides a sufficient condition on the stability of \(A+\Delta+B(F+G)C\), our main idea for proposing an efficient updated stabilizing SOF controller \(F+G\) is to compute \(G\) via minimizing \(\|BGC+\Delta\|_{F}^{2}\) and to verify under which conditions the minimized value of \(\|BGC+\Delta\|_{F}^{2}\) would be less than \(\beta_{\mathbb{R}}(A+BFC)^{2}\). It is noteworthy that if the most optimistic scenario occurs (i.e., the scenario in which, for a known \(\Delta\), the equation \(\|BGC+\Delta\|_{F}=0\) has a solution \(G\)), then one can completely cancel out the effect of the hitting perturbation \(\Delta\) and retrieve the primary unperturbed \(A+BFC\), as detailed later on.
With that in mind and to find a reasonable answer to the question stated in Section II (_Q1_), we consider the following optimization problem: \[\min_{G\in\mathbb{R}^{m\times p}}\|BGC+\Delta\|_{F}^{2}. \tag{9}\] By vectorizing \(BGC+\Delta\), defining \(g:=\mathbf{vec}(G),\delta:=\mathbf{vec}(\Delta),H:=C^{T}\otimes B\), and noting that \(\mathbf{vec}(\mathcal{X}\mathcal{Y}\mathcal{Z})=(\mathcal{Z}^{T}\otimes\mathcal{X})\mathbf{vec}(\mathcal{Y})\) holds for any triplet \((\mathcal{X},\mathcal{Y},\mathcal{Z})\) with consistent dimensions and \(\|\mathbf{vec}(X)\|=\|X\|_{F}\) holds for any \(X\), optimization problem (9) can equivalently be cast as the following least-squares problem [24]: \[\min_{g\in\mathbb{R}^{mp}}\|Hg+\delta\|^{2}. \tag{10}\] Assuming that \(B\) and \(C\) are respectively full-column rank and full-row rank and noting that the identity \((C^{T}\otimes B)^{+}=C^{T+}\otimes B^{+}\) holds, optimization problem (10) can analytically be solved as \[g_{\delta}^{*}=-(C^{T+}\otimes B^{+})\delta, \tag{11}\] and the analytic optimal solution of (9) can subsequently be presented as follows: \[G_{\Delta}^{*}=\mathbf{vec}^{-1}(g_{\delta}^{*})=-B^{+}\Delta(C^{T+})^{T}, \tag{12}\] for which the computation complexity is \(\mathcal{O}(n^{2}\min\{m,p\})\), while the computation complexity of (11) is \(\mathcal{O}(n^{2}m^{2}p^{2})\). Substituting \(g_{\delta}^{*}\) of (11) in (10), the optimal value of the objective function in (10), namely \(J^{*}(\delta)\), becomes \[J^{*}(\delta):=\|Hg_{\delta}^{*}+\delta\|^{2}=\|(I_{n^{2}}-HH^{+})\delta\|^{2}. \tag{13}\] Defining \(P:=I_{n^{2}}-HH^{+}\) and noting that \(P^{T}P=P\) holds (since \(H^{+}H=I_{mp}\) holds), (13) reduces to \[J^{*}(\delta)=\delta^{T}P\delta. \tag{14}\] For the sake of preciseness, with a bit of abuse of notation, we simply define \(J^{*}(\Delta):=J^{*}(\mathbf{vec}(\Delta))=J^{*}(\delta)\). ## IV Main Results This section consists of the main results of the paper. The main results are twofold: _(i)_ In Section IV-A, given a known norm-bounded perturbation \(\Delta\) with \(0<\|\Delta\|_{F}\leq\rho\), we investigate the dependency of \(J^{*}(\Delta)\) on \(\Delta\) via inspecting the linear algebraic properties of \(P\) in (14). Proposition 1 analytically parameterizes the norm-bounded perturbation and proposes a closed-form formula for \(J^{*}(\Delta)\). Proposition 2 elaborates on deriving sufficient conditions for the stability of the proposed updated stabilizing SOF controllers while analytically characterizing the guaranteed stability regions. Furthermore, we define a geometric metric to quantify the stability robustness of the proposed updated stabilizing SOF controllers; _(ii)_ in Section IV-B, given an unknown norm-bounded perturbation \(\Delta\) with a known upper bound \(\rho\) on its Frobenius norm, we derive sufficient conditions on the stability of the proposed updated stabilizing SOF controllers in Proposition 3. Proposition 4 mathematically characterizes the guaranteed stability regions for which the proposed updated SOF controllers are stabilizing. Similarly, we define a geometric metric to quantify the stability quality of the proposed updated stabilizing SOF controllers. Also, built upon a notion of _non-fragility_ utilized in the literature of robust non-fragile PID controller designs [25, 26, 27, 28], we propose non-fragility-based robust updated stabilizing SOF controllers. In the sequel, to save space, whenever needed, we refer to \(\beta_{\mathbb{R}}(A+BFC)\) as \(\beta\).
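Before turning to the stability analysis, the following minimal sketch illustrates the update pipeline of (9)-(13) numerically; all matrices are randomly generated placeholders (with \(B\) and \(C\) full rank almost surely), and since \(\beta_{\mathbb{R}}(A+BFC)\) is not exactly computable, the upper bound \(\beta_{\mathbb{R}}^{u}\) of (7) is printed only as a reference while stability of the updated closed loop is verified directly through its spectral abscissa.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 4, 2, 2

def alpha(M):
    # spectral abscissa: maximum real part of the eigenvalues
    return np.max(np.linalg.eigvals(M).real)

A = rng.standard_normal((n, n))
A -= (alpha(A) + 1.0) * np.eye(n)          # shift so that alpha(A) = -1
B = rng.standard_normal((n, m))            # full column rank almost surely
C = rng.standard_normal((p, n))            # full row rank almost surely
F = np.zeros((m, p))                       # trivial nominal SOF gain; A + BFC = A is stable

rho = 0.5                                  # norm of the hitting perturbation
Delta = rng.standard_normal((n, n))
Delta *= rho / np.linalg.norm(Delta, "fro")

# Update (12): G* = -B^+ Delta (C^{T+})^T via Moore-Penrose inverses.
G = -np.linalg.pinv(B) @ Delta @ np.linalg.pinv(C.T).T

residual = np.linalg.norm(B @ G @ C + Delta, "fro")   # sqrt(J*(Delta)), Eq. (13)
beta_u = min(np.linalg.svd(A + B @ F @ C, compute_uv=False)[-1],
             -np.sqrt(n) * alpha(A + B @ F @ C))      # Eq. (7b)
print("||B G* C + Delta||_F =", residual, " vs  ||Delta||_F =", rho)
print("beta_R^u =", beta_u, " (only an upper bound on beta_R)")
print("alpha(A + Delta + B (F + G*) C) =", alpha(A + Delta + B @ (F + G) @ C))
```

Note that the sufficient condition (4) involves the true \(\beta_{\mathbb{R}}\), so a residual below \(\beta_{\mathbb{R}}^{u}\) is not by itself a certificate; the direct spectral-abscissa evaluation in the last line is the unambiguous check in this sketch.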
### _Known norm-bounded perturbation_ In the following lemma, we present an SVD-based parameterization of \(P\) in (14) that facilitates parameterizing the norm-bounded perturbation \(\Delta\) and subsequently proposing a closed-form expression for \(J^{*}(\Delta)\). **Lemma 1**.: _Suppose that \(H=U_{H}\Sigma_{H}V_{H}^{T}\) is the SVD of \(H\). Then, \(P\) in (14) can be parameterized as follows:_ \[P=U_{H}\begin{bmatrix}0&0\\ 0&I_{n^{2}-mp}\end{bmatrix}U_{H}^{T}, \tag{15}\] _where \(U_{H}=(V_{C}\otimes U_{B})U_{\Omega}\) holds provided that \(B=U_{B}\Sigma_{B}V_{B}^{T}\), \(C=U_{C}\Sigma_{C}V_{C}^{T}\), and \(\Omega:=\Sigma_{C}^{T}\otimes\Sigma_{B}=U_{\Omega}\Sigma_{\Omega}V_{\Omega}^{T}\) denote the SVDs of \(B\), \(C\), and \(\Omega\), respectively._ Proof: See Appendix A. #### IV-A1 Norm-bounded perturbation analytic parameterization Built upon Lemma 1, we present the following proposition that analytically parameterizes the norm-bounded perturbation \(\Delta\) while proposing a closed-form expression for \(J^{*}(\Delta)\). **Proposition 1**.: _Given the norm-bounded perturbation \(\Delta\) with \(\|\Delta\|_{F}=r\) and \(r\in]0,\rho]\), and considering \(r=\rho\sin(\frac{\pi\tau}{2})\) with \(\tau\in]0,1]\), the norm-bounded perturbation \(\Delta\) can be parameterized as follows:_ \[\Delta=\rho\sin\Big{(}\frac{\pi\tau}{2}\Big{)}U_{B}\mathbf{vec}^{-1}\bigg{(}U_{\Omega}\begin{bmatrix}\phi_{c}\cos(\frac{\pi\theta}{2})\\ \phi_{s}\sin(\frac{\pi\theta}{2})\end{bmatrix}\bigg{)}V_{C}^{T}, \tag{16}\] _where \(\phi_{c}\in\mathbb{R}^{mp}\) with \(\|\phi_{c}\|=1\), \(\phi_{s}\in\mathbb{R}^{n^{2}-mp}\) with \(\|\phi_{s}\|=1\), and \(\theta\in[0,1]\), and we can compute \(J^{*}(\Delta)\) in (14) as follows:_ \[J^{*}(\Delta)=\bigg{(}\rho\sin\Big{(}\frac{\pi\tau}{2}\Big{)}\sin\Big{(}\frac{\pi\theta}{2}\Big{)}\bigg{)}^{2}. \tag{17}\] Proof: See Appendix B. The following corollary provides an alternative formula to compute \(G_{\Delta}^{*}\) in (12). **Corollary 1**.: _Considering the following identities:_ \[(U_{H},\Sigma_{H},V_{H})=((V_{C}\otimes U_{B})U_{\Omega},\Sigma_{\Omega},(U_{C}\otimes V_{B})V_{\Omega}),\] \[B=U_{B}\Sigma_{B}V_{B}^{T},C=U_{C}\Sigma_{C}V_{C}^{T},\Sigma_{C}^{T}\otimes\Sigma_{B}=U_{\Omega}\Sigma_{\Omega}V_{\Omega}^{T},\] _(12) can alternatively be computed as follows:_ \[G_{\Delta}^{*}=-\mathbf{vec}^{-1}\Big{(}V_{H}\begin{bmatrix}\big{(}\begin{bmatrix}I_{mp}&0\end{bmatrix}\Sigma_{H}\big{)}^{-1}&0\end{bmatrix}U_{H}^{T}\mathbf{vec}(\Delta)\Big{)}.\] Fig. 1 depicts the dependency of \(\frac{J^{*}(\Delta)}{\rho^{2}}\) on \(\tau\) and \(\theta\). As expected, since the functions \(\sin(\frac{\pi\tau}{2})\) and \(\sin(\frac{\pi\theta}{2})\) behave monotonically versus \(\tau\) (for \(\tau\in]0,1]\)) and \(\theta\) (for \(\theta\in[0,1]\)), respectively, the smaller \(\tau\) and/or \(\theta\), the smaller \(\frac{J^{*}(\Delta)}{\rho^{2}}\) we get. Note that a smaller value of \(\frac{J^{*}(\Delta)}{\rho^{2}}\) is equivalent to a higher chance of satisfying the sufficient stability condition (4). In other words, the intuitive interpretation is that handling a less severe perturbation via an updated stabilizing SOF controller \(F+G_{\Delta}^{*}\) with \(G_{\Delta}^{*}\) in (12) is easier. #### IV-A2 The guaranteed stability region analytic characterization We state the following proposition that derives sufficient conditions on the stability of the proposed updated stabilizing SOF controllers while analytically characterizing the guaranteed stability regions.
**Proposition 2**.: _Given the norm-bounded perturbation \(\Delta\) parameterized by (16), \(F+G_{\Delta}^{*}\) with \(G_{\Delta}^{*}\) in (12) is an updated stabilizing SOF controller,_ 1. _if_ \(\rho<\beta_{\mathbb{R}}(A+BFC)\) _holds;_ 2. _else if_ \(\rho\geq\beta_{\mathbb{R}}(A+BFC)\) _and_ \((\tau_{\Delta},\theta_{\Delta})\in S_{\kappa}\) _hold, where the guaranteed stability region_ \(S_{\kappa}\) _is defined as:_ \[S_{\kappa} :=\hat{S}\cup\bar{S},\] (18a) \[\hat{S} :=\{(\tau,\theta):\tau\in]0,\kappa[,\theta\in[0,1]\},\] (18b) \[\kappa :=\frac{2}{\pi}\arcsin\Big{(}\frac{\beta_{\mathbb{R}}(A+BFC)}{\rho}\Big{)},\] (18c) \[\bar{S} :=\{(\tau,\theta):\tau\in[\kappa,1],\theta\in[0,\zeta_{\tau,\kappa}[\},\] (18d) \[\zeta_{\tau,\kappa} :=\frac{2}{\pi}\arcsin\Big{(}\frac{\sin(\frac{\pi\kappa}{2})}{\sin(\frac{\pi\tau}{2})}\Big{)}.\] (18e) _Moreover, the following geometric metric provides a percentage-based lower bound on the stability of the updated perturbed state-space (3):_ \[\xi_{\kappa}\ (\%):=100\times\bigg{(}\kappa+\int_{\kappa}^{1}\zeta_{\tau,\kappa}d\tau\bigg{)}, \tag{19}\] _and_ \(\xi_{\kappa}\) _is an increasing function of_ \(\kappa\) _(equivalently,_ \(\xi_{\rho}\) _is a decreasing function of_ \(\rho\) _for a fixed_ \(\beta_{\mathbb{R}}(A+BFC)\)_, and_ \(\xi_{\beta}\) _is an increasing function of_ \(\beta_{\mathbb{R}}(A+BFC)\) _for a fixed_ \(\rho\)_)._ Proof: See Appendix C. For the case of \(\rho<\beta_{\mathbb{R}}(A+BFC)\), the guaranteed stability region would be \(]0,1]\times[0,1]=S_{\kappa}|_{\kappa=1}\cup\{(1,1)\}\), i.e., the unit square in the non-negative quadrant of \((\tau,\theta)\). For the sake of notation simplicity, we define \(\mathbb{S}=]0,1]\times[0,1]\) and utilize the unified notation of \(S\) to refer to both guaranteed stability regions \(S_{\kappa}\) and \(\mathbb{S}\). The following corollary thoroughly sheds light on the dependency and limiting behaviors of \(\xi_{\rho}\) and \(\xi_{\beta}\) on \(\rho\) and \(\beta\), respectively. **Corollary 2**.: _For the case of \(\rho\geq\beta_{\mathbb{R}}(A+BFC)\), considering the following expression for \(\xi_{\rho}\):_ \[\xi_{\rho}=\frac{2}{\pi}\arcsin\Big{(}\frac{\beta}{\rho}\Big{)}+\frac{2}{\pi}\int_{\frac{2}{\pi}\arcsin\big{(}\frac{\beta}{\rho}\big{)}}^{1}\arcsin\Big{(}\frac{\beta}{\rho\sin(\frac{\pi\tau}{2})}\Big{)}d\tau,\] _we compute the derivative of \(\xi_{\rho}\) with respect to \(\rho\) as follows:_ \[\frac{d\xi_{\rho}}{d\rho}=-\frac{2}{\pi\rho}\int_{\frac{2}{\pi}\arcsin\big{(}\frac{\beta}{\rho}\big{)}}^{1}\frac{\beta}{\rho\sqrt{\sin(\frac{\pi\tau}{2})^{2}-(\frac{\beta}{\rho})^{2}}}d\tau. \tag{20}\] _Moreover, as \(\rho\) tends to \(\beta\) and \(\infty\) in (20), we get_ \[\lim_{\rho\rightarrow\beta^{+}}\frac{d\xi_{\rho}}{d\rho}=-\frac{2}{\pi\beta},\lim_{\rho\rightarrow\infty}\frac{d\xi_{\rho}}{d\rho}=0,\lim_{\rho\rightarrow\beta^{+}}\xi_{\rho}=1,\lim_{\rho\rightarrow\infty}\xi_{\rho}=0.\] _Similarly, considering the following expression for \(\xi_{\beta}\):_ \[\xi_{\beta}=\frac{2}{\pi}\arcsin\Big{(}\frac{\beta}{\rho}\Big{)}+\frac{2}{\pi}\int_{\frac{2}{\pi}\arcsin\big{(}\frac{\beta}{\rho}\big{)}}^{1}\arcsin\Big{(}\frac{\beta}{\rho\sin(\frac{\pi\tau}{2})}\Big{)}d\tau,\] _we compute the derivative of \(\xi_{\beta}\) with respect to \(\beta\) as follows:_ \[\frac{d\xi_{\beta}}{d\beta}=\frac{2}{\pi\rho}\int_{\frac{2}{\pi}\arcsin\big{(}\frac{\beta}{\rho}\big{)}}^{1}\frac{1}{\sqrt{\sin(\frac{\pi\tau}{2})^{2}-(\frac{\beta}{\rho})^{2}}}d\tau.
\tag{21}\] _Moreover, by letting \(\beta\) tend to \(0\) and to \(\rho\) in (21), we get_ \[\lim_{\beta\to 0^{+}}\frac{d\xi_{\beta}}{d\beta}=\infty,\lim_{\beta\to\rho^{-}}\frac{d\xi_{\beta}}{d\beta}=\frac{2}{\pi\rho},\lim_{\beta\to 0^{+}}\xi_{\beta}=0,\] \[\lim_{\beta\to\rho^{-}}\xi_{\beta}=1.\] Fig. 2 visualizes the guaranteed stability region \(S_{\kappa}\) for \(\kappa=\frac{1}{3}\) and the percentage-based lower bounds on the stability of the updated perturbed state-space (3) versus \(\kappa\), \(\rho\), and \(\beta\). As expected, the empirical observations of Fig. 2 are consistent with the theoretical results of Proposition 2 and Corollary 2. Precisely, as \(\kappa\) decreases, e.g., for an increased perturbation upper bound \(\rho\) or a decreased MDRP \(\beta\), the percentage-based lower bound on the stability of the updated perturbed state-space (3), \(\xi\) (\(\%\)), degrades, as expected. As Fig. 2(a) depicts, for sufficiently large values of \(\tau\) and/or \(\theta\), i.e., more severe perturbations, \((\tau,\theta)\) lies outside \(S_{\kappa}\) and there is no stability guarantee for the proposed updated SOF controller, which is aligned with the expectations around the negative impacts of perturbations on the stability.

Figure 2: (a) The guaranteed stability region \(S_{\kappa}\) for \(\kappa=\frac{1}{3}\), (b) the percentage-based lower bound on the stability of the updated perturbed state-space (3) \(\xi_{\kappa}\) (\(\%\)) versus \(\kappa\), (c) the percentage-based lower bound on the stability of the updated perturbed state-space (3) \(\xi_{\rho}\) (\(\%\)) versus \(\rho\) for \(\beta=1\), and (d) the percentage-based lower bound on the stability of the updated perturbed state-space (3) \(\xi_{\beta}\) (\(\%\)) versus \(\beta\) for \(\rho=1\).

### _Unknown norm-bounded perturbation_ Given an unknown norm-bounded perturbation \(\Delta\) with \(0<\|\Delta\|_{F}\leq\rho\), let us denote a known norm-bounded perturbation with an upper bound \(\rho\) on its Frobenius norm as \(\hat{\Delta}\). We refer to \(\hat{\Delta}\) as an _estimate_ of the unknown \(\Delta\). Also, whenever needed, for ease of representation, we will simply denote \(\tau_{\hat{\Delta}}\) and \(\theta_{\hat{\Delta}}\) with \(\hat{\tau}\) and \(\hat{\theta}\), respectively. Moreover, we represent the guaranteed stability regions associated with \(\hat{\Delta}\) by \(\hat{S}_{\kappa}\), \(\hat{\mathbb{S}}\), and \(\hat{S}\) (the unified notation for both \(\hat{S}_{\kappa}\) and \(\hat{\mathbb{S}}\)). In the following proposition, we derive sufficient stability conditions under which the proposed updated SOF controllers are stabilizing. **Proposition 3**.: _Given an unknown norm-bounded perturbation \(\Delta\) and its estimate \(\hat{\Delta}\), both with an upper bound \(\rho\) on their Frobenius norms, \(F+G_{\hat{\Delta}}^{*}\) with \(G_{\hat{\Delta}}^{*}\) in (12) is an updated stabilizing SOF controller,_ * _if_ \(\rho<\beta_{\mathbb{R}}(A+BFC)\) _and_ \[\|\Delta-\hat{\Delta}\|_{F}<\upsilon,\] (22a) \[\upsilon:=\beta_{\mathbb{R}}(A+BFC)-\rho\sin\left(\frac{\pi\tau_{\hat{\Delta}}}{2}\right)\sin\left(\frac{\pi\theta_{\hat{\Delta}}}{2}\right),\] (22b) _hold;_ * _if_ \(\rho\geq\beta_{\mathbb{R}}(A+BFC)\)_, (_22_), and_ \((\tau_{\hat{\Delta}},\theta_{\hat{\Delta}})\in\hat{S}_{\kappa}\) _hold._ Proof: See Appendix D.
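As a small illustration of condition (22), the sketch below (with deliberately simple placeholder matrices, and assuming \(\beta_{\mathbb{R}}(A+BFC)\) is known, which per (8) holds exactly in the symmetric case) certifies the update computed from an estimate \(\hat{\Delta}\). By (13) and (17), the term \(\rho\sin(\pi\tau_{\hat{\Delta}}/2)\sin(\pi\theta_{\hat{\Delta}}/2)\) in (22b) equals \(\|BG_{\hat{\Delta}}^{*}C+\hat{\Delta}\|_{F}\), so \(\upsilon\) is directly computable.

```python
import numpy as np

def update_gain(B, C, Delta_hat):
    # G* of Eq. (12), built from the *estimate* of the unknown perturbation
    return -np.linalg.pinv(B) @ Delta_hat @ np.linalg.pinv(C.T).T

def certified_by_22(B, C, Delta, Delta_hat, beta):
    # upsilon of (22b): beta - rho*sin(pi*tau/2)*sin(pi*theta/2)
    #                 = beta - ||B G* C + Delta_hat||_F, by Eqs. (13) and (17)
    G = update_gain(B, C, Delta_hat)
    upsilon = beta - np.linalg.norm(B @ G @ C + Delta_hat, "fro")
    return np.linalg.norm(Delta - Delta_hat, "fro") < upsilon, G

rng = np.random.default_rng(2)
n = 3
B = np.eye(n)
C = np.eye(n)
F = -2.0 * np.eye(n)                      # A + BFC = -2I is symmetric, so beta = 2 exactly
A = np.zeros((n, n))
Delta = 0.3 * rng.standard_normal((n, n))               # the true (unknown) perturbation
Delta_hat = Delta + 0.05 * rng.standard_normal((n, n))  # an imperfect estimate of it

ok, G = certified_by_22(B, C, Delta, Delta_hat, beta=2.0)
closed = A + Delta + B @ (F + G) @ C
print("certified:", ok, " alpha:", np.max(np.linalg.eigvals(closed).real))
```

The better the estimate, the smaller \(\|\Delta-\hat{\Delta}\|_{F}\) and the easier (22) is to satisfy, which matches the paper's remark that a more accurate \(\hat{\Delta}\) yields a more robustly stabilizing update.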
The following lemma enables us to have a more thorough quantitative understanding of the estimation inaccuracy and its dependency on various factors. **Lemma 2**.: _Given the \(\Delta\) and \(\hat{\Delta}\) as in Proposition 3, the following identity holds:_ \[\|\Delta-\hat{\Delta}\|_{F} =\rho\sqrt{s_{\tau}^{2}+s_{\hat{\tau}}^{2}-2s_{\tau}s_{\hat{\tau}}c_{2\eta}}, \tag{23a}\] \[s_{\tau} :=\sin\left(\frac{\pi\tau_{\Delta}}{2}\right),s_{\hat{\tau}}:=\sin\left(\frac{\pi\tau_{\hat{\Delta}}}{2}\right),\] (23b) \[c_{2\eta} :=\cos(\pi\eta),\eta:=\frac{1}{\pi}\arccos(\psi^{T}\hat{\psi}),\] (23c) \[\psi :=\psi_{\Delta},\hat{\psi}:=\psi_{\hat{\Delta}}. \tag{23d}\] Proof: See Appendix E. Note that \(\pi\eta\) denotes the phase difference between \(\psi\) and \(\hat{\psi}\). #### IV-B1 The guaranteed stability region mathematical characterization Built upon Proposition 3 and Lemma 2, we present the following proposition that lists all the possible parametric scenarios for mathematically characterizing the guaranteed stability regions. **Proposition 4**.: _Given the \(\Delta\) and \(\hat{\Delta}\) as in Proposition 3 and defining_ \[s_{\hat{\theta}}:=\sin\left(\frac{\pi\theta_{\hat{\Delta}}}{2}\right),\iota:=\frac{\beta_{\mathbb{R}}(A+BFC)}{\rho s_{\hat{\tau}}}-s_{\hat{\theta}}=\frac{\upsilon}{\rho s_{\hat{\tau}}},\] \[\bar{\eta}:=\frac{1}{\pi}\arcsin(\iota),\text{for }0<\iota\leq 1,\ \hat{b}:=\frac{s_{\hat{\tau}}}{2}(1-\iota^{2})+\frac{1}{2s_{\hat{\tau}}},\] \[\underline{\eta}:=\frac{1}{\pi}\arccos(\hat{b}),\text{for }|\hat{b}|\leq 1,\ s_{2\eta}:=\sin(\pi\eta),\] \[b_{l}(\eta):=s_{\hat{\tau}}\Big{(}c_{2\eta}-\sqrt{\iota^{2}-s_{2\eta}^{2}}\Big{)},\] \[\varphi_{l}(\eta):=\frac{2}{\pi}\arcsin(b_{l}(\eta)),\text{for }0\leq b_{l}(\eta)<1,\] \[b_{u}(\eta):=s_{\hat{\tau}}\Big{(}c_{2\eta}+\sqrt{\iota^{2}-s_{2\eta}^{2}}\Big{)},\] \[\varphi_{u}(\eta):=\frac{2}{\pi}\arcsin(b_{u}(\eta)),\text{for }0<b_{u}(\eta)\leq 1,\] _if \((\eta_{\Delta},\tau_{\Delta})\in\mathcal{S}\) holds, then \(F+G_{\hat{\Delta}}^{*}\) with \(G_{\hat{\Delta}}^{*}\) in (12) is an updated stabilizing SOF controller, where the guaranteed stability region \(\mathcal{S}\) in the \(2\)-dimensional parametric space of \((\eta,\tau)\) can be characterized via the following itemized approach:_ * _if_ \(0<\iota\leq 1\) _and_ \(\hat{b}\leq 1\) _hold, then_ \(\mathcal{S}\) _is defined as_ \[\mathcal{S} :=\tilde{\mathcal{S}}\cup\bar{\mathcal{S}},\] (24a) \[\tilde{\mathcal{S}} :=\{(\eta,\tau):\eta\in[0,\underline{\eta}[,\tau\in]\varphi_{l}(\eta),1]\},\] (24b) \[\bar{\mathcal{S}} :=\{(\eta,\tau):\eta\in[\underline{\eta},\bar{\eta}[,\tau\in]\varphi_{l}(\eta),\varphi_{u}(\eta)]\},\] (24c) * _if_ \(0<\iota\leq 1\) _and_ \(\hat{b}>1\) _hold, then_ \(\mathcal{S}\) _is defined as_ \[\mathcal{S}:=\{(\eta,\tau):\eta\in[0,\bar{\eta}[,\tau\in]\varphi_{l}(\eta),\varphi_{u}(\eta)]\},\] (25) * _if_ \(\iota>1\) _and_ \(|\hat{b}|\leq 1\) _hold, then_ \(\mathcal{S}\) _is defined as_ \[\mathcal{S} :=\tilde{\mathcal{S}}\cup\bar{\mathcal{S}},\] (26a) \[\tilde{\mathcal{S}} :=\{(\eta,\tau):\eta\in[0,\underline{\eta}[,\tau\in]0,1]\},\] (26b) \[\bar{\mathcal{S}} :=\{(\eta,\tau):\eta\in[\underline{\eta},1],\tau\in]0,\varphi_{u}(\eta)]\},\] (26c) * _if_ \(\iota>1\) _and_ \(\hat{b}>1\) _hold, then_ \(\mathcal{S}\) _is defined as_ \[\mathcal{S}:=\{(\eta,\tau):\eta\in[0,1],\tau\in]0,\varphi_{u}(\eta)]\},\] (27) * _if_ \(\iota>1\) _and_ \(\hat{b}<-1\) _hold, then_ \(\mathcal{S}\) _is defined as_ \[\mathcal{S}:=\{(\eta,\tau):\eta\in[0,1],\tau\in]0,1]\}. 
\] (28) _Moreover, if \(\rho<\beta_{\mathbb{R}}(A+BFC)\) holds, then \(0<\iota\) automatically holds.
Also, in the case of \(\rho\geq\beta_{\mathbb{R}}(A+BFC)\), \(0<\iota\) holds if and only if \((\tau_{\hat{\Delta}},\theta_{\hat{\Delta}})\in\hat{S}_{\kappa}\) holds._

Proof:: See Appendix F.

Utilizing the following equivalences: \[\iota>0 \iff s_{\hat{\tau}}s_{\hat{\theta}}\leq\frac{\beta}{\rho},\ \iota\leq 1 \iff\frac{\beta}{\rho}\leq s_{\hat{\tau}}(s_{\hat{\theta}}+1),\] \[\hat{b}\leq 1 \iff\iota\geq\frac{1}{s_{\hat{\tau}}}-1 \iff 1+s_{\hat{\tau}}(s_{\hat{\theta}}-1)\leq\frac{\beta}{\rho},\] \[\hat{b}\geq-1 \iff\iota\leq\frac{1}{s_{\hat{\tau}}}+1 \iff\frac{\beta}{\rho}\leq 1+s_{\hat{\tau}}(s_{\hat{\theta}}+1),\] the following corollary facilitates the itemized characterization proposed by Proposition 4.

**Corollary 3**.: _The items presented by Proposition 4 can be simplified into the following items:_

* _i) if_ \(1+s_{\hat{\tau}}(s_{\hat{\theta}}-1)\leq\frac{\beta}{\rho}\leq s_{\hat{\tau}}(s_{\hat{\theta}}+1)\) _and_ \(\frac{1}{3}\leq\tau_{\hat{\Delta}}\leq 1\) _hold._
* _ii) 1) if_ \(s_{\hat{\tau}}s_{\hat{\theta}}<\frac{\beta}{\rho}\leq s_{\hat{\tau}}(s_{\hat{\theta}}+1)\) _and_ \(0<\tau_{\hat{\Delta}}<\frac{1}{3}\) _hold, or 2) if_ \(s_{\hat{\tau}}s_{\hat{\theta}}<\frac{\beta}{\rho}<1+s_{\hat{\tau}}(s_{\hat{\theta}}-1)\) _and_ \(\frac{1}{3}\leq\tau_{\hat{\Delta}}<1\) _hold._
* _iii) 1) if_ \(1+s_{\hat{\tau}}(s_{\hat{\theta}}-1)\leq\frac{\beta}{\rho}\leq 1+s_{\hat{\tau}}(s_{\hat{\theta}}+1)\) _and_ \(0<\tau_{\hat{\Delta}}<\frac{1}{3}\) _hold, or 2) if_ \(s_{\hat{\tau}}(s_{\hat{\theta}}+1)<\frac{\beta}{\rho}\leq 1+s_{\hat{\tau}}(s_{\hat{\theta}}+1)\) _and_ \(\frac{1}{3}\leq\tau_{\hat{\Delta}}\leq 1\) _hold._
* _iv) if_ \(s_{\hat{\tau}}(s_{\hat{\theta}}+1)<\frac{\beta}{\rho}<1+s_{\hat{\tau}}(s_{\hat{\theta}}-1)\) _and_ \(0<\tau_{\hat{\Delta}}<\frac{1}{3}\) _hold._
* _v) if_ \(1+s_{\hat{\tau}}(s_{\hat{\theta}}+1)<\frac{\beta}{\rho}\) _holds._

Note that the upper bounds of \(\frac{\beta}{\rho}\) in item _ii_ and the lower bounds of \(\frac{\beta}{\rho}\) in item _iii_ in Corollary 3 can compactly be expressed as \(s_{\hat{\tau}}s_{\hat{\theta}}+\min\{s_{\hat{\tau}},1-s_{\hat{\tau}}\}\) and \(s_{\hat{\tau}}s_{\hat{\theta}}+\max\{s_{\hat{\tau}},1-s_{\hat{\tau}}\}\), respectively. Also, item _v_ of Corollary 3 can only occur for the case of \(\rho<\beta\) as \(1<1+s_{\hat{\tau}}(s_{\hat{\theta}}+1)\) should be satisfied. Also, the non-trivial boundary points with \(\eta=0\) (\((0,\tau_{l}^{0})\) and \((0,\tau_{u}^{0})\)) or \(\eta=1\) (\((1,\tau_{u}^{1})\)) can be computed via the following formulas: \[\tau_{l}^{0} =\frac{2}{\pi}\arcsin\bigg{(}s_{\hat{\tau}}(1+s_{\hat{\theta}})-\frac{\beta}{\rho}\bigg{)},\] \[\tau_{u}^{0} =\frac{2}{\pi}\arcsin\bigg{(}s_{\hat{\tau}}(1-s_{\hat{\theta}})+\frac{\beta}{\rho}\bigg{)},\] \[\tau_{u}^{1} =\frac{2}{\pi}\arcsin\bigg{(}s_{\hat{\tau}}(-1-s_{\hat{\theta}})+\frac{\beta}{\rho}\bigg{)}.\]

It is noteworthy that the extreme cases \(\eta=0\) (no phase difference) and \(\eta=1\) (maximum phase difference) represent the special cases \(\hat{\psi}=\psi\) and \(\hat{\psi}=-\psi\), respectively. Similar to the case with known perturbation, we define a geometric metric to provide a percentage-based lower bound on the stability of the updated perturbed state-space (3). Given the guaranteed stability region \(\mathcal{S}\) in the \(2\)-dimensional parametric space of \((\eta,\tau)\) (as presented by Proposition 4 and Corollary 3), we define the following geometric metric: \[\Xi_{\tau_{\hat{\Delta}},\theta_{\hat{\Delta}};\frac{\beta}{\rho},n}\;(\%):=100\times\frac{\mathcal{V}_{n^{2}}(\mathbb{D}(\mathcal{S}))}{\mathcal{V}_{n^{2}}(\mathbb{S}_{\rho}^{n^{2}})}, \tag{29}\] where \(\mathcal{V}_{N}(.)\), \(\mathbb{S}_{r}^{N}\), and \(\mathbb{D}(\mathcal{S})\) denote the \(N\)-dimensional volume of an object, the \(N\)-dimensional hypersphere of radius \(r\) centered at the origin, and the set of all \(\delta\) with \(\|\mathbf{vec}^{-1}(\delta)\|_{F}\leq\rho\) corresponding to \(\mathcal{S}\), respectively.

In order to compute \(\Xi_{\tau_{\hat{\Delta}},\theta_{\hat{\Delta}};\frac{\beta}{\rho},n}\;(\%)\) (defined by (29)), we need to compute \(\mathcal{V}_{n^{2}}(\mathbb{D}(\mathcal{S}))\) and \(\mathcal{V}_{n^{2}}(\mathbb{S}_{\rho}^{n^{2}})\). We compute both volumes via integral computation techniques similarly utilized by [29]. First, \(\mathcal{V}_{n^{2}}\big{(}\mathbb{S}_{\rho}^{n^{2}}\big{)}\) can simply be computed as follows: \[\mathcal{V}_{n^{2}}\big{(}\mathbb{S}_{\rho}^{n^{2}}\big{)}=\frac{\pi^{\frac{n^{2}}{2}}\rho^{n^{2}}}{\Gamma\big{(}\frac{n^{2}}{2}+1\big{)}}. \tag{30}\] Second, according to the spherical symmetry, \(\mathcal{V}_{n^{2}}(\mathbb{D}(\mathcal{S}))\) can be computed as follows: \[\mathcal{V}_{n^{2}}(\mathbb{D}(\mathcal{S})) =\int_{\pi\eta_{l}}^{\pi\eta_{u}}f_{u}(\varphi)-f_{l}(\varphi)\ d\varphi, \tag{31a}\] \[f_{u}(\varphi) :=\mathcal{V}_{n^{2}-1}\Big{(}\mathbb{S}_{r_{u}(\varphi)\sin(\varphi)}^{n^{2}-1}\Big{)}\frac{d(r_{u}(\varphi)\cos(\varphi))}{d\varphi},\] (31b) \[f_{l}(\varphi) :=\mathcal{V}_{n^{2}-1}\Big{(}\mathbb{S}_{r_{l}(\varphi)\sin(\varphi)}^{n^{2}-1}\Big{)}\frac{d(r_{l}(\varphi)\cos(\varphi))}{d\varphi}, \tag{31c}\] where \(\varphi:=\pi\eta\), \(r:=\rho\sin(\frac{\pi\tau}{2})\), and \(r_{u}(\varphi)\)/\(\eta_{u}\) and \(r_{l}(\varphi)\)/\(\eta_{l}\) represent the upper and lower curves/bounds corresponding to \(\mathcal{S}\), respectively. Note that the following identities: \[\mathcal{V}_{n^{2}-1}\Big{(}\mathbb{S}_{r_{u}(\varphi)\sin(\varphi)}^{n^{2}-1}\Big{)}=\frac{\pi^{\frac{n^{2}-1}{2}}(r_{u}(\varphi)\sin(\varphi))^{n^{2}-1}}{\Gamma\big{(}\frac{n^{2}-1}{2}+1\big{)}}, \tag{32a}\] \[\mathcal{V}_{n^{2}-1}\Big{(}\mathbb{S}_{r_{l}(\varphi)\sin(\varphi)}^{n^{2}-1}\Big{)}=\frac{\pi^{\frac{n^{2}-1}{2}}(r_{l}(\varphi)\sin(\varphi))^{n^{2}-1}}{\Gamma\big{(}\frac{n^{2}-1}{2}+1\big{)}},\] (32b) \[\frac{d(r_{u}(\varphi)\cos(\varphi))}{d\varphi} =-r_{u}(\varphi)\sin(\varphi)+\frac{dr_{u}(\varphi)}{d\varphi}\cos(\varphi),\] (32c) \[\frac{d(r_{l}(\varphi)\cos(\varphi))}{d\varphi} =-r_{l}(\varphi)\sin(\varphi)+\frac{dr_{l}(\varphi)}{d\varphi}\cos(\varphi), \tag{32d}\] hold. Then, utilizing (30), (31), and (32) enables us to compute \(\Xi_{\tau_{\hat{\Delta}},\theta_{\hat{\Delta}};\frac{\beta}{\rho},n}\;(\%)\).

Fig. 4 depicts the dependency of \(\Xi_{\tau_{\hat{\Delta}},\theta_{\hat{\Delta}};\frac{\beta}{\rho},n}\;(\%)\) on \(\tau_{\hat{\Delta}}\) and \(\theta_{\hat{\Delta}}\) for \(\frac{\beta}{\rho}=\frac{1}{2}\) and \(n=4\). As observed, approaching the origin, the value of the geometric metric improves (a maximum value of \(1.5259\times 10^{-3}\%\)), meaning that a larger amount of perturbations can be handled provided that they are less severe. Similarly, approaching the instability boundary, the value of the geometric metric degrades; that is, a smaller amount of perturbations can be handled provided that they are more severe. Thus, there exists a fundamental trade-off between the potential severeness of perturbations and the successfully handled amount of perturbations.
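For readers who wish to reproduce this computation, a minimal Python sketch of (30) and of the slice integral (31) is given below; the function names, the finite-difference derivative, and the sign convention (chosen so that the slice integral of a full ball comes out positive) are our additions, not the paper's.

```python
import numpy as np
from scipy.special import gammaln
from scipy.integrate import quad

def ball_volume(N, r):
    """Volume of the N-dimensional ball of radius r, as in (30): pi^(N/2) r^N / Gamma(N/2 + 1)."""
    if r <= 0.0:
        return 0.0
    return float(np.exp((N / 2) * np.log(np.pi) + N * np.log(r) - gammaln(N / 2 + 1)))

def region_volume(N, r_u, r_l, phi_l, phi_u, h=1e-6):
    """Slice integral in the spirit of (31): integrate (N-1)-ball cross-sections
    along the polar angle phi between the curves r_l and r_u."""
    def f(r, phi):
        # d(r(phi) cos(phi))/d(phi) by central finite differences; negated so the
        # contribution of an outer shell is positive.
        dz = (r(phi + h) * np.cos(phi + h) - r(phi - h) * np.cos(phi - h)) / (2 * h)
        return ball_volume(N - 1, r(phi) * np.sin(phi)) * (-dz)
    val, _ = quad(lambda p: f(r_u, p) - f(r_l, p), phi_l, phi_u)
    return val

# Sanity checks: the full ball is recovered when r_u is constant and r_l is zero.
assert np.isclose(ball_volume(2, 1.0), np.pi)
assert np.isclose(region_volume(3, lambda p: 1.0, lambda p: 0.0, 0.0, np.pi),
                  ball_volume(3, 1.0), rtol=1e-4)
```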
Figure 3: The guaranteed stability region \(\mathcal{S}\) in the \(2\)-dimensional parametric space of \((\eta,\tau)\) for various cases itemized by Corollary 3 (a) \(i\), (b) \(ii\)-\(1\), (c) \(ii\)-\(2\), (d) \(iii\)-\(1\), (e) \(iii\)-\(2\), (f) \(iv\), and (g) \(v\).

#### IV-B2 Non-fragility-based robust update

Inspired by Propositions 1, 2, 3, and 4 and employing a notion of _non-fragility_ (NF) utilized by [25, 26, 27, 28], we propose a robust update for the case of dealing with an unknown norm-bounded perturbation \(\Delta\) with a known upper bound \(\rho\) on its Frobenius norm, based on the following criterion:

_C1: Choose the point deepest inside the guaranteed stability region \(\hat{S}\) (i.e., farthest from the boundary) as a robust update._

To choose the point deepest inside the guaranteed stability region \(\hat{S}\), we utilize three well-known geometric notions: _(i)_ Chebyshev center, _(ii)_ centroid, and _(iii)_ weighted centroid.

**Chebyshev center**: A robust update based on the Chebyshev center can be computed as follows: \[G^{*}_{\hat{\Delta}_{\mathrm{NF}}}=-B^{+}\hat{\Delta}_{\mathrm{NF}}(C^{T+})^{T}, \tag{33a}\] \[\hat{\Delta}_{\mathrm{NF}}=\rho\sin\Big{(}\frac{\pi\hat{\tau}_{\mathrm{NF}}}{2}\Big{)}\mathbf{vec}^{-1}\Bigg{(}U_{H}\begin{bmatrix}\hat{\phi}_{c}\cos(\frac{\pi\hat{\theta}_{\mathrm{NF}}}{2})\\ \hat{\phi}_{s}\sin(\frac{\pi\hat{\theta}_{\mathrm{NF}}}{2})\end{bmatrix}\Bigg{)},\] (33b) \[\hat{\tau}_{\mathrm{NF}}=\frac{4-2\sqrt{2}}{\pi}\arcsin\Bigg{(}\sqrt{\frac{\beta}{\rho}}\Bigg{)},\hat{\theta}_{\mathrm{NF}}=\hat{\tau}_{\mathrm{NF}}. \tag{33c}\]

**Centroid**: A robust update based on the centroid can be computed as follows: \[G^{*}_{\hat{\Delta}_{\mathrm{NF}}}=-B^{+}\hat{\Delta}_{\mathrm{NF}}(C^{T+})^{T}, \tag{34a}\] \[\hat{\Delta}_{\mathrm{NF}}=\rho\sin\Big{(}\frac{\pi\hat{\tau}_{\mathrm{NF}}}{2}\Big{)}\mathbf{vec}^{-1}\Bigg{(}U_{H}\begin{bmatrix}\hat{\phi}_{c}\cos(\frac{\pi\hat{\theta}_{\mathrm{NF}}}{2})\\ \hat{\phi}_{s}\sin(\frac{\pi\hat{\theta}_{\mathrm{NF}}}{2})\end{bmatrix}\Bigg{)},\] (34b) \[\hat{\tau}_{\mathrm{NF}}=\frac{\int_{\hat{S}}\hat{\tau}\,d\hat{\theta}d\hat{\tau}}{\int_{\hat{S}}d\hat{\theta}d\hat{\tau}},\hat{\theta}_{\mathrm{NF}}=\frac{\int_{\hat{S}}\hat{\theta}\,d\hat{\theta}d\hat{\tau}}{\int_{\hat{S}}d\hat{\theta}d\hat{\tau}}, \tag{34c}\] wherein \(\hat{\theta}_{\mathrm{NF}}=\hat{\tau}_{\mathrm{NF}}\) holds due to the symmetry of \(\hat{S}\) with respect to \(\hat{\theta}=\hat{\tau}\).

Specifically, for the case of \(\rho<\beta_{\mathbb{R}}(A+BFC)\), both of the robust updates in (33) and (34) reduce to the following form: \[G^{*}_{\hat{\Delta}_{\mathrm{NF}}}=-B^{+}\hat{\Delta}_{\mathrm{NF}}(C^{T+})^{T}, \tag{35a}\] \[\hat{\Delta}_{\mathrm{NF}}=\frac{\rho}{2}\mathbf{vec}^{-1}\Bigg{(}U_{H}\begin{bmatrix}\hat{\phi}_{c}\\ \hat{\phi}_{s}\end{bmatrix}\Bigg{)}. \tag{35b}\] Note that in (35) \((\hat{\tau}_{\mathrm{NF}},\hat{\theta}_{\mathrm{NF}})=(\frac{1}{2},\frac{1}{2})\) holds as \(\hat{S}=\hat{\mathbb{S}}\) holds.
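A minimal Python sketch of the Chebyshev-center parameters in (33c), together with the \(\rho<\beta_{\mathbb{R}}(A+BFC)\) reduction of (35), is given below; the function name is ours. As a consistency check, for \(\frac{\beta}{\rho}=\frac{1}{2}\) it returns \(\hat{\tau}_{\mathrm{NF}}=\hat{\theta}_{\mathrm{NF}}=\frac{2-\sqrt{2}}{2}\), the value reported for the \(AC4\) benchmark in Section V.

```python
import numpy as np

def chebyshev_center_update_params(beta: float, rho: float) -> tuple[float, float]:
    """Chebyshev-center NF parameters per (33c):
    tau_NF = theta_NF = ((4 - 2*sqrt(2)) / pi) * arcsin(sqrt(beta / rho)).
    For rho < beta, the guaranteed region is the full square and, per (35),
    (1/2, 1/2) is used instead."""
    if rho < beta:
        return 0.5, 0.5
    tau_nf = (4 - 2 * np.sqrt(2)) / np.pi * np.arcsin(np.sqrt(beta / rho))
    return tau_nf, tau_nf

# For beta/rho = 1/2 this yields (2 - sqrt(2))/2 ~= 0.2929 for both parameters.
print(chebyshev_center_update_params(beta=0.5, rho=1.0))
```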
**Weighted centroid**: A robust update based on a weighted centroid can be computed as follows: \[G^{*}_{\hat{\Delta}_{\mathrm{NF}}}=-B^{+}\hat{\Delta}_{\mathrm{NF}}(C^{T+})^{T}, \tag{36a}\] \[\hat{\Delta}_{\mathrm{NF}}=\rho\sin\Big{(}\frac{\pi\hat{\tau}_{\mathrm{NF}}}{2}\Big{)}\mathbf{vec}^{-1}\Bigg{(}U_{H}\begin{bmatrix}\hat{\phi}_{c}\cos(\frac{\pi\hat{\theta}_{\mathrm{NF}}}{2})\\ \hat{\phi}_{s}\sin(\frac{\pi\hat{\theta}_{\mathrm{NF}}}{2})\end{bmatrix}\Bigg{)},\] (36b) \[\hat{\tau}_{\mathrm{NF}}=\frac{\int_{\hat{S}}\Xi(\hat{\tau},\hat{\theta})\hat{\tau}\,d\hat{\theta}d\hat{\tau}}{\int_{\hat{S}}\Xi(\hat{\tau},\hat{\theta})\,d\hat{\theta}d\hat{\tau}},\hat{\theta}_{\mathrm{NF}}=\frac{\int_{\hat{S}}\Xi(\hat{\tau},\hat{\theta})\hat{\theta}\,d\hat{\theta}d\hat{\tau}}{\int_{\hat{S}}\Xi(\hat{\tau},\hat{\theta})\,d\hat{\theta}d\hat{\tau}}. \tag{36c}\]

The following corollary highlights that since \(G^{*}_{\hat{\Delta}_{\mathrm{NF}}}\) in (33), (34), and (36) all lie inside the guaranteed stability region \(\hat{S}\), the corresponding \(F+G^{*}_{\hat{\Delta}_{\mathrm{NF}}}\) is a robust updated stabilizing SOF controller.

**Corollary 4**.: _Given an unknown norm-bounded perturbation \(\Delta\) with an upper bound \(\rho\) on its Frobenius norm, \(F+G^{*}_{\hat{\Delta}_{\mathrm{NF}}}\) with \(G^{*}_{\hat{\Delta}_{\mathrm{NF}}}\) in (33), (34), and (36) is a robust updated stabilizing SOF controller._

We highlight that for an arbitrary choice of \((\hat{\tau},\hat{\theta})\), one can similarly compute the corresponding \(G^{*}_{\hat{\Delta}}\) via \[G^{*}_{\hat{\Delta}}=-B^{+}\hat{\Delta}(C^{T+})^{T}, \tag{37a}\] \[\hat{\Delta}=\rho\sin\Big{(}\frac{\pi\hat{\tau}}{2}\Big{)}\mathbf{vec}^{-1}\Bigg{(}U_{H}\begin{bmatrix}\hat{\phi}_{c}\cos(\frac{\pi\hat{\theta}}{2})\\ \hat{\phi}_{s}\sin(\frac{\pi\hat{\theta}}{2})\end{bmatrix}\Bigg{)}. \tag{37b}\]

Given \((\rho,n)\), computing \((\beta_{\mathbb{R}}(A+BFC),\hat{\tau}_{\mathrm{NF}},\hat{\theta}_{\mathrm{NF}})\), and having access to sufficiently accurate estimates \((\hat{\phi}_{c},\hat{\phi}_{s})\) of \((\phi_{c},\phi_{s})\), we can utilize (33), (34), and (36) to propose robust updated stabilizing SOF controllers.

## V Numerical Simulations

This section is naturally divided into two main parts: _(i)_ known norm-bounded perturbation and _(ii)_ unknown norm-bounded perturbation. To assess the effectiveness of the theoretical results, we employ two of the SOF controller benchmarks collected by [30]. To design a nominal stabilizing SOF controller \(F\), we utilize the MATLAB built-in function \(\mathtt{hinfstruct}(.)\)[31], which has been developed based on [32] to synthesize structured \(\mathcal{H}_{\infty}\) controllers. As mentioned earlier in the paper, computing the exact value of the MDRP \(\beta\) is theoretically impossible. However, we utilize the following optimization problem: \[\max_{v\in\mathbb{R}^{n^{2}},\;\beta\in\mathbb{R}_{++}}\;\alpha\bigg{(}A+BFC+\beta\mathbf{vec}^{-1}\bigg{(}\frac{v}{\|v\|}\bigg{)}\bigg{)}, \tag{38}\] along with a specialized bisection method (fixing the value of \(\beta\) and solving for a \(v\in\mathbb{R}^{n^{2}}\)), to obtain a near-optimal value of \(\beta\). We initialize \(\beta\) with \(\beta^{n}_{\mathrm{R}}\) and at each step, we check whether the maximum value, namely \(\alpha^{*}\), is non-negative or not. To solve the optimization problem, one could utilize MATLAB's built-in function \(\mathtt{fminunc}(.)\).
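As a rough illustration of this bisection (a sketch under our own assumptions, with SciPy's `minimize` standing in for \(\mathtt{fminunc}(.)\) and a column-major reshape standing in for \(\mathbf{vec}^{-1}\)), one might write:

```python
import numpy as np
from scipy.optimize import minimize

def spectral_abscissa(M):
    """alpha(M): largest real part of the eigenvalues of M."""
    return float(np.max(np.linalg.eigvals(M).real))

def worst_alpha(Acl, beta, n, restarts=8, seed=0):
    """Inner problem of (38) for fixed beta: maximize alpha over unit-norm
    directions v, with random restarts. vec^{-1} is taken as a column-major
    reshape here; adjust if the paper's vec convention differs."""
    rng = np.random.default_rng(seed)
    best = -np.inf
    for _ in range(restarts):
        v0 = rng.standard_normal(n * n)
        obj = lambda v: -spectral_abscissa(
            Acl + beta * (v / np.linalg.norm(v)).reshape((n, n), order="F"))
        res = minimize(obj, v0, method="Nelder-Mead",
                       options={"maxiter": 2000, "xatol": 1e-8, "fatol": 1e-10})
        best = max(best, -res.fun)
    return best

def estimate_mdrp(Acl, lo=0.0, hi=1.0, tol=1e-4):
    """Bisection on beta: the largest beta whose worst-case alpha stays negative."""
    n = Acl.shape[0]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if worst_alpha(Acl, mid, n) < 0:
            lo = mid
        else:
            hi = mid
    return lo
```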
We emphasize that the efficiency of the proposed updated stabilizing SOF controller mainly depends on the computational efficiency of the MDRP \(\beta\), as the computational complexity of (12) is \(\mathcal{O}(n^{2}\min\{m,p\})\).

### _Known norm-bounded perturbation_

Let us consider a lateral axis model of an \(L-1011\) aircraft in cruise flight conditions (\(AC3\)) [30]. We design the following nominal stabilizing SOF controller \(F\) via \(\mathtt{hinfstruct}(.)\): \[F=\begin{bmatrix}0&0&0&-0.5057\\ 0.7521&0&-3.0713&1.1408\end{bmatrix},\] for which \(\alpha(A+BFC+\Delta)=0.0483\) (i.e., a destabilizing \(\Delta\)), \(\beta=0.1931\), and \(\beta_{\mathbb{R}}^{\mathrm{q}}=0.3230\) hold.

Fig. 5 (Left) visualizes the stability regions for the \(AC3\) benchmark with \(\frac{\beta}{\rho}=\frac{1}{2}\): the guaranteed (conservative) stability region based on Proposition 2 and the exact one based on \(\alpha(A+BFC+\Delta+BG_{\Delta}^{*}C)<0\) with \(G_{\Delta}^{*}\) in (12). As expected, the guaranteed (conservative) stability region is a subset of the exact one. For instance, the update \(G_{\Delta}^{*}\) for \((\tau_{\Delta},\theta_{\Delta})=(0.45,0.45)\) is as follows: \[G_{\Delta}^{*}=\begin{bmatrix}0.0745&-0.2034&0.0214&-0.0939\\ 0.0115&-0.0302&0.0018&-0.0169\end{bmatrix},\] for which \(\alpha(A+BFC+\Delta+BG_{\Delta}^{*}C)=-0.0637\) holds, and the updated stabilizing SOF controller \(F+G_{\Delta}^{*}\) is as follows: \[F+G_{\Delta}^{*}=\begin{bmatrix}0.0745&-0.2034&0.0214&-0.5996\\ 0.7636&-0.0302&-3.0695&1.1239\end{bmatrix}.\]

It is remarkable that accurately computing \(\beta\) plays a significant role in accurately identifying the stability regions. As Fig. 5 (Right) depicts, choosing \(\rho\) equal to \(2\times 0.1931\) (as chosen for Fig. 5 (Left)) and \(\beta\) equal to \(0.3230\) (an inaccurate value) leads to misleading stability regions. First, the guaranteed (conservative) stability region has erroneously been enlarged. Second, the guaranteed (conservative) stability region has erroneously become a superset of the exact one.

### _Unknown norm-bounded perturbation_

Let us consider the autopilot control problem for an air-to-air missile (\(AC4\)) [30]. The number of states for such a control problem is \(n=4\). For \(\frac{\beta}{\rho}=\frac{1}{2}\) and \(n=4\), we get the following NF-based designs: \[(\hat{\tau}_{\mathrm{NF}}^{\mathrm{Cheb.\ center}},\hat{\theta}_{\mathrm{NF}}^{\mathrm{Cheb.\ center}})=\bigg{(}\frac{2-\sqrt{2}}{2},\frac{2-\sqrt{2}}{2}\bigg{)},\] \[(\hat{\tau}_{\mathrm{NF}}^{\mathrm{Centroid}},\hat{\theta}_{\mathrm{NF}}^{\mathrm{Centroid}})=(0.3787,0.3787),\] \[(\hat{\tau}_{\mathrm{NF}}^{\mathrm{W-centroid}},\hat{\theta}_{\mathrm{NF}}^{\mathrm{W-centroid}})=(0.1603,0.2278),\] which attain \(\Xi_{\frac{\beta}{\rho},\tau_{\Delta},\theta_{\Delta}}^{\mathrm{Cheb.\ center}}=5.0077\times 10^{-7}\%\), \(\Xi_{\frac{\beta}{\rho},\tau_{\Delta},\theta_{\Delta}}^{\mathrm{Centroid}}=2.0450\times 10^{-10}\%\), and \(\Xi_{\frac{\beta}{\rho},\tau_{\Delta},\theta_{\Delta}}^{\mathrm{W-centroid}}=7.0944\times 10^{-5}\%\), respectively. Fig. 6 visualizes the guaranteed stability region \(\mathcal{S}\) in the \(2\)-dimensional parametric space of \((\eta,\tau)\) for the various NF-based robust updates. As observed, the weighted centroid update attains the best average performance as it considers both being far from the boundary and obtaining a large guaranteed stability region (i.e., a large value of the geometric metric).

Fig. 5: The stability regions for the \(AC3\) benchmark with (a) \(\rho=2\beta^{\mathrm{accurate}}\) and \(\beta=\beta^{\mathrm{accurate}}\) and (b) \(\rho=2\beta^{\mathrm{accurate}}\) and \(\beta=\beta^{\mathrm{inaccurate}}\); the guaranteed (conservative) stability regions based on Proposition 2 (filled with blue circles) and the exact ones based on \(\alpha(A+BFC+\Delta+BG_{\Delta}^{*}C)<0\) with \(G_{\Delta}^{*}\) in (12) (filled with red asterisks).

Fig. 6: The guaranteed stability region \(\mathcal{S}\) in the \(2\)-dimensional parametric space of \((\eta,\tau)\) for the various NF-based robust updates (Chebyshev center in red, Centroid in green, and Weighted centroid in blue) for \(\frac{\beta}{\rho}=\frac{1}{2}\) and \(n=4\). Colored circles on the vertical axis represent the corresponding perturbations in the ideal case, i.e., \(\hat{\Delta}_{\mathrm{NF}}=\Delta\).
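A minimal Python sketch of the pseudoinverse-based update form \(-B^{+}\Delta(C^{T+})^{T}\) used above is given below; the random test matrices are purely illustrative and are not the \(AC3\) model.

```python
import numpy as np

def sof_update(B: np.ndarray, C: np.ndarray, Delta_hat: np.ndarray) -> np.ndarray:
    """Least-squares SOF update of the form G* = -B^+ Delta_hat (C^{T+})^T,
    with ^+ denoting the Moore-Penrose pseudoinverse."""
    return -np.linalg.pinv(B) @ Delta_hat @ np.linalg.pinv(C.T).T

def spectral_abscissa(M: np.ndarray) -> float:
    return float(np.max(np.linalg.eigvals(M).real))

# Hypothetical small example: check alpha(A + BFC + Delta + B G* C) for a
# randomly generated system; a negative value indicates stability.
rng = np.random.default_rng(1)
n, m, p = 4, 2, 4
A, B, C = -np.eye(n), rng.standard_normal((n, m)), rng.standard_normal((p, n))
F = np.zeros((m, p))
Delta = 0.05 * rng.standard_normal((n, n))
G = sof_update(B, C, Delta)
print(spectral_abscissa(A + B @ F @ C + Delta + B @ G @ C))
```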
Also, unlike the Chebyshev center update and the centroid update, for the case of weighted centroid update the identity \(\hat{\theta}_{\mathrm{NF}}=\hat{\tau}_{\mathrm{NF}}\) does not necessarily hold for \(c_{1}\neq c_{2}\). Fig. 7 illustrates the weighted centroid updates for \(\frac{\beta}{\rho}=\frac{1}{2}\) and various values of \(n\). As Fig. 7 depicts, the higher the dimension \(n\), the closer to the origin, the weighted centroid update we get. Tab. I reflects the corresponding values of the geometric metric \(\Xi_{\tau_{\hat{\Delta}},\theta_{\hat{\Delta}};\frac{\beta}{\rho},n}\) (\(\%\)) for the weighted centroid updates with \(\frac{\beta}{\rho}=\frac{1}{2}\) and various values of \(n\). As Tab. I shows, the higher the dimension \(n\), the smaller geometric metric \(\Xi_{\tau_{\hat{\Delta}},\theta_{\hat{\Delta}};\frac{\beta}{\rho},n}\) (\(\%\)) we get. Given an arbitrary point \((\eta^{\mathrm{ap}},\tau^{\mathrm{ap}})\) in \(2\)-dimensional parametric space of \((\eta,\tau)\) and utilizing the itemized characterization proposed by Proposition 4, we visualize all the \((\hat{\tau},\hat{\theta})\)'s belonging to \(\hat{S}\) for which the guaranteed stability region \(\mathcal{S}\) contains \((\eta^{\mathrm{ap}},\tau^{\mathrm{ap}})\). For instance, Fig. 8 depicts such a visualization for \((\eta^{\mathrm{ap}},\tau^{\mathrm{ap}})=(0.1,0.5)\) and \(\frac{\beta}{\rho}=\frac{1}{2}\). Fig. 9 visualizes all the \(G_{\hat{\Delta}}^{*}\)-stabilizable points \((\eta^{\mathrm{ap}},\tau^{\mathrm{ap}})\) in \(2\)-dimensional parametric space of \((\eta,\tau)\) for \(\frac{\beta}{\rho}=\frac{1}{2}\). As expected, from Fig. 9, we realize that the perturbations with both high gain (\(\propto\tau\)) and high phase difference (\(\propto\eta\)) are not \(G_{\hat{\Delta}}^{*}\)-stabilizable. Fig. 5: The stability regions for \(AC3\) benchmark with (a) \(\rho=2\beta^{\mathrm{accurate}}\) and \(\beta=\beta^{\mathrm{accurate}}\) and (b) \(\rho=2\beta^{\mathrm{accurate}}\) and \(\beta=\beta^{\mathrm{ inaccurate}}\); the guaranteed (conservative) stability regions based on Proposition 2 (filled with blue circles) and the exact ones based on \(\alpha(A+BFC+\Delta+BG_{\Delta}^{*})<0\) with \(G_{\Delta}^{*}\) in (12) (filled with red asterisks). Fig. 6: The guaranteed stability region \(\mathcal{S}\) in \(2\)-dimensional parametric space of \((\eta,\tau)\) for various NF-based robust updates (Chebyshev center in red, Centroid in green, and Weighted centroid in blue) for \(\frac{\beta}{\rho}=\frac{1}{2}\) and \(n=4\). Colored circles on the vertical axis represent the corresponding perturbations in the ideal case, i.e., \(\hat{\Delta}_{\mathrm{NF}}=\Delta\). Given a \(G^{*}_{\hat{\Delta}}\)-stabilizable point \((\eta^{\text{ap}},\tau^{\text{ap}})\) in \(2\)-dimensional parametric space of \((\eta,\tau)\), we define the following geometric metric: \[\mathcal{M}_{\eta,\theta,\frac{\hat{\sigma}}{\rho}}\ (\%):=100\times G^{*}_{\hat{ \Delta}}-\text{stabilizing\ Region\ Area}, \tag{39}\] to quantify the \(G^{*}_{\hat{\Delta}}\)-stabilizability. Fig. 10 visualizes the \(G^{*}_{\hat{\Delta}}\)-stabilizability geometric metric \(\mathcal{M}_{\eta,\theta,\frac{\hat{\sigma}}{\rho}}\ (\%)\) for \(\frac{\hat{\sigma}}{\rho}=\frac{1}{2}\). The larger \(\mathcal{M}_{\eta,\theta,\frac{\hat{\sigma}}{\rho}}\ (\%)\), the easier to stabilize a \(G^{*}_{\hat{\Delta}}\)-stabilizable point \((\eta^{\text{ap}},\tau^{\text{ap}})\) in \(2\)-dimensional parametric space of \((\eta,\tau)\) we have. As Fig. 
10 depicts, the largest value of \(\mathcal{M}_{\eta,\theta,\frac{\beta}{\rho}}\ (\%)\), i.e., \(39.1386\%\), is attained by \((\eta^{\text{ap}},\tau^{\text{ap}})=(0,\frac{1}{3})\). A possible justification for such an observation can be the fact that \(\eta^{\text{ap}}=0\) corresponds to a zero phase difference and \(\tau^{\text{ap}}=\frac{1}{3}\) corresponds to \(r=\rho\sin(\frac{\pi}{6})=\frac{\rho}{2}\). Note that \(\mathcal{M}_{\eta,\theta,\frac{\beta}{\rho}}\ (\%)=0\%\) in Fig. 10 represents the points \((\eta^{\text{ap}},\tau^{\text{ap}})\) in the \(2\)-dimensional parametric space of \((\eta,\tau)\) that are not \(G^{*}_{\hat{\Delta}}\)-stabilizable. Fig. 11 depicts the corresponding \(G^{*}_{\hat{\Delta}}\)-stabilizing region for \((\eta^{\text{ap}},\tau^{\text{ap}})=(0,\frac{1}{3})\).

To empirically verify the relative performance of the weighted centroid update compared to the centroid update, the Chebyshev center update, and the update based on a point close to the origin \((\hat{\tau},\hat{\theta})=(0.01,0)\), we generate uniformly random samples of an unknown norm-bounded perturbation \(\Delta\) with \(0<\|\Delta\|_{F}\leq\rho\)[33] and check whether \(\|\Delta-\hat{\Delta}\|_{F}<\upsilon\) holds. To be more precise, we generate the uniformly random samples as follows: \[\Delta=\mathbf{vec}^{-1}\bigg{(}r\frac{\vartheta}{\|\vartheta\|}\bigg{)},\ r\in\rho\times\mathcal{U}(0,1)^{\frac{1}{n^{2}}},\vartheta\in\mathcal{N}(0,I_{n^{2}}).\] According to (23), considering \[\psi=\begin{bmatrix}\phi_{c}\cos(\frac{\pi\theta}{2})\\ \phi_{s}\sin(\frac{\pi\theta}{2})\end{bmatrix},\hat{\psi}=\begin{bmatrix}\hat{\phi}_{c}\cos(\frac{\pi\hat{\theta}}{2})\\ \hat{\phi}_{s}\sin(\frac{\pi\hat{\theta}}{2})\end{bmatrix}\] and defining \((\gamma_{c},\gamma_{s}):=(\phi^{T}_{c}\hat{\phi}_{c},\phi^{T}_{s}\hat{\phi}_{s})\), \(c_{\theta}:=\cos(\frac{\pi\theta}{2})\), \(s_{\theta}:=\sin(\frac{\pi\theta}{2})\), \(c_{\hat{\theta}}:=\cos(\frac{\pi\hat{\theta}}{2})\), and \(s_{\hat{\theta}}:=\sin(\frac{\pi\hat{\theta}}{2})\), we get \[\eta=\frac{1}{\pi}\arccos(\psi^{T}\hat{\psi})=\frac{1}{\pi}\arccos(\gamma_{c}c_{\theta}c_{\hat{\theta}}+\gamma_{s}s_{\theta}s_{\hat{\theta}}). \tag{40}\] It is noteworthy that \(\gamma_{c}\in[-1,1]\) and \(\gamma_{s}\in[-1,1]\) hold. For the ideal case of the estimated \(\hat{\Delta}\), i.e., \(\hat{\Delta}=\Delta\), on the one hand, we have \(\hat{\theta}=\theta\) and subsequently \((c_{\hat{\theta}},s_{\hat{\theta}})=(c_{\theta},s_{\theta})\). Also, we have \((\hat{\phi}_{c},\hat{\phi}_{s})=(\phi_{c},\phi_{s})\) and subsequently \((\gamma_{c},\gamma_{s})=(1,1)\). Consequently, according to (40), we observe that \(\eta=0\) holds. On the other hand, for the ideal case of the estimated \(\hat{\Delta}\), \(\hat{\tau}=\tau\) or equivalently \(\hat{r}=r\) holds. Since \(U_{H}\psi=\frac{\vartheta}{\|\vartheta\|}\) and \(U_{H}^{T}U_{H}=I_{n^{2}}\) hold, we have \(\psi=U_{H}^{T}\frac{\vartheta}{\|\vartheta\|}\). Then, defining \(\mu:=\begin{bmatrix}I_{mp}&0\end{bmatrix}\psi\) and \(\nu:=\begin{bmatrix}0&I_{n^{2}-mp}\end{bmatrix}\psi\), we get \[\phi_{c}=\frac{\mu}{\|\mu\|},\ \mu=\begin{bmatrix}I_{mp}&0\end{bmatrix}U_{H}^{T}\frac{\vartheta}{\|\vartheta\|}, \tag{41a}\]
In order to compute \(G_{\Delta}^{*}\) in (37), we need to choose \((\hat{\phi}_{c},\hat{\phi}_{s})\) given \((\gamma_{c},\gamma_{s})\). The more accurate \((\gamma_{c},\gamma_{s})\) (i.e., the larger values of \(\gamma_{c}\) and/or \(\gamma_{s}\)) and/or \((\hat{\tau},\hat{\theta})\) (i.e., the smaller values of \(\hat{\tau}-\tau\) and/or \(\hat{\theta}-\theta\)), the more accurate estimate \(\hat{\Delta}\) we have. Given \((\gamma_{c},\gamma_{s})\) and computing \((\phi_{c},\phi_{s})\), we solve the following equations: \[\phi_{c}^{T}\hat{\phi}_{c}-\gamma_{c}=0,\phi_{s}^{T}\hat{\phi}_{s}-\gamma_{s} =0, \tag{42}\] for \((\hat{\phi}_{c},\hat{\phi}_{s})\) via the MATLAB built-in function \(\texttt{fsolve}(.)\). We generate \(N_{\Delta}=10^{6}\) uniformly random samples inside the \(n^{2}\)-dimensional hypersphere of radius \(\rho\) centered at origin by the Cartesian product of \(N_{r}=10^{4}\) samples of \(r\) and \(N_{\vartheta}=10^{2}\) samples of \(\vartheta\). Fig. 12 depicts the relative performance of the weighted centroid update compared to the centroid update, the Chebyshev center update, and the update based on a point close to the origin \((\hat{\tau},\hat{\theta})=(0.01,0)\) for various choices of \((\gamma_{c},\gamma_{s})\). As Fig. 12 shows, for the case of a more accurate estimate \(\hat{\Delta}\) (i.e., the larger values of \(\gamma_{c}\) and/or \(\gamma_{s}\)), the weighted centroid update outperforms all the other updates. Interestingly, as the estimation quality degrades (i.e., the values of \(\gamma_{c}\) and/or \(\gamma_{s}\) decrease as visualized by the trend from Fig. 12 (Top-Left) to Fig. 12 (Bottom-Right)), a point close to the origin attains the best relative performance. Such an observation can be interpreted in this way that when we have no accurate information about the perturbation, the best strategy is choosing a point close to the origin (e.g., \((\hat{\tau},\hat{\theta})=(0.01,0)\)) as it attains the largest value of the geometric metric \(\Xi_{\tau_{\Delta},\theta_{\Delta}^{*},\hat{\rho},n}\)\((\%)\). Also, as Fig. 12 (Top-Left) depicts, we observe that the Chebyshev center update outperforms the centroid update. Fig. 13 depicts the corresponding plots for the case of checking \(\|BG_{\Delta}^{*}C+\Delta\|_{F}<\beta\) (the exact one) instead of \(\|\Delta-\hat{\Delta}\|_{F}<\upsilon\) (the guaranteed (conservative) stability region). Similar observations/rends to the observations/trends depicted in Fig. 12 are also observed in Fig. 13. One difference is that fortunately, the relative performance of the NF-based robust updates in the exact scenario can be better than the guaranteed (conservative) scenario. Fig. 14 visualizes the relative performance (both guaranteed (conservative) and exact scenarios) of the weighted centroid update compared to the centroid update, the Chebyshev center update, and the update based on a randomly generated point \((\hat{\tau},\hat{\theta})=(0.4081,0.3969)\) for a randomly generated choice of \((\gamma_{c},\gamma_{s})=(0.9212,0.8315)\). As Fig. 14 (Left) shows, the weighted centroid update is the only successful update among all the updates. Fig. 14 (Right) similarly depicts the outperformance of the weighted centroid update compared to the other updates. Also, it depicts that in the exact scenario, the other updates have attained some positive results. The descending order of the performance according to Fig. 14 (Right) is the W-centroid update, the Cheb. center update, the centroid update, and the random update. 
Interestingly, we observe that the corresponding values of the geometric metric \(\Xi_{\tau_{\Delta},\theta_{\Delta}^{*},\hat{\rho},n}\)\((\%)\) have the same order \((7.0944\times 10^{-5}\%,5.0077\times 10^{-7}\%,2.0450\times 10^{-10}\%,7.1872 \times 10^{-12}\%)\). ## VI Concluding Remarks In this paper, we propose a simple yet efficient update of a nominal stabilizing SOF controller. According to the derived theoretical and empirical results throughout the paper, we present the following answer to the question stated in Section II (_Q1_): _A1_: A least-squares problem built upon the notion of MDRP enables us to propose an efficient updated stabilizing SOF controller. For both known and unknown perturbations with a known upper bound on their norm, we derive sufficient stability conditions followed by the characterized guaranteed stability regions. Moreover, we define geometric metrics to quantify the stability robustness of the proposed updated stabilizing SOF controllers. Specifically, for unknown perturbations with a known upper bound on their norm, we interestingly observe Figure 12: The relative performance of the weighted centroid update compared to the centroid update (Case \(1\)), the Chebyshev center update (Case \(2\)), and the update based on a point close to the origin \((\hat{\tau},\hat{\theta})=(0.01,0)\) (Case \(3\)). The Left, Middle, and Right bars in each case respectively correspond to Better, Equal, and Worse relative performances. Note that in each case the scenarios in which \(\|\Delta-\hat{\Delta}\|_{F}<\upsilon\) holds neither the weighted centroid update nor by the counterpart, are eliminated. (a) \((\gamma_{c},\gamma_{s})=(1,1)\), (b) \((\gamma_{c},\gamma_{s})=(0.9,0.9)\), (c) \((\gamma_{c},\gamma_{s})=(0.8,0.8)\), (d) \((\gamma_{c},\gamma_{s})=(0.7,0.7)\), (e) \((\gamma_{c},\gamma_{s})=(0.6,0.6)\), and (f) \((\gamma_{c},\gamma_{s})=(0.5,0.5)\). that the NF-based robust updates attain better performance compared to the random update. Moreover, in the case of a sufficiently accurate estimation of the unknown perturbation, the descending order of the NF-based robust updates in terms of performance is the weighted centroid update, the Cheb. center update, and the centroid design. _Limitations:_ Like any engineering solution, the proposed updated stabilizing SOF controller has some limitations. The main limitations are three-fold: _(i)_ we propose a semi-dynamic solution to a dynamic problem. The static nature comes from the utilized least-squares problem and the dynamic nature comes from the information stored in the nominal stabilizing SOF controller \(F\) for the state-space triplet \((A,B,C)\) (i.e., \(\beta_{\mathbb{R}}(A+BFC)\)), _(ii)_ computing the exact value of MDRP \(\beta_{\mathbb{R}}(A+BFC)\) is theoretically impossible and the practical heuristics to estimate \(\beta_{\mathbb{R}}(A+BFC)\) may provide the less accurate values. The less accurate \(\beta_{\mathbb{R}}(A+BFC)\), the less accurate guaranteed stability we get for the proposed update. Also, the more time-consuming practical heuristics we utilize to estimate \(\beta_{\mathbb{R}}(A+BFC)\), the less efficient update we get, and _(iii)_ Unlike the typical update, the proposed update can be destabilizing for a subset of perturbations as illustrated by the region outside the guaranteed stability region \(S_{\kappa}\) for \(\kappa<1\), i.e., \(\beta_{\mathbb{R}}(A+BFC)<\rho\). 
However, note that the positive point about the proposed update is that, unlike the typical update, it always provides a non-empty guaranteed stability region (the typical approach can fail to propose a updated stabilizing SOF controller as it is a sophisticated problem in general).
2308.11092
Do the Outburst Properties of M31N 2008-12a Depend on the Time Since the Previous Eruption?
Photometric observations spanning the UV to the near IR during the nine most recent eruptions (2014-2022) of the extragalactic nova M31N 2008-12a are presented and analyzed in order to explore whether the lightcurve properties for a given eruption, specifically the peak magnitudes and fade rates, are correlated with the time interval since the previous eruption. No significant correlation between the pre-eruption interval and the rate of decline was found, however it appears that the brightness at the peak of an outburst may be positively correlated with the time interval since the previous eruption.
William A. Burris, Allen W. Shafter, Kamil Hornoch
2023-08-22T00:19:17Z
http://arxiv.org/abs/2308.11092v1
# Do the Outburst Properties of M31N 2008-12a Depend on the Time Since the Previous Eruption? ###### Abstract Photometric observations spanning the UV to the near IR during the nine most recent eruptions (2014-2022) of the extragalactic nova M31N 2008-12a are presented and analyzed in order to explore whether the lightcurve properties for a given eruption, specifically the peak magnitudes and fade rates, are correlated with the time interval since the previous eruption. No significant correlation between the pre-eruption interval and the rate of decline was found, however it appears that the brightness at the peak of an outburst may be positively correlated with the time interval since the previous eruption. Andromeda Galaxy (39) - Novae (1127) - Recurrent Novae (1366) + Footnote †: journal: (and accepted) RNAAS 0000-0002-4002-8885]William A. Burris 0000-0002-4882-0888]Allen W. Shafter 0000-0002-4882-0888]Kamil Hornoch ## 1 Introduction M31N 2008-12a was discovered by Koichi Nishiyama and Fujio Kabashima, on 2008 Dec. 26.48 UT in the outskirts of M311. After the object was seen again in the fall of 2011 and 2012, Shafter et al. (2012) speculated that the object was either a recurrent nova or a slow nova undergoing multiple rebrightenings. When the object erupted again in the fall of 2013, the recurrent nova nature of the object became clear (Darnley et al., 2014; Tang et al., 2014). Based on timings of the 15 eruptions observed every year since 2008, the mean recurrence time is \(364.18\pm 2.18\) d, or \(0.997\pm 0.006\) yr, which is the shortest of any known nova. Footnote 1: [http://www.cbat.eps.harvard.edu/iau/CBAT_M31.html#](http://www.cbat.eps.harvard.edu/iau/CBAT_M31.html#) The \(\sim 1\) yr recurrence time offers an exceptional opportunity to study how the properties of a nova progenitor affects the outburst behavior. In the case of 2008-12a, the short recurrence time requires that the white dwarf in the system must be near the Chandrashekhar mass and accreting at a rate a few times \(10^{-7}\)\(M_{\odot}\) yr\({}^{-1}\)(e.g., Kato et al., 2014). Starting in 2014 a worldwide campaign to monitor 2008-12a has produced a wealth of photometric data (e.g., see Darnley and Henze, 2020, and references therein). Here, we analyze lightcurve data from 2014 to the most recent eruption in 2022. Our primary goal is to explore whether properties of the UV and optical lightcurves are affected by the time since the previous eruption (\(t_{\rm pre}\)). ## 2 Photometric Data Optical (Johnson \(B,V,R,I\) and Sloan \(r\)) and UVOT (UVW1 and UVW2) photometry for eruptions from 2014 to 2022 have been analyzed to determine lightcurve parameters. The optical data were gleaned from the literature, while the UVOT photometry was obtained from the _Swift_ archive. The UVOT data were reanalyzed using the _HEASoft_ tool _uvotmaghist_ to achieve the largest temporal coverage available, while using smaller apertures (\(3^{\prime\prime}\) instead of the \(5^{\prime\prime}\) used in the default _Swift_ calibration) and multiple background regions to maximize the signal-to-noise. Unfortunately, the UVOT data were not always obtained using the same filters making direct comparisons between the various eruptions difficult. Both the UVW1 (\(\lambda_{\rm eff}\sim 2600\) A) and UVW2 (\(\lambda_{\rm eff}\sim 1928\) A) filters were used in 2014, while in subsequent years only one filter was employed: UVW1 in 2015, and UVW2 from 2016 onwards. 
## 3 Lightcurve Parameters Linear least-square fits to \(\sim\)2 mag below peak were performed on the rise (when observed) and fall from maximum light. When the rise was observed, the intersection of the two linear fits was taken to be the time of maximum light, and the extrapolated magnitude at that time was taken to be the peak magnitude. When the rise was not captured, we took the brightest observed point to represent maximum light2. In all cases, the rate of decline from maximum light was used to determine the corresponding \(t_{2}\) times (the time to decline by 2 mag from maximum light) for each eruption. There were some years where only a few data points in a particular filter were measured, not enough to measure a meaningful \(t_{2}\) time. In these cases, we also adopted the brightest measured magnitude as maximum light. Footnote 2: Formally, this represents a lower limit to the actual peak magnitude, but due to the sustained monitoring around the world, observations were typically made within hours of discovery, and thus likely close to peak brightness. The peak magnitudes and \(t_{2}\) times are shown in Figure 1 where we have plotted the parameters as a function of the time interval since previous eruption, \(t_{\rm pre}\). Given that \(t_{\rm pre}\) was (remarkably!) nearly identical for several epochs, we averaged the maximum magnitudes and \(t_{2}\) times for those pre-eruption intervals. This process resulted in five mean intervals of \(\langle t_{\rm pre}\rangle=\) 310.49 d (2014/2018), 329.83 d (2015), 362.34 d (2019/2020), 382.09 d (2017/2021/2022), and 471.34 d (2016). No correlation is apparent between the \(t_{2}\) and \(t_{\rm pre}\) times, however we do find tentative evidence for a correlation between the peak brightness and \(t_{\rm pre}\). The most compelling evidence is provided by the 2016 eruption, which had the longest pre-eruption interval of Figure 1: _Top Left_: The peak optical magnitudes reached by 2008-12a as a function of \(t_{\rm pre}\). For values of \(t_{\rm pre}\) representing multiple eruptions, the average peak magnitude is shown. A likely trend of peak brightness with \(t_{\rm pre}\) is evident. The \(B\) peak magnitude for the 2016 eruption, which was obtained \(\sim\)5 hr after the other filters, may have faded somewhat during that interval. _Bottom Left_: Peak magnitude as a function of \(t_{\rm pre}\) for the _Swift_ UVOT bandpasses. No clear trend is observed, likely because the data were typically obtained after optical identification. _Top (Bottom) Right_: The \(t_{2}\) time as a function of \(t_{\rm pre}\) for the optical (UVOT) bandpasses. In both cases no correlation between \(t_{2}\) time and \(t_{\rm pre}\) is observed. (See Appendix for DBF) \(t_{\rm pre}=471.34\) d. In 4 out of the 5 optical filters, and in the UVW2 filter, the peak brightness of the 2016 eruption is largest seen, even though the peak was not captured in any filter that year. On the other hand, the 2014 and 2018 eruptions, which had the shortest pre-eruption mean interval (\(\langle t_{\rm pre}\rangle=310.49\) d), were the faintest of the nine most recent eruptions (in all but the \(I\) and \(R\) filters), although the 2018 lightcurve morphology was somewhat complicated exhibiting a double-peaked maximum: a dip after the original peak followed by a short resurgence. 
It is unclear whether the double-peak structure of the outburst is related to the short pre-eruption interval, but it will be interesting to see if 2008-12a exhibits similar behavior during future eruptions with short \(t_{\rm pre}\). There is no obvious double-peaked structure in the 2014 eruption, but the data are limited. ## 4 Conclusions Our analysis has failed to reveal any obvious relationship between the \(t_{2}\) time and \(t_{\rm pre}\). However, we did find tentative evidence that the peak optical brightness reached by M31N 2008-12a increases with the time since the previous eruption. The trend is not obvious in the UV likely because the _Swift_ observations were always triggered after the eruption was discovered in the optical giving the nova more time to fade before the UV photometry could be performed. The accreted mass required to trigger an eruption depends on \(M_{\rm WD}\) and the mean accretion rate onto the white dwarf's surface (e.g., Townsley & Bildsten, 2005). A longer \(t_{\rm pre}\) implies a lower \(\langle dM/dt\rangle\) between eruptions. The observed trend of peak brightness with \(t_{\rm pre}\) may be the result of increased degeneracy in the accreted layer that would be expected for this slightly lower accretion rate. Continued monitoring of 2008-12a during future eruptions will be required in order to confirm this finding, and if warranted, to explore its origin. This work has been supported by NASA grant 80NSSC20K0547 (AWS) and by the project RVO:67985815 (KH). We thank K. Page for her advice on the _Swift_ reductions.
2305.00605
Classification and Online Clustering of Zero-Day Malware
A large amount of new malware is constantly being generated, which must not only be distinguished from benign samples, but also classified into malware families. For this purpose, investigating how existing malware families are developed and examining emerging families need to be explored. This paper focuses on the online processing of incoming malicious samples to assign them to existing families or, in the case of samples from new families, to cluster them. We experimented with seven prevalent malware families from the EMBER dataset, four in the training set and three additional new families in the test set. Based on the classification score of the multilayer perceptron, we determined which samples would be classified and which would be clustered into new malware families. We classified 97.21% of streaming data with a balanced accuracy of 95.33%. Then, we clustered the remaining data using a self-organizing map, achieving a purity from 47.61% for four clusters to 77.68% for ten clusters. These results indicate that our approach has the potential to be applied to the classification and clustering of zero-day malware into malware families.
Olha Jurečková, Martin Jureček, Mark Stamp, Fabio Di Troia, Róbert Lórencz
2023-05-01T00:00:07Z
http://arxiv.org/abs/2305.00605v2
# Classification and Online Clustering of Zero-Day Malware ###### Abstract A large amount of new malware is constantly being generated, which must not only be distinguished from benign samples, but also classified into malware families. For this purpose, investigating how existing malware families are developed and examining emerging families need to be explored. This paper focuses on the online processing of incoming malicious samples to assign them to existing families or, in the case of samples from new families, to cluster them. We experimented with seven prevalent malware families from the EMBER dataset, four in the training set and three additional new families in the test set. Based on the classification score of the multilayer perceptron, we determined which samples would be classified and which would be clustered into new malware families. We classified 97.21% of streaming data with a balanced accuracy of 95.33%. Then, we clustered the remaining data using a self-organizing map, achieving a purity from 47.61% for four clusters to 77.68% for ten clusters. These results indicate that our approach has the potential to be applied to the classification and clustering of zero-day malware into malware families. **Keywords: Malware Classification, Online Clustering, Static Analysis, Zero-Day Malware** ## 1 Introduction Malware is one of the most significant security threats today, which includes several different categories of malicious code, such as viruses, trojans, bots, worms, backdoors, syware, and ransomware. The number of new malicious software is growing exponentially. Therefore, malware detection is an important issue in cyber security, which is a key area to combat these threats. Every day, approximately 560,000 new malware samples are detected, according to the AV-Test Institute [1]. Due to a large amount of new malware, detailed manual analysis of each one is impractical. Therefore, automatic categorization of malware into groups corresponding to malware families is necessary. Antivirus companies frequently keep a knowledge base of the behavior of malware families. Samples of the same group share a lot of code and exhibit similar behaviors, making them variants. Such samples are similar to each other in terms of similarity metrics that can also be learned to improve classification accuracy [2]. Malware detection techniques are generally divided into two categories: signature-based and anomaly-detection techniques [3]. Signature-based detection uses a set of predefined signatures, typically sequences of bytes in the malware code, to determine whether or not a scanned software program is malicious. The signature-based method compares the program's content with known signatures, and if a match is found, the program is reported as malicious. The signature-based approach's main limitation is its inability to detect newly developed (zero-day) malware, which are emerging threats previously unknown to the malware detection system, as well as evolving threats like metamorphic and polymorphic malware [4]. Machine learning technologies are becoming more popular and are also being introduced into malware analysis and malware detection. Today, malware can be identified using one of three methods: static analysis, dynamic analysis, or hybrid analysis. Static analysis is a method of examining malware without running it. This is typically accomplished by analyzing the code of a binary file to comprehend its functionality and identify any malicious activity. 
Dynamic analysis involves executing the malware sample in a safe setting, like a sandbox, and watching its behavior in real-time. It is necessary to continuously monitor the malware's file system, registry, and network activity to detect any malicious behavior, such as data exfiltration or unauthorized connections to remote servers. Dynamic and static analysis components are combined in hybrid methods [5]. Malware classification is the process of categorizing malware samples into previously studied and known families. On the other hand, malware clustering divides unlabeled data into different clusters so that similar data fall into the same cluster and dissimilar data fall into different clusters. Clustering algorithms have been used to detect zero-day malware, i.e., previously unknown malware [6]. The groups formed through classification or clustering methods are then distributed to malware analysts, which usually focus only on a few malware families. This grouping can save malware analysts a significant amount of time since they may manually analyze malware samples similar to those already analyzed. Malicious and benign samples are represented using vectors of features extracted using static or dynamic analysis [5]. While static analysis is faster than dynamic analysis since it does not require running samples, dynamic analysis extracts more relevant features, such as system calls or network data, than those extracted from static analysis. Our work is based on the EMBER dataset [7], which contains features extracted from static analysis. We propose a malware family classification system that can process zero-day malware online. Sample by sample is processed in real-time and assigned to existing or newly emerging malware families. Classification into known malware families is done via multilayer perception, which we also use to determine known and new families. Clustering into new families uses online clustering algorithms, including self-organizing maps. Zero-day malware is challenging to detect using traditional signature-based detection techniques since no signature for such malware was created and appended in the database of known signatures [8]. The detection of zero-day malware is also difficult for a detection system based on machine learning, which is more robust and can better adapt to new threats however is more prone to have a high false positive rate than the signature-based detection method. The contribution of our work lies in its online nature, which enables the handling of even zero-day malware. Sample by sample is processed in real-time and assigned to an existing or newly emerging malware family. The rest of the paper is organized as follows: Section 2 reviews related works on malware family classification. In Section 3, we present three state-of-the-art online clustering algorithms used in the experimental part. Our proposed malware classification model is presented in Section 4. Section 5 provides an experimental setup, while the experiments description and the results are presented in Section 6. Conclusion and future work are given in Section 7. ## 2 Related Work The background of malware family classification and clustering that has been researched in the past is presented in this section. The authors of [9] present a non-signature-based virus detection method based on Self-Organizing Maps (SOMs) that can detect files with viruses without knowing virus signatures. Their approach used structural information about the data contained in the executable file. 
The researchers also developed the program VirusDetector, which can determine whether or not a file is virus-infected. They used the SOM in an unusual way in that it was "trained" with \(n\) fractions of the same sample rather than \(n\) different samples of data, and it can reflect the presence of data in an executable that is somehow different from the rest. In [10], the authors proposed a method for automatic analysis of malware behavior using clustering and classification. The authors monitored the malware binaries in a Sandbox environment and generated a sequential report of observed behavior for each binary. Rieck et al. used the CWSandbox monitoring tool for extracting API call names and parameters. The API call names and parameters were encoded into a multi-level representation called the Malware Instruction Set. The sequential messages were then put into a high-dimensional vector space where behavioral similarity could be assessed geometrically, allowing intuitive yet powerful clustering and classification methods to be designed. The embedded messages were then subjected to machine learning techniques for clustering, which enables identifying novel classes of malware with similar behavior and classification of behavior, which allows the assignment of malware to known classes of behavior. Their incremental method for behavior-based analysis is capable of processing the behavior of thousands of malware binaries daily. The authors of [11] developed a categorization system for automatically grouping phishing sites or malware samples into families that share specific common characteristics. Their system combined the individual clustering solutions produced by different algorithms using a cluster ensemble. Zhuang et al. used the \(k\)-medoids clustering method and the hierarchical clustering algorithm as feature selection algorithms to extract different attributes of phishing emails. In [6], authors describe a framework for malware detection that combines the accuracy of supervised classification methods for detecting known classes with the adaptability of unsupervised learning for detecting new malware from existing ones using a class-based profiling approach. The authors used a two-level classifier to solve the problem of the unbalanced distribution of classes due to a disproportionate number of benign and malicious network flows. Initially, a macro-level binary classifier isolates malicious streams from non-malicious ones. The multiclass classification technique was then also used to categorize malicious flows into one of the already existing malware classes or as a new malware class. The authors developed a class-based probabilistic profiling method to detect malware classes other than those in the training set. Comar et al. presented a tree-based feature transformation to handle the data imperfection issues in network flow data to create more informative nonlinear features to detect different malware classes precisely. The authors of [12] presented a method for the automatic classification of malware families using feed-forward Artificial Neural Networks. They resized and converted the malware binaries to grayscale images. Texture features are extracted using a Gabor wavelet with eight orientations and four scales. The authors used the Mahenhur Dataset, which contains 3,131 malware samples from 24 unique families. A total of 320 features were selected to train the malware using the neural network tool. The authors reported a classification accuracy of 96.35%. 
The authors of [13] created a zero-day malware detection system that used relevant features obtained from static and dynamic malware analysis. The dataset used contains 3,130 portable executables (PE) files, including 1,720 malicious and 1,410 benign files. Malicious samples were collected from an online repository of Virus-Share, and the benign files were collected manually from System directories of successive versions of the Windows Operating system. The authors used an information gain method and ranker algorithm to select seven features from the feature set, which were then used to build a classification model using machine learning algorithms from the WEKA library. The authors used seven classifiers, IB1, Naive Bayes, J48, Random Forest, Bagging, Decision Table, and Multi-Layer Perceptron, for distinguishing malicious files from benign ones. In [14], Radwan presented a method for classifying a portable executable file as benign or malicious using machine learning. The proposed method for extracting the integrated feature set, which used a static analysis method, was created by combining a few selected raw features from the PE files and a set of derived features. The author used a dataset of 5,184 samples, 2,683 of which were malware and 2,501 benign. The dataset was divided into two categories: raw sample dataset (53 features) and integrated dataset (74 features), which included derived and expanded features. Seven different machine learning classification models were used: \(k\)-nearest neighbors, Gradient boosted trees, Decision Tree, Random forest, File large margin, Logistic regression, and Naive Bayes. The classification algorithms are evaluated using the train test split method (70/30) and 10-fold cross-validation for splitting raw and integrated datasets. In [15], the authors proposed a static malware detection technique using the classification method. Zhang et al. used a dataset released by EMBER, where most PE file samples are labeled malicious or benign. Then, using the detection results of Virus Total and K7 Antivirus Gateway (K7GW), the authors relabeled the malware data into several classes, each representing a type of malware. The malware classifiers are constructed using two linear and two ensemble decision tree models. The authors used linear models such as Support vector classifier and logistic regression, and the ensemble decision tree models are random forest and an efficient gradient boosting decision tree named Light gradient boosting machine. The ensemble decision tree models outperformed the other linear models, especially random forest. The authors [16] proposed a new method for incremental automatic malware family identification and malware classification called MalFamAware, which is based on an online clustering algorithm. This method efficiently updates the clusters as new samples are added without having to rescan the entire dataset. BIRCH (Balanced Iterative Reducing and Clustering using Hierarchies) was used by the authors as an online clustering algorithm and was compared with CURE (Clustering using Representatives), DBSCAN, k-means, and other clustering algorithms. Depending on the situation, MalFamAware classifies new incoming malware into the corresponding existing family or creates a class for a new family. In [17], the authors used self-organizing maps to generate clusters that capture similarities between malware behaviors. In their work, Pirscoveanu et al. used features chosen based on API calls. 
These features represent successful and unsuccessful calls (i.e., calls that have succeeded, resp. failed in changing the state of the system on the infected machine) and the return codes from failed calls. Then they apply principal component analysis (PCA) to reduce the set of features. Using the elbow method and gap statistics, the authors then determined the number of clusters. Each sample was then projected onto a two-dimensional map using self-organizing maps, where the number of clusters equaled the number of map nodes. The authors used the dataset to create a behavioral profile of the malicious types, which was passed to a self-organizing map to compare the proposed clustering result with labels obtained from Antivirus companies via VirusTotal1. Footnote 1: [https://www.virustotal.com](https://www.virustotal.com) In [18], the authors classified malware using continuous system activity data (such as CPU use, RAM/SWAP use, and network I/O). They also used SOFM (Self Organizing Feature Maps) to process machine activity data to capture fuzzy boundaries between machine activity and classes (malicious or benign). First, the authors used SOFM as a stand-alone malware classification method that uses machine activity data as input. In their paper Burnap et al. state that they trained two maps because it was difficult to separate clean files from malicious ones on one map due to the competitive nature of the SOFM. They used benign samples to train the "Good" map, and malicious samples were used to train the "Bad" map. The authors also mention that they created a voting system that gathers accurate classifications during counter-testing for each sequence provided in the maps. Testing with unseen data was accomplished by comparing the Best Matching Unit (BMU) output activity from each map for a given input vector. The authors then used the BMU output from the SOFM as a feature and combined the SOFM with an ensemble classifier built on a Logistic regression model. Finally, the authors' method demonstrated increased classification accuracy compared to classification algorithms such as Random forest, Support vector machines, and Multilayer perceptron. ## 3 Theoretical Background Cluster analysis or clustering is an unsupervised machine learning method of identifying and grouping a set of abstract objects into classes of similar objects (called clusters). Intuitively, data from the same cluster should be more similar to each other than data from different clusters. Sequential clustering algorithms are considered simple and fast and are among those that produce a single clustering as a result. In the following algorithms, all input data to be clustered are presented to the algorithms only once. ### Online \(k\)-means (OKM) Algorithm First, we introduce the online \(k\)-means (OKM) algorithm, also known as sequential \(k\)-means or MacQueen's \(k\)-means [19]. The sequential \(k\)-means algorithm sequentially clusters a new example and updates the centroid for that particular cluster. One disadvantage of the online \(k\)-means algorithm is that the number of clusters, \(k\), must be determined in advance. OKM algorithm can be initialized in different ways, for example, by selecting the first \(k\) data points or randomly selecting \(k\) data points from the entire data set. The pseudocode for the online \(k\)-means algorithm is given in Algorithm 1 below [20]. 
```
Input: a number of clusters \(k\) to be created, a set of data points \(X\)
Output: a set of \(k\) clusters
 1: initialize cluster centroids \(\mu_{1},\ldots,\mu_{k}\) randomly
 2: set the counts \(n_{1},\ldots,n_{k}\) to zero
 3: repeat
 4:   select a random point \(x\) from \(X\) and find the nearest center \(\mu_{i}\) to this point
 5:   if \(\mu_{i}\) is closest to \(x\) then
 6:     increment \(n_{i}\)
 7:     replace \(\mu_{i}\) by \(\mu_{i}+\frac{1}{n_{i}}(x-\mu_{i})\)
 8:   end if
 9: until interrupted
```
**Algorithm 1** Sequential \(k\)-means algorithm (OKM)

### 3.2 Basic Sequential Algorithmic Scheme (BSAS)

The Basic Sequential Algorithmic Scheme (BSAS) [21] is a well-known clustering method in which all feature vectors are presented to the algorithm only once, and the number of clusters is not known a priori. The clusters are gradually generated as the algorithm evolves. The basic idea of BSAS is to assign each newly considered feature vector \(x\) to an existing cluster or create a new cluster for that vector, depending on the distance to the already created clusters. The distance \(d(x,C)\) between a feature vector \(x\) and a cluster \(C\) may be defined in several ways. We will consider \(d(x,C)\) as the distance between \(x\) and the centroid of \(C\). The BSAS has the following parameters: the dissimilarity threshold \(\Theta\), i.e., the threshold used for creating new clusters, and a number \(q\), i.e., the maximum number of clusters allowed. When the distance between a new vector and every existing cluster is beyond the dissimilarity threshold, and if the maximum number of clusters allowed has not been reached, a new cluster containing the newly presented vector is created. The value of the threshold \(\Theta\) directly affects the number of clusters formed by BSAS. If the user chooses too small a value of \(\Theta\), unnecessary clusters will be created, while too large a value of \(\Theta\) leads to fewer clusters than appropriate. The pseudocode for the BSAS algorithm is given below in Algorithm 2.

```
Input: the dissimilarity threshold \(\Theta\), the maximum allowed number of clusters \(q\), and a set of data points \(X\)
Output: a set of clusters
 1: initialize \(m=1\)
 2: select a random point \(x_{1}\) from \(X\)
 3: define the first cluster \(C_{m}=\{x_{1}\}\)
 4: for each \(x\) in \(X\backslash\{x_{1}\}\) do
 5:   find \(C_{k}\) such that \(d(x,C_{k})=\min_{1\leq i\leq m}d(x,C_{i})\)
 6:   if \(d(x,C_{k})>\Theta\) and \(m<q\) then
 7:     \(m=m+1\)
 8:     \(C_{m}=\{x\}\)
 9:   else
10:     \(C_{k}=C_{k}\cup\{x\}\)
11:     update the centroid of \(C_{k}\)
12:   end if
13: end for
```
**Algorithm 2** Basic Sequential Algorithmic Scheme (BSAS)

### 3.3 Self-organizing Map (SOM)

A self-organizing map (SOM) was proposed by the Finnish researcher Teuvo Kohonen in 1982 and is, therefore, sometimes called a Kohonen map [22]. The SOM is an unsupervised machine learning technique that transforms a complex high-dimensional input space into a simpler low-dimensional (typically a two-dimensional grid) discrete output space while simultaneously preserving similarity relations between the presented data. Self-organizing maps apply competitive learning rules where output neurons compete with each other to be active neurons, resulting in only one of them being activated at any one time. An output neuron that wins the competition is called a winning neuron. Before running the algorithm, several parameters need to be set, including the size and shape of the map, as well as the distance at which neurons are compared for similarity.
After selecting the parameters, a map with a predetermined size is created. Individual neurons in the network can be combined into layers. A SOM typically consists of two layers of neurons without any hidden layers [23]. The input layer represents the input vector data. A weight is a connection that connects an input neuron to an output neuron, and each output neuron has a weight vector associated with it. The formation of self-organizing maps begins by initializing the synaptic weights of the network. The weights are updated during the learning process. The winner is the neuron whose weight vector is most similar to the input vector. The winning neuron of the competition, or best-matching neuron, \(c\) at iteration \(t\) (i.e., for the input data \(x_{t}\)) is determined using the following equation: \[c(t)=\arg\min\left\{\left\|x(t)-w_{i}(t)\right\|\right\},\text{ for }i=1,2,\ldots,n,\] where \(w_{i}(t)\) is the weight of the \(i\)-th output neuron at time \(t\), and \(n\) is the number of output neurons. After the winning neuron \(c\) has been selected, the weight vectors of the winner and its neighboring units in the output space are updated. The weight update function is defined as follows: \[w_{i}(t+1)=w_{i}(t)+\alpha(t)h_{ci}(t)\left[x(t)-w_{i}(t)\right],\] where \(\alpha(t)\) is the learning rate parameter, and \(h_{ci}(t)\) is the neighborhood kernel function around the winner \(c\) at time \(t\). The learning rate is the speed with which the weights change. The connection between the input space and the output space is created by the neighborhood function, which also determines the rate of change of the neighborhood around the winner neuron. This function affects the training result of the SOM procedure. A Gaussian function is a common choice for a neighborhood function \(h_{ci}\) that determines how a neuron is involved in the training process: \[h_{ci}(t)=\exp\left(-\frac{d_{ci}^{2}}{2\sigma^{2}(t)}\right)\alpha(t),\] where \(d_{ci}\) denotes the distance between the winning neuron \(c\) and the excited neuron \(i\), and \(\sigma^{2}(t)\) is a factor used to control the width of the neighborhood kernel at time \(t\). The learning rate \(\alpha(t)\) is a decreasing function toward zero. SOM can be used in a variety of ways, including clustering tasks. The authors of [24] assumed that each SOM unit is the center of a cluster, and as a result, the \(k\)-unit SOM performed a \(k\)-means-like task. The authors also added that when the radius of the neighborhood function in the SOM is zero, the SOM and \(k\)-means algorithms strictly correspond to one another. The basic SOM algorithm can be summarized as follows: initialize the weight vectors; then, for each presented input \(x(t)\), select the best-matching neuron \(c(t)\) and update the weights of the winner and its neighbors according to the update rule above, while gradually decreasing the learning rate and the neighborhood width.

## 4 The Proposed Approach

This section contains a description of the proposed system for the classification and clustering of malware families. The definition of the problem that our system attempts to solve is as follows. Let \(S=\{s_{t},s_{t+1},s_{t+2},\ldots\}\) be streaming data containing unlabeled malicious samples captured from time \(t\). Let us also have a dataset \(T\) with labeled malicious samples captured before the time \(t\), whose labels are divided into \(k\) different classes corresponding to \(k\) known malware families. The goal is to process \(s_{i}\), \(i\geq t\), as follows:

1. if \(s_{i}\) is from a known malware family, then assign it to this family;
2. otherwise:
   (a) if \(s_{i}\) is similar to some already clustered (unlabeled) samples \(s_{j}\in S\), where \(t\leq j\leq i\), then assign \(s_{i}\) to the corresponding cluster;
   (b)
otherwise, create a new cluster and assign \(s_{i}\) to it.

Our approach attempts to solve this problem in two phases:

* _First phase_: deciding which stream data samples to classify and which to cluster,
* _Second phase_: classification and clustering of samples based on the decision from the _first phase_.

In the _first phase_, the streaming data \(S\) is first preprocessed using the standard score and the PCA algorithm. Then, the classification probabilities for the classes (i.e., known malware families) are predicted using one or more already trained classifiers. We considered two different methods for computing the classification probabilities prediction. In the first method, the classification probabilities prediction for a given classifier is defined as a vector \((p_{1},\ldots,p_{k})\) of calibrated probabilities, where \(p_{i}\) is the probability estimation of the classifier that a given test sample belongs to the \(i\)-th class. The classification probabilities prediction from the second method is defined as a vector \((p^{\prime}_{1},\ldots,p^{\prime}_{k})\) of probabilities, where \(p^{\prime}_{i}\) is the probability estimation of the \(i\)-th classifier that a given test sample belongs to the \(i\)-th class. The concrete calculation of classification probability predictions depends on the given classifier and will be discussed in Section 6.2. Thus, the first method relies on one multiclass classifier, as shown in Fig. 1, where this classifier was trained using the labeled data from the dataset \(T\) with \(k\) classes. On the other hand, the second method relies on \(k\) binary classifiers, as illustrated in Fig. 2. In this case, the \(i\)-th classifier corresponds to the \(i\)-th class, i.e., the dataset \(T\) is divided into two classes: the first class consists of samples from the \(i\)-th class, and the second class consists of samples that do not belong to the \(i\)-th class. This division is applied for each of the \(k\) classes separately. Then, \(k\) binary classifiers were trained on such data, and the \(i\)-th classifier provided \(p^{\prime}_{i}\), which is the probability prediction that a test sample belongs to the \(i\)-th class. In the rest of the paper, the first method will be referred to as the _single-classifier method_ and the second as the _multi-classifier method_.

Figure 1: Classification probabilities prediction \((p_{1},\ldots,p_{k})\) from a multiclass classifier.

Figure 2: Classification probabilities prediction \((p^{\prime}_{1},\ldots,p^{\prime}_{k})\) from the \(k\) binary classifiers.

The reason why we considered both methods is that the performance of these methods varies depending on the data structure. The _multi-classifier method_, where we trained separate classifiers for each class, can be suitable if the classes have different characteristics. However, it may also lead to redundancy in the learned features. In addition, training \(k\) binary classifiers slows down the training process compared to training one multiclass classifier. Streaming data samples \(s_{i}\) are divided into two chunks according to the classification probabilities prediction. In both methods, the maximal probability \(\max_{1\leq j\leq k}p_{j}\), resp. \(\max_{1\leq j\leq k}p_{j}^{\prime}\), is compared to some threshold parameter \(t\), resp. \(t^{\prime}\). A test sample for which this maximal probability is greater than or equal to the threshold is called a _high-confidence sample_.
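To make the two prediction methods concrete, the following is a minimal scikit-learn sketch (our illustration, not the authors' code); the synthetic data, the variable names, and the use of `MLPClassifier` are assumptions, the latter chosen because the MLP is the classifier eventually selected in Section 6.2.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Synthetic stand-ins for the labeled set T and the stream S (assumption: the
# real inputs are the PCA-reduced EMBER feature vectors described below).
X_train, y_train = make_classification(n_samples=600, n_features=20,
                                       n_informative=10, n_classes=4, random_state=0)
X_stream, _ = make_classification(n_samples=100, n_features=20,
                                  n_informative=10, n_classes=4, random_state=1)
k = len(np.unique(y_train))

# Single-classifier method: one multiclass model yields (p_1, ..., p_k).
multi = MLPClassifier(max_iter=500, random_state=0).fit(X_train, y_train)
probs_single = multi.predict_proba(X_stream)            # shape (n_stream, k)

# Multi-classifier method: k binary models; the i-th one yields p'_i.
binary = [MLPClassifier(max_iter=500, random_state=0).fit(X_train, (y_train == i).astype(int))
          for i in range(k)]
probs_multi = np.column_stack([clf.predict_proba(X_stream)[:, 1] for clf in binary])

# A sample is high-confidence when its maximal probability reaches the threshold.
t = 0.99
high_conf = probs_single.max(axis=1) >= t               # classified in the second phase
low_conf = ~high_conf                                   # routed to online clustering
```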
On the other hand, _low-confidence samples_ are samples where the maximal probability from the classification probabilities prediction vector is lower than a given threshold. In the _second phase_, _high-confidence samples_ are classified into the known malware families and _low-confidence samples_ proceed into the online clustering algorithm. The same feature set extracted using PCA in the _first phase_ was used for classification and clustering. The threshold \(t\), resp. \(t^{\prime}\) is a parameter of our approach, and it determines the amount of stream data that will be classified or clustered. The proposed architecture is depicted in Fig. 3. Testing various clustering algorithms to find the best clustering is essential since online clustering methods may exhibit varying performance traits based on the dataset. The main difference between our approach and existing works regarding malware family classification is that our method processes the streaming data in real-time, while some other works rely on batch processing. Both streaming data processing and batch processing have their advantages and disadvantages. While streaming data processing can provide a faster decision to samples as they occur, on the other hand, processing in large batches may be more efficient since it can be parallelized. ## 5 Experimental Setup This section presents the dataset used in the experimental part, and the metrics for evaluating the classification and clustering results are explained. The implementation of our proposed model and methods for evaluating classification and clustering results are based on scikit-learn2 and PyClustering3 libraries. All experiments in this work were executed on a single computer platform having two processors (Intel Xeon Gold 6136, 3.0GHz, 12 cores each), with 64 GB of RAM running the Ubuntu server 18.04 LTS operating system. Footnote 2: [https://scikit-learn.org](https://scikit-learn.org) Footnote 3: [https://pyclustering.github.io](https://pyclustering.github.io) ### Dataset We worked with the EMBER dataset [7] that contains features from portable executable files extracted using static analysis, which aims at searching for information about the file structure without running a program. The features were extracted using the LIEF open source package [25] and includes metadata from portable executable file format [26], strings, byte and entropy histograms. The feature set consists of 2,381 features that are described in [7]. The EMBER dataset contains 400,000 labeled malware samples divided into a training set (300,000 samples) and a test set (100,000 samples) according to the following date. Samples that appeared until October 2018 are included in the training set, while samples appeared between November and December 2018 are included in the test set. The training set contains samples from more than 3,000 malware families. However, we focus primarily on the four most prevalent malware families: Xtrat, Zbot, Ramnit, and Sality. The training dataset \(T\) used in our model consists of samples from the EMBER training set with labels corresponding to these four malware families. The streaming data \(S\) used in our model consists of samples from the EMBER test data set with labels corresponding to these four malware families and three additional malware families: Emotet, Ursnif, and Sivis. We considered three new families to get closer to the real situation when new malware families are constantly being created. 
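A sketch of how such a family-filtered split of EMBER might be loaded follows (ours, not the authors' code). The helper names `read_vectorized_features`/`read_metadata`, the `avclass` and `subset` metadata columns, and the row alignment between metadata and feature matrices follow our reading of the EMBER repository's README and should be treated as assumptions.

```python
import numpy as np
import ember  # https://github.com/elastic/ember (install from source)

data_dir = "ember2018/"  # hypothetical local path to the extracted dataset
X_train, y_train, X_test, y_test = ember.read_vectorized_features(data_dir)
meta = ember.read_metadata(data_dir)  # pandas DataFrame; assumes 'avclass'/'subset' columns

families = ["xtrat", "zbot", "ramnit", "sality"]          # labeled set T
new_families = families + ["emotet", "ursnif", "sivis"]   # stream S adds three unseen ones

train_meta = meta[meta["subset"] == "train"]
test_meta = meta[meta["subset"] == "test"]
T_idx = np.where(train_meta["avclass"].isin(families))[0]
S_idx = np.where(test_meta["avclass"].isin(new_families))[0]
X_T, y_T = X_train[T_idx], train_meta["avclass"].to_numpy()[T_idx]
X_S, y_S = X_test[S_idx], test_meta["avclass"].to_numpy()[S_idx]
```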
One of our goals is to verify whether our proposed model can identify new families using online clustering. Table 1 summarizes the number of samples used in the experimental part, arranged in descending order of sample count, for each of the seven prevalent malware families from the EMBER dataset.

| Malware Family | \(|D|\) | \(|S|\) | Size |
| --- | --- | --- | --- |
| Xtrat | 16,689 | 19,280 | 35,969 |
| Zbot | 10,782 | 13,293 | 24,075 |
| Ramnit | 10,275 | 10,320 | 20,595 |
| Sality | 9,522 | 9,050 | 18,572 |
| Ursnif | 0 | 5,733 | 5,733 |
| Emotet | 0 | 4,904 | 4,904 |
| Sivis | 0 | 2,803 | 2,803 |

Table 1: The size of the training labeled data set \(D\), the size of the streaming unlabeled data set \(S\), and the overall dataset size, i.e., \(|D|+|S|\).

Figure 3: The architecture of our proposed model for processing zero-day malware to malware families.

The following is a brief description of the malware families. More information about the malware families and technical details can be found in [27]. The Xtrat malware family is able to steal sensitive data from infected devices, including login passwords, keystrokes, and information from online forms. Zbot, also known as Zeus, is a Trojan horse frequently used to steal financial data, including credit card numbers and login information for online banking. Ramnit is a worm that has the ability to steal login passwords, financial information, and other sensitive data. It is also capable of downloading additional malware onto compromised devices. Sality is malware that has the ability to replicate itself and propagate over networks. It can infect executable files and change the code within to avoid detection. Emotet is a modular malware that mainly targets infected computers to steal sensitive data. It is usually spread through phishing emails and can use social engineering tactics to deceive users into downloading and installing the malware. Ursnif is a banking Trojan that can steal private data such as usernames, passwords, and credit card numbers. Typical infection vectors are phishing emails or drive-by downloads. Sivis is a backdoor Trojan that is among the more recent malware families. Sivis often spreads via phishing emails or by taking advantage of vulnerabilities in outdated software. Once Sivis is activated, attackers may utilize the victim's computer to carry out orders, steal data, or launch more attacks.

### 5.2 Evaluation Metrics

Our dataset contains samples from seven classes that have different sizes. We used balanced accuracy (BAC) to evaluate the imbalanced testing set for the multiclass classification problem. The balanced accuracy score is defined as the average of the true positive rates (recalls) across all \(k\) classes: \[BAC=\frac{1}{k}\sum_{i=1}^{k}TPR_{i},\] where \(TPR_{i}\) is the true positive rate for class \(C_{i}\). The balanced accuracy helps identify whether the classifier performs well in all classes or is biased towards a particular class. In the clustering part, we evaluated the quality of clusters using two standard measures: purity and the silhouette coefficient (SC). Let the purity of cluster \(C_{j}\) be defined as \(\text{Purity}(C_{j})=\max_{i}p_{ij}\), where \(p_{ij}\) is the probability that a randomly selected sample from cluster \(C_{j}\) belongs to class \(i\). The overall purity is the weighted sum of the individual purities and is given as follows: \[\text{Purity}=\frac{1}{n}\sum_{j=1}^{k}|C_{j}|\,\text{Purity}(C_{j}),\] where \(n\) is the size of the dataset.
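As a concrete reference for these two label-based metrics, here is a minimal NumPy sketch (our illustration, not the authors' code; the silhouette coefficient defined next is available in scikit-learn as `sklearn.metrics.silhouette_score`):

```python
import numpy as np
from sklearn.metrics import balanced_accuracy_score

def purity(y_true, y_cluster):
    """Weighted purity: each cluster contributes the count of its majority class."""
    total = 0
    for c in np.unique(y_cluster):
        _, counts = np.unique(y_true[y_cluster == c], return_counts=True)
        total += counts.max()
    return total / len(y_true)

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])
print(balanced_accuracy_score(y_true, y_pred))        # (1/2 + 1 + 1/2) / 3 = 2/3
print(purity(y_true, np.array([0, 0, 0, 1, 1, 1])))   # (2 + 2) / 6 = 2/3
```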
While purity uses labels when evaluating the quality of clusters, the silhouette coefficient does not depend on labels. It can therefore be used in the validation phase to determine the number of clusters. The average silhouette coefficient [28] is defined as follows. Consider \(n\) samples \(x_{1},\ldots,x_{n}\) that have been divided into \(k\) clusters \(C_{1},\ldots,C_{k}\). The average distance from \(x_{i}\in C_{j}\) to all other samples in cluster \(C_{j}\) is given by \[a(x_{i})=\frac{1}{|C_{j}|-1}\sum_{\begin{subarray}{c}y\in C_{j}\\ y\neq x_{i}\end{subarray}}d(x_{i},y).\] Let \(b_{k}(x_{i})\) be the average distance from the sample \(x_{i}\in C_{j}\) to all samples in a cluster \(C_{k}\) not containing \(x_{i}\): \[b_{k}(x_{i})=\frac{1}{|C_{k}|}\sum_{y\in C_{k}}d(x_{i},y).\] Let \(b(x_{i})\) be the minimum of \(b_{k}(x_{i})\) over all clusters \(C_{k}\) with \(k\neq j\). The silhouette coefficient of \(x_{i}\) is given by combining \(a(x_{i})\) and \(b(x_{i})\) as follows: \[s(x_{i})=\frac{b(x_{i})-a(x_{i})}{\max(a(x_{i}),b(x_{i}))}.\] The silhouette coefficient \(s(x_{i})\) ranges from \(-1\) to \(1\), with higher scores indicating better performance. Finally, the average silhouette coefficient for a given dataset is defined as the average value of \(s(x_{i})\) over all samples in the dataset. The choice of metric for evaluating the quality of clusters depends on the information we have about the samples. Some antivirus companies may receive hundreds of thousands of new samples daily, but it is not known, immediately after their appearance, whether they are malicious. However, these samples are analyzed (manually or through automated processes based on machine learning), and the corresponding labels are created. For this reason, we assume in our work that we also have the labels, i.e., the respective malware families, available for evaluating clusters.

## 6 Experimental Results

This section contains a description of the individual experiments. For both methods, i.e., for the _single-classifier method_ with one multiclass classifier and the _multi-classifier method_ with four binary classifiers, we considered the following three classifiers: Multilayer perceptron (MLP), Random forest (RF), and \(k\)-nearest neighbors (KNN). First, we performed feature extraction and hyper-parameter tuning of these three classifiers. Then, the relationship between the BAC and the percentage of classified samples (i.e., the number of _high-confidence samples_ divided by \(|S|\), times 100%) is presented for both methods of calculating the classification probabilities prediction vector. Finally, for the _single-classifier method_ only, we present the relationship between the number of clusters and the quality of the clusters given in terms of purity and the average silhouette coefficient.

### 6.1 Preprocessing

The standard score and the PCA algorithm were applied to the data set \(T\) containing the labeled samples. The standard score, or z-score, converts a value \(x\) to a standard score \(z\) via \(z=(x-\bar{x})/s\), where \(\bar{x}\) is the mean and \(s\) is the standard deviation. PCA [29] is an unsupervised learning algorithm used for dimensionality reduction. We used PCA to extract new, uncorrelated features that are linear combinations of the original features given by the EMBER dataset described in Section 5.1. The same preprocessing methods, i.e., the standard score for data normalization and PCA for feature extraction, were also applied to the unlabeled streaming data \(S\).
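A minimal scikit-learn sketch of this preprocessing step (ours, not the authors' code; `X_T` and `X_S` stand for the labeled and streaming feature matrices, and 170 components is the value selected for the multiclass classifier in Table 2 below):

```python
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Fit the z-score normalization and PCA on the labeled data T only, then apply
# the same fitted transformation to the unlabeled stream S (no refitting on S).
preprocess = make_pipeline(StandardScaler(), PCA(n_components=170))
X_T_reduced = preprocess.fit_transform(X_T)
X_S_reduced = preprocess.transform(X_S)
```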
In this experiment, we considered the options for the optimal number of features from the set \(\{20,30,40,\ldots,200\}\). Table 2 shows the optimal number of features and the balanced accuracy achieved on the training data \(D\) for the multiclass classifier and the four binary classifiers.

| classes | \# features (MLP) | BAC (MLP) | \# features (RF) | BAC (RF) | \# features (KNN) | BAC (KNN) |
| --- | --- | --- | --- | --- | --- | --- |
| class\_all | 170 | 96.8\% | 180 | 93.20\% | 190 | 94.65\% |
| class\_Xtrat | 180 | 99.63\% | 130 | 99.52\% | 160 | 99.61\% |
| class\_Zbot | 160 | 97.46\% | 150 | 92.40\% | 180 | 97.46\% |
| class\_Ramnit | 160 | 96.46\% | 190 | 92.19\% | 140 | 93.77\% |
| class\_Sality | 110 | 95.47\% | 160 | 90.41\% | 190 | 94.44\% |

Table 2: The optimal number of features extracted using PCA and the balanced accuracy for the multiclass classifier (class\_all) and the four binary classifiers (class\_family) trained for the corresponding malware families.

### 6.2 Classifiers selection

In the _single-classifier method_ and the _multi-classifier method_, we considered the following three classifiers: MLP, RF, and KNN. We tuned the hyper-parameters of the MLP, RF, and KNN classifiers using a grid search that exhaustively considered all parameter combinations. The following grid of parameters was explored for the MLP:

* hidden layer sizes: (100,0), (200,0), (400,0), (100,50), (200,100), (400,100), (400,200)
* activation function: relu, tanh, logistic
* solver for weight optimization: lbfgs, adam
* alpha: 0.0001, 0.001, 0.01

The parameter alpha controls the strength of the regularization applied to the neural network's weights. The names of the activation functions and the solvers are taken from the neural_network.MLPClassifier class of the scikit-learn library, which was used in the experiments. For the random forest, we explored the number of trees in the forest, the maximal depth of the trees, and the criterion that measures the quality of a split:

* number of estimators: 100, 500, 1000
* maximal depth: 7, 8, 9, 10
* criterion: gini, entropy

The names of the criteria are taken from the ensemble.RandomForestClassifier class of the scikit-learn library, which was used in the experiments. Finally, for KNN, we considered the following numbers of nearest neighbors, \(k\): 1, 3, 5, 7, 9, 11. The selected values of the hyperparameters for the MLP, RF, and KNN models are given in Table 3. According to the experimental results described in Table 2, the MLP achieved the highest classification accuracy for the multiclass classifier and for all binary classifiers. In the following experiments, we will use the MLP to determine which stream data samples to classify and which to cluster. For a test sample, the output of the MLP with the softmax activation is a probability distribution over the possible classes. The predicted class for a test sample is then the most probable class.

### 6.3 Data Stream Splitting

At the end of the _first phase_ of our model, the streaming data is divided into the _high-confidence samples_ and the _low-confidence samples_ according to the classification probabilities prediction vector. Fig. 4 shows the relation between the balanced accuracy and the percentage of classified samples for various thresholds \(t\). Specifically, we experimented with the following values of the parameter \(t\): 0.1, 0.2,..., 0.9, 0.99, 0.999,..., 0.99999999.
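Such a sweep can be computed along the following lines (a sketch of ours, not the authors' code; it reuses `probs_single`, `multi`, and the stream labels `y_S` from the earlier sketches, and assumes `y_S` is encoded consistently with the classifier's `classes_`):

```python
import numpy as np
from sklearn.metrics import balanced_accuracy_score

thresholds = [0.1, 0.5, 0.9, 0.99, 0.99999, 0.9999999]
conf = probs_single.max(axis=1)                 # maximal class probability per sample
pred = multi.classes_[probs_single.argmax(axis=1)]

for t in thresholds:
    mask = conf >= t                            # high-confidence samples get classified
    if mask.any():
        bac = balanced_accuracy_score(y_S[mask], pred[mask])
        print(f"t={t}: classified {100 * mask.mean():.2f}% of S, BAC {100 * bac:.2f}%")
```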
Figure 4: Relation between the percentage of classified samples and the balanced accuracy.

The _single-classifier method_ achieved the highest BAC, 98.60%, for the threshold \(t=0.99999\), classifying 67.97% of the samples, while the _multi-classifier method_ achieved its highest BAC, 96.74%, for the threshold \(t^{\prime}=0.9999\), classifying 67.58% of the samples. The results show that the _single-classifier method_, where one multiclass classifier was used to determine the data to be clustered, outperforms the _multi-classifier method_ based on four binary classifiers. For this reason, in the following section, we will present the clustering results only using the _single-classifier method_. The threshold \(t\) is a parameter of our model and can be used to influence the BAC. However, we do not know the optimal number of clusters in advance for the _low-confidence samples_. One way to determine the number of clusters is based on the silhouette coefficient, since labels are not required for its computation. Specifically, we may cluster incoming _low-confidence samples_ simultaneously for several numbers of clusters. Based on these silhouette coefficient time series, we may predict future silhouette coefficient values for different numbers of clusters. Then we can select the number of clusters for which the highest silhouette coefficient is expected. Since the optimal value of the parameter \(t\) is not known in advance, in the following experiments we considered only two extreme cases:

* \(t=0.6\), when almost all streaming data is classified (specifically, approximately 98%),
* \(t=0.9999999\), when approximately half of the streaming data was classified (specifically, approximately 55%).

### 6.4 Clustering

For various numbers of clusters, we conducted experiments where three online clustering algorithms were applied to the _low-confidence samples_. We used the elbow method to determine the optimal number of clusters. Fig. 5 shows, for different values of the parameter \(t\), the relationship between the number of clusters and the Within-Cluster Sum of Squares (WCSS), which is the sum of the squared distances between each point of a cluster and its centroid. Since the plots do not exhibit clear elbow points, we present clustering results for between four and ten clusters. The number of clusters determined the number of output neurons in SOM and the maximum number of clusters for BSAS. For BSAS, we experimented with different values of the dissimilarity threshold \(\Theta\). The highest average silhouette coefficients and purities of clusters were achieved for the default value of \(\Theta=1\). The relation between the number of clusters and the purity of clusters, respectively the silhouette coefficient, is depicted in Fig. 6. This relation corresponds to the parameter \(t=0.6\), for which the _single-classifier method_ achieved a BAC of 95.33%, classifying 97.21% of the samples from \(S\). The results show that the SOM online clustering algorithm outperformed the other two algorithms except in one case, where OKM achieved higher purity for the number of clusters equal to five. While Fig. 6 for the parameter \(t=0.6\) represents the case where 97.21% of the streaming data \(S\) were classified, Fig. 7 for the parameter \(t=0.9999999\) represents the case when only 55.44% of the samples were classified, achieving a BAC of 99.14%. The results from Fig. 7 show that the SOM online clustering algorithm outperformed the other two algorithms in terms of the silhouette coefficient in all cases. For all numbers of clusters, the SOM and OKM algorithms achieved significantly higher purities than the BSAS algorithm.

| classes | hidden\_layer\_sizes (MLP) | activation (MLP) | solver (MLP) | alpha (MLP) | criterion (RF) | max\_depth (RF) | n\_estimators (RF) | \(k\) (KNN) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| class\_all | (400, 200) | relu | adam | 0.001 | entropy | 10 | 500 | 1 |
| class\_Xtrat | (400, 200) | relu | adam | 0.001 | entropy | 10 | 500 | 5 |
| class\_Zbot | (200, 0) | relu | adam | 0.001 | entropy | 10 | 100 | 1 |
| class\_Ramnit | (400, 200) | relu | adam | 0.0001 | entropy | 10 | 1000 | 1 |
| class\_Sality | (400, 200) | relu | lbfgs | 0.0001 | gini | 10 | 1000 | 1 |

Table 3: Hyperparameter tuning for the multiclass MLP (class\_all) and the four binary MLPs (class\_family) trained for the corresponding malware families, together with the selected RF and KNN parameters.

Note that all the online clustering algorithms achieved higher purities of clusters for \(t=0.6\) for almost all numbers of clusters compared to the purities achieved for the parameter \(t=0.99999999\). To summarize the results, we classified 97.21% of the streaming data with a balanced accuracy of 95.33% and clustered the remaining data using the SOM online clustering algorithm, achieving a purity ranging from 47.61% for four clusters to 77.68% for ten clusters. These results indicate that our approach has the potential to be applied to the classification and clustering of zero-day malware into malware families.

### 6.5 Computational times

This section focuses on the computational times of the classification and clustering of malware families. We ran our proposed approach ten times; the results of the classification part are reported as mean and standard deviation, while the results of the clustering part are shown as boxplot graphs. The dataset \(D\) of size 47,268 samples was used for training the MLP classifier, and the computational times for the classification and clustering parts were obtained for the processing of the streaming data \(S\) of size 65,383 samples. The training of the MLP took 81.80 seconds on average, with a standard deviation of 24.48 seconds. The computational times of the classification and clustering parts depend on the parameter \(t\), which is used in dividing the streaming data into those to be classified and those to be clustered. For the parameter \(t=0.9999999\), the MLP classification took 0.33 seconds on average, with a standard deviation of 0.02 seconds, while for the parameter \(t=0.6\), the MLP classification took 0.38 seconds on average, with a standard deviation of 0.01 seconds. Figures 8 and 9 show the computational times of the individual clustering algorithms for the parameters \(t=0.9999999\) and \(t=0.6\), respectively. The differences in the computational times of the individual clustering algorithms for different values of the parameter \(t\) arise because the parameter \(t\) affects the size of the data to be clustered. The parameter \(t=0.9999999\) was chosen so that roughly half of the used streaming data (more precisely, 55% on average over the ten experiments) was clustered, while for the parameter \(t=0.6\) only approximately 2% of the streaming data were clustered.
Based on the given computational times, we can estimate that the implementation of our proposed approach can process more than 3,000 samples per second, which is sufficient to process the 560,000 samples that, according to the AV-Test Institute [1], are detected on average per day.

Figure 5: The relation between the number of clusters and the WCSS for the parameter \(t=0.9999999\) (a), respectively, the parameter \(t=0.6\) (b).

Figure 6: The relation between the number of clusters and the purity of clusters (a), respectively, the average silhouette coefficient (b). For the parameter \(t=0.6\), \(2.79\%\) of the samples from \(S\) were clustered.

Figure 7: The relation between the number of clusters and the purity of clusters (a), respectively, the average silhouette coefficient (b). For the parameter \(t=0.9999999\), \(44.56\%\) of the samples from \(S\) were clustered.

## 7 Conclusions

Our approach can play a useful role for malware researchers in classifying and clustering malware into families and studying how the families evolve over time. The proposed model was designed in an online form to provide decisions immediately as samples occur. In our work, the training data were strictly separated from the test data based on the date of appearance of the malware samples. In addition, the test data contained new malware families not present in the training set, corresponding to the emergence of new malware families. Under these conditions, which align with the real world, we classified zero-day malware with a balanced accuracy of 95.33% and clustered it with a purity of up to 77.68%. Experimental results indicate that the proposed model can accurately classify and cluster malware into families. A direct extension of this paper is to process streaming data containing both malicious and benign samples. This is a more challenging problem, since the _low-confidence samples_ would then also contain benign files that can break the structure of the clusters. Future work may also focus on the prediction of the optimal threshold \(t\), based on which it is determined which zero-day malware should be classified and which should be clustered. The optimal threshold is the value at which we obtain the highest overall accuracy of the classification and clustering of stream data. This task is challenging since the optimal threshold is related to the number of new malware families, which may be hard to predict.

Acknowledgments. This work was supported by the OP VVV MEYS funded project CZ.02.1.01/0.0/0.0/16_019/0000765 "Research Center for Informatics" and by the Grant Agency of the CTU in Prague, grant No. SGS23/211/CHK3/3T/18 funded by the MEYS of the Czech Republic.

## Declarations

The authors have no relevant financial or non-financial interests to disclose.
2303.13291
Extremal Black Holes as Relativistic Systems with Kepler Dynamics
The recent detection of gravitational waves emanating from inspiralling black hole binaries has triggered a renewed interest in the dynamics of relativistic two-body systems. The conservative part of the latter are given by Hamiltonian systems obtained from so called post-Newtonian expansions of the general relativistic description of black hole binaries. In this paper we study the general question of whether there exist relativistic binaries that display Kepler-like dynamics with elliptical orbits. We show that an orbital equivalence to the Kepler problem indeed exists for relativistic systems with a Hamiltonian of a Kepler-like form. This form is realised by extremal black holes with electric charge and scalar hair to at least first order in the post-Newtonian expansion for arbitrary mass ratios and to all orders in the post-Newtonian expansion in the test-mass limit of the binary. Moreover, to fifth post-Newtonian order, we show that Hamiltonians of the Kepler-like form can be related explicitly through a canonical transformation and time reparametrization to the Kepler problem, and that all Hamiltonians conserving a Laplace-Runge-Lenz-like vector are related in this way to Kepler.
Dijs de Neeling, Diederik Roest, Marcello Seri, Holger Waalkens
2023-03-23T14:18:37Z
http://arxiv.org/abs/2303.13291v2
# Extremal Black Holes as Relativistic Systems with Kepler Dynamics

###### Abstract

The recent detection of gravitational waves emanating from inspiralling black hole binaries has triggered a renewed interest in the dynamics of relativistic two-body systems. The conservative part of the latter are given by Hamiltonian systems obtained from so called post-Newtonian expansions of the general relativistic description of black hole binaries. In this paper we study the general question of whether there exist relativistic binaries that display Kepler-like dynamics with elliptical orbits. We show that an orbital equivalence to the Kepler problem indeed exists for relativistic systems with a Hamiltonian of a Kepler-like form. This form is realised by extremal black holes with electric charge and scalar hair to at least first order in the post-Newtonian expansion for arbitrary mass ratios and to all orders in the post-Newtonian expansion in the test-mass limit of the binary. Moreover, to fifth post-Newtonian order, we show that Hamiltonians of the Kepler-like form can be related explicitly through a canonical transformation and time reparametrization to the Kepler problem, and that all Hamiltonians conserving a Laplace-Runge-Lenz-like vector are related in this way to Kepler.

**Keywords: Einstein-Maxwell-dilaton, extremal black holes, integrable systems, Kepler problem, orbital equivalence**

**MSC classes: 37J06, 70H15, 83C22, 83C57**

###### Contents

* 1 Introduction
* 2 Relativistic Systems with Kepler Dynamics
  * 2.1 The post-Newtonian expansion
  * 2.2 On-shell equivalence to Kepler dynamics
  * 2.3 Off-shell equivalence to Kepler dynamics
  * 2.4 Hidden symmetries require Kepler dynamics
* 3 Dilaton-Coupled Einstein-Maxwell Theory
  * 3.1 Black holes with dilaton hair
  * 3.2 The extremal case
  * 3.3 Skeletonisation
* 4 The Two-Body System of Extremal Black Holes
  * 4.1 Kepler dynamics at 1PN
  * 4.2 Kepler dynamics in the test-mass limit
* 5 Conclusion

## 1 Introduction

From 2015 onward, the observatories LIGO, VIRGO and KAGRA have detected many instances of gravitational waves originating from binaries of neutron stars or black holes [13, 14]. With more observing runs [14] and the space-based telescope LISA upcoming, the dawn of the gravitational wave era provides a strong motivation for the precision study of binary dynamics, particularly of the earlier stage of the merger [1, 15]. The early inspiral stage is usually approached with analytical tools [16], such as the post-Newtonian (PN) expansion. Though there is a good understanding of the PN expansion and other approximations in the context of binary systems, both for the conservative and the radiative parts to high order, future experiments that are very sensitive to a long inspiral phase stimulate the further development of analytical tools to limit the demand on computational resources. This paper is therefore dedicated to the identification of specific relativistic systems for which the PN expansion results in a system of a much more manageable form. Often, systems with this kind of simplification possess more symmetries and conservation laws, reducing the number of effective degrees of freedom. This is famously the case in the classical analogue of the relativistic binary systems; we will investigate to what extent the same holds for certain relativistic systems. On the non-relativistic, i.e.
classical level, the two-body problem divides up nicely into the motion of a free particle (the total mass located at the center of mass) and the motion of a particle with reduced mass \(\mu=\frac{m_{1}m_{2}}{m_{1}+m_{2}}\) in the stationary potential generated by the total mass \(M=m_{1}+m_{2}\). The solutions to this problem then are the same ellipses as in the classical Kepler problem in celestial mechanics. The latter possesses, next to the expected spherical symmetry \(SO(3)\) yielding the conservation of angular momentum, an additional symmetry which gives the conservation of the Laplace-Runge-Lenz (LRL) vector. Since this symmetry is not immediately obvious on the level of the Lagrangian it is often referred to as a hidden symmetry. The three components of the angular momentum vector, the three components of the LRL vector and the total energy form seven conserved scalar quantities. As the length of the LRL vector is determined by angular momentum and energy, and the angular momentum vector is perpendicular to the LRL vector, only five of the scalar conserved quantities are independent. The joint levelsets of these five constants of motion in the six-dimensional phase space are hence one-dimensional. As a consequence, bounded orbits must be periodic and take the form of the famous elliptical orbits found in Kepler's model of the Solar System (while in General Relativity of course, the symmetry is broken and the perihelion - the point of closest approach - precesses). For central force systems, there is a strong link between closing orbits and enhanced symmetry, in the form of Bertrand's theorem. This states that the only two central forces whose bounded orbits are all closed curves are the Kepler potential and the isotropic harmonic oscillator [17], which are in fact related (see e.g. [18]) and known for their large symmetry groups, \(SO(4)\) and \(SU(3)\), respectively. Since we know symmetries make problems more tractable and the non-relativistic problem possesses additional symmetry, it is natural to attempt to restore the non-relativistic symmetry in relativistic systems. While the closing of bounded orbits is not a sufficient condition for conservation of a LRL vector, it is a necessary condition that is satisfied very rarely by relativistic theories. The closure of orbits can therefore be a useful tool for diagnosis of theories when looking for additional spacetime symmetries, as demonstrated by e.g. [10]. There has been previous work done on identifying relativistic systems that have exclusively closed bounded orbits. For example, Perlick has identified all spacetimes in General Relativity with that property, a sort of relativistic Bertrand theorem [19]. He considered all spherically symmetric spacetimes that have bounded timelike geodesics with a perihelion shift equal to \(\frac{\pi}{\beta}\), with \(\beta\) rational. The cases \(\beta=1\) and \(\beta=2\), corresponding to relativistic versions of the Kepler problem and harmonic oscillator, are the only ones admitting an additional symmetry. However, Perlick's theorem is only taking into account gravity, without allowing other forces to be present. Additionally, there is the hydrogen-like system in \(\mathcal{N}=4\) super Yang-Mills theory, which has an additional conserved vector, coinciding with the classical LRL vector in the non-relativistic limit [12, 13]. 
Interesting follow-up results were derived in \(\mathcal{N}=8\) supergravity, where the two-body problem was shown to have a LRL vector to at least order 1PN and a vanishing perihelion shift to third order in the post-Minkowskian (PM) expansion, i.e. the expansion in the gravitational constant that places no restriction on the velocities [10, 18]. However, at 3PM there appears to be a hint that the quantum energy level degeneracy linked to the LRL vector and present at 1PN might be lost. This suggests an interesting break in the bond between closed bounded orbits and hidden symmetry, which is present classically. Additionally, it was shown in [10] that the test-mass limit in \(\mathcal{N}=8\) supergravity has a zero perihelion shift to all orders in velocity. Although relativistic corrections of the Kepler problem generically break the symmetry associated with the LRL vector, it follows from the above that specific systems manage to preserve it in some sense. These systems then, one might wonder, are perhaps not truly relativistic in some sense, as their dynamics is still constrained by the same symmetries, giving rise to strictly periodic orbits in phase space (at least for bounded orbits). We will study a class of such systems and demonstrate that they are orbitally equivalent to the Kepler system on a levelset of the Hamiltonian in phase space. Their full Hamiltonians are implicitly defined by \[f(H(q,p))=\frac{p^{2}}{2}-\frac{g(H(q,p))}{r(q)}\,, \tag{1}\] for smooth functions \(f,g:\mathbb{R}\to\mathbb{R}\). As we will see, such systems give rise to a phase space which can be thought of as being foliated by energy surfaces of Kepler problems, where for each value of \(H\) the motion is parallel to that of a Kepler problem with a different coefficient for the gravitational potential; in other words, with a different gravitational constant. The global structure of the phase space is therefore tied to the specific properties of the function \(g(H)\). We will show that the above class of Hamiltonians (1) naturally arises in Einstein-Maxwell-dilaton (EMD) theory, where one considers two extremal black holes with opposite charge for a specific value of the dilaton coupling (cf. equation (33)). For this case, we derive a functional relation of the form (1) to first order in the post-Newtonian expansion of the two-body system and to all orders in the test-mass limit. As a physical aside, it is an interesting question how one would observationally distinguish the above Hamiltonians from the Kepler one, e.g. in the solar system. When studying planets orbiting a (much heavier) Sun described by \(H(q,p)\) as opposed to the ordinary Kepler problem, the first two laws of Kepler still hold: the bounded orbits are ellipses and the trajectories conserve angular momentum. However, the period of an orbit becomes \[T=2\pi\sqrt{\frac{s^{3}}{GMg(E)}},\] with \(s\) the semi-major axis of the ellipse, where for the sake of clarity we included the mass of the Sun \(M\) and the gravitational constant \(G\). This differs from the usual Newtonian period \(T_{N}=2\pi\sqrt{s^{3}/(GM)}\). Therefore the third law (stating that, for all bodies orbiting the Sun, the square of the period is proportional to the third power of the semi-major axis of the orbit, _with the same proportionality constant_ [12]) is violated: different orbits will have different energies, causing the ratio \(T^{2}/s^{3}\) to no longer be the same for all orbiting objects. To what extent, then, are these Hamiltonians equivalent to the Kepler problem?
We will prove they describe the same dynamics at least on the energy shell, so for a fixed \(H=E\), in the sense that their flows are parallel. Moreover, there can exist a transformation mapping \(H\) to the Kepler Hamiltonian where we do not need to restrict to the energy surface (at least locally, in a small neighbourhood of the energy surface). This transformation is shown to exist as an energy redefinition and canonical transformation at least up to and including 5PN. Whether it can be extended to all orders, and whether it exists globally, remains a topic of future research. Another question is whether relativistic systems canonically conjugate to Kepler up to time reparametrization are _the only_ ones with a conserved LRL-type vector and the corresponding symmetry. We show this is the case at least to 5PN order. This paper is organised as follows. In Section 2, we discuss the set of Kepler-like Hamiltonians and their relation to the Kepler problem. Here we show the on-shell equivalence to Kepler in Subsection 2.2, a construction that yields an explicit off-shell transformation to the Kepler problem in Subsection 2.3, and the equivalence of all LRL-preserving Hamiltonians of a certain kind to Kepler in Subsection 2.4. After a physical intermezzo in Section 3 introducing Einstein-Maxwell-dilaton black holes, we show in Section 4 that a particular tuning of the parameters in this theory allows one to write the 1PN two-body system and the all-order test-mass limit as a Kepler-type Hamiltonian, providing an interesting example of relativistic systems with classical dynamics.

**Notational conventions**. In what follows, unless differently specified, we will use the following notational conventions. We will use \((q,p)\) to denote canonical coordinates in \(T^{*}\mathbb{R}^{3}\cong\mathbb{R}^{3}\times\mathbb{R}^{3}\). The radial momentum is denoted \(p_{r}\), i.e. \(p_{r}=\frac{(p\cdot q)}{r}\), where \(r=r(q)=|q|\). We will use upper indices to denote vector components; therefore, \(V^{i}\) denotes the \(i\)th component of a vector \(V\). Indices for relativistic objects are denoted by \(\mu\) and \(\nu\). Throughout, we will assume Einstein notation and omit explicit sums. For instance, 4-vectors are denoted by \(x^{\mu}\), the Lorentzian metric is denoted \(g_{\mu\nu}\) and, therefore, the inner product of tangent vectors with respect to \(g\) is given by \(g_{\mu\nu}\dot{x}^{\mu}\dot{y}^{\nu}\). For the metric we assume the signature \((-+++)\). For convenience, throughout the paper we use units such that the speed of light, \(c\), and the gravitational constant, \(G\), are equal to \(1\).

## 2 Relativistic Systems with Kepler Dynamics

In this section we will discuss the Hamiltonians of type (1) central to this work. Although they appear naturally from relativistic problems, see Section 4, they end up being equivalent (in the ways mentioned in the introduction) to the classical Kepler system. We will first review 'relativistic' corrections within the post-Newtonian (PN) expansion that we will employ. Subsequently we will prove the equivalence on the energy surface (on-shell) of the Kepler-like Hamiltonians to Kepler problems and, later, how these can be explicitly related (up to fifth PN order) to the Kepler problem through canonical transformations and a non-linear energy redefinition (off-shell). Lastly, we show that all Hamiltonians preserving a LRL-like vector are related in this way to Kepler, also up to 5PN.
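Before setting up the expansion, a brief numerical illustration (ours, not from the paper) of the claims above. Implicit differentiation of (1) gives \(\partial H/\partial p^{i}=p^{i}/\lambda\) and \(\partial H/\partial q^{i}=g(H)\,q^{i}/(r^{3}\lambda)\) with \(\lambda=f^{\prime}(H)+g^{\prime}(H)/r\), so along a fixed energy level \(H=E\) the flow is a time-rescaled Kepler flow with coupling \(g(E)\). The sketch below integrates a planar orbit for the dilatonic choice \(f(x)=x+\tfrac{1}{2}x^{2}\), \(g(x)=1+x+\tfrac{1}{4}x^{2}\) that will appear in Section 4, and checks that the planar LRL vector \(A=p\times L-g(E)\,q/r\) stays constant; the initial condition and the root bracket are our arbitrary choices.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

f  = lambda x: x + 0.5 * x**2          # dilatonic choice from Section 4
g  = lambda x: 1.0 + x + 0.25 * x**2
fp = lambda x: 1.0 + x                 # f'
gp = lambda x: 1.0 + 0.5 * x           # g'

# Planar initial condition: q = (1, 0), p = (0, 0.8).
q0, p0 = np.array([1.0, 0.0]), np.array([0.0, 0.8])
r0 = np.linalg.norm(q0)

# Solve f(E) = p^2/2 - g(E)/r for the conserved energy at the initial point.
E = brentq(lambda x: f(x) - 0.5 * p0 @ p0 + g(x) / r0, -0.9, 0.0)  # E = -0.4 here

def rhs(_, y):
    q, p = y[:2], y[2:]
    r = np.linalg.norm(q)
    lam = fp(E) + gp(E) / r            # common factor from implicit differentiation
    return np.concatenate([p / lam, -g(E) * q / (r**3 * lam)])

sol = solve_ivp(rhs, (0.0, 200.0), np.concatenate([q0, p0]), rtol=1e-10, atol=1e-12)

# Planar LRL vector A = p x L - g(E) q / r, with L = qx*py - qy*px.
q, p = sol.y[:2], sol.y[2:]
r = np.linalg.norm(q, axis=0)
L = q[0] * p[1] - q[1] * p[0]
A = np.array([p[1] * L - g(E) * q[0] / r, -p[0] * L - g(E) * q[1] / r])
print("E =", E, " max LRL drift:", np.abs(A - A[:, :1]).max())  # tiny: the orbit closes
```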
### 2.1 The post-Newtonian expansion

The post-Newtonian expansion involves a power-series expansion in the small parameter \(\frac{1}{c^{2}}\), which physically amounts to an asymptotic expansion both in slow motion and weak field. Thus the 0PN order accounts for the non-relativistic terms and the highest order in an \(n\)PN expansion comprises terms proportional to \(\left(\frac{1}{c^{2}}\right)^{n}\). This ordering has been in use for binary systems for a long time, going back to Einstein's calculation of the anomalous precession of Mercury [2]. A general two-body Hamiltonian has translational and rotational symmetry. The reduction of the translational symmetry can be accomplished by choosing a center-of-mass frame. In a center-of-mass frame a general two-body Hamiltonian can then be written solely in terms of the \(SO(3)\) invariants \(p^{2}\), \(q^{2}\) and \((p\cdot q)\). Knowing the PN orders of these terms individually would then give us a way to count the orders of all terms in a general two-body Hamiltonian. We can infer these orders by inspection of some simple cases. A relativistic free particle, for example, has Hamiltonian \[H_{\rm fp}=mc^{2}\sqrt{1+\frac{p^{2}}{m^{2}c^{2}}}=mc^{2}\left(1+\frac{p^{2}}{2m^{2}c^{2}}-\frac{p^{4}}{8m^{4}c^{4}}+\mathcal{O}\big(\tfrac{p^{6}}{m^{6}c^{6}}\big)\right)\,.\] Note that each momentum appears only in the dimensionless combination \(\frac{p^{2}}{m^{2}c^{2}}\), and that the leading term in this expansion gives the rest-mass energy \(mc^{2}\). When including the gravitational pull of a large mass \(M\) on a test body \(m\), similar considerations show that the radius \(r=|q|\) only appears in the combination \(\frac{2GM}{rc^{2}}\). The third type of term that can be present in relativistic systems with spherical symmetry is the inner product \((p\cdot q)\), appearing only in the radial momentum \(p_{r}=\frac{(p\cdot q)}{r}\), which carries a \(\frac{1}{c}\) to be consistent with the total momentum. These considerations give us an order counting system for terms occurring in a relativistic Hamiltonian. Such a Hamiltonian is given by \[H_{\rm rel}=\frac{1}{\epsilon}\left(1+\sum_{j=1}^{\infty}\epsilon^{j}\Lambda_{j}(\alpha)\right),\qquad\Lambda_{j}(\alpha)=\sum_{\begin{subarray}{c}(l,m,n)\in\mathbb{N}^{3}\\ l+m+n=j\end{subarray}}\alpha_{l,m,n}\frac{(p^{2})^{l}(p_{r}^{2})^{n}}{r^{m}}\,, \tag{2}\] where we introduced \(\epsilon=\frac{1}{c^{2}}\) as a bookkeeping parameter to easily keep track of the PN orders. We set \(m=1\). Because of the rest-mass term, clearly the PN orders of \(\Lambda_{j}(\alpha)\) will be shifted down by one. As the constant mass term does not influence the dynamics, we will drop it from now on, but remember the effect of the overall factor \(\frac{1}{\epsilon}\) in (2) when we discuss PN orders. We note that we will not discuss any systems where spin plays a role.

### 2.2 On-shell equivalence to Kepler dynamics

Let us consider the following family of Hamiltonians \(H=H_{f,g}:T^{*}\mathbb{R}^{d}\simeq\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}\), implicitly defined by the functional relation \[f(H(q,p))=\frac{p^{2}}{2}-\frac{g(H(q,p))}{r(q)}\,, \tag{3}\] with \(f,g:\mathbb{R}\to\mathbb{R}\) smooth functions that can be written as power series in the form \[f(x)=x+f_{1}x^{2}+f_{2}x^{3}+\dots\,,\qquad g(x)=1+g_{1}x+g_{2}x^{2}+\dots\,, \tag{4}\] where \(f_{i},g_{i}\) are real numbers.
In other words, we assume that their Taylor-Maclaurin expansions have their first coefficients fixed by \(f_{-1}=0\), \(g_{0}=1\) and \(f_{0}=1\) (with labels related to PN orders, as will become more clear below). The reason the constant term in \(f(H(q,p))\) with coefficient \(f_{-1}\) is absent is that we want to disregard rest-mass terms and (3) to yield the Kepler system at lowest order in the PN expansion below. As discussed in the introduction, this is directly motivated by the Hamiltonian of a binary Einstein-Maxwell-dilaton system in the test-mass limit. In Section 4, we will see this system has a Hamiltonian that is implicitly defined (at all PN orders) by the above relation with \[f(x)=x+\tfrac{1}{2}x^{2},\qquad g(x)=1+x+\tfrac{1}{4}x^{2}\,.\] Note that we set the test mass to unity in the identifications, to exactly match the description of the above Hamiltonian family. Since we already know explicit Hamiltonian functions solving (3), we will not pursue the question of sufficient conditions for existence (this is a rather interesting technical problem on its own, and we refer the interested reader to [13, 1] for the current state of the art). For the time being, we assume that a solution \(H(q,p)\) exists for the given functions \(f,g\) and describe some of its properties in relation to the Kepler Hamiltonian. Therefore, let \(H:T^{*}\mathbb{R}^{d}\to\mathbb{R}\) be a smooth Hamiltonian function satisfying the relation (3). For convenience, we define the new Hamiltonian function \[K(q,p):=f(H(q,p))=\frac{p^{2}}{2}-\frac{g(H(q,p))}{r(q)}\,. \tag{5}\] Since \(K\) is by definition a function of \(H\), it is also an integral of motion of \(H\), that is, \(K\) is constant on the flow of \(H\). If \(E\in\mathbb{R}\) is a regular value of \(H\), this implies that on the energy levels \(H^{-1}(E)\) \[K|_{H^{-1}(E)}(q,p)=\frac{p^{2}}{2}-\frac{g(E)}{r(q)}\,,\] which is a Kepler-type Hamiltonian with gravitational constant \(g(E)\). In fact, the flow generated by \(K\) on all its regular energy surfaces turns out to be parallel to the flow of a Kepler Hamiltonian.

**Theorem 1**.: _Consider \(M=T^{*}\mathbb{R}^{d}\simeq\mathbb{R}^{d}\times\mathbb{R}^{d}\) with standard symplectic form \(\omega=\sum_{k}\mathrm{d}p_{k}\wedge\mathrm{d}q_{k}\). Assume that there is a function \(f\in C^{2}(\mathbb{R})\) such that the Hamiltonian \(H:M\to\mathbb{R}\) satisfies_ \[K(q,p):=f(H(q,p))=\frac{p^{2}}{2}-\frac{g(H(q,p))}{r(q)}\,.\] _Then, for any regular energy value \(H=E\) the vector fields \(X_{K}|_{H^{-1}(E)}\) and \(X_{H}|_{H^{-1}(E)}\) of \(K\) and \(H\) are parallel on the energy surface \(H^{-1}(E)\). Moreover, if_ \[\mathcal{E}:=\{E\in\mathbb{R}\mid f^{\prime}(E)\neq 0\text{ and }E\text{ is a regular value of }H\} \tag{6}\] _and_ \[J:M\times\mathcal{E}\to\mathbb{R},\qquad J(q,p,E):=J_{E}(q,p):=\frac{p^{2}}{2}-\frac{g(E)}{r(q)}\,,\] _then for all \(E\in\mathcal{E}\), the Hamiltonian vector fields \(X_{J_{E}}|_{H^{-1}(E)}\) and \(X_{K}|_{H^{-1}(E)}\) are parallel on the energy level \(H^{-1}(E)\)._

Proof.: For the Hamiltonian vector fields \(X_{K}\) and \(X_{H}\) of \(K\) and \(H\), respectively, we have \[X_{K}=-\frac{\partial K}{\partial q}\frac{\partial}{\partial p}+\frac{\partial K}{\partial p}\frac{\partial}{\partial q}=f^{\prime}(H)\left(-\frac{\partial H}{\partial q}\frac{\partial}{\partial p}+\frac{\partial H}{\partial p}\frac{\partial}{\partial q}\right)=f^{\prime}(H)X_{H}\,. \tag{7}\] The vector fields are hence parallel. For the second part, let \(E\in\mathcal{E}\).
The Hamiltonian vector field of \(J_{E}\) at a point \((p,q)\) is \[X_{J_{E}}(p,q)=-\frac{\partial J(p,q,E)}{\partial q}\frac{\partial}{\partial p}+\frac{\partial J(p,q,E)}{\partial p}\frac{\partial}{\partial q}\,.\] Using \(K(H(p,q))=J(p,q,H(p,q))\) we have \[X_{K}=-\left(\frac{\partial J}{\partial q}+\frac{\partial J}{\partial H}\frac{\partial H}{\partial q}\right)\frac{\partial}{\partial p}+\left(\frac{\partial J}{\partial p}+\frac{\partial J}{\partial H}\frac{\partial H}{\partial p}\right)\frac{\partial}{\partial q}=\left(-\frac{\partial J}{\partial q}\frac{\partial}{\partial p}+\frac{\partial J}{\partial p}\frac{\partial}{\partial q}\right)+\frac{\partial J}{\partial H}\left(-\frac{\partial H}{\partial q}\frac{\partial}{\partial p}+\frac{\partial H}{\partial p}\frac{\partial}{\partial q}\right)\,.\] Therefore \[X_{K}|_{H^{-1}(E)}=X_{J_{E}}|_{H^{-1}(E)}+\frac{\frac{\partial J}{\partial E}|_{H^{-1}(E)}}{f^{\prime}(E)}X_{K}|_{H^{-1}(E)}\,,\] where we used (7) for the second term. Solving for \(X_{J_{E}}\) we get that on \(H^{-1}(E)\) \[\left(1-\frac{\frac{\partial J}{\partial E}}{f^{\prime}(H)}\right)X_{K}=X_{J_{E}}\,. \tag{8}\] This means that the evolution of Hamiltonians satisfying (3) is equivalent to the evolution of a classical Kepler problem - more precisely, for each energy, the trajectories are equivalent to that of a Kepler problem with a specific energy-dependent value of the coupling with the potential, up to possibly a time-rescaling. In particular, all bounded orbits are ellipses in the configuration space. This relation is somewhat reminiscent of the Maupertuis-Jacobi transformation, in which the trajectories of a natural Hamiltonian are described via a time reparametrization as geodesics of a metric [15, 16]. The fact that the above Hamiltonians are equivalent to the Kepler problem and, in particular, the fact that all trajectories are closed, hints at the existence of an associated conserved Laplace-Runge-Lenz (LRL) vector on the energy levels. An obvious candidate would be the vector \[A^{i}(q,p)=(p\times L)^{i}(q,p)-g(H(q,p))\frac{q^{i}}{r(q)}\,, \tag{9}\] since it is simply the classical LRL vector, with an additional coefficient corresponding to the coefficient of the potential energy in (5).

**Theorem 2**.: _Let \(\mathcal{E}\) be defined by (6) and \(E\in\mathcal{E}\). On the set_ \[\left\{(q,p)\in H^{-1}(E)\;\big|\;1-\frac{\frac{\partial J(q,p,E)}{\partial E}}{f^{\prime}(E)}\neq 0\right\},\] _the Hamiltonian \(K(q,p)\) defined in (5) is in involution with all components of the Laplace-Runge-Lenz vector \(A^{i}(q,p)\) defined in (9), and hence these are integrals of motion of the dynamics generated by \(K\)._

Proof.: Fix \(E\in\mathcal{E}\). Since on \(H^{-1}(E)\) the flow of \(J_{E}(q,p)\) is parallel to that of \(K(q,p)\), we know for the Lie derivatives of the functions \(A^{i}(q,p)\) with respect to \(X_{K}\) \[\{A^{i},K\}=\mathcal{L}_{X_{K}}(A^{i})=\mathcal{L}_{\lambda^{-1}X_{J_{E}}}(A^{i})=\lambda^{-1}\mathcal{L}_{X_{J_{E}}}(A^{i})=\lambda^{-1}\{A^{i},J_{E}\}\,, \tag{10}\] where all functions are evaluated on \(H^{-1}(E)\) and \[\lambda(q,p):=\left(1-\frac{\frac{\partial J(q,p,H)}{\partial E}}{f^{\prime}(H)}\right)\] is the proportionality factor between the vector fields of \(J_{E}(q,p)\) and \(K(q,p)\) from (8), which we assume to be regular and nonvanishing. With (10), we reduced ourselves to check whether \(J_{E}(q,p)\) commutes with the components of the LRL vector on \(H^{-1}(E)\).
Namely, \[\begin{split}\{A^{i},J_{E}(q,p)\}&=\frac{\partial A^{i}}{\partial q}\frac{\partial J_{E}}{\partial p}-\frac{\partial A^{i}}{\partial p}\frac{\partial J_{E}}{\partial q}\\ &=\left[\frac{\partial}{\partial q}(p\cross L)^{i}-\left(\frac{\partial}{\partial q}\frac{q^{i}}{r}\right)g(H)\right]\frac{\partial J_{E}}{\partial p}-\left[\frac{\partial}{\partial p}(p\cross L)^{i}-\left(\frac{\partial}{\partial p}\frac{q^{i}}{r}\right)g(H)\right]\frac{\partial J_{E}}{\partial q}\\ &\quad+\left(-\frac{q^{i}}{r}\right)\left[\frac{\partial g(H)}{\partial q}\frac{\partial J_{E}}{\partial p}-\frac{\partial g(H)}{\partial p}\frac{\partial J_{E}}{\partial q}\right]\,.\end{split} \tag{11}\] Observe now that on \(H^{-1}(E)\), \(A^{i}=A^{i}_{E}:=(p\cross L)^{i}-g(E)\frac{q^{i}}{r}\). So the first two terms combine into the Poisson bracket \[\left\{A^{i}_{E}(q,p),J_{E}(q,p)\right\}=\left\{(p\cross L)^{i}-g(E)\frac{q^{i}}{r},\frac{p^{2}}{2}-g(E)\frac{1}{r}\right\}\] which vanishes as the Poisson bracket between a standard Kepler Hamiltonian and its LRL vector. The square bracket that forms the last term in (11) amounts to \(\{g(H),J_{E}\}\) evaluated on \(H^{-1}(E)\). This also vanishes due to \(g(H)=g(f^{-1}(K))\) and an application of Theorem 1. While in this section we proved that for each fixed value of the energy the family of Hamiltonians satisfying (3) has a flow which is parallel to the Kepler flow and admits a LRL vector, we do not know the regularity of the dependence of these objects on the energy itself, nor how to relate (3) and a Kepler Hamiltonian beyond the energy surface. In the following section, we will consider this problem, looking for an energy-independent way to relate Kepler problems and the implicitly defined Hamiltonians (3). What we can immediately observe is that, while the shape of orbits is the same in both (3) and in a Kepler problem, the energy level sets \(H^{-1}_{\mathrm{Kep}}(E)\) and \(H^{-1}(E)\) foliate the phase space in a different way. The Hamiltonian \(K(q,p)\), and therefore also the implicitly defined Hamiltonians (3), induce a bundle of non-equivalent Kepler orbits, the global structure of which is determined by \(g(H)\). ### Off-shell equivalence to Kepler dynamics While the on-shell equivalence discussed in the previous subsection explains why the Hamiltonians implicitly defined by (3) have an additional constant of motion and hence closed orbits, it does not address the violation of Kepler's third law: the fact that Keplerian energy surfaces can be stacked differently in the Kepler bundle. We now turn to this issue, and address the question of whether one can also map families of orbits with different energies onto a fixed Kepler system. Since we would like to avoid issues of singularities and/or topology, we will restrict ourselves to a local construction. In other words, we now aim to generalise the on-shell (on a fixed energy surface) orbital equivalence to an off-shell equivalence (for a neighbourhood of orbits of possibly different energies). The violation of Kepler's third law demonstrates a physical difference between the Kepler problem and the implicitly defined Hamiltonians on the phase space, so it should not come as a surprise that looking for such a relation will involve a transformation of the phase space itself.
The mapping we are looking for therefore involves both a time reparametrization (related to the mapping from \(H\) to \(K\equiv f(H)\)) as well as a canonical transformation, whose composition will (locally) transform the Kepler Hamiltonian to the implicitly defined ones and establish an orbital equivalence in this sense. We will provide evidence for the existence of such a canonical transformation by explicitly constructing it up to fifth PN order. Note that the PN expansion differs from the expansion around an energy surface; even when extending the canonical transformation to all PN orders (or having a closed expression for it), this would still only involve a local equivalence, as singularities or topological issues might prevent one from extending the mapping to the whole phase space. Addressing the extension to all PN orders and the question of convergence of the series constructed below (even just in an asymptotic sense) is not a trivial endeavour, as is the question of global existence of the phase space transformation. Therefore, we will leave the all-order analysis for the whole phase space for future research. The goal is to find a solution \(H\) to the functional equation (3) to any desired PN order from the perturbation of the Kepler system. More specifically we will show the following. **Theorem 3**.: _For given \(C^{\infty}\) functions \(f\) and \(g\), the functional relation (3) can be solved to at least PN order 5 by_ \[H=\Phi^{*}\tau(H_{\text{Kep}})\,, \tag{12}\] _where \(\tau:\mathbb{R}\to\mathbb{R}\), \(E\mapsto\tau(E)\), is a \(C^{\infty}\) function with \(\tau^{\prime}(0)=1\) that defines a near-identity time re-parametrization, and \(\Phi\) is a near-identity canonical transformation._ The proof will be given by an explicit construction based on Lie transform perturbation theory combined with a rescaling of the energy function. For the construction, it is helpful to explicitly include the PN expansion parameter \(\epsilon=\frac{1}{c^{2}}\) (see the PN counting scheme (2)). Let us introduce the real vector spaces \[W_{j}=\text{span}\left\{\frac{(p^{2})^{l}(p\cdot q)^{n}}{r^{m}}\,\big{|}\,(l,m,n)\in\mathbb{N}^{3},\,l+m-\frac{1}{2}n=j\right\}\,. \tag{13}\] For instance, the Kepler Hamiltonian \[\epsilon H_{\text{Kep}}=\epsilon\left(\frac{p^{2}}{2}-\frac{1}{r}\right)\] is in \(W_{1}\). We will mainly consider \(W_{j}\) with non-negative integer \(j\) resulting from even \(n\) in (13) such that \(F_{j}\in W_{j}\) has PN order \(j-1\) (see the discussion following (2)). But as we will see below, also half-integer \(j\) resulting from odd \(n\) in (13) can be important. Note that for \(F_{i}\in W_{i}\) and \(F_{j}\in W_{j}\), \[\{F_{i},F_{j}\}\in W_{i+j+\frac{1}{2}}\,, \tag{14}\] implying that \[W=\bigoplus_{k\in\mathbb{N}}W_{k/2}\] is closed under the Poisson bracket. In particular, \[\{p\cdot q,F_{j}\}\in W_{j}\,.\] Let us write the energy rescaling \(\tau\) in (12) in a power series as \[\tau(E)=\sum_{n=0}^{\infty}\delta_{n}E^{n+1}\,, \tag{15}\] with \(\delta_{0}=1\). For counting the PN orders of \(\tau\) applied to some Hamiltonian function \(H\) it is important to note that for \(F_{i}\in W_{i}\) and \(F_{j}\in W_{j}\), \[F_{i}\,F_{j}\in W_{i+j}\,,\] which implies that \(W\) is also closed under multiplication. We will consider a succession of near-identity canonical transformations each of which is obtained from the flow of the Hamiltonian vector field generated by a suitable function \(G\).
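The grading rules can be checked directly on simple representatives. A minimal sketch (assuming Python with sympy) verifying that \(\{p\cdot q,H_{\mathrm{Kep}}\}=p^{2}-1/r\in W_{1}\), consistent with \(p\cdot q\in W_{-1/2}\) and (14), and that \(\{(p\cdot q)H_{\mathrm{Kep}},H_{\mathrm{Kep}}\}=(p^{2}-1/r)H_{\mathrm{Kep}}\in W_{2}\):

```python
import sympy as sp

q = sp.symbols('q1:4'); p = sp.symbols('p1:4')
r = sp.sqrt(sum(x**2 for x in q))
p2 = sum(x**2 for x in p)
pq = sum(pi*qi for pi, qi in zip(p, q))

def pb(F, G):
    """Poisson bracket {F, G} = dF/dq . dG/dp - dF/dp . dG/dq."""
    return sum(sp.diff(F, qi)*sp.diff(G, pi) - sp.diff(F, pi)*sp.diff(G, qi)
               for qi, pi in zip(q, p))

H_kep = p2/2 - 1/r                                          # element of W_1
# {W_{-1/2}, W_1} lands in W_1, cf. (14):
print(sp.simplify(pb(pq, H_kep) - (p2 - 1/r)))              # -> 0
# {W_{1/2}, W_1} lands in W_2:
print(sp.simplify(pb(pq*H_kep, H_kep) - H_kep*(p2 - 1/r)))  # -> 0
```

The first bracket, \(\{p\cdot q,H_{\mathrm{Kep}}\}=2H_{\mathrm{Kep}}+1/r\), is precisely the kind of identity exploited when matching orders below.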
To describe the construction it is useful to introduce the adjoint operator \[\text{ad}_{G}(\cdot):=\{G,\cdot\}\,.\] For convenience, we define \(n>0\) repeated iterations of the adjoint operator by \[[\text{ad}_{G}]^{n}:=[\text{ad}_{G}]^{n-1}\circ\text{ad}_{G},\qquad[\text{ad}_{G}]^{0}:=\text{Id}_{C^{\infty}(M)}\,.\] Under the canonical transformation given by the time-one map of the flow generated by the Hamiltonian \(G\) a function \(F\) transforms according to \[F\mapsto\Phi^{*}F=\sum_{m=0}^{\infty}\frac{1}{m!}[\text{ad}_{G}]^{m}F. \tag{16}\] From (14) we get that for \(F_{i}\in W_{i}\) and \(F_{j}\in W_{j}\), \[\text{ad}_{F_{i}}(F_{j})\in W_{i+j+\frac{1}{2}}\,. \tag{17}\] The idea now is to solve the functional relation (3) order by order with \(H_{\mathrm{Kep}}\) through a succession of canonical transformations generated by functions \(G_{i}\) and an energy rescaling of the form (15). To this end let us first inspect the functional relation in terms of the power series for \(f\) and \(g\) in (4), which gives \[H+f_{1}H^{2}+f_{2}H^{3}+\dots=\epsilon\frac{p^{2}}{2}-\left(1+g_{1}H+g_{2}H^{2}+\dots\right)\frac{\epsilon}{r}\,. \tag{18}\] In order to find solutions for integer PN orders we will need the terms \(\mathrm{ad}_{G_{i}}(H_{\mathrm{Kep}})\) in the canonical transformations to yield integer order and hence the \(G_{i}\) to have half-integer order (see (17)). It turns out that this can be achieved by the ansatz \[G_{i-\frac{1}{2}}(q,p)=(p\cdot q)\Lambda_{i}(a)(q,p)\,, \tag{19}\] where \(\Lambda_{i}(a)\) again denotes a general function of order \(\epsilon^{i}\) with coefficients \(a_{l,m,n}\) as defined in (2). Each such \(G_{i-\frac{1}{2}}\) generates a canonical transformation \(\Phi_{i}\) and will be determined such that \[H:=\Phi_{n}^{\star}\dots\Phi_{2}^{\star}\Phi_{1}^{\star}\tau(H_{\mathrm{Kep}})\,, \tag{20}\] with suitable \(\delta_{i}\) in (15) defining the energy rescaling \(\tau\), solves the functional relation (18) to order \(n\). **Lemma 1**.: _For a positive integer \(k\) and positive integers \(i_{1}\leq i_{2}\leq\ldots\leq i_{k}\), let \(I=(i_{k},\dots,i_{2},i_{1})\) and_ \[\mathrm{ad}_{G}^{I}:=\mathrm{ad}_{G_{i_{k}-\frac{1}{2}}}\dots\mathrm{ad}_{G_{i_{2}-\frac{1}{2}}}\,\mathrm{ad}_{G_{i_{1}-\frac{1}{2}}}\] _with \(G_{i_{j}-\frac{1}{2}}\in W_{i_{j}-\frac{1}{2}}\), \(j=1,\dots,k\), and \(|I|=\sum_{j=1}^{k}i_{j}\). Let \(H\) be defined as in (20). Then the PN expansion of \(H\) to order \(N\) (disregarding the overall \(\frac{1}{\epsilon}\)) is given by_ \[\sum_{j=1}^{N}\epsilon^{j}H_{j}\,,\] _where_ \[H_{j}:=\sum_{n=0}^{j-1}\sum_{k=0}^{j-n-1}\sum_{\begin{subarray}{c}I\in\mathbb{N}_{+}^{k}\\ |I|=j-n-1\end{subarray}}^{\prime}\frac{\delta_{n}}{k!}\,\mathrm{ad}_{G}^{I}\left(H_{\mathrm{Kep}}^{n+1}\right)\] _is in \(W_{j}\). Here the prime in the third sum denotes that the summation is restricted to tuples of ordered integers \(I=(i_{k},\dots,i_{2},i_{1})\in\mathbb{N}_{+}^{k}\) with \(i_{1}\leq i_{2}\leq\ldots\leq i_{k}\)._ Proof.: The result follows immediately from ordering the terms in (20), taking into account (16) and (17). We now come to the proof of Theorem 3. Proof.: The proof is done by explicit computation.
From Lemma 1 we get \[\begin{split} H=&\quad\epsilon H_{\mathrm{Kep}}\\ &+\epsilon^{2}\big{(}\{G_{1-\frac{1}{2}},H_{\mathrm{Kep}}\}+\delta_{1}H_{\mathrm{Kep}}^{2}\big{)}\\ &+\epsilon^{3}\big{(}\delta_{2}H_{\mathrm{Kep}}^{3}+\{G_{2-\frac{1}{2}},H_{\mathrm{Kep}}\}+\{G_{1-\frac{1}{2}},\delta_{1}H_{\mathrm{Kep}}^{2}\}+\frac{1}{2}\{G_{1-\frac{1}{2}},\{G_{1-\frac{1}{2}},H_{\mathrm{Kep}}\}\}\big{)}\\ &+O(\epsilon^{4})\,.\end{split} \tag{21}\] A fast way to proceed is to rewrite the functional relation (18) as \[H=\epsilon\frac{p^{2}}{2}-\left(1+g_{1}H+g_{2}H^{2}+\dots\right)\frac{\epsilon}{r}-f_{1}H^{2}-f_{2}H^{3}-\dots \tag{22}\] Equating the right hand sides of (21) and (22) at order \(\epsilon\) gives \[H_{0}=\left(\frac{p^{2}}{2}-\frac{1}{r}\right)\,.\] Plugging this \(H_{0}\) into the right hand side of (22), reading off the terms of order \(\epsilon^{2}\) and equating with the order \(\epsilon^{2}\) in (21) gives \[-f_{1}H_{\text{Kep}}^{2}-g_{1}H_{\text{Kep}}\frac{1}{r}=\delta_{1}H_{\text{Kep}}^{2}+\left\{G_{1-\frac{1}{2}},H_{\text{Kep}}\right\}.\] This is solved by choosing the coefficient of the energy redefinition as \[\delta_{1}=2g_{1}-f_{1}\] and the generating function \[G_{1-\frac{1}{2}}=-g_{1}(p\cdot q)H_{\text{Kep}}\,.\] Filling in \(\epsilon H_{0}+\epsilon^{2}H_{1}\) into the right hand side of (22), reading off the terms of order \(\epsilon^{3}\) and equating with the order \(\epsilon^{3}\) in (21) gives \[(2f_{1}^{2}-f_{2})H_{\text{Kep}}^{3}+(3f_{1}g_{1}-g_{2})\frac{H_{\text{Kep}}^{2}}{r}+g_{1}^{2}\frac{H_{\text{Kep}}}{r^{2}} =\delta_{2}H_{\text{Kep}}^{3}+\left\{G_{2-\frac{1}{2}},H_{\text{Kep}}\right\}\] \[+\left\{G_{1-\frac{1}{2}},\delta_{1}H_{\text{Kep}}^{2}\right\}+\frac{1}{2}\{G_{1-\frac{1}{2}},\{G_{1-\frac{1}{2}},H_{\text{Kep}}\}\}\,.\] This can be solved by choosing the next coefficient in the energy rescaling as \[\delta_{2}=5g_{1}^{2}-6g_{1}f_{1}+2g_{2}+2f_{1}^{2}-f_{2}\] and the generating function \[G_{2-\frac{1}{2}}=(p\cdot q)\left(\frac{1}{2}\left(-g_{1}^{2}+2g_{1}f_{1}-2g_{2}\right)H_{\text{Kep}}^{2}+\frac{g_{1}^{2}}{2}\frac{H_{\text{Kep}}}{r}\right)\,.\] We have carried out the computation to order 5PN (\(\epsilon^{6}\)) with the help of Mathematica and present the computations and results in ancillary files [10]. We note that, assuming the particular form (9) of the LRL vector, one can show this is conserved off-shell as well, to at least order 5PN. ### Hidden symmetries require Kepler dynamics In the previous sections we have described a class of relativistic Hamiltonians that turns out to be equivalent to a classical Kepler problem, either being parallel to it on its energy levels or using an approximate canonical transformation and time reparametrization. In both cases, we also constructed the modified LRL vector. In this section, we aim to investigate a more general question: what is the largest class of relativistic two-body Hamiltonians (within a certain set of plausible Hamiltonians) that shares the symmetries of the Kepler problem? And secondly, is this class related to the Kepler system through canonical transformation and time reparametrization? This would in effect generalise an aspect of Bertrand's Theorem, as all Hamiltonians obeying the symmetries would be equivalent to the Kepler problem - just like in the classical context the only system obeying the symmetries (which require a vanishing perihelion shift) is Kepler.
**Theorem 4**.: _Let a spherically symmetric class of relativistic two-body Hamiltonians be given by_ \[H=\frac{1}{\epsilon}\left(\epsilon H_{0}+\epsilon^{2}H_{1}+\epsilon^{3}H_{2}+\dots\right)\,,\] _where_ \[H_{0}=\frac{p^{2}}{2}-\frac{1}{r}\qquad\text{and, for }j\geq 1,\qquad H_{j}=\Lambda_{j+1}(c)\,. \tag{23}\] _At least up to and including 5PN, these Hamiltonians are canonically conjugate to Kepler up to time reparametrization if and only if they conserve a relativistic version of the LRL vector, which to leading order is given by_ \[A_{0}^{i}(q,p)=(p\cross L)^{i}-\frac{q^{i}}{r}\,,\] _and may contain (and in general will contain) corrections at higher orders._ **Remark 1**.: _There are two notions we can use to constrain the number of coefficients \(c_{l,m,n}\) in the Hamiltonian (23) (see also (2)). Firstly, if a particle is far away from any gravitational body, one expects the attraction to become negligible and the Hamiltonian to approach the special relativistic Hamiltonian, being an expansion only in \(p^{2}\). Therefore, all momentum-only terms must be solely dependent on the regular momentum and independent of the radial momentum \(p_{r}\). This removes all terms with coefficients \(c_{l,0,n}\) where \(n\neq 0\). Secondly, the first order Hamiltonian is known to possess an ambiguity, allowing one to shift the radius such that the term \(\sim p_{r}^{2}/r\) vanishes [10, 2, 3]. This kind of ambiguity can be expected also at higher orders, but finding these is not a trivial task and not necessary for our analysis._ Proof.: While the second part of Theorem 4 is of course a tautology (all systems conjugate to Kepler problems conserve the symmetries of Kepler problems), the first part is not at all obvious. We have checked the statement up to fifth post-Newtonian order, and will show here the first two orders. To prove the equivalence, we want to show that the Hamiltonian of a candidate symmetric system can be related through a time reparametrization and a canonical transformation to the Kepler system. Since the Hamiltonians are divided up in separate orders, we can demand this transformation exists at each order individually. We find the symmetric Hamiltonian and its associated LRL vector by taking an Ansatz for the vector and its corrections and requiring that they commute up to a given order. That gives a set of relations among both the coefficients \(c_{l,m,n}\) of the Hamiltonian (23) and \(\alpha_{l,m,n}\), \(\beta_{l,m,n}\) of the LRL Ansatz, of which the \(\epsilon^{j}\)-order term is given by \[A_{j}^{i}=\Lambda_{j}(\alpha)(p\cdot q)p^{i}+\Lambda_{j+1}(\beta)q^{i}\,,\] where we assume (2). The Hamiltonian given in terms of the remaining free variables can then be matched to the time-reparametrized, canonically transformed Kepler Hamiltonian, constructed in the same way as in the previous subsection. At first non-leading order for example, the terms proportional to \(\epsilon\) take the form \[\{H,A^{i}\}=\{H_{0},\epsilon A_{1}^{i}\}+\{\epsilon H_{1},A_{0}^{i}\}=0\,,\] which results in relations among the 3 coefficients \(c_{l,m,n}\) of the Hamiltonian (see Remark 1) and the 9 coefficients of the Ansatz for the LRL vector at first order.
The existence of a LRL vector up to this order requires it to take the following form \[\beta_{1,1,0}=3\alpha_{1,0,0}+\beta_{1,0,0}c_{1,1,0}+4\beta_{1,0,0}c_{2,0,0}\,,\qquad\beta_{2,0,0}=-\alpha_{1,0,0}\,,\] \[\beta_{0,2,0}=-2\left(\alpha_{1,0,0}+\beta_{1,0,0}c_{1,1,0}+4\beta_{1,0,0}c_{2,0,0}\right)\,,\qquad\alpha_{0,1,0}=-2\alpha_{1,0,0}\,,\] with all other coefficients vanishing because of the choices in Remark 1. As \(\beta_{1,0,0}\) is the parameter determining the overall size of the vector, the only free parameter that is left over is \(\alpha_{1,0,0}\). The term in the LRL vector corresponding to this parameter turns out to be proportional to \(A_{0}^{i}H_{0}\). Such terms trivially commute with \(H_{0}\), so we can set the coefficient to \(0\), yielding an expression completely fixed in terms of two of the coefficients of the Hamiltonian. The corrected vector is then a conserved quantity only provided we constrain the Hamiltonian with \[c_{0,2,0}=-2\left(c_{1,1,0}+2c_{2,0,0}\right)\,.\] Using the general generating function (19) we can then obtain the transformations needed to produce the above Hamiltonian from the Kepler Hamiltonian. These transformations are defined by \[\delta_{1}=-4\left(c_{1,1,0}+3c_{2,0,0}\right)\,,\ \ a_{1,0,0}=c_{1,1,0}+4c_{2,0,0}\,,\] \[a_{0,1,0}=-2\left(c_{1,1,0}+4c_{2,0,0}\right)\,,\ \ a_{0,0,1}=0\,.\] Following the same procedure at second order, the equation that needs to be satisfied is \[\{H,A^{i}\}=\{H_{0},\epsilon^{2}A_{2}^{i}\}+\{\epsilon H_{1},\epsilon A_{1}^{i}\}+\{\epsilon^{2}H_{2},A_{0}^{i}\}=0\,.\] Now there are 7 coefficients for the Hamiltonian and 16 for the LRL vector. This leads to 15 constraints on the coefficients of the LRL vector, given in the ancillary files [11]. Once more, the remaining degree of freedom is proportional to a vector trivially commuting with \(H_{0}\), that is \(A_{0}^{i}H_{0}^{2}\), and we can set the corresponding coefficient to \(0\). The transformation then yields a conserved quantity provided we impose the two constraints: \[c_{0,2,1}=-2c_{1,1,0}^{2}-16c_{2,0,0}c_{1,1,0}-32c_{2,0,0}^{2}-3c_{0,3,0}-c_{1,1,1}-5c_{1,2,0}-8c_{2,1,0}-12c_{3,0,0}\,,\] \[c_{0,1,2}=2c_{1,1,0}^{2}+16c_{2,0,0}c_{1,1,0}+32c_{2,0,0}^{2}+c_{0,3,0}-c_{1,1,1}+c_{1,2,0}-4c_{3,0,0}\,,\] and thus five free parameters remain in the Hamiltonian at this order. To relate this to Kepler, we need the transformations given by \[\delta_{2} =-4\left(-c_{1,1,0}^{2}-8c_{2,0,0}c_{1,1,0}-16c_{2,0,0}^{2}+2c_{0,3,0}+2c_{1,2,0}+2c_{2,1,0}+2c_{3,0,0}\right)\,,\] \[a_{2,0,0} =\frac{1}{2}\left(3c_{1,1,0}^{2}+16c_{2,0,0}c_{1,1,0}+16c_{2,0,0}^{2}+2c_{0,3,0}+2c_{1,2,0}+2c_{2,1,0}+4c_{3,0,0}\right)\,,\] \[a_{1,1,0} =-7c_{1,1,0}^{2}-40c_{2,0,0}c_{1,1,0}-48c_{2,0,0}^{2}-5c_{0,3,0}-5c_{1,2,0}-4c_{2,1,0}-4c_{3,0,0}\,,\] \[a_{0,2,0} =8c_{1,1,0}^{2}+48c_{2,0,0}c_{1,1,0}+64c_{2,0,0}^{2}+7c_{0,3,0}+8c_{1,2,0}+8c_{2,1,0}+8c_{3,0,0}\,,\] \[a_{0,1,1} =\frac{1}{3}\left(-2c_{1,1,0}^{2}-16c_{2,0,0}c_{1,1,0}-32c_{2,0,0}^{2}-c_{0,3,0}+c_{1,1,1}-c_{1,2,0}+4c_{3,0,0}\right)\,.\] Similar calculations confirm up to and including fifth PN order that general Hamiltonians of the type described above conserving a LRL vector are related to Kepler via a canonical transformation and time reparametrization. The ancillary files [10] include a Mathematica notebook with the higher order computations and the resulting Hamiltonians and transformations.
In other words, this theorem indicates that there are no free lunches: only systems that are canonically conjugate to Kepler up to time reparametrization have the same extension of the spatial rotation group with hidden symmetries. ## 3 Dilaton-Coupled Einstein-Maxwell Theory This section is included as a physics-oriented intermezzo, presenting some background on the relevant physical system to be discussed in Section 4. It may be skipped without much harm to the understanding of our main conclusions. We will focus on the Einstein-Maxwell-dilaton (EMD) theory; a generalisation of general relativity that is of interest due to the general nature of its forces, comprising spin-0, 1 and 2 background fields, as well as its ability to circumvent the "no-hair theorem". This states that a black hole cannot be described by properties other than its mass, charge, and angular momentum. EMD escapes this prohibition by introducing a non-trivial scalar field and charge4. As we will see, these features will lead to interesting aspects in terms of black hole orbits. In this section, we will describe the different black hole solutions as well as their source terms. Footnote 4: This scalar field gives the black hole what is called _secondary_ hair, as the scalar charge is completely determined in terms of mass and charge within a given theory, i.e. for a given value of the scalar coupling constant [12]. ### Black holes with dilaton hair The fields present in EMD theory are the metric \(g_{\mu\nu}\), the four-potential \(A_{\mu}\) and the dilaton field \(\phi\), exponentially coupled (through coupling constant \(a\)) to the electromagnetic field strength. Mathematically, these are respectively a pseudo-metric tensor, a covector and a function on the spacetime manifold \(\mathbb{R}^{4}\). A solution of the corresponding field equations is then given by the critical points of the action (see also e.g. [13, 14]) \[S[g_{\mu\nu},A_{\mu},\phi]=\frac{1}{16\pi}\int\;\mathrm{d}^{4}x\sqrt{-g}\left(R-2(\partial\phi)^{2}-e^{-2a\phi}F^{2}\right)\,,\] where \(g=\det(g_{\mu\nu})\) is the determinant of the pseudo-metric tensor matrix, \(R\) is the Ricci scalar and \(F=dA\) is Maxwell's field. In addition to diffeomorphism invariance and gauge symmetry, this action has a global symmetry that shifts the dilaton while rescaling the gauge vector. In what follows we will omit further mathematical details and focus on a more physical description. Starting with the special case \(a=0\), the static and spherically symmetric solutions of this theory are given by the well-known Schwarzschild and Reissner-Nordstrom black holes, with possibly non-vanishing electric charge. In the electrically neutral case, the introduction of the dilaton does not introduce additional solutions; scalar-gravity is known to satisfy the no-hair theorem and hence cannot carry scalar charge [1]. In contrast, when introducing the dilaton (i.e.
\(a\neq 0\)) in the charged case, the solution becomes more interesting and reads [11, 12] \[\mathrm{d}s^{2}=-\lambda^{2}\mathrm{d}t^{2}+\lambda^{-2}\mathrm{d}r^{2}+r^{2}\kappa^{2}\mathrm{d}\Omega^{2}\,,\qquad F_{tr}=\frac{e^{2a\phi_{0}}Q}{r^{2}\kappa^{2}}\,,\qquad e^{2a\phi}=e^{2a\phi_{0}}\left(1-\frac{r_{-}}{r}\right)^{\frac{2a^{2}}{1+a^{2}}}\,,\] where \[\kappa^{2}=\left(1-\frac{r_{-}}{r}\right)^{\frac{2a^{2}}{1+a^{2}}}\,,\qquad\lambda^{2}=\left(1-\frac{r_{+}}{r}\right)\left(1-\frac{r_{-}}{r}\right)^{\frac{1-a^{2}}{1+a^{2}}}\,.\] Note that one can set \(\phi_{0}=0\) by the shift symmetry of the dilaton, which we will subsequently do. This most general solution is parametrised by the locations of the inner and outer horizons \(r_{\pm}\). These are related to the mass and charge of the object by \[r_{+}=M+\sqrt{M^{2}+Q^{2}(a^{2}-1)}\,,\qquad r_{-}=\left(\frac{a^{2}+1}{a^{2}-1}\right)\left(-M+\sqrt{M^{2}+Q^{2}(a^{2}-1)}\right)\,. \tag{24}\] Importantly, the 'horizons' labelled by the minus sign are singular for all \(a>0\) (i.e., the scalar curvature diverges there), whereas the ones labelled by the plus sign are not. As mentioned above, this solution carries scalar charge, given by a simple integration over a spherical shell surrounding it [10]: \[D=\lim_{\rho\to\infty}\frac{1}{4\pi}\oint\ \nabla^{\mu}\phi\ \mathrm{d}^{2}\sigma_{\mu}=\frac{a}{a^{2}-1}\left(-M+\sqrt{M^{2}+Q^{2}(a^{2}-1)}\right)\,. \tag{25}\] In order to have non-vanishing dilaton charge, one therefore needs both electric charge \(Q\neq 0\) as well as non-vanishing scalar coupling \(a\neq 0\). For a given theory and hence value of \(a\), the mass and charge determine the dilaton charge, which is therefore not an independent parameter. For completeness, we would like to mention that for the same set of charges \((M,Q,D)\), a second solution exists, given by the above fields but with parameters \[\tilde{r}_{+}=M-\sqrt{M^{2}+Q^{2}(a^{2}-1)}\,,\qquad\tilde{r}_{-}=\left(\frac{a^{2}+1}{a^{2}-1}\right)\left(-M-\sqrt{M^{2}+Q^{2}(a^{2}-1)}\right)\,,\] where we have added a tilde to avoid confusion with the solutions that form our main interest. In the neutral case, these solutions reduce to the Janis-Newman-Winicour solution for Einstein gravity minimally coupled to a scalar field [12]. Note that they are in general different from Schwarzschild (when choosing \(a\neq 0\)); however, in this case the solution develops a naked singularity (that is, a non-removable singularity not cloaked by an event horizon). This amounts to the statement that scalar-gravity does not have any black hole solutions other than Schwarzschild. The introduction of the electric charge does not qualitatively change this singular property. For these reasons we will not consider this solution any further. ### The extremal case We now turn to the extremal case of the hairy black hole solutions (24). To this end, it is convenient to rewrite the relation (25) between the three charges as the quadratic relation \[(D-aM)^{2}=a^{2}(M^{2}+D^{2}-Q^{2})\,. \tag{26}\] The importance of the expression on the right-hand side lies in the extremality of the black hole. Imagine two such black holes: when this combination vanishes, the attractive spin-0 and spin-2 forces between them (proportional to \(M^{2}+D^{2}\)) would exactly cancel the repulsive spin-1 force (proportional to \(Q^{2}\)). The dimensionless parameter \[\chi^{2}\equiv\frac{M^{2}+D^{2}-Q^{2}}{M^{2}}\,,\] is therefore a measure of extremality, and interpolates between 0 and 1.
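As a quick numerical illustration of these formulas, the following minimal sketch (assuming Python with numpy; the helper name is ours) evaluates the horizon radii (24), the dilaton charge (25) and the extremality measure \(\chi^{2}\):

```python
import numpy as np

def emd_charges(M, Q, a):
    """Horizon radii (24), dilaton charge (25) and extremality chi^2."""
    s = np.sqrt(M**2 + Q**2*(a**2 - 1))
    r_plus = M + s                                # regular horizon
    r_minus = (a**2 + 1)/(a**2 - 1)*(-M + s)      # singular surface for a > 0
    D = a/(a**2 - 1)*(-M + s)                     # dilaton charge
    chi2 = (M**2 + D**2 - Q**2)/M**2              # extremality measure
    return r_plus, r_minus, D, chi2

a, M = np.sqrt(3.0), 1.0
print(emd_charges(M, 0.0, a))                  # neutral: r_+ = 2M, D = 0, chi^2 = 1
print(emd_charges(M, np.sqrt(1 + a**2)*M, a))  # horizons merge at M(1+a^2), chi^2 = 0
```

The charge \(Q=\sqrt{1+a^{2}}M\) used in the second call will be identified as the extremal value below; for it, both horizons merge at \(r_{\pm}=M(1+a^{2})\) and \(\chi^{2}=0\).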
When \(\chi=1\), this corresponds to a neutral black hole (i.e. the Schwarzschild solution). In contrast, the case \(\chi=0\) corresponds to an extremal black hole: in this case, the two sides of (26) vanish separately, and the black hole has extremal charges \[D_{\rm extr}=aM\,,\qquad Q_{\rm extr}=\pm\sqrt{1+a^{2}}M\,, \tag{27}\] that are both linearly proportional to the mass. For all values \(a\neq 0\), the solutions will be singular in the extremal limit [11]. Moreover, the thermodynamics of such extremal objects are fundamentally different for \(a\gtrless 1\) - in fact, it has been argued [14] that they resemble elementary particles more than black holes for \(a>1\). As this will be of no consequence for the dynamics, which is our concern here, we will still refer to these objects as black holes. Due to the cancellation of forces between extremal black holes, one can also construct multi-center solutions. For the Einstein-Maxwell case, these are the Majumdar-Papapetrou solutions [13, 14], while the solutions with a non-minimally coupled dilaton field added in have been discussed in [15]. The line element in this case is given by \[\mathrm{d}s^{2}=-U^{-2/(1+a^{2})}\mathrm{d}t^{2}+U^{2/(1+a^{2})}\mathrm{d}q\cdot\mathrm{d}q\,, \tag{28}\] with \[U(q)=1+(1+a^{2})\sum_{n}\frac{M_{n}}{|q-q_{n}|}\,,\] where the sum is over the extremal black holes with mass \(M_{n}\) and positions \(q_{n}\), of which there may be arbitrarily many. The no-force condition implies that all centers carry electric and dilaton charges (27) that are proportional to their masses. For a single charge, this solution corresponds to the extremal case of the general Einstein-Maxwell-dilaton metric. This can be seen by noting that the two horizons of the EMD solution merge into \(r_{\pm}=M(1+a^{2})\) and switching to the isotropic radius \(\rho=r(1-\frac{r_{\pm}}{r})\). ### Skeletonisation To make our dynamical systems pertain to dilaton-charged black holes, simply taking a point particle with a mass and electric charge while keeping the universal dilaton coupling \(a\) nonzero does not suffice. For self-gravitating objects, even in the zero-size limit, the way the object couples to the scalar field depends on the background value of that field. One can see this by considering the black hole presented in the first subsection: both the dilaton charge and the electric charge depend on the background scalar field, while the electric charge is conserved by a \(U(1)\) symmetry. In general then, one can describe a particle by its conserved charge \(Q_{p}\) and a mass function \(\mathfrak{m}(\phi)\), which absorbs the dependence on the dilaton field. The mass function shows up in the Lagrangian describing the dynamics of the point particle, reading \[L_{pp}=\mathfrak{m}(\phi)\sqrt{-g_{\mu\nu}\dot{x}^{\mu}\dot{x}^{\nu}}+Q_{p}A_{\mu}\dot{x}^{\mu}. \tag{29}\] We can compare the field generated by a particle in this parametrisation to the field we know belongs to a certain object, in order to find the mass function belonging to the zero-size limit of that particular object. Taking a black hole as an example, this leads to the matching condition [16, 17, 18] \[\frac{\mathrm{d}\mathfrak{m}(\phi)}{\mathrm{d}\phi}=\frac{a}{a^{2}-1}\left(-\mathfrak{m}(\phi)+\sqrt{\mathfrak{m}(\phi)^{2}+Q_{p}^{2}e^{2a\phi}(a^{2}-1)}\right)\,. \tag{30}\] For every value of the dilaton coupling \(a\), the solution to this equation will depend on the charge and an integration constant, which is determined by the mass \(m\) and the charge.
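The matching condition is also easy to integrate numerically. A minimal sketch (assuming Python with scipy) solving (30) for the charge \(Q_{p}=\sqrt{1+a^{2}}\,m\) and confirming the purely exponential coupling \(\mathfrak{m}(\phi)=me^{a\phi}\) found for extremal particles below:

```python
import numpy as np
from scipy.integrate import solve_ivp

a, m = np.sqrt(3.0), 1.0
Qp = np.sqrt(1 + a**2)*m                  # extremal charge

def dm_dphi(phi, mm):                     # right-hand side of eq. (30)
    return a/(a**2 - 1)*(-mm + np.sqrt(mm**2 + Qp**2*np.exp(2*a*phi)*(a**2 - 1)))

phi = np.linspace(0.0, 0.5, 11)
sol = solve_ivp(dm_dphi, (phi[0], phi[-1]), [m], t_eval=phi,
                rtol=1e-10, atol=1e-12)
print(np.max(np.abs(sol.y[0] - m*np.exp(a*phi))))  # ~0: m(phi) = m e^{a phi}
```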
Note that the above ODE is fully analogous to the expression for the dilaton charge (25), with the identifications \[(M,D,Q)\simeq(\mathfrak{m}(\phi),\frac{d\mathfrak{m}(\phi)}{d\phi},Q_{p}e^{a\phi})\,.\] Indeed, one should think of the latter as the background-dependent charges, which go to their asymptotic values for \(\phi\to 0\). The mass function therefore determines more than the mass alone. Its first derivative corresponds to the dilaton charge. Moreover, its second derivative is closely related to the extremality combination: \[\frac{d^{2}}{d\phi^{2}}\log\mathfrak{m}(\phi)=\frac{a^{2}Q_{p}^{2}e^{2a\phi}}{\mathfrak{m}^{2}(\phi)}\frac{(a-\frac{\mathrm{d}}{\mathrm{d}\phi})\mathfrak{m}(\phi)}{a\mathfrak{m}(\phi)+(a^{2}-1)\frac{\mathrm{d}\mathfrak{m}(\phi)}{\mathrm{d}\phi}}\,,\] which is evaluated on the background to be \[\beta:=\frac{d^{2}}{d\phi^{2}}\log\mathfrak{m}(\phi)|_{\phi=0}=\frac{a^{2}Q^{2}}{m^{2}}\frac{\chi}{\chi+\frac{aD}{m}}\,,\] clearly vanishing for extremal black holes. In the extremal case, therefore, the coupling of the particle to the dilaton field is simply through an exponential \(\mathfrak{m}(\phi)=me^{a\phi}\). Looking at the Lagrangian (29), this shows the extremal particle couples like a particle without self-gravitation to the metric and dilaton field. In retrospect this is not surprising, since, if the extremal particle does not experience a net force from other extremal particles stationary with respect to it, why would it experience any force generated by itself? Equivalently, the extremal particle can be seen to couple to a metric given by \[\tilde{g}_{\mu\nu}=e^{2a\phi}g_{\mu\nu}\,.\] If we make the transformation to the tilde metric, we switch from the Einstein frame to the Jordan frame, in which the bulk action takes the form [14, 15] \[S_{\text{Jordan}}=\int\;\mathrm{d}^{4}\mathbf{x}\sqrt{-\tilde{g}}\ e^{-2a\phi}\left(\tilde{R}+\left(3a^{2}-2\right)\left(\partial\phi\right)^{2}-F^{2}\right)\,.\] In this frame, the extremal particle does not couple to the dilaton field at all, making its mass constant. **Remark 2**.: _In general, for non-extremal cases, (30) has no simple closed-form expression. An exception is the case \(a=1\), for which it is solved by \(\mathfrak{m}(\phi)^{2}=\mu^{2}+\frac{1}{2}Q_{p}^{2}e^{2\phi}\), where the integration constant \(\mu\) is given by \(\mu^{2}=m^{2}-\frac{Q_{p}^{2}}{2}\), showing it is a measure of deviation from extremality, since for \(a=1\) the particle is extremal when \(Q_{p}^{2}=2m^{2}\) (setting the background field to zero)._ ## 4 The Two-Body System of Extremal Black Holes Following the physics intermezzo, we now return to the main theme of this paper - the analysis and understanding of Hamiltonians with Kepler-like dynamics - and employ the dynamics of black holes in Einstein-Maxwell-dilaton gravity as an example. We will consider a pair of non-spinning black holes in EMD theory, carrying both electric and dilatonic charge besides their mass. In the first part of this section, we restrict ourselves to the first order in the post-Newtonian expansion, i.e. at 1PN. As we will see, for a specific case of the dilaton coupling \(a\) and extremal charges, this system coincides with a Kepler-like system. In the second part, we show how the same equivalence to Kepler dynamics arises in a different region in parameter space: instead of 1PN for arbitrary mass ratio, we now focus on the test-mass limit with a vanishing mass ratio, or \(m_{1}\ll m_{2}\).
This corresponds to the motion of a charged particle in a given background as outlined in Section 3, and can be studied at all orders in the post-Newtonian expansion. Prompted by the two-body discussion, we will focus specifically on extremal black holes with opposite charges. ### Kepler dynamics at 1PN For the two-body system with arbitrary masses \(m_{1,2}\), electric charges \(Q_{1,2}\) and dilaton charges \(D_{1,2}\) (subject to the relation (25)), the 0PN Hamiltonian in center-of-mass coordinates reads \[H_{0PN}= \frac{p^{2}}{2\mu}-\frac{G_{12}M\mu}{r}\,,\] where the effective Newton's constant is given by the interplay between attractive and repulsive forces, \[G_{12}=\frac{1}{m_{1}m_{2}}(m_{1}m_{2}+D_{1}D_{2}-Q_{1}Q_{2})\,.\] Moreover, we introduce the total mass, reduced mass and symmetric mass ratio given by \[M=m_{1}+m_{2}\,,\qquad\mu=\frac{m_{1}m_{2}}{M}\,,\qquad\nu=\frac{\mu}{M}\,,\] in the usual way. The 1PN Hamiltonian can be found in e.g. [15] and can be written in terms of three terms5 Footnote 5: Note that this in general will also have an additional \(p_{r}^{2}/r\) term, proportional to the radial momentum only. By means of a constant shift of the radial coordinate, one can set the coefficient of this term to zero, see e.g. [1]. We will do so in order to facilitate the comparison to Section 2. \[H_{\text{1PN}}=h_{1}\frac{p^{4}}{4\mu^{3}}+h_{2}\frac{\gamma}{\mu^{2}}\frac{p^{2}}{r}+h_{3}\frac{\gamma^{2}}{\mu r^{2}}\,,\] writing \(\gamma=G_{12}M\mu\) and with dimensionless coefficients given by \[h_{1} =-\frac{1}{2}(1-3\nu)\,,\quad h_{2}=-\frac{1}{2}\left(\frac{3-D_{1}D_{2}}{G_{12}}\right)-\nu\,,\] \[h_{3} =\frac{\nu}{2}+\frac{1}{2G_{12}^{2}}\left[(1+D_{1}D_{2})^{2}-2Q_{1}Q_{2}+\left\{\frac{m_{1}}{M}(D_{1}^{2}\beta_{2}+Q_{1}^{2}(1+aD_{2})-2Q_{1}Q_{2}aD_{1})+(1\leftrightarrow 2)\right\}\right]\,.\] All quantities here are asymptotic values, as measured far away from any dilaton charge. Moreover, note that we introduce a slight abuse of notation in the above and hereafter and switch to charges and dilaton charges per unit mass, as in \(\tilde{Q}_{1,2}=Q_{1,2}/m_{1,2}\), but drop the tilde to avoid cumbersome expressions. A comparison to the 1PN Kepler-type Hamiltonians discussed in Section 2 demonstrates that these have two free parameters at every order (including 1PN), while the two-body system here has three terms. For general values, this system will therefore not be related to Kepler via a symplectic transformation. More precisely, the linear combination6 Footnote 6: This corresponds to the combination \(A+2B+C+D\) in the conventions of [14; 21]. \[\Delta= h_{1}+2h_{2}+h_{3}\,,\] \[= -\frac{1}{2G_{12}^{2}}\bigg{(}6(1-Q_{1}Q_{2})+Q_{1}^{2}Q_{2}^{2}+2D_{1}D_{2}(2-D_{1}D_{2})\] \[+\left\{\frac{m_{1}}{M}\left(-D_{1}^{2}\beta_{2}-Q_{1}^{2}(1+aD_{2})+2Q_{1}Q_{2}aD_{1}\right)+(1\leftrightarrow 2)\right\}\bigg{)}\;,\] quantifies the deviation away from Kepler-like dynamics: * When \(\Delta\) vanishes, the Hamiltonian can be written in the form (3) (up to 1PN order), identifying \[f_{1}=-h_{1},\qquad g_{1}=-2(h_{1}+h_{2})\,.\] In order to see this explicitly, one needs to set \(\mu=1\) and scale the quantity \(GM\) (with \(G\) Newton's constant) to \(\frac{1}{8}\)7. Hence there exists a canonical transformation to Kepler and the system has a LRL vector. The form of both the canonical transformation and the conserved charge follow from the discussion in Section 2.
Footnote 7: This is because the effective Newton’s constant \(G_{12}\) is eight times larger than the usual gravitational constant. This matches the findings of [21] for their supergravity system. * In contrast, when \(\Delta\) is non-vanishing, the relativistic corrections of this system are not of the Kepler-like form and the corresponding dynamical system differs from Kepler. The same quantity also determines whether or not bound states have closed orbits: in general they will not, with a perihelion precession given by8 Footnote 8: This result has been derived before in [14], though with a different mass function in the sense of Section 3.3, such that the results only coincide for extremal black holes. \[\delta\phi_{\text{1PN,EMD}}=-\frac{2\pi\gamma^{2}}{L^{2}}\Delta\,,\] as also stressed by [21]. As a consistency check, let us point out that the GR limit, where all parameters except \(m_{1},m_{2}\) and \(L\) vanish, reduces to \[\delta\phi_{\text{1PN,GR}}=6\pi\frac{M^{2}\mu^{2}}{L^{2}}\,,\] as already found by Einstein. Also, the perihelion precession in Einstein-Maxwell theory, i.e. with vanishing dilaton, becomes \[\delta\phi_{\text{1PN,EM}}=\pi\frac{M^{2}\mu^{2}}{L^{2}}\left(6(1-Q_{1}Q_{2})+Q_{1}^{2}Q_{2}^{2}-\frac{\left(m_{1}Q_{1}^{2}+m_{2}Q_{2}^{2}\right)}{M}\right)\,,\] which in the limit that one mass is much larger than the other agrees with [1]. At this point it might seem that the introduction of the dilaton complicates the expression for the deviation from Kepler enormously. However, there is a massive simplification in the case where the charges are extremal, whose special nature was also highlighted in Section 3. In the present case of a two-body system, we will have to take both charges extremal and of opposite sign, see (27): when taking the same sign for both extremal charges, the static forces cancel out and the effective Newton's constant \(G_{12}\) vanishes. Instead, when taking opposite signs, all forces are attractive and hence add up in the 0PN Hamiltonian. Furthermore, in the 1PN Hamiltonian, the parameters \(\beta_{1,2}\) vanish entirely, leading to the simple result \[\delta\phi_{\text{1PN,EMD}}|_{\text{ext.}}=\pi\frac{4(1+a^{2})M^{2}\mu^{2}}{L^{2}}(3-a^{2})\,.\] We therefore find that at \(a^{2}=3\), this relativistic system of extremal black holes becomes equivalent to Kepler9. It has a LRL vector and therefore \(SO(4)\) hidden symmetry. Moreover, the orbit closes as the perihelion precession vanishes. Footnote 9: This value coincides with the Kaluza-Klein reduction of gravity in 5 dimensions [13]. This result is closely related to the findings for extremal black holes in maximal supergravity [14]. The role of the \(SU(8)\) charge vector misalignment in maximal supergravity, needed in order to create a nonzero force between the extremal objects other than velocity dependent forces, is played in our case by the opposite nature of the charges10. In contrast to the rigid nature of maximal supergravity, enforced by the \(N=8\) supersymmetry, we have the freedom to tune the dilaton coupling, finding that the two-body systems of extremal and anti-extremal black holes are always a special case with a particularly simple expression for \(\Delta\), but that this only corresponds to Kepler dynamics for a particular dilaton coupling. Footnote 10: One could further extend our considerations and include magnetic charges as well. We expect the dyonic charges to span a \(U(1)\) charge vector playing a completely analogous role to the \(SU(8)\) charge vector of [14].
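The extremal simplification can be reproduced symbolically. A minimal sketch (assuming Python with sympy, and using per-unit-mass charges as in the text) evaluates \(\Delta\) for opposite extremal charges:

```python
import sympy as sp

a, x = sp.symbols('a x', positive=True)   # x = m_1/M, so m_2/M = 1 - x
# per-unit-mass extremal charges of opposite sign, eq. (27):
D1 = D2 = a
Q1Q2 = -(1 + a**2)
Q1sq = Q2sq = 1 + a**2
b1 = b2 = 0                               # beta_{1,2} vanish in the extremal case
G12 = 1 + D1*D2 - Q1Q2                    # = 2(1 + a^2)
Delta = -(sp.Rational(1, 2)/G12**2)*(
    6*(1 - Q1Q2) + Q1Q2**2 + 2*D1*D2*(2 - D1*D2)
    + x*(-D1**2*b2 - Q1sq*(1 + a*D2) + 2*Q1Q2*a*D1)
    + (1 - x)*(-D2**2*b1 - Q2sq*(1 + a*D1) + 2*Q1Q2*a*D2))
print(sp.simplify(Delta))   # expected: (a**2 - 3)/(2*a**2 + 2), zero at a^2 = 3
```

Together with \(\gamma=2(1+a^{2})M\mu\), the value \(\Delta=-(3-a^{2})/(2(1+a^{2}))\) reproduces the extremal precession formula above via \(\delta\phi=-2\pi\gamma^{2}\Delta/L^{2}\).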
### Kepler dynamics in the test-mass limit Above, we have shown that the dynamics of the first relativistic correction of a system with comparably-sized masses in EMD theory can behave just like the classical Kepler problem. Now, we wish to extend our analysis to higher orders and will consider another tractable limit: the test-mass limit (\(m_{1}\ll m_{2}\)). Again, we can show the equivalence of this system with opposite and extremal charges in EMD with \(a=\sqrt{3}\) to a Kepler-like system. However, in this system we can include all relativistic corrections. We will focus immediately on the case with extremal charges. The general (scalar) charged black hole metric simplifies significantly in the extremal limit, and it will be convenient to use the Majumdar-Papapetrou solution in the isotropic coordinate system (28) [13] \[\text{d}s^{2}=-U^{-2/(1+a^{2})}\text{d}t^{2}+U^{2/(1+a^{2})}(\text{d}r^{2}+r^{2}\text{d}\theta^{2})\,,\] with a single center: \[U(r,\theta)=1+(1+a^{2})\frac{m_{2}}{r}\,.\] In the extremal case, the scalar field and vector are given by11 Footnote 11: In terms of the Schwarzschild radial coordinate, this choice of gauge corresponds to \(A_{0}=\frac{1}{\sqrt{1+a^{2}}}+\frac{m_{2}Q_{2}}{r}\). After the change \(r\to r+r_{\pm}\), we find the above. \[e^{a\phi}=U^{-a^{2}/(1+a^{2})}\,,\qquad Q_{1}A_{0}=U^{-1}\,,\] where we have chosen static gauge for the latter. The Lagrangian for a point particle with charge \(Q_{1}\) reads \[L_{pp}=m_{1}e^{a\phi}\sqrt{-\dot{x}_{\mu}\dot{x}^{\mu}}+m_{1}Q_{1}A_{\mu}\dot{x}^{\mu}\,. \tag{31}\] The above is an extended Lagrangian, where the time coordinate can be seen as another dimension in the space; the Lagrangian is defined on the tangent space \(T\bar{M}\) of a \(d+1\) dimensional manifold \(\bar{M}=\mathbb{R}\cross M\), the extended configuration manifold. All coordinates and velocities (the latter denoted by a dot) are parametrised by a time-like variable \(s\). Writing the Lagrangian in terms of the harmonic function we have \[L_{pp}=m_{1}U^{-1}\left(\sqrt{1-U^{4/(1+a^{2})}|\dot{q}|^{2}}+1\right)\dot{t}\,.\] Since \(t\) is a cyclic variable, its conjugate momentum is conserved, and the reparametrisation freedom allows us to identify the time \(t\) with the fictitious time \(s\). Note that solutions to the Euler-Lagrange equations following from this action will not be unique, as different choices of time parametrisation will correspond to the same physical solution. Nevertheless, this fact allows one to reduce the Hamiltonian of the system to an autonomous Hamiltonian on \(T^{*}M\) instead, which is different from the one related by Legendre transform to the Lagrangian above [10], since the latter is defined on \(T^{*}\bar{M}\). This, in turn, leads directly to the relation to the classical Kepler problem. The Legendre transform results in \[H(q,\dot{q})=\frac{\partial L_{pp}}{\partial\dot{q}}\cdot\dot{q}-L_{pp}=-m_{1}U^{-1}\left(\frac{1}{\sqrt{1-U^{4/(1+a^{2})}|\dot{q}|^{2}}}+1\right)\dot{t}\,,\] where \(t=x^{0}\) is the real time and \(q=(x^{1},x^{2},x^{3})\) the position. For the reasons discussed above, we can choose the simple time parametrisation \(\dot{t}=-1\) to get rid of explicit time-dependence.
Solving for the momenta conjugate to the positions, \[p_{i}=\frac{\partial L_{pp}}{\partial\dot{q}^{i}}=\frac{m_{1}U^{(3-a^{2})/(1+a^{2})}\dot{q}_{i}}{\sqrt{1-U^{4/(1+a^{2})}|\dot{q}|^{2}}}\,,\] then leads to the Hamiltonian in phase space \[H(q,p)=m_{1}U^{-1}\left(\sqrt{1+U^{2(a^{2}-1)/(1+a^{2})}\frac{\left|p\right|^{2}}{m_{1}^{2}}}+1\right)\,.\] Note that the rest-mass energy is equal to \(2m_{1}c^{2}\); this differs from the usual \(m_{1}c^{2}\) due to the specific gauge choice that we have made for the gauge vector. There is a number of interesting subcases to consider. First of all, the case \(a=1\) leads to a Hamiltonian that is conformal to the special relativistic case, \[H(q,p)=m_{1}U^{-1}(q)\left(\sqrt{1+\frac{p^{2}}{m_{1}^{2}}}+1\right)\,.\] Instead, our main interest will be the case \(a^{2}=3\) again. In this case we have \[H(q,p)=m_{1}U^{-1}(q)\left(\sqrt{1+U(q)\frac{p^{2}}{m_{1}^{2}}}+1\right)\,.\] Remarkably, this Hamiltonian satisfies the interesting relation \[\frac{1}{2}\left(\frac{H^{2}(q,p)}{m_{1}}-2H(q,p)\right)=\frac{p^{2}}{2m_{1}}-\frac{2m_{2}H^{2}(q,p)}{m_{1}r(q)}\,, \tag{32}\] where \(r(q)=|q|\). Shifting the Hamiltonian by the rest-mass energy and rescaling the distance by a factor \(8\), one obtains (in terms of the new Hamiltonian) \[H(q,p)+\frac{1}{2}\frac{H^{2}(q,p)}{m_{1}}=\frac{p^{2}}{2m_{1}}-m_{2}\frac{m_{1}+H(q,p)+\frac{1}{4m_{1}}H^{2}(q,p)}{r(q)}\,. \tag{33}\] This specific form of the Hamiltonian shows that, following the arguments of Section 2, the extremal EMD 1-centre system with \(a=\sqrt{3}\) is equivalent to the classical Kepler problem. It therefore also has a hidden LRL symmetry as well as closed orbits. The same special behaviour can also be seen from the perspective of the equations of motion. Adopting the parametrisation \(\dot{x}_{\mu}\dot{x}^{\mu}=-1\), there are two conserved quantities from the Lagrangian (31) \[L=m_{1}U^{(2-a^{2})/(1+a^{2})}r^{2}\dot{\theta}\,,\qquad E=m_{1}U^{(-2-a^{2})/(1+a^{2})}\dot{t}+m_{1}Q_{1}A_{0}\,,\] as angular momentum and energy. Using again \(\dot{x}^{2}=-1\) we can state \[-U^{2}\left(\frac{E}{m_{1}}-U^{-1}\right)^{2}+U^{2/(1+a^{2})}\dot{r}^{2}+\frac{L^{2}}{m_{1}^{2}r^{2}}U^{(2a^{2}-2)/(1+a^{2})}=-1\,.\] It is useful now to introduce the Binet variable \(u\equiv\frac{1}{r}\), with \(u^{\prime}\) as its derivative with respect to \(\theta\), so that \[\dot{r}=-u^{\prime}\frac{L}{m_{1}}U^{(a^{2}-2)/(1+a^{2})}\,,\] and we find for the equation of motion \[(u^{\prime})^{2}+u^{2}-U^{4/(1+a^{2})}\frac{1}{L^{2}}(E^{2}-2Em_{1}U^{-1})=0\,.\] The last term here in principle provides an infinite expansion in increasing orders of \(u\) (and its accompanying powers of \(\frac{1}{c^{2}}\)). However, if we now choose \(a^{2}=3\), the powers of the harmonic function simplify and (restoring the gravitational constant) we have \[(u^{\prime})^{2}+\left(u-2\frac{Gm_{2}E^{2}}{L^{2}}\right)^{2}=\frac{\left(E^{2}-2m_{1}E\right)}{L^{2}}+\frac{4G^{2}m_{2}^{2}E^{4}}{L^{4}}\,.\] Compare this to the classical equation of motion (see e.g. [Ton]) \[(u^{\prime})^{2}+\left(u-\frac{Gm_{2}m_{1}^{2}}{L^{2}}\right)^{2}=\frac{2E_{N}m_{1}}{L^{2}}+\frac{G^{2}m_{2}^{2}m_{1}^{4}}{L^{4}}\,,\] where \(E_{N}\) is the Newtonian energy. We see that the only difference resides in the modification of the gravitational constant by a function \(g(E)=2\frac{E^{2}}{m_{1}^{2}}\). Accordingly, the function of the Hamiltonian appearing on the left-hand side of the Kepler-like relation (32) plays here exactly the role of the Newtonian energy.
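Relation (32) can be verified symbolically in a few lines; a minimal sketch assuming Python with sympy:

```python
import sympy as sp

p2, r, m1, m2 = sp.symbols('p2 r m1 m2', positive=True)   # p2 = |p|^2
U = 1 + 4*m2/r                                   # harmonic function for a^2 = 3
H = m1/U*(sp.sqrt(1 + U*p2/m1**2) + 1)           # test-mass Hamiltonian above
lhs = (H**2/m1 - 2*H)/2
rhs = p2/(2*m1) - 2*m2*H**2/(m1*r)
print(sp.simplify(lhs - rhs))                    # expected: 0, i.e. eq. (32) holds
```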
The orbits will therefore be the same up to the above modification of the gravitational constant. ## 5 Conclusion This paper studies relativistic systems of gravitating bodies, with dynamics equivalent to the classical Kepler problem. In particular, we have shown a class of seemingly relativistic Hamiltonians to have flow parallel to that of the Kepler Hamiltonian on each level set, and we provided the accompanying Laplace-Runge-Lenz vector. Moreover, to fifth order in the PN expansion, we were able to construct the symplectic transformations and energy redefinitions needed to transform the Kepler Hamiltonian into such Kepler-type Hamiltonians explicitly, beyond the level-set equivalence. Additionally, a conjecture was put forth that all relativistic systems of a certain kind, i.e. Kepler at zeroth order and PN corrections of the form \[c_{n,m,l}\frac{(p^{2})^{n}(p_{r}^{2})^{l}}{r^{m}}\,,\] that conserve a (relativistic version of a) Laplace-Runge-Lenz vector are canonically conjugate up to time reparametrization to the Kepler system. This conjecture was also shown to hold at least to fifth PN order. Remarkably, this type of Hamiltonian is not merely a mathematical possibility, but is actually realised in a comparatively simple and interesting physical theory. The Einstein-Maxwell-dilaton theory, when considering two extremal black holes with opposite signs of the charges and dilaton coupling tuned to the Kaluza-Klein reduction value (\(a=\sqrt{3}\)), has Hamiltonians of exactly this form in both the test-mass limit and the 1PN expansion of the two-body system. We therefore have established an interesting link between relativistic Hamiltonians, the ordinary Kepler problem and an explicit realisation. Several directions for further exploration present themselves. Firstly, exploring the conditions for local and global existence of the implicit, Kepler-type Hamiltonians and studying the geometry of the corresponding phase space would make for an intriguing investigation. Secondly, as the equivalence to Kepler for the discussed Hamiltonians is only shown on a level set, the full phase space will in general look different from the Kepler phase space. Roughly put, the constant energy surfaces are 'stacked' in a different way in the Kepler-type systems as compared to the original Kepler system. This raises the question of whether one can always find a symplectic transformation from one to the other, as we have shown explicitly to a limited order. While we expect the normal-form-like construction of canonical transformations to extend to higher orders, perhaps even arbitrarily high orders, there is no guarantee that this procedure will converge. However, it would be very appealing, if possible, to construct the asymptotic series of the transformations. Thirdly, in the non-relativistic Kepler problem, the geometrical origin of the \(SO(4)\) symmetry of 3-dimensional Kepler is known to stem from a mapping to the motion of a free particle on a three-sphere, as derived by Fock [Foc35] in 1935. In the context of the EMD system, we have a natural way of perturbing the Kepler problem, by allowing for example the dilaton coupling to deviate from \(a=\sqrt{3}\). This allows one to investigate which elements of this geometric construction would survive such a perturbation in the mapping to the three-sphere. Can the motion still be described by free motion on some hypersurface?
Also related to the larger-than-expected symmetry group of the EMD 1-centre system is the Kaluza-Klein reduction of 5-dimensional Einstein-Hilbert gravity, yielding EMD with the special dilaton coupling. Can we understand the origin of the hidden symmetry from the higher-dimensional origin of its theory? After all, while an \(SO(4)\) symmetry in 3 dimensions might surprise the reader unfamiliar with the Kepler problem, this is simply the group of spatial rotations in 5D. It would be interesting to investigate this correspondence and possible relation further. Closely connected to the latter point is the more involved theory of \(\mathcal{N}=8\) supergravity, which can be obtained as the dimensional reduction of supergravity from 11 to 4 dimensions; many of our EMD findings were already highlighted in this setting from the perspective of vanishing periastron precession [10]. Moreover, extremal black holes in the \(\mathcal{N}=8\) theory have vanishing periastron precession to third post-Minkowskian order [14], at least leaving open the possibility of conserving a LRL vector to higher order and relating to Kepler. It is not clear that this also applies to the higher order two-body Hamiltonians of the extremal EMD with \(a=\sqrt{3}\); we leave this interesting question open for future study. ## Acknowledgments We are grateful to Andreas Knauf, Tomas Ortin and Cedric Deffayet for stimulating discussions. D.N. is supported by the Fundamentals of the Universe research program within the University of Groningen. M.S. is supported by the NWO project 613.009.10.
2306.11258
Deep Learning of Dynamical System Parameters from Return Maps as Images
We present a novel approach to system identification (SI) using deep learning techniques. Focusing on parametric system identification (PSI), we use a supervised learning approach for estimating the parameters of discrete and continuous-time dynamical systems, irrespective of chaos. To accomplish this, we transform collections of state-space trajectory observations into image-like data to retain the state-space topology of trajectories from dynamical systems and train convolutional neural networks to estimate the parameters of dynamical systems from these images. We demonstrate that our approach can learn parameter estimation functions for various dynamical systems, and by using training-time data augmentation, we are able to learn estimation functions whose parameter estimates are robust to changes in the sample fidelity of their inputs. Once trained, these estimation models return parameter estimations for new systems with negligible time and computation costs.
Connor James Stephens, Emmanuel Blazquez
2023-06-20T03:23:32Z
http://arxiv.org/abs/2306.11258v1
# Deep Learning of Dynamical System Parameters from Return Maps as Images ###### Abstract We present a novel approach to system identification (SI) using deep learning techniques. Focusing on parametric system identification (PSI), we use a supervised learning approach for estimating the parameters of discrete and continuous-time dynamical systems, irrespective of chaos. To accomplish this, we transform collections of state-space trajectory observations into image-like data to retain the state-space topology of trajectories from dynamical systems and train convolutional neural networks to estimate the parameters of dynamical systems from these images. We demonstrate that our approach can learn parameter estimation functions for various dynamical systems, and by using training-time data augmentation, we are able to learn estimation functions whose parameter estimates are robust to changes in the sample fidelity of their inputs. Once trained, these estimation models return parameter estimations for new systems with negligible time and computation costs. dynamical systems, system identification, machine learning, deep learning, chaos ## 1 Introduction The natural sciences are experiencing a watershed moment with the recent success of sophisticated, data-driven methods for approaching problems that have resisted traditional analytical and optimization-based methods, such as AlphaFold for protein structure prediction [1], graph-based neural network models for weather forecasting [2] and the control of tokamak plasmas using deep reinforcement learning [3]. These recent successes have been driven by innovations in large-scale data-driven modeling and rapid advancements in specialized computer hardware that have propelled the rise of deep learning methods alongside domain-specific expertise and techniques from scientific sub-fields. In line with this trend, we present a novel method for solving parametric system identification (PSI) by learning a regression function from return maps to system parameters. System identification (SI) is the process of modeling and analyzing the behavior of dynamical systems based on observed input and output data [4]. Accurately characterizing and even predicting the behavior of a dynamical system is an invaluable tool in various areas spanning engineering, physics, biology, and economics. It is particularly valuable in control applications, where accurate modeling of a system can result in dramatic performance and robustness improvements [5; 6; 7; 8]. This work focuses on parametric system identification, which is concerned with identifying the underlying parameters of the target system given some parametrized class of dynamical systems believed to contain, or at least closely approximate, the target system [9]. To understand how and why we solve PSI as a regression problem, it is helpful to place this approach in opposition to the more traditional setting of PSI as an optimization problem to be solved by means of meta-heuristic algorithms such as genetic and evolutionary algorithms [10; 11; 12]. Given a parametric class of dynamical systems and data from a target system, in the optimization setting one has a way to compute simulated trajectories given parameter estimates, and the goal is to find parameter estimates which match those of the target system. To accomplish this, one typically uses a loss function which acts as a 'dissimilarity' score between trajectories generated using candidate system parameters and the observed trajectories.
An optimization scheme is then used to select new candidate parameters for the system. This process is iterated until some computational budget has been exhausted or the loss converges. The observed losses of parameter estimates are then used to select a final estimate, e.g. the parameter estimate with the lowest loss observed in the optimization process. One of the challenges of this approach is choosing the loss function that is used to evaluate the quality of a given set of parameter estimates. Given equal-length simulated and observed trajectories, one of the most common approaches is to minimize the average error between the pairs of simulated and observed trajectory points [10; 11; 12; 13; 14]. This can cause issues when dealing with chaotic systems, which have the characteristic property that trajectories that are initially neighboring in state space can diverge exponentially over time. This means that such a loss function can be highly sensitive to small measurement errors in the initial conditions of the target system. To address this issue, recent work has developed loss functions that compare trajectories as signals at the state-space level, as opposed to the time-domain level, where the trajectories of chaotic systems tend to take a more structured form [15; 16]. These works have proposed several heuristic loss functions for comparing the similarity of trajectories in state space [15; 16; 17; 18; 19]; however, there is currently no clear choice as to which state-space loss function is best suited for parameter estimation. Even working in state space, current methods for PSI solve an optimization problem using an iterative procedure that involves repeatedly simulating trajectories using system parameter estimates, which for continuous-time systems can be a time- and computation-consuming process. Worse still, for simple discrete maps it is often feasible to compute the gradients of the loss function with respect to the system parameters [20]; however, a numerical integrator is typically required for continuous-time dynamical systems. This makes evaluating or estimating gradients significantly more challenging. Due in part to this, most previous work has viewed its choice of loss function as a non-differentiable function of the candidate parameters and used various zeroth-order optimization methods such as particle-based methods and genetic algorithms. See [9] for a review of methods based on these computational intelligence approaches. These potentially expensive iterations of simulation and optimization limit the applicability of these methods in real-time settings or when processing large collections of systems. This work explores an alternative approach to PSI for chaotic systems, framing it as a supervised learning problem of mapping state-space data to system parameters. We were initially motivated by the observation that after seeing sufficiently many examples of return maps, a human expert can begin to identify patterns that hint at the parameters of the underlying dynamical systems. Following this intuition, we transform collections of state-space trajectory observations into image-like data to retain the state-space topology of trajectories from dynamical systems and use these collections to train convolutional neural networks to estimate the parameters of dynamical systems from these images. In our approach, the regression loss _is_ the parameter estimation error, avoiding the use of heuristic loss functions.
Additionally, with almost all of the time and computational cost occurring during data collection and model training, our estimation models return parameter estimates for new samples with negligible time and computation costs at inference time, neatly complementing the drawbacks of optimization-based approaches. In the remainder of this paper, we formalize the parametric system identification problem as it has conventionally been approached and follow this by presenting our supervised learning problem in Section 2. We then detail our solution method for this new problem formulation, including generating trajectory datasets, extracting useful features from them, and using them to train a parameter estimation model in Section 3. We present the results of applying our method to both discrete- and continuous-time dynamical systems in Section 4 and discuss our findings in Section 5. We conclude by discussing the opportunities and challenges which are introduced with this new approach to analyzing dynamical systems in Section 6. ## 2 Background ### Dynamical Systems We consider the problem of estimating the parameters of a \(p^{\text{th}}\)-order autonomous dynamical system, defined on some state-space \(\mathcal{X}\subseteq\mathbb{R}^{p}\) with \(p\geq 1\) by parametric dynamics \(F(\cdot\ ;\ \theta):\mathbb{R}^{p}\rightarrow\mathbb{R}^{p}\) with \(\theta\in\Theta\subseteq\mathbb{R}^{d}\) for some \(d\geq 1\) by the state equation \[\begin{cases}\dot{\mathbf{x}}(t)=F(\mathbf{x}(t);\ \theta),t\in\mathbb{R}&\text{for a continuous flow,}\\ \mathbf{x}_{k+1}=F(\mathbf{x}_{k};\ \theta),k\in\mathbb{Z}&\text{for a discrete map.}\end{cases} \tag{1}\] Dynamical systems frequently appear in the study of classical mechanics, with the Hamiltonian of some physical system defining the dynamics \(F\) on some configuration manifold. For our purposes, the dynamics \(F\) can be any function of the state \(\mathbf{x}\) and the parameters \(\theta\) which characterizes the evolution of the state of the system in time according to equation (1). In this section, we restrict our discussion of parameter estimation to discrete dynamical systems since our approach for handling continuous-time systems is to use Poincare sections and their respective Poincare maps to represent them with discrete-time systems for analysis [21]. #### 2.1.1 Poincare Maps A Poincare map is defined by a dynamical flow along with an oriented hypersurface in state space, referred to as a Poincare section. Some references remove the requirement that the section is oriented, resulting in a so-called 'two-sided' Poincare map. This choice does not have significant consequences for our purposes. A Poincare map is the unique map associated with the original dynamical system, which takes a point \(P\) in the Poincare section to the next point \(P^{\prime}\) in the Poincare section where the flow of the original dynamical system, originating from \(P\), crosses the oriented section in the same direction as at \(P\). For this reason, Poincare maps are sometimes called 'first recurrence', or 'return maps'. In this work, when we refer to return maps, we refer either to a collection of state-space observations from a discrete map or from the discrete map induced by a continuous time dynamical system along with a particular choice of Poincare section. Poincare maps allow us to present a unified method for parametric system identification on both discrete maps and continuous time dynamical systems.
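As an illustration of how section crossings can be extracted from a densely sampled flow trajectory, here is a minimal Python sketch (ours, not from the paper); the choice of section, the hyperplane where one coordinate vanishes, crossed in the direction of increasing coordinate, and the linear-interpolation refinement are simplifying assumptions:

```python
import numpy as np

def section_crossings(xs, coord=2):
    """Extract Poincare-section crossings from a densely sampled trajectory.

    xs : (T, p) array of states sampled along one flow trajectory.
    The section is the hyperplane {x[coord] = 0}, oriented so that only
    crossings with increasing x[coord] count.  Crossing states are refined
    by linear interpolation between the two bracketing samples.
    """
    s = xs[:, coord]
    idx = np.where((s[:-1] <= 0) & (s[1:] > 0))[0]   # upward sign changes
    w = -s[idx] / (s[idx + 1] - s[idx])              # interpolation weights
    return xs[idx] + w[:, None] * (xs[idx + 1] - xs[idx])
```

In practice an event-detecting integrator (such as the Heyoka software used in Section 4) locates crossings to far higher accuracy than interpolation between fixed time steps.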
### Parametric System Estimation In the parametric system identification problem, we are given the parametric form of \(F\) and are tasked with constructing an estimate \(\hat{\theta}^{*}\) on the basis of a collection of \(n\) observed trajectories \(\mathbf{T}=(\tau_{1},\tau_{2},\dots,\tau_{n})\) where each trajectory \(\tau_{i}=(\mathbf{x}_{1}^{i},\mathbf{x}_{2}^{i},\dots)\) is an indexed collection of elements from \(\mathcal{X}\) which follows \(\mathbf{x}_{k+1}^{i}=F(\mathbf{x}_{k}^{i};\ \theta^{*})\). Most previous work on PSI considers the case when \(n=1\), but it is straightforward to generalize to the case of multiple sample trajectories. The optimization approach to estimating \(\theta^{*}\) from a collection of observed trajectories of the target system, \(\mathbf{T}\), is to solve an optimization problem of the form \[\hat{\theta}^{*}=\underset{\theta\in\Theta}{\arg\min}\ \ell(\theta,\mathbf{T}), \tag{2}\] where \(\ell:\Theta\times\mathcal{X}^{n\times m}\rightarrow\mathbb{R}^{+}\) is a loss function that maps parameter estimates to positive real numbers, given observed trajectories \(\mathbf{T}\), and where we assume for simplicity that each of the \(n\) observed trajectories in \(\mathbf{T}\) consists of \(m\) points. The value of \(\theta^{*}\) influences \(\mathbf{T}\) through the dynamics \(F(\cdot;\ \theta^{*})\) imposed on the trajectories. For this reason, previous work has typically defined \(\ell\) by composing a loss function \(f\) that compares sets of trajectories with a numerical solver that is used to produce simulated trajectories corresponding to a parameter estimate. Concretely, to evaluate the loss \(\ell(\hat{\theta}^{*},\mathbf{T})\) of a parameter estimate \(\hat{\theta}^{*}\) against trajectories observed from the true parameter \(\theta^{*}\), practitioners use a numerical solver to create a collection of trajectories \(\mathbf{T}(\hat{\theta}^{*})=(\hat{\tau}_{1},\hat{\tau}_{2},\ldots,\hat{\tau}_{n})\) with \(\hat{\tau}_{i}=(\hat{\mathbf{x}}_{1}^{i},\hat{\mathbf{x}}_{2}^{i},\ldots,\hat{\mathbf{x}}_{m}^{i}),\hat{\mathbf{x}}_{k}^{i}\in\mathcal{X}\) and \(\hat{\mathbf{x}}_{k+1}^{i}=F(\hat{\mathbf{x}}_{k}^{i};\ \hat{\theta}^{*})\). They then evaluate a loss \(f(\mathbf{T}(\hat{\theta}^{*}),\mathbf{T})\) that assigns values to pairs of collections of trajectories. This composition of a numerical solver and loss function over sets of trajectories defines an implicit loss function on \(\Theta\) given a collection of observed trajectories \(\mathbf{T}\), \[\ell(\hat{\theta},\mathbf{T})\coloneqq f(\mathbf{T}(\hat{\theta}),\mathbf{T}). \tag{3}\] The method proposed in [20] does not follow this description, though as mentioned earlier, this method is not applicable when numerical methods are required to approximate trajectories, e.g., when they correspond to a Poincare map. #### State-space Representations of Trajectories Parameter estimation is especially challenging when dealing with dynamical systems which exhibit chaotic behavior. This is because of the characteristic sensitivity of these systems to their initial conditions. Even if the exact parameters of a chaotic system are known, any measurement error in the system's initial conditions can result in a simulated trajectory that diverges exponentially from the target observation set over time.
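To make this sensitivity concrete, here is a small sketch (ours, not from the paper) iterating two copies of the Hénon map of equation (7), introduced in Section 4.1, from initial states \(10^{-9}\) apart; the parameter values \(a=1.4,b=0.3\), the classic chaotic regime, are chosen purely for illustration and are not those used in the paper's experiments:

```python
def henon(x, y, a=1.4, b=0.3):
    # One step of the Henon map of equation (7); a = 1.4, b = 0.3 is the
    # classic chaotic regime (an illustrative choice, not the paper's range).
    return 1.0 - a * x ** 2 + y, b * x

p, q = (0.1, 0.1), (0.1 + 1e-9, 0.1)
for k in range(51):
    if k % 10 == 0:
        print(k, abs(p[0] - q[0]))   # separation grows roughly exponentially
    p, q = henon(*p), henon(*q)
```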
Due to these issues, choices of \(f(\mathbf{T}(\hat{\theta}),\mathbf{T})\) which are constructed around pairwise differences of \(\hat{\mathbf{x}}_{k}^{i}\) and \(\mathbf{x}_{k}^{i}\) in the time-domain are often poorly-conditioned with respect to \(\hat{\theta}\) as well as any measurement error in the initial conditions \(\mathbf{x}_{1}^{1},\mathbf{x}_{1}^{2},\ldots,\mathbf{x}_{1}^{n}\) of the target system. A choice of \(f\) which appears frequently in the literature is the temporal mean squared error (MSE) \[f_{\text{MSE}}(\mathbf{T}(\hat{\theta}),\mathbf{T})\coloneqq\frac{1}{mn}\sum_{i=1}^{n}\sum_{k=1}^{m}\|\hat{\mathbf{x}}_{k}^{i}-\mathbf{x}_{k}^{i}\|^{2}. \tag{4}\] To address these issues, recent work such as [15; 16] has investigated the use of state-space representations of trajectories to compare simulated trajectories to the target data. The basic idea of this method is outlined by Jafari et al. [15]. While chaotic systems appear to have highly disordered behavior when considering trajectories in the time domain, they are known to have a more structured topology in state-space, which for our purposes refers to considering trajectories as unordered sets of points in state-space as opposed to time-indexed points. Chaotic attractors manifest this phenomenon, wherein collections of dynamical system trajectories that appear stochastic in the time domain are constrained to a low-dimensional manifold in state space. This property of chaotic systems makes using state-space representations of trajectories from dynamical systems appealing for designing the function \(f\) in the right-hand side of equation (3). The main contribution of [15] was the construction of a loss function between _return maps_. Their loss function considers \(\mathbf{T}(\hat{\theta})\) and \(\mathbf{T}\) as structureless sets of points in \(\mathcal{X}\), and measures the average of the Euclidean distances between each point in the simulated data and its nearest neighbor in the target data, and vice versa. ### Supervised Machine Learning for Parametric System Identification So far, we have only discussed the optimization approach to parametric system identification (equation (2)). Our novel solution for PSI instead frames parameter estimation as a supervised machine-learning problem (see e.g., [22]). In our approach, we aim to identify a single function that maps from observations of trajectories from a discrete dynamical system to an estimate of the system's parameter, \(g:\mathcal{X}^{n\times m}\rightarrow\mathbb{R}^{d}\), where \(g\in\mathcal{F}\) is an element of some function class \(\mathcal{F}\). We select \(g\) through a data-driven optimization process that attempts to minimize the parameter estimation error of \(g\), assuming access to a large dataset of input-output pairs \(\mathcal{D}=(\mathbf{T}_{i},\theta_{i})_{i=1}^{N}\) with a _wide range_ of values \(\theta_{i}\in\Theta\). The \(i^{\text{th}}\) pair in this set consists of a collection of trajectories \(\mathbf{T}_{i}\) along with the parameter, \(\theta_{i}\), which generated them. Concretely, given a parameterized class of dynamical systems, we use real or simulated observations from systems with known parameters to construct a dataset, \(\mathcal{D}\).
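A minimal sketch of this dataset construction for a discrete map (our own simplified rendering of the procedure detailed in Section 3.1; the map `step` is assumed to act on a batch of states at once, and initial states are drawn uniformly from a box, as in the Hénon experiments of Section 4.1):

```python
import numpy as np

def make_dataset(step, theta_grid, n=225, m=250, box=(-4.0, 4.0), seed=0):
    """Build pairs (theta_i, T_i) for the discrete map x_{k+1} = step(x_k, theta).

    theta_grid : iterable of parameter values theta_i
    n, m       : trajectories per parameter and points per trajectory
    box        : interval from which each coordinate of an initial state is drawn
    """
    rng = np.random.default_rng(seed)
    dataset = []
    for theta in theta_grid:
        traj = np.empty((n, m, 2))
        traj[:, 0] = rng.uniform(*box, size=(n, 2))  # n random 2-d initial states
        for k in range(1, m):                        # iterate the map m - 1 times
            traj[:, k] = step(traj[:, k - 1], theta)
        dataset.append((np.asarray(theta), traj))
    return dataset
```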
We then use \(\mathcal{D}\) to estimate an optimal parameter estimation function, \(g^{*}\), defined to minimize the mean squared prediction error, or another suitable choice of loss function, over some joint probability distribution \(\mathbb{P}\) of possible values of \(\theta^{*}\) and the resulting observations \(\mathbf{T}(\theta^{*})\): \[g^{*}=\underset{g\in\mathcal{F}}{\text{arg min}}\ \mathbb{E}_{(\theta,\mathbf{T})\sim\mathbb{P}}\left[\|\theta-g(\mathbf{T}(\theta))\|^{2}\right]. \tag{5}\] For example, \(\mathbb{P}\) could be the joint distribution corresponding to \(\theta\) drawn uniformly over a bounded subset of \(\mathbb{R}^{d}\) with trajectories \(\mathbf{T}(\theta)\) generated from \(\theta\) in some consistent but possibly random fashion, e.g., applying a deterministic numerical integration software to randomly sampled initial states. Given an estimate of the minimizer of equation (5), \(\hat{g}^{*}\), and a new collection of trajectories \(\mathbf{T}(\theta^{\prime})\) corresponding to parameter \(\theta^{\prime}\), we take \(\hat{g}^{*}(\mathbf{T}(\theta^{\prime}))\) to be our estimate \(\hat{\theta}^{\prime}\). We have purposefully left the form of the function class \(\mathcal{F}\) vague in this section. These details are clarified in Section 3.3, where we outline our supervised machine learning approach. ## 3 Parameter Estimation via Deep Learning on Return Maps Estimating system parameters directly from observed trajectories is a challenging problem for the same reasons outlined in Section 2.2.1. Drawing from the lessons of earlier work on PSI for chaotic systems, rather than working with trajectories as time-series data, we restrict our attention to functions \(g\) which only consider the state-space information of the trajectory collections \(\mathbf{T}_{i}\) that are passed as input. Further, we transform this data into a form that allows us to make use of existing CNN architectures to construct a parameter estimation function. Specifically, we discretize regions of state space into pixels and use this discretization scheme to transform return maps into coarse, single-channel images, which we use as the input 'features' for our models. Figure 1 conveys the central intuition for our work. Different parameter values for the Henon map result in visually distinct return maps. This observation suggests that it might be possible to learn a regression function from these images to parameter estimates. Our experiments show that it is possible to learn such a regression function from data. The main contribution of this work is the demonstration that our method of predicting system parameters from images of return maps is viable, introducing the possibility of further applications of deep learning on return maps. For simplicity, we consider dynamical systems with two-dimensional return maps in this work. However, conceptually, our method generalizes to higher dimensions at the cost of additional computation and data requirements commensurate with the curse of dimensionality [22]. ### Dataset Generation Developing a supervised machine learning model for predicting a system's parameters necessitates a dataset to train the model. Dataset generation is straightforward after specifying a parameterized dynamical system and the range of parameter values of interest. To ensure that the trained model can perform well on the entire range of parameter values, we first selected a large collection of evenly spaced parameter values.
For the \(i^{\text{th}}\) parameter value \(\theta_{i}\), we sampled \(n\) initial conditions and iterated the dynamical map \(m-1\) times in the discrete case, or made use of the Heyoka numerical integration software [23] to evaluate a total of \(m\) crossing points for a specified crossing section. Figure 1: Return maps generated by sampling states from the uniform distribution over the region \(x,y\in[-2,2]\times[-2,2]\) and plotting 250 iterations of the Henon Map (equation (7)) from each initial state. The different colors correspond to trajectories for different initial states. The result of this generation process was a collection of input-output pairs, where the \(i^{\text{th}}\) pair consists of the parameter value \(\theta_{i}\) together with the collection \(\mathbf{T}_{i}\) of \(n\) discrete trajectories of length \(m\). In the case of dynamical flows, it can be simpler to propagate the system forward until a fixed system time. This can lead to the trajectories starting from the different initial states having different numbers of crossings through the section. This detail does not introduce any issues for our method, but it is a consideration in practice. ### Feature Processing As discussed in the previous section, there are inherent benefits to considering state-space representations of trajectories. For each input-output pair \((\mathbf{T}_{i},\theta_{i})_{i=1}^{N}\) in our dataset, we take the trajectories \(\mathbf{T}_{i}\) and 'flatten' them into a state-space representation, essentially overlaying coarse scatter-plots of each trajectory in state-space. We define this transformation by selecting a state-space region and splitting it into 'pixels', or a higher-dimensional discretization of the space in a more general setting, e.g. 'voxels' in three dimensions. Our experiments were performed with an axis-aligned uniform grid of \(128\times 128\) pixels. Given the \(i^{\text{th}}\) collection of trajectories \(\mathbf{T}_{i}\), the pixels were 'shaded' by defining the value of the pixel in the \((h,w)\in\{0,1,\ldots,127\}^{2}\) position in the \(i^{\text{th}}\) sample by \[\mathbf{P}_{i,h,w}\coloneqq\alpha^{n_{i,h,w}}, \tag{6}\] where \(n_{i,h,w}\) is the number of points in \(\mathbf{T}_{i}\) which lie in the region of space ascribed to the pixel at position \((h,w)\) and \(\alpha\in(0,1]\) is a transformation parameter which determines the exponential base that the pixels 'darken' with. We will use \(\mathbf{P}_{i}\) to refer to the collection of pixels \(\mathbf{P}_{i,h,w}\), or 'pixelized return map'. The input-output pairs after this transformation are then \((\theta_{i},\mathbf{P}_{i})_{i=1}^{N}\). The result of this process is shown in the rightmost column of Figure 2. We performed all experiments with \(\alpha=7/10\). #### Data Augmentation We experimented with an additional step during model training before creating each pixelized return map. Data augmentation [24] is a common practice in training machine learning models in which carefully chosen random modifications are made to training samples to improve the generalizability of models to new data. In short, data augmentation can reduce the sensitivity of the estimation performance of trained models to certain changes to their input. An example of this method in image recognition tasks is making small random changes to the cropping of input photos.
The rationale is that, for example, the subject of a photo with small differences in cropping is the same, and we would like our models to respect this invariance. An additional benefit of data augmentation is that it acts as a regularization mechanism that reduces the tendency of trained models to overfit to patterns in the training dataset at the expense of worse performance on new data. In our setting, we performed data augmentation to make our model robust to changes in the number and length of the trajectories we flattened to form the images. Specifically, given the \(i^{\text{th}}\) input-output pair, we first sampled \(N_{\text{traj}}\sim\text{Uniform}(10,n)\). We then selected \(N_{\text{traj}}\) of the \(n\) trajectories in \(\mathbf{T}_{i}\) uniformly at random. Figure 2: Transforming subsampled return maps into images. Each row shows the processing of one return map for input to the model. The left column shows return maps sampled in the same fashion as in Figure 1. The middle column shows the output of our random data augmentation scheme, i.e. taking a random subset of the trajectories and truncating them to a random length. The right-most column shows examples of the single-channel images fed as input to the model. The images are obtained by discretizing a region of state-space, here the region \(\mathbf{x}\in[-4,4]^{2}\), into pixels and darkening each pixel following equation (6). Next, we sampled \(N_{\text{steps}}\sim\text{Uniform}(10,m)\), and for each of the \(N_{\text{traj}}\) trajectories we took only the first \(N_{\text{steps}}\) points. The resulting collection of \(N_{\text{traj}}\) trajectories of length \(N_{\text{steps}}\) was then used to create a pixelized return map as discussed in Section 3.2. Figure 2 shows the process of creating pixelized return maps with this data augmentation method. The choice of 10 as the lower limit on the number of trajectories and steps in the augmentation step is another hyperparameter. Although we do not explore other choices for this value in this work, one should consider that choosing a value too close to \(m\) or \(n\) limits the diversity of the training data; this may reduce the regularization effect. On the other hand, training the model on samples with too few trajectories or with too few steps may result in a trained model which fails to generalize to more densely-sampled return maps. ### Parameter Estimation Model Our experiments were all performed with a ResNet18 neural network [25], a small model from a widely used family of deep-learning models which feature convolutional layers and residual connections designed for computer-vision tasks. The structure of the network is depicted in Figure 3. We selected this architecture as the model's residual connections and batch-normalization layers [26] lead to relatively easy training of the model with current methods. In the context of the formalism that we introduced in Section 2.3, we consider estimation functions \(g\in\mathcal{F}\) which are the composition of our transformation from collections of trajectories to images of return maps, with a ResNet18 neural network with learnable parameters. Selecting a specific function \(g\) amounts to setting the learnable parameters of the ResNet18, and possibly the parameters which describe the transformation from trajectories to images.
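A minimal sketch of these two pieces of \(g\), the pixelization of equation (6) and a single-channel ResNet18: the paper states that the TorchVision ResNet18 is used with its first convolutional layer modified to accept one input channel; the replacement of the final linear layer with a \(d\)-output regression head shown here is our assumption:

```python
import numpy as np
from torch import nn
from torchvision.models import resnet18

def pixelize(points, bounds=(-4.0, 4.0), res=128, alpha=0.7):
    """Flatten pooled return-map points into a single-channel image, eq. (6):
    the pixel value is alpha ** (number of points landing in that pixel)."""
    lo, hi = bounds
    counts, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                                  bins=res, range=[[lo, hi], [lo, hi]])
    return alpha ** counts          # empty pixels -> 1.0, crowded pixels -> ~0

def build_model(d):
    """TorchVision ResNet18 with a one-channel first convolution (as stated
    in the paper) and, as our assumption, a d-output linear regression head."""
    net = resnet18(weights=None)
    net.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
    net.fc = nn.Linear(net.fc.in_features, d)
    return net
```

A pooled `(K, 2)` array of trajectory points `pts` then becomes a model input via, e.g., `torch.from_numpy(pixelize(pts)).float()[None, None]`.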
#### Model Training Given a dataset \(\mathcal{D}\) of \(N\) input-output pairs, we partitioned the pairs into training (\(\mathcal{D}_{\text{train}}\)), validation (\(\mathcal{D}_{\text{validation}}\)) and testing (\(\mathcal{D}_{\text{test}}\)) datasets containing respectively 65%, 15% and 20% of the pairs in \(\mathcal{D}\). This split was performed in a manner that ensured that each partition contained a representative mix of samples from the full range of parameter values \(\Theta\). As is standard in training supervised machine learning models, \(\mathcal{D}_{\text{train}}\) and \(\mathcal{D}_{\text{validation}}\) were used to optimize the parameters of each model. Specifically, \(\mathcal{D}_{\text{train}}\) was used to optimize the learnable parameters of the ResNet18 model using stochastic mini-batch gradient descent with the Adam optimizer [29] as our optimization routine. We used default parameters for the optimizer other than the learning rate and value of the weight decay parameter. When using data augmentation, we performed fresh random augmentation on each pass over the training set, i.e., the model essentially never saw the same sample twice in the training routine. When applicable, the validation set \(\mathcal{D}_{\text{validation}}\) had random data augmentation applied only once at the start of training. We fixed the validation set augmentation to provide a consistent set to evaluate the performance of each model, reducing the variance of the validation error. We used the model's error on \(\mathcal{D}_{\text{validation}}\) to optimize the hyper-parameters of our optimization routine. We focused on tuning the Adam optimizer's learning rate and weight decay parameters, as well as the number of training samples used to evaluate each step of the optimizer (commonly referred to as the training batch size). After model training, we opted to select the model version with the smallest validation loss observed during the training process. The resulting model's performance was then evaluated on \(\mathcal{D}_{\text{test}}\), providing an unbiased estimation of model performance as the test set consists of samples not used in the model training or model selection process. ### Loss Function By framing parametric system identification as a supervised machine learning problem, we can develop an estimation model that directly optimizes a loss function on the estimation error. Still, the appropriate loss function to optimize depends on the intended purpose of the parameter estimates. For example, simply choosing the mean squared error may be sufficient for systems with a single parameter. However, for systems with parameters \(\theta\in\mathbb{R}^{d}\) for \(d>1\), we may require greater precision when estimating some coordinates of \(\theta\) than others. One way to encode this information is in the loss function used during model training. For example, in Section 4.1 we use a weighted loss function to ensure that the relative errors of our model's parameter estimates are roughly equal for both Henon map parameters. Figure 4 outlines the entire process of creating a dataset from a dynamical system, training a supervised machine learning model, and deploying the model for parameter estimation. ## 4 Experiments We trained parameter estimation models on continuous- and discrete-time dynamical systems and performed experiments to understand the sensitivity of our method to the amount of available training data, as well as the impact of our data augmentation process. 
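A condensed sketch of the training routine just described, with Adam on a weighted squared error (Section 3.5), fresh random augmentation on every pass over the training set, and selection of the weights with the lowest validation loss; the caller-supplied `batches_fn` is a hypothetical stand-in for the sampling and augmentation of Sections 3.1-3.2:

```python
import copy
import torch

def train(model, batches_fn, val_loader, sigma, epochs=50, lr=1e-3, wd=1e-4):
    """batches_fn: callable yielding freshly augmented (image, theta) mini-batches
    on each call (one pass over the training set); sigma: per-coordinate loss
    weights, as in the weighted squared error of Section 3.5."""
    opt = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=wd)
    best = (float("inf"), None)
    for _ in range(epochs):
        model.train()
        for imgs, thetas in batches_fn():           # fresh augmentation per pass
            loss = ((sigma * (model(imgs) - thetas)) ** 2).mean()
            opt.zero_grad(); loss.backward(); opt.step()
        model.eval()
        with torch.no_grad():                       # fixed, pre-augmented val set
            val = sum(((sigma * (model(x) - y)) ** 2).mean().item()
                      for x, y in val_loader) / len(val_loader)
        if val < best[0]:                           # keep lowest-validation weights
            best = (val, copy.deepcopy(model.state_dict()))
    model.load_state_dict(best[1])
    return model
```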
We show results for two systems in this work. In order to build intuition for our proposed method, we begin with the simpler case of parameter estimation for a discrete map before showing experiments for a continuous-time dynamical system. In what follows we will refer to models that are trained with or without augmented data as _augmented_ and _non-augmented_ models respectively. All of our experiments were carried out using a single NVIDIA® GeForce® RTX 2080 Ti GPU and two Intel® Xeon® E5-2650L v4 CPUs with a clock rate of 1.70GHz and 251GB of RAM. ### Henon Map In our first experiment, we estimate the parameters of the Henon Map [30], a well-studied discrete dynamical system that maps pairs of points \((x_{k},y_{k})\in\mathbb{R}^{2}\) to \((x_{k+1},y_{k+1})\) according to \[\begin{cases}x_{k+1}=1-ax_{k}^{2}+y_{k}\\ y_{k+1}=bx_{k},\end{cases} \tag{7}\] where \(a,b\in\mathbb{R}\) are the parameters which characterize the system. Figure 3: A network architecture diagram for the ResNet18 neural network [25]. The convolutional (conv), batch-norm (BN) [26], and linear layers contain ‘learnable’ parameters, which we optimize during the model training process using stochastic gradient descent. The network’s name comes from the fact that each input image passes through 17 convolutional layers and one linear layer. We used the PyTorch [27] TorchVision [28] library’s implementation of ResNet18, only modifying the first convolutional layer to take single channel images as input. For our experiments, we considered the Henon map parameters \(a\in[0.05,.45],b\in[-1.1,1.1]\). We constructed a database of input-output pairs by taking a large number of \(a,b\) points spaced evenly in \([0.05,.45]\times[-1.1,1.1]\). For the \(i^{\text{th}}\) \(a,b\) pair, we generated trajectories by applying the Henon Map with parameters \(a,b\) 250 times to each point of a uniform grid of \(225\,(=15^{2})\) initial \(x,y\) points over the region \([-4,4]\times[-4,4]\). With this collection of parameters and trajectories \(\mathcal{D}\), we processed features as outlined in Section 3.2 to obtain image-like data, which we then used to train a ResNet18 neural network to make parameter estimates in \(\mathbb{R}^{2}\). Given the different scales of the \(a\) and \(b\) parameters in this task, we chose to measure our parameter prediction performance according to a weighted mean squared error loss \[\ell\left((\hat{a},\hat{b}),(a,b)\right)=\frac{1}{2}\left(\sigma_{a}^{2}(\hat{a}-a)^{2}+\sigma_{b}^{2}(\hat{b}-b)^{2}\right), \tag{8}\] where \((\sigma_{a},\sigma_{b})=(25,4.\overline{54})\) were chosen so that \(\sigma_{a}\cdot(a-a_{\text{mid}})\) and \(\sigma_{b}\cdot(b-b_{\text{mid}})\) take values in the interval \([-5,5]\), where \((a_{\text{mid}},b_{\text{mid}})=(0.25,0)\) is the center of the parameter region \([0.05,.45]\times[-1.1,1.1]\). #### 4.1.1 Results Figure 5 shows input and output examples from a test set for a trained parameter estimation model. As well as displaying the output parameter estimates of the network, the figure shows return maps that correspond to these parameter estimates, providing a qualitative assessment of our model's estimation accuracy. The inputs to the estimation model were noisily sampled in the same fashion as in our data augmentation process (Section 3.2.1). Figure 4: Schematic diagram showing the process of generating training data, training the parameter prediction model, and finally using the trained model for parameter estimation.
We also show more refined return maps, generated without this noisy sampling, that correspond to the true parameter values for a more direct comparison with the return maps corresponding to the estimation model's outputs. For a perfect estimation model, the return maps in the middle and right columns would be essentially identical. The example in the third row of Figure 5 illustrates the challenge of estimating parameters from the sparsely sampled return maps that our data augmentation process can create. The 'ground truth' return map and the return map corresponding to the parameter estimates visually differ near the center of the image but are otherwise similar. The input to the network lacks samples in this central region, which makes the model's estimate plausible given its input. Figure 6 summarizes the results of an experiment investigating the impact of training dataset size and our data augmentation process on parameter estimation error on the Henon map. We emphasize two aspects of this figure, the first of which is that the test error of our estimation models decreases in an approximately power-law fashion with the size of the training set. A second aspect of Figure 6 is that the augmented models seem to have lower test error than non-augmented models for smaller numbers of training samples, until this trend reverses on larger training datasets. Figure 7 further explores the generalization gains observed from applying data augmentation to samples during model training. Both models evaluated in the figure appear to generalize across inputs with trajectory lengths ranging from 200 to 1000 samples, achieving a roughly constant error across different trajectory lengths. The clearest difference between models in Figure 7 is that the estimation error of the non-augmented model (diamond markers) grows up to 100 times larger on randomly augmented test inputs compared with the performance on inputs without this randomization (the dashed and solid lines respectively). In comparison, the augmented model displays a significantly smaller gap in performance on augmented versus non-augmented test data and, notably, demonstrated improved performance when tested on inputs without the random augmentation that the model was trained with. Figure 5: Inputs and outputs from a trained parameter estimation model for the Hénon map taking randomly augmented return map images as input. The left column shows the input to the network, as well as the ground truth values of \(a,b\). The center column shows high-quality return map images, not input to the network, corresponding to the system’s parameters, obtained by iterating the map for a uniform grid of initial conditions using the actual values of \(a,b\). The right column shows the model outputs and a return map obtained in the same fashion as the middle column but using the model’s _estimated values \(\hat{a},\hat{b}\)_. Figure 6: Test mean squared error of augmented and non-augmented models trained on datasets of increasing numbers of sample pairs from the Hénon map. Each model was trained for 25,000 optimization steps with the same configuration of the Adam optimizer, i.e. the same learning rate, weight decay, and batch size parameters. The five model versions with the lowest validation error observed during training were selected for each dataset size. We then evaluated these models on a test set, with model errors evaluated on inputs that matched their respective training distributions, i.e., we only applied data augmentation to the test samples for augmented models. Error bars represent bootstrapped 95% confidence intervals for models trained with three random seeds.
Finally, we examined the limits of the generalization abilities of augmented models. Figure 8 shows the results of an experiment in which we tested the estimation performance of such a model on test pairs with trajectory lengths between 50 and 1000 and with between 25 and 400 trajectories per sample. As in Figure 7, we observed that the model generalized well to short and long trajectory lengths, including ones longer than the maximum length of 225 samples in the training set. The model's performance was more significantly affected by the number of trajectories per sample, with performance improving as the number of trajectories increased from 25 to 225 before deteriorating significantly on samples with 324 and 400 trajectories per sample. ### The Swinging Atwood's Machine The swinging Atwood's machine (SAM) [31] is a Hamiltonian system with nonlinear dynamics, consisting of an Atwood's machine in which one of the mechanical bobs is allowed to swing from its pulley along the two-dimensional plane containing the bobs and their pulleys. The configuration manifold of the system is two-dimensional, consisting of the distance between the swinging pendulum and its pulley, \(r\in(0,\infty)\), and the angle formed between the swinging bob and the vertical, \(\phi\in(-\pi,\pi]\). This system exhibits chaotic motion for nearly all values of the mass ratio \(\mu\in(0,\infty)\) between the stationary and the swinging pendulum, taken as the parameter of the system [32]. For our experiments, we set the gravitational acceleration \(g\) and the system's mechanical energy \(E\) to unity, as is common when studying this system. In our experiments, we used examples with \(\mu\in[1.5,15]\). For ease of comparison with our Henon map results, we used a rescaled loss function \[\ell\left(\hat{\mu},\mu\right)=\frac{1}{2}\sigma_{\mu}^{2}(\hat{\mu}-\mu)^{2}, \tag{9}\] where \(\sigma_{\mu}=0.\overline{740}\) was chosen so that \(\sigma_{\mu}\cdot(\mu-\mu_{\text{mid}})\) takes values in the interval \([-5,5]\), where \(\mu_{\text{mid}}=8.25\) is the center of the parameter interval \([1.5,15]\). #### 4.2.1 Choice of Poincare Section For our experiments we used a common Poincare section for the SAM: the section where the state of the system \((r,\phi,\dot{r},\dot{\phi})\) passes through \(\phi=0\) with \(\dot{\phi}>0\). Figure 7: Test error for a pair of augmented and non-augmented models, trained on a Hénon map dataset of 8172 input-output pairs for 25,000 optimizer steps. The non-augmented model was trained on trajectories with length \(m=250\), whereas the augmented training data contained trajectories varying in length from \(m=10\) to \(m=250\). We then evaluated each model on test datasets which consisted of fixed-length trajectories, with and without test-time augmentation. For this experiment, the testing data augmentation consisted of taking a random subset of \(n\in\{10,11,\dots,225\}\) trajectories for each input-output pair. Error bands represent bootstrapped 95% confidence intervals for models trained with three random seeds. Figure 8: A heatmap (darker is better) displaying the logarithm of the test error of the Hénon map augmented model from Figure 7 (dashed orange curve), trained for 25,000 optimizer steps, on test datasets in which the number of trajectories is varied, as well as the length of trajectories.
Figure 9 shows return maps for this section corresponding to four different values of \(\mu\), including \(\mu=3\), for which the system is non-chaotic [31]. Given that our choice of Poincare section constrains \(\phi=0\) and the Hamiltonian of the system is time-independent, the system's state at each section crossing is uniquely determined by the values of \(r\) and its conjugate momentum \(p_{r}\). We used two-dimensional return maps that only consider these values, e.g. the return maps in Figure 9. #### 4.2.2 Data Generation To create the dataset \(\mathcal{D}\) for these experiments, we used 4,000 evenly spaced values for \(\mu\in[1.5,15]\). Because the SAM is a Hamiltonian system and we considered crossing points for which \(\phi=0\), one can determine that all crossing points of the system lie within a bounded region of the \((r,p_{r})\) plane for each value of the mass ratio parameter \(\mu\) at a fixed mechanical energy of 1. The roughly triangular boundary is visible in the different portraits in Figure 9. Using this fact, for each value of \(\mu\) we used Heyoka to integrate 256 initial states randomly selected from the energetically allowed regions of the state space for 1,000 units of system time. For each initial state, we recorded the state of the system each time it crossed through the Poincare section specified in Section 4.2.1. This data generation method resulted in trajectories of varying lengths, depending on how many times each trajectory passed through the section within 1,000 units of time, and so in order to perform data augmentation we cropped each trajectory down from its original length to between 1 and 250 points. #### 4.2.3 Results The experiments in this section closely mirror those for the Henon map system in Section 4.1.1. Figure 10 summarizes the results of an experiment investigating the impacts of training dataset size and our data augmentation process for the swinging Atwood's Machine system. Similar to Figure 6, we observe that the test error of our estimation models decreases in an approximately power-law fashion with the size of the training set, although we observe that the performance of the augmented model appears to plateau. Prior to this plateau, we see that the augmented models often perform better than non-augmented models. Figure 11 shows the results of our experiment looking at the generalization benefit of applying data augmentation to samples during model training. Figure 9: Return maps for the SAM system obtained using the Poincaré section described in Section 4.2.1. The system was integrated forwards for 1000 units of time for 200 initial conditions, drawn randomly from energetically-allowed initial states. Figure 10: Test mean squared error of augmented and non-augmented models on datasets of increasing numbers of sample pairs from the swinging Atwood’s Machine system. Each model was trained for 25,000 optimization steps using the same configuration of the Adam optimizer (learning rate, weight decay, batch size). For each dataset size, the five model versions with the lowest validation error observed during training were selected. We then evaluated these models on a test set, with model errors evaluated on inputs that matched their respective training distributions, i.e., we only applied data augmentation to the test samples for augmented models. Error bars indicate bootstrapped 95% confidence intervals from models trained with three different random seeds.
In contrast with Figure 7, we see that the performance of both models is worse on inputs with shorter trajectory lengths. Once again we observe that there is a smaller generalization gap for the augmented model, as evidenced by the vertical gap between the lines with circular markers versus those with diamond markers. ## 5 Discussion By framing PSI as a supervised machine learning problem, we circumvent the use of loss functions that compare simulated trajectories to observations. Instead, we directly learn estimation functions that minimize parameter estimation errors, which is a natural objective for parametric system identification. In this setting, the ability to obtain effective estimation functions from potentially limited datasets becomes a central concern. In our experiments, we observed an approximately power-law reduction in estimation errors by increasing the number of training samples used. This is a common phenomenon in statistical estimation problems [33] and implies that the availability of training data is an important factor in the success of our method. With that said, we observed that for small sample sizes, there were significant improvements in estimation accuracy when using random data augmentation during model training. While the reason for the empirical success of data augmentation is an open area of research (see e.g. [34]), one explanation for our observations is that data augmentation effectively increases the number and variety of training samples used to optimize the estimation model's parameters, resulting in a model which generalizes better to new samples. On the other hand, we also observed faster improvement in model performance with the size of the training set for non-augmented models, with the test MSE of these models overtaking their augmented counterparts on both dynamical systems in this paper. On this last point, while the test error of non-augmented models on larger training datasets dropped below that of augmented models, we would argue that the augmented models still have more favourable characteristics for a practical model, as we discuss next. Figures 7 and 11 provide a more complete characterization of the performance of augmented and non-augmented models trained on the largest datasets shown in Figures 6 and 10 - 8172 and 7680 samples, respectively. In these line plots, we observe that the augmented models have improved performance across input distributions of varying quality, especially on samples with shorter trajectories. This can be seen in both the smaller gap between test errors on augmented and non-augmented test samples and in the observation that the best-performing models on test inputs with the shortest trajectories were those trained with random data augmentation (left-hand sides of Figures 7 and 11). These results indicate that the use of these augmentation methods is currently a useful tool for building practical data-driven estimation models. All of our experiments were performed with \(\alpha=.7\) and \(128\) by \(128\)-pixel images. These 'hyper-parameters', as they are referred to in the machine learning community, can all in principle be optimized on validation data, as we did with the batch size and parameters of the optimization routine, at the cost of increased computation time. A consideration for the method we propose is that we rely on changes to system parameter values translating into changes to the resulting return maps.
If large changes to parameter values result in only small changes to return maps, then we would expect our method to have a large estimation error. Figure 11: Test error for a pair of augmented and non-augmented models, trained for 25,000 optimizer steps on a swinging Atwood’s Machine dataset consisting of 7680 input-output pairs. Both the non-augmented and augmented training trajectories varied in length from \(1\) to \(250\) points. We evaluated each model on test datasets which consisted of fixed-length trajectories, with and without test-time augmentation, which, for this experiment, consisted of taking a random subset of \(n\in\{10,11,\ldots,225\}\) trajectories for each input-output pair. Error bands represent bootstrapped 95% confidence intervals from models trained with three different random seeds. This behavior should also occur with all previous methods that rely on loss functions that compare observed and simulated trajectories in state space. In addition, this effect is likely benign in most scenarios: a large estimation error may be acceptable, since trajectory predictions and system analysis relying on the estimated parameters would remain reasonably accurate due to the low sensitivity of the dynamical system to changes in its parameter values. Continuous-time dynamical systems pose an additional challenge, in that if the chosen Poincare section fails to adequately capture key features of trajectories, then by unfortunate coincidence two different parameter values may result in different dynamics but share the same Poincare map. This issue is separate from a possible issue with the collected data, in which all parameter values for a system have distinct Poincare maps for a given Poincare section, but there are not enough data points collected for each parameter value to disambiguate between parameter values in some cases. Instead, we refer to a scenario where for distinct parameter values \(\theta\) and \(\theta^{\prime}\), the dynamics of the system result in the same Poincare map but different trajectories away from the Poincare section, in which case no amount of data collected on such a Poincare section can distinguish between systems parameterized by \(\theta\) versus \(\theta^{\prime}\). This consideration means that care should be taken to ensure that the choice of Poincare section used to study a dynamical system captures the dynamics of interest. Our method straightforwardly accommodates using two or more return maps for each input pair, for example, return maps corresponding to different Poincare maps for the same parameter value, by stacking each return map as an image channel before input to the ResNet parameter estimation model. This method effectively provides the estimation model with different 'cross-sections' of trajectory dynamics to use for parameter estimates and may lead to better estimates if one of the Poincare sections is not informative for some parameter values. ## 6 Conclusion In this work, we have introduced a novel solution method for parametric system identification (PSI), framing the problem of mapping sample trajectories to system parameters as a supervised machine learning problem. Combining this idea with recent approaches to PSI, which use state-space representations of trajectories, we show that our approach is effective for parameter identification on chaotic dynamical systems.
Since we use return maps as input to a supervised machine learning model, our method generalizes to continuous time dynamical systems using Poincare maps. Although training the estimation models is a compute-intensive process, the resulting models provide fast, inexpensive, and accurate parameter estimates that can be used to process large collections of data or as subcomponents of a more complicated program. While we focused on systems with two-dimensional return maps in this work, our method extends to more dimensions simply by replacing the two-dimensional pixelized transformation and subsequent convolution operations in the ResNet with their higher dimensional analogs. This work opens the door to future exploration of machine learning for PSI. In our current method, we effectively reduce the problem of PSI to a computer vision task, and so there are likely performance improvements, both in terms of estimation accuracy and in terms of reducing the number of samples required to reach high accuracy, to be gained by more carefully optimizing the estimation architecture used. One possible avenue is to use the wide availability of large, pre-trained deep learning models trained on image tasks with more layers than our ResNet18 model and only optimize the final layers of the network for the parameter estimation task. This 'transfer learning' [35] method can potentially reduce the number of samples required to train a new estimation model. A second, more open direction is to investigate more sophisticated methods for structuring return maps as input to a deep learning model. Our approach of flattening these maps into single-channel images introduces several extraneous parameters. In addition, pixelating return maps potentially removes some of the finer spatial structure present in the return map when using coarser pixelation schemes. An alternative approach could be to explicitly represent return maps as unstructured collections of points and use a graph neural network [36] as the parameter estimation model. Deep learning and other data-driven modeling and optimization methods such as reinforcement learning are becoming increasingly useful tools in the physical sciences. While these approaches can be powerful, there are often significant challenges associated with developing useful applications in different problem domains. In this work, we take a step in the right direction, and we invite the research community to explore the possibilities of using data-driven methods and return maps to analyze complex dynamical systems. **Acknowledgments.** ## Declarations ### Funding The authors declare that no funds, grants, or other support were received during the preparation of this manuscript. ### Conflict of interest/Competing interests The authors have no relevant financial or non-financial interests to disclose. ### Ethics approval Not applicable. ### Consent to participate Not applicable. ### Consent for publication Not applicable. ### Availability of data and materials See the subsection 'Code Availability'. ### Code availability We have included the code to replicate our experiments here. The code repository contains the configuration files used to generate the SAM dataset files, as well as the configuration files we used to train the models on the SAM and Henon map systems shown in this paper. ### Authors' contributions Emmanuel Blazquez provided project supervision and editorial support in writing the manuscript. Connor James Stephens developed the code used to run the experiments in this paper and wrote the manuscript.
2302.05902
Tracing the orbitals of the quantum permutation group
Using a suitably noncommutative flat matrix model, it is shown that the quantum permutation group has free orbitals: that is, a monomial in the generators of the algebra of functions can be zero for trivial reasons only. It is shown that any strictly intermediate quantum subgroup between the classical and quantum permutation groups must have free three-orbitals. This is used to give explicit formulae for the Haar state on degree four monomials that hold for such intermediate quantum subgroups as well as the quantum permutation group itself.
J. P. McCarthy
2023-02-12T12:50:45Z
http://arxiv.org/abs/2302.05902v3
# Tracing the orbitals of the quantum permutation group ###### Abstract. Using a suitably non-commutative flat matrix model, it is shown that the quantum permutation group has free orbitals: that is, a monomial in the generators of the algebra of functions can be zero for trivial reasons only. It is shown that any strict intermediate quantum subgroup between the classical and quantum permutation groups must have free three orbitals, and this is used to derive some elementary bounds for the Haar state on degree four monomials in such quantum permutation groups. Key words and phrases:quantum permutations, Haar state 2020 Mathematics Subject Classification: 46L30,46L65 ###### Contents * 1 Introduction & Preliminaries * 2 Free orbitals * 3 The orbitals of exotic quantum permutation groups * 4 Elementary bounds on the Haar state ## 1. Introduction & Preliminaries In 1995 Alain Connes asked the question "What is the quantum automorphism group of a space?" and in 1998 Wang [19] answered the question in the case of finite quantum spaces, i.e. the abstract spectra of finite dimensional C\({}^{*}\)-algebras. This included notably the quantum automorphism group of finite classical spaces, which can be viewed equivalently as the quantum automorphism group of \(\{1,\ldots,N\}\) (that preserves the uniform measure \(\omega\)), or as the quantum permutation group on \(N\) points: \[G^{+}(\{1,\ldots,N\},\omega)=S^{+}_{N}.\] This is all in the language of the compact quantum matrix groups of Woronowicz [20]. **Definition 1.1**.: _If a unital \(\mathrm{C}^{*}\)-algebra \(C(\mathbb{G})\) is:_ 1. _generated by the entries of a unitary matrix_ \(u\in M_{N}(C(\mathbb{G}))\)_, and_ 2. \(u\) _and_ \(u^{t}\) _are invertible, and_ 3. \(\Delta:C(\mathbb{G})\to C(\mathbb{G})\underset{\min}{\otimes}C(\mathbb{G})\)_,_ \(u_{ij}\mapsto\sum_{k=1}^{N}u_{ik}\otimes u_{kj}\) _is a_ \(*\)_-homomorphism,_ _then \(\mathbb{G}\) is a compact matrix quantum group with fundamental representation \(u\in M_{N}(C(\mathbb{G}))\)._ Conventionally, compact quantum matrix groups with noncommutative algebras of functions are spoken about only via their algebra of continuous functions: the quantum group is a so-called _virtual object_. The algebra of continuous functions \(C(S_{N}^{+})\) on the quantum permutation group \(S_{N}^{+}\) is the universal \(\mathrm{C}^{*}\)-algebra generated by the entries of an \(N\times N\) magic unitary \(u\in M_{N}(C(S_{N}^{+}))\), that is, a matrix whose rows and columns are partitions of unity: they consist of projections, \(u_{ij}=u_{ij}^{*}=u_{ij}^{2}\), that sum to the identity along each row and column. Using the Gelfand picture, denote the identity of \(C(S_{N}^{+})\) by \(\mathds{1}_{S_{N}^{+}}:=1_{C(S_{N}^{+})}\) and thus: \[\sum_{k=1}^{N}u_{ik}=\mathds{1}_{S_{N}^{+}}=\sum_{k=1}^{N}u_{kj}.\] It can be shown that \(S_{N}^{+}=S_{N}\) for \(N\leq 3\); however for \(N\geq 4\), the quantum permutation group \(S_{N}^{+}\) is non-classical and infinite in the sense that \(C(S_{N}^{+})\) is noncommutative and infinite dimensional [2]. If \(\mathbb{G}\) is a compact matrix quantum group with magic fundamental representation \(v\in M_{N}(C(\mathbb{G}))\), the universal property of \(C(S_{N}^{+})\) gives a surjective \(*\)-homomorphism \(\pi:C(S_{N}^{+})\to C(\mathbb{G})\), \(u_{ij}\mapsto v_{ij}\), that respects the comultiplication: \[\Delta_{C(\mathbb{G})}\circ\pi=(\pi\otimes\pi)\circ\Delta_{C(S_{N}^{+})}.\] That is to say that \(\mathbb{G}\subseteq S_{N}^{+}\) is a quantum subgroup.
In the sequel, this notation implies a fixed fundamental magic representation \(u\in M_{N}(C(\mathbb{G}))\), with \(u_{ij}\) then referring to a generator of \(C(\mathbb{G})\) rather than of \(C(S_{N}^{+})\). The classical permutation group \(S_{N}\) is a compact matrix quantum group and, where \(\mathds{1}_{j\to i}(\sigma):=\delta_{i,\sigma(j)}\), it is a quantum subgroup \(S_{N}\subseteq S_{N}^{+}\) via the magic fundamental representation: \[v=(\mathds{1}_{j\to i})_{i,j=1}^{N}.\] It is the universal classical subgroup in the sense that if \(G\subseteq S_{N}^{+}\) is classical, then \(G\subseteq S_{N}\). Thus it is the maximal classical subgroup of \(S_{N}^{+}\). Banica and Bichon [4], through classifying the quantum subgroups \(\mathbb{G}\subseteq S_{4}^{+}\), noted that \(S_{4}\subset S_{4}^{+}\) is a maximal quantum subgroup, and conjectured that \(S_{N}\subseteq S_{N}^{+}\) is a maximal quantum subgroup for all \(N\). Only recently did Banica [1] use advances in subfactor theory to show that \(S_{5}\subset S_{5}^{+}\) is also a maximal quantum subgroup. By dint of \(S_{N}^{+}=S_{N}\) for \(N\leq 3\), the current state of the art is: **Theorem 1.2**.: _For \(N\leq 5\), the classical permutation group \(S_{N}\) is a maximal quantum subgroup of the quantum permutation group._ An _easy_ compact quantum matrix group is one whose associated tensor category is spanned by partitions [9], while the easiness level is defined in [1], where the second part of the following is shown (the first part goes back to [6]): **Theorem 1.3**.: _There is no easy nor easiness-level-2 intermediate quantum permutation group_ \[S_{N}\subsetneq\mathbb{G}\subsetneq S_{N}^{+}.\] The literature is sparse beyond these two results, yet the question of maximality of \(S_{N}\subseteq S_{N}^{+}\) remains wide open. This work humbly posits the existence of an _exotic_ intermediate quantum permutation group: \[S_{N}\subsetneq\mathbb{G}\subsetneq S_{N}^{+},\] and studies some of its very basic algebraic properties. If \(\mathbb{G}\subseteq S_{N}^{+}\), the quotient of \(C(\mathbb{G})\) by the commutator ideal is the algebra of functions on a finite group \(G\subseteq S_{N}\), the classical version of \(\mathbb{G}\). Denote the quotient map by: \[\pi_{\mathrm{ab}}:C(\mathbb{G})\to C(G);\qquad u_{ij}\mapsto\mathds{1}_{j\to i}.\] The classical version of \(\mathbb{G}\subseteq S_{N}^{+}\) is isomorphic to the set of \(\sigma\in S_{N}\) such that \[\mathrm{ev}_{\sigma}(f):=\pi_{\mathrm{ab}}(f)(\sigma)\qquad(f\in C(\mathbb{G})),\] is a character (the formula gives the zero functional for \(\sigma\) not in the classical version). The following is of central importance in the current work. The convolution of states \(\varphi_{1},\varphi_{2}\) on \(C(\mathbb{G})\) is given by: \[\varphi_{1}\star\varphi_{2}=(\varphi_{1}\otimes\varphi_{2})\Delta,\] and there exists a Haar state \(h\) on \(C(\mathbb{G})\) such that for all states \(\varphi\), \[\varphi\star h=h=h\star\varphi.\] **Proposition 1.4**.: _Suppose \(\mathbb{G}\subseteq S_{N}^{+}\). Then for all states \(\varphi\) on \(C(\mathbb{G})\) and all \(\sigma,\tau\) in the classical version \(G\subseteq\mathbb{G}\):_ \[(\mathrm{ev}_{\sigma^{-1}}\star\varphi\star\mathrm{ev}_{\tau})(u_{i_{1}j_{1}}\cdots u_{i_{n}j_{n}})=\varphi(u_{\sigma(i_{1})\tau(j_{1})}\cdots u_{\sigma(i_{n})\tau(j_{n})}).\] Proof.: This is a slight generalisation of (Prop. 6.4, [17]), albeit with the same proof.
## 2. Free orbitals The orbitals of a quantum permutation group \(\mathbb{G}\subseteq S_{N}^{+}\) are related to non-zero monomials: \[u_{i_{1}j_{1}}\cdots u_{i_{m}j_{m}}\neq 0.\] One-orbitals, or rather orbits, are related to non-zero \(u_{ij}\in C(\mathbb{G})\). The spectre of orbits can be seen in the work of Bichon ([10], Prop. 4.1), from which the following is a corollary: **Proposition 2.1**.: _Let \(\mathbb{G}\subseteq S_{N}^{+}\). There exists a permutation matrix \(P\in S_{N}\) such that:_ \[PuP^{-1}=\begin{pmatrix}u^{1}&0&\cdots&0\\ 0&u^{2}&&0\\ \vdots&&\ddots&\vdots\\ 0&0&&u^{k}\end{pmatrix},\] _where the entries of each \(u^{p}\in M_{N_{p}}(C(\mathbb{G}))\) are non-zero._ However the first explicit mention of \(u_{ij}\neq 0\) relating to orbits is in the PhD thesis of Huang [12]. Huang defines the orbits of an action of a compact quantum group on a compact Hausdorff space \(X\), and shows that, in the case that \(X\) is a finite space, \(i,\,j\in\{1,2,\ldots,N\}\) are in the same orbit if and only if \(u_{ij}\neq 0\). The case of finite spaces appeared in a preprint [13], but not in the shorter published version [14]. The first published appearances of \(u_{ij}\neq 0\) relating to orbits appeared around the same time in two papers [7, 15]. However it was Lupini, Mancinska, & Roberson [15] who first defined two-orbitals, or orbitals; and these were extended to higher orbitals by Banica [3]. **Definition 2.2**.: _Let \(\mathbb{G}\subseteq S_{N}^{+}\). Define a relation \(\sim_{m}\) on \(\{1,2,\ldots,N\}^{m}\) by_ \[(i_{1},\ldots,i_{m})\sim_{m}(j_{1},\ldots,j_{m})\iff u_{i_{1}j_{1}}\cdots u_{i_{m}j_{m}}\neq 0.\] _The relation \(\sim_{1}\) is called the orbit relation, \(\sim_{2}\) the orbital relation, and \(\sim_{m}\) the \(m\)-orbital relation._ The \(m\)-orbital relation is reflexive and symmetric. Both \(\sim_{1}\) and \(\sim_{2}\) are equivalence relations, their equivalence classes called orbits and orbitals [15]. The non-trivial part of this business is to demonstrate the transitivity of the orbital relation. The present author gives in [17] a slightly more conceptual version of the proof from [15], as well as a counterexample to the transitivity of \(\sim_{3}\). Let \(u\in M_{N}(\mathcal{A})\) be a magic unitary with entries in a C\({}^{*}\)-algebra. As the rows and columns of \(u\) are partitions of unity, if there exists \(1\leq n\leq m-1\) such that \[\delta_{i_{n},i_{n+1}}+\delta_{j_{n},j_{n+1}}=1,\] then \[u_{i_{1}j_{1}}\cdots u_{i_{m}j_{m}}=0.\] In this case the monomial \(u_{i_{1}j_{1}}\cdots u_{i_{m}j_{m}}\) is zero _for trivial reasons_. **Definition 2.3**.: _A quantum permutation group \(\mathbb{G}\subseteq S^{+}_{N}\) has free \(m\)-orbitals if_ \[u_{i_{1}j_{1}}\cdots u_{i_{m}j_{m}}=0\] _for trivial reasons only. A quantum permutation group has free orbitals if it has free \(m\)-orbitals for all \(m\geq 1\)._ **Proposition 2.4**.: _If \(\mathbb{G}\subseteq S^{+}_{N}\) has free \(m\)-orbitals then \(\sim_{m}\) is an equivalence relation._ Proof.: Assume that \((i_{1},\ldots,i_{m})\sim_{m}(k_{1},\ldots,k_{m})\) and \((k_{1},\ldots,k_{m})\sim_{m}(j_{1},\ldots,j_{m})\) but \[(i_{1},\ldots,i_{m})\nsim_{m}(j_{1},\ldots,j_{m}).\] As \(\mathbb{G}\) has free \(m\)-orbitals, \(u_{i_{1}j_{1}}\cdots u_{i_{m}j_{m}}=0\) for trivial reasons: there exists \(1\leq n\leq m-1\) such that \(\delta_{i_{n},i_{n+1}}+\delta_{j_{n},j_{n+1}}=1\); without loss of generality \(i_{n}=i_{n+1}\) and \(j_{n}\neq j_{n+1}\). As \(u_{i_{1}k_{1}}\cdots u_{i_{m}k_{m}}\neq 0\), it is in particular not zero for trivial reasons, and so \(i_{n}=i_{n+1}\) forces \(k_{n}=k_{n+1}\). But then \(\delta_{k_{n},k_{n+1}}+\delta_{j_{n},j_{n+1}}=1\), so that \(u_{k_{1}j_{1}}\cdots u_{k_{m}j_{m}}=0\), contradicting \((k_{1},\ldots,k_{m})\sim_{m}(j_{1},\ldots,j_{m})\).
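The "zero for trivial reasons" condition is purely combinatorial, so it can be checked mechanically; a small Python sketch (ours, not from the paper):

```python
def trivially_zero(i, j):
    """True iff u_{i1 j1} ... u_{im jm} = 0 for trivial reasons: for some
    consecutive pair, exactly one of the row and column indices repeats,
    i.e. delta(i_n, i_{n+1}) + delta(j_n, j_{n+1}) = 1."""
    return any((a == c) != (b == d)
               for (a, b), (c, d) in zip(zip(i, j), zip(i[1:], j[1:])))
```

For instance `trivially_zero((1, 1), (2, 3))` and `trivially_zero((1, 2), (3, 3))` are both `True`, while `trivially_zero((1, 2), (3, 4))` is `False`.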
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! and putting these facts together, if there exists a suitably non-commutative flat matrix model \(v\in M_{N}(M_{N}(\mathbb{C}))\) of \(C(S_{N}^{+})\), one in which: \[[v_{ij},v_{kl}]=0\iff i=k\text{ or }j=l,\] then \(v\) has free orbitals in the sense that: \[v_{i_{1}j_{1}}v_{i_{2}j_{2}}\cdots v_{i_{m}j_{m}}=0\] for trivial reasons only, and thus by (1) so does \(S_{N}^{+}\). **Theorem 2.5**.: \(S_{N}^{+}\) _has free orbitals for \(N\geq 4\)._ **Proposition 2.6**.: _For each \(N\geq 5\), \(C(S_{N}^{+})\) has a suitably non-commutative flat matrix model._ Proof.: Let \(\omega=\exp(2\pi i/N)\). Where \(e_{1},e_{2},\ldots,e_{N}\) are the standard bases vectors of \(\mathbb{C}^{N}\), for \(1\leq i,j\leq N\), define a vector \(\xi_{ij}\in\mathbb{C}^{N}\) according to: \[\langle e_{p},\xi_{ij}\rangle=\begin{cases}\dfrac{1}{\sqrt{N}} \omega^{(n-1)(1-j)},&\text{if }p=1,\\ \dfrac{1}{\sqrt{N}}\omega^{(n-1)(i-1)},&\text{if }p=N,\\ \dfrac{1}{\sqrt{N}}\omega^{p(i-j)},&\text{otherwise}.\end{cases}\] Note that \(\xi_{ij}\) is a unit vector, and \[\langle\xi_{ij},\xi_{kl}\rangle=\dfrac{1}{N}(\omega^{l-j}-1)(1- \omega^{i-k})+\dfrac{1}{N}\sum_{p=0}^{N-1}\left[\omega^{(k-i)+(j-l)}\right]^{ p}.\] Note: \[\langle\xi_{ij},\xi_{kl}\rangle=\begin{cases}1,&\text{if }i=k,j=l\\ 0,&\text{if }i=k,j\neq l,\text{ or }i\neq k,j=l.\end{cases}\] Therefore \(\xi\) is a magic basis. Suppose now that \(i\neq k\), \(j\neq l\). There are two cases. **Case 1:**\((k-i)+(j-l)\equiv 0\mod N\): \[\langle\xi_{ij},\xi_{kl}\rangle=1+\dfrac{1}{N}(\omega^{l-j}-1)(1-\omega^{i-k}).\] However \(l-j\equiv k-i\mod N\) giving: \[\langle\xi_{ij},\xi_{kl}\rangle=1+\dfrac{1}{N}(2\Re(\omega^{k-i})-2)\implies 1- \dfrac{4}{N}\leq\langle\xi_{ij},\xi_{kl}\rangle<1,\] as \(i\neq k\). Note that because \(N>4\), this is non-zero. **Case 1: \((k-i)+(j-1)\not\equiv 0\mod N\)**: \[\langle\xi_{ij},\xi_{kl}\rangle =\frac{1}{N}(\omega^{l-j}-1)(1-\omega^{i-k})\] \[|\langle\xi_{ij},\xi_{kl}\rangle| =\frac{1}{N}|\omega^{l-j}-1||1-\omega^{i-k}|\leq\frac{4}{N}<1.\] Neither is \(\langle\xi_{ij},\xi_{kl}\rangle=0\) because \(i\neq k\), \(l\neq j\). The idea for this flat matrix model comes from Roberson and Schimidt. In the notation of their paper (Definition 4.1, [18]): \[\xi_{ij}=\frac{1}{\sqrt{|\mathbb{Z}_{N}|}}(P^{(1\;N)}\mathcal{C}e_{i})\circ( \overline{\mathcal{C}e_{j}}).\] Here \(P^{(1\,N)}\) is the permutation matrix of the transposition \((1\;N)\), \(\mathcal{C}\) is the character table of \(\mathbb{Z}_{N}\), and \(e_{i}\in\mathbb{C}^{N}\) is a basis element associated with \(i-1\in\mathbb{Z}_{N}\). The Hadamard product \(\circ\) with the conjugate is implementing element-wise division of Hadamard matrices (see [8], Proposition 2.3). 
It may be the case that this is related to cocyclic models (again, see [8]). As can be seen in the proof, the construction does not work for \(N=4\). A more high-powered proof by Banica in that case is presented in [16], however, after a change of basis, the \[\begin{pmatrix}\frac{1}{\sqrt{3}}e^{\pi i/4}&-\sqrt{\frac{2}{3}}e^{-\pi i/2}\\ \sqrt{\frac{2}{3}}e^{\pi i/2}&\frac{1}{\sqrt{3}}e^{-\pi i/4}\end{pmatrix}\in SU(2)\] fibre of the Pauli representation \(C(S_{4}^{+})\to C(SU(2),M_{4}(\mathbb{C}))\)[5] yields a suitably non-commutative flat matrix model for \(C(S_{4}^{+})\) given by the magic basis: \[\xi=\frac{1}{3}\begin{bmatrix}e_{1}&e_{2}-2e_{3}-2e_{4}&e_{4}-2e_{2}-2e_{3}&e _{3}-2e_{2}-2e_{4}\\ e_{2}&e_{1}-2e_{3}+2e_{4}&e_{3}-2e_{1}+2e_{4}&e_{4}+2e_{1}+2e_{3}\\ e_{3}&e_{4}-2e_{1}+2e_{2}&e_{2}+2e_{1}+2e_{4}&e_{1}+2e_{2}-2e_{4}\\ e_{4}&e_{3}+2e_{1}+2e_{2}&e_{1}-2e_{2}+2e_{3}&e_{2}-2e_{1}+2e_{3}\end{bmatrix}.\] ## 3. The orbitals of exotic quantum permutation groups **Proposition 3.1**.: _Exotic quantum permutation groups \(S_{N}\subsetneq\mathbb{G}\subsetneq S_{N}^{+}\) have free orbits._ Proof.: Recall that if \(S_{N}\subseteq\mathbb{G}\), then the abelianisation \(\pi_{\rm ab}:C(\mathbb{G})\to C(S_{N})\), and each \(\mathds{1}_{j\to i}\neq 0\): \[\pi_{\rm ab}(u_{ij})=\mathds{1}_{j\to i}\implies u_{ij}\neq 0.\] **Lemma 3.2**.: _Let \(p,q\) be projections in a \(\mathrm{C}^{*}\)-algebra:_ \[pq=qp\iff|pq|^{2}=|qp|^{2}.\] **Proposition 3.3**.: _Consider exotic \(S_{N}\subsetneq\mathbb{G}\subsetneq S_{N}^{+}\):_ 1. _entries from_ \(u\in M_{N}(C(\mathbb{G}))\) _pairwise-commute only when they are from the same row or column_ 2. _as a corollary exotic quantum permutations have free orbitals._ Proof.: By assumption \(C(\mathbb{G})\) is non-commutative and therefore there exists a non-commuting pair: \[u_{ab}u_{cd}\neq u_{cd}u_{ab}\implies|u_{ab}u_{cd}|^{2}\neq|u_{cd}u_{ab}|^{2},\] by the preceding lemma. The states on a C\({}^{*}\)-algebra are separating, and therefore there exists \(\varphi_{0}\) on \(C(\mathbb{G})\) such that: \[\varphi_{0}(|u_{cd}u_{ab}|^{2})\neq\varphi_{0}(|u_{ab}u_{cd}|^{2}).\] By assumption, the classical version of \(\mathbb{G}\) is \(S_{N}\). Let \(\sigma\), \(\tau\in S_{N}\) be such that: \[\sigma(i)=c,\,\sigma(k)=a\text{ and }\tau(j)=d,\,\tau(l)=b.\] With \(\varphi:=\operatorname{ev}_{\sigma^{-1}}\star\varphi_{0}\star\operatorname{ ev}_{\tau}\), using Proposition 1.4: \[\varphi(|u_{ij}u_{kl}|^{2})=\varphi_{0}(|u_{cd}u_{ab}|^{2})\neq\varphi_{0}(|u_ {ab}u_{cd}|^{2})=\varphi(|u_{kj}u_{ij}|^{2}).\] Therefore \[|u_{ij}u_{kl}|^{2}\neq|u_{kl}u_{ij}|^{2}\implies u_{ij}u_{kl}\neq u_{kl}u_{ij}.\] That exotic \(S_{N}\subsetneq\mathbb{G}\subsetneq S_{N}^{+}\) has free orbitals is shown more elementarily using: \[\pi_{\operatorname{ab}}(u_{ij}u_{kl})=\mathds{1}_{j\to i}\mathds{1}_{l\to k},\] i.e. the free orbitals of \(S_{N}\subsetneq\mathbb{G}\) implies the free orbitals of exotic \(\mathbb{G}\subsetneq S_{N}^{+}\), but such an argument does not speak to the pair-wise non-commutativity of the generators. **Theorem 3.4**.: _Exotic quantum permutation groups \(S_{N}\subsetneq\mathbb{G}\subsetneq S_{N}^{+}\) have free three orbitals._ Proof.: Let \(1\leq a,b,c,d,e,f\leq N\). 
Where \(u\in M_{N}(C(\mathbb{G}))\), it is required to show \[\delta_{ac}+\delta_{bd},\delta_{ce}+\delta_{df}\in\{0,2\}\implies u_{ab}u_{cd }u_{ef}\neq 0.\] To capture all the possibilities, consider: \[(r,s,t)=(\delta_{ac}+\delta_{bd},\delta_{ce}+\delta_{df},\delta_{ae}+\delta_{ bf}).\] The only non-trivial case is \((r,s,t)=(0,0,1)\). In this case, where different '\(i\)' and '\(j\)' symbols are distinct, it is \(u_{ab}u_{cd}u_{af}\) or \(u_{ab}u_{cd}u_{eb}\). The first case can be reduced to the second with the use of the antipode. So consider \(u_{ab}u_{cd}u_{eb}\in C(\mathbb{G})\). With \((r,s)=(0,0)\), \(u_{cd}u_{eb}\neq 0\). By Proposition 3.3 \[u_{cd}u_{eb}\neq u_{eb}u_{cd}.\] Represent \(C(\mathbb{G})\) using the universal GNS representation \(\pi_{\operatorname{GNS}}(C(\mathbb{G}))\subset B(\mathsf{H})\). Denote \[p:=\pi_{\operatorname{GNS}}(u_{eb})\text{ and }q:=\pi_{\operatorname{GNS}}(u_{ cd}).\] As \(pq\neq qp\), using Halmos' two projections theory there exists \(x\in\operatorname{ran}p\) orthogonal to both1\(\operatorname{ran}p\cap\operatorname{ran}q\) and \(\operatorname{ran}p\cap\ker q\). Define a state on \(C(\mathbb{G})\): Footnote 1: in the notation of ([11],(1)), \(x\in M_{0}\) \[\varphi_{0}(f)=\langle x,\pi_{\operatorname{GNS}}(f)x\rangle.\] Consider: \[\varphi_{0}(u_{eb}fu_{eb}) =\langle x,\pi_{\operatorname{GNS}}(u_{eb}fu_{eb})x\rangle=\langle x,p\pi_{\operatorname{GNS}}(f)px\rangle \tag{2}\] \[=\langle px,\pi_{\operatorname{GNS}}(f)px\rangle=\langle x,\pi_{ \operatorname{GNS}}(f)x\rangle=\varphi_{0}(f),\] as \(x\in\operatorname{ran}p\). Furthermore, together with \(x\in\operatorname{ran}p\) \[\varphi_{0}(u_{cd}) =\langle x,qx\rangle=1\implies x\in\operatorname{ran}q\] \[\varphi_{0}(u_{cd}) =\langle x,qx\rangle=0\implies x\in\ker q\] but \(x\) is orthogonal to both \(\operatorname{ran}p\cap\operatorname{ran}q\) and \(\operatorname{ran}q\cap\ker q\) thus \[0<\langle x,qx\rangle<1\implies 0<\varphi_{0}(u_{cd})<1.\] Now define a state \[\varphi(f):=\frac{\varphi_{0}(u_{cd}fu_{cd})}{\varphi_{0}(u_{cd})}=\frac{ \langle qx,\pi_{\operatorname{GNS}}(f)qx\rangle}{\langle qx,qx\rangle}.\] In particular \[\varphi(u_{eb})=\frac{\langle qx,pqx\rangle}{\langle qx,qx\rangle}\] Together with \(qx\in\operatorname{ran}q\): \[\varphi(u_{eb})=1\implies qx\in\operatorname{ran}p\] \[\varphi(u_{eb})=0\implies qx\in\ker p\] By Halmos two projections theory, \(qx\) is orthogonal to \(\operatorname{ran}p\cap\operatorname{ran}q\), and \(\ker p\cap\operatorname{ran}q\) and it follows that: \[0<\varphi(u_{eb})<1.\] Therefore there exists \(u_{ab}\neq u_{eb}\) such that: \[\varphi(u_{ab}) >0\] \[\implies\frac{\varphi_{0}(u_{cd}u_{ab}u_{cd})}{\varphi_{0}(u_{cd})} >0\] \[\implies\frac{\varphi_{0}(u_{eb}u_{cd}u_{ab}u_{cd}u_{eb})}{\varphi_ {0}(u_{cd})} >0\] \[\implies\varphi_{0}(|u_{ab}u_{cd}u_{eb}|^{2}) >0\] \[\implies u_{ab}u_{cd}u_{eb} \neq 0.\] The author is not aware of a known example of a quantum permutation group \(\mathbb{G}\subsetneq S^{+}_{N}\) with free three orbitals. ## 4. Elementary bounds on the Haar state The value of the Haar state at monomials is ostensibly important in the theory of quantum permutation groups. The values of the Haar state on degree three monomials in \(C(S^{+}_{N})\), \[h(u_{i_{1}j_{1}}u_{i_{2}j_{2}}u_{i_{3}j_{3}}),\] are well known, but their calculation typically uses representation theory, usually via a study of the fixed point spaces of tensor powers of the fundamental representation [3]. 
However, using Proposition 1.4, these can be calculated using elementary considerations. Furthermore, while nothing is known about their representation theory, these calculations hold also for exotic quantum permutation groups. The same elementary considerations can be used to provide bounds for degree four monomials: \[h(u_{i_{1}j_{1}}u_{i_{2}j_{2}}u_{i_{3}j_{3}}u_{i_{4}j_{4}}).\] Furthermore, the properties of \(S^{+}_{N}\) used to derive these bounds are also shared by exotic quantum permutation groups. **Proposition 4.1**.: _The Haar state on \(C(S^{+}_{N})\), and exotic \(C(\mathbb{G})\), is tracial, invariant under the antipode:_ \[h(u_{i_{1}j_{1}}\cdots u_{i_{n}j_{n}})=h(u_{j_{n}i_{n}}\cdots u_{j_{1}i_{1}}),\] _and invariant under permutations of the labels, for \(\sigma,\tau\in S_{N}\):_ \[h(u_{i_{1}j_{1}}\cdots u_{i_{n}j_{n}})=h(u_{\sigma(i_{1}),\tau(j_{1})}\cdots u _{\sigma(i_{n}),\tau(j_{n})}).\] Proof.: That the Haar state is tracial, and invariant under the antipode is standard. Apply Proposition 1.4 with \(h=\mathrm{ev}_{\sigma^{-1}}\star h\star\mathrm{ev}_{\tau}\) for invariance under permutations of the labels. Where \(u\) is a magic unitary, a monomial \(f=u_{i_{1}j_{1}}\cdots u_{i_{m}j_{m}}\) is in reduced form if it is zero, or if for all \(1\leq n\leq m-1\): \[\delta_{i_{n},i_{n+1}}+\delta_{j_{n},j_{n+1}}=2,\] that is use the relation \(u_{ij}^{2}=u_{ij}\) and the orthogonality along rows and columns of \(u\) to ensure that \(f\) is of minimal degree. In the below, all monomials are assumed reduced. **Proposition 4.2**.: _For both \(S_{N}^{+}\) and exotic \(\mathbb{G}\subsetneq S_{N}^{+}\)_ \[h(u_{ij}) =\frac{1}{N},\] \[h(u_{ij}u_{kl}) =\frac{1}{N(N-1)},\] _and, if \(|\{i_{1},i_{2},i_{3}\}|=|\{j_{1},j_{2},j_{3}\}|=3\):_ \[h(u_{i_{1}j_{1}}u_{i_{2}j_{2}}u_{i_{3}j_{3}})=\frac{1}{N(N-1)(N-2)}.\] Proof.: By Proposition 4.1, for any \(1\leq j,k\leq N\), \(h(u_{ij})=h(u_{ik})\). Therefore \[h(\mathds{1}_{\mathbb{G}})=h\left(\sum_{k=1}^{N}u_{ik}\right)=\sum_{k=1}^{N}h( u_{ik})=1\implies Nh(u_{ij})=1.\] For the second equation, \[\frac{1}{N}=h(u_{ij})=h(u_{ij}\mathds{1}_{S_{N}^{+}})=h\left(u_{ij}\left(\sum_ {p=1}^{N}u_{pl}\right)\right)=\sum_{p\neq i}h(u_{ij}u_{pl})=(N-1)h(u_{ij}u_{kl}).\] Note that, by traciality \(h(u_{i_{1}j_{1}}u_{i_{2}j_{2}}u_{i_{1}j_{1}})=h(u_{i_{1}j_{1}}u_{i_{2}j_{2}})\), and, if \(j_{3}\neq j_{1}\), \(h(u_{i_{1}j_{1}}u_{i_{2}j_{2}}u_{i_{1}j_{3}})=0\). Consider \[\frac{1}{N(N-1)} =h(u_{i_{1}j_{1}}u_{i_{2}j_{2}})=h(u_{i_{1}j_{1}}u_{i_{2}j_{2}} \mathds{1}_{S_{N}^{+}})=h\left(u_{i_{1}j_{1}}u_{i_{2}j_{2}}\sum_{p=1}^{N}u_{i_ {3}p}\right)\] \[=\sum_{p\neq i_{1},i_{2}}h(u_{i_{1}j_{1}}u_{i_{2}j_{2}}u_{i_{3}p })=\sum_{p\neq i_{1},i_{2}}h(u_{i_{1}j_{1}}u_{i_{2}j_{2}}u_{i_{3}j_{3}})\] \[\implies h(u_{i_{1}j_{1}}u_{i_{2}j_{2}}u_{i_{3}j_{3}}) =\frac{1}{N(N-1)(N-2)}.\] The proof of the following is just a continuation of the use of these elementary methods along with some careful bookkeeping. 
The _integral_ of \(f\) refers to \(h(f)\): **Theorem 4.3**.: _The following bounds for the Haar states of both \(S_{N}^{+}\) and exotic \(\mathbb{G}\subsetneq S_{N}^{+}\) hold:_ \[\begin{split} 0&<h(u_{11}u_{22}u_{11}u_{22})<\frac{(N-2)!} {N!}\\ 0&<h(u_{11}u_{22}u_{11}u_{23})<\frac{(N-3)!}{N!}\\ \frac{N-3}{N-2}\cdot\frac{(N-3)!}{N!}&<h(u_{11}u_{ 22}u_{11}u_{33})<\frac{(N-3)!}{N!}\\ -\frac{(N-4)!}{N!}&<h(u_{11}u_{22}u_{13}u_{24})<0\\ -\frac{1}{N-2}\cdot\frac{(N-3)!}{N!}&<h(u_{11}u_{ 22}u_{13}u_{32})<0\\ 0&<h(u_{11}u_{22}u_{13}u_{34})<\frac{1}{N-2}\cdot \frac{(N-4)!}{N!}\\ \frac{(N-4)!}{N!}&<h(u_{11}u_{22}u_{33}u_{44})<\frac{ N-2}{N-3}\cdot\frac{(N-4)!}{N!}\end{split} \tag{3}\] _Furthermore, by Proposition 4.1, the integral of any reduced degree four monomial is zero, or equal to one of these seven._ Proof.: Consider the following reduced degree three monomials: * \(u_{11}u_{22}u_{11}\) with integral \(1/N(N-1)\), * \(u_{11}u_{22}u_{13}\) with integral zero, * \(u_{11}u_{22}u_{31}\) with integral zero, * \(u_{11}u_{22}u_{33}\) with integral \(1/N(N-1)(N-2)\). From the first there are four reduced degree four monomials, namely \[u_{11}u_{22}u_{11}u_{22},\,u_{11}u_{22}u_{11}u_{23},\,u_{11}u_{22}u_{11}u_{32},\,u_{11}u_{22}u_{11}u_{33}.\] By using invariance under the antipode, and traciality: \[h(u_{11}u_{22}u_{11}u_{23})=h(u_{32}u_{11}u_{22}u_{11})=h(u_{11}u_{22}u_{11}u_{ 32})\] Define: * \(\alpha_{1}=h(u_{11}u_{22}u_{11}u_{22})\), * \(\alpha_{2}=h(u_{11}u_{22}u_{11}u_{23})\), * \(\alpha_{3}=h(u_{11}u_{22}u_{11}u_{33})\). Note that classically, in \(C(S_{N})\), \(\alpha_{2}=0\). Consider the next 'root' degree three monomial, the integral zero \(u_{11}u_{22}u_{13}\). There are six reduced degree four monomials from this: * \(u_{11}u_{22}u_{13}u_{21}\) with integral zero, * \(u_{11}u_{22}u_{13}u_{22}\) with integral \(\alpha_{2}^{\prime}\), * \(u_{11}u_{22}u_{13}u_{24}\) with integral \(\alpha_{4}\), * \(u_{11}u_{22}u_{13}u_{31}\) with integral zero, * \(u_{11}u_{22}u_{13}u_{32}\) with integral \(\alpha_{5}\), * \(u_{11}u_{22}u_{13}u_{34}\) with integral \(\alpha_{6}\). By traciality and invariance under permutation of the labels \(\alpha_{2}^{\prime}=\alpha_{2}\). The other zero integral 'root' degree three \(u_{11}u_{22}u_{31}\) yields * \(u_{11}u_{22}u_{31}u_{12}\) with integral zero, * \(u_{11}u_{22}u_{31}u_{13}\) with integral zero, * \(u_{11}u_{22}u_{31}u_{22}\) with integral \(\alpha_{2}^{\prime\prime}\), * \(u_{11}u_{22}u_{31}u_{23}\) with integral \(\alpha_{5}^{\prime\prime}\), * \(u_{11}u_{22}u_{31}u_{42}\) with integral \(\alpha_{4}^{\prime\prime}\), * \(u_{11}u_{22}u_{31}u_{43}\) with integral \(\alpha_{6}^{\prime\prime}\). Using the invariances of the Haar state, it can be shown for \(p=2,4,5,6\) that. \(\alpha_{p}^{\prime\prime}=\alpha_{p}\). From the final 'root' degree three \(u_{11}u_{22}u_{33}\), with integral \(1/N(N-1)(N-2)\), there are more reduced degree four monomials: * \(u_{11}u_{22}u_{33}u_{11}\) with integral \(1/N(N-1)(N-2)\), * \(u_{11}u_{22}u_{33}u_{12}\) with integral zero, * \(u_{11}u_{22}u_{33}u_{14}\) with integral zero, * \(u_{11}u_{22}u_{33}u_{21}\) with integral zero, * \(u_{11}u_{22}u_{33}u_{24}\) with integral \(\alpha_{6}^{\prime\prime}\), * \(u_{11}u_{22}u_{33}u_{41}\) with integral zero, * \(u_{11}u_{22}u_{33}u_{42}\) with integral \(\alpha_{6}^{\prime\prime\prime\prime}\), * \(u_{11}u_{22}u_{33}u_{44}\) with integral \(\alpha_{7}\). 
The invariances of the Haar state show that \(\alpha_{6}^{\prime\prime\prime\prime}=\alpha_{6}^{\prime\prime\prime}=\alpha_ {6}\). Therefore there are seven basic integrals of degree four monomials. Note that the Haar state is faithful on the *-algebra generated by these generators, and by Theorem 2.5, \(u_{22}u_{11}u_{23}\neq 0\): \[\alpha_{1} =h(u_{11}u_{22}u_{11}u_{22})=h(|u_{22}u_{11}u_{22}|^{2})>0,\] \[\alpha_{2} =h(u_{11}u_{22}u_{11}u_{23})=h(|u_{22}u_{11}u_{23}|^{2})>0,\] \[\alpha_{3} =h(u_{11}u_{22}u_{11}u_{33})=h(|u_{22}u_{11}u_{33}|^{2})>0,\] \[\alpha_{4} =h(u_{11}u_{22}u_{13}u_{24}),\] \[\alpha_{5} =h(u_{11}u_{22}u_{13}u_{32}),\] \[\alpha_{6} =h(u_{11}u_{22}u_{13}u_{34}),\] \[\alpha_{7} =h(u_{11}u_{22}u_{33}u_{44}).\] Liberally using Proposition 4.1, linear relations between these integrals are easily generated. First \[\frac{1}{N(N-1)} =h\left(u_{11}u_{22}u_{11}\sum_{k=1}^{N}u_{2k}\right)=0+\alpha_{1} +(N-2)\alpha_{2}\] \[\implies\alpha_{1}+(N-2)\alpha_{2} =\frac{1}{N(N-1)}.\] Also \[\frac{1}{N(N-1)} =h\left(u_{11}u_{22}u_{11}\sum_{k=1}^{N}u_{3k}\right)=0+\alpha_{2 }+(N-2)\alpha_{3}\] \[\implies\alpha_{2}+(N-2)\alpha_{3} =\frac{1}{N(N-1)}.\] Another \[0 =h\left(u_{11}u_{22}u_{13}\sum_{k=1}^{N}u_{2k}\right)=0+h(u_{11}u_ {22}u_{13}u_{22})+0+(N-3)\alpha_{4}\] \[\implies\alpha_{2}+(N-3)\alpha_{4} =0.\] Another \[0 =h\left(u_{11}u_{22}u_{13}\sum_{k=1}^{N}u_{3k}\right)=0+h(u_{11}u_ {22}u_{13}u_{32})+0+(N-3)\alpha_{5}\] \[\implies\alpha_{5}+(N-3)\alpha_{6} =0.\] Another \[\frac{1}{N(N-1)(N-2)} =h\left(u_{11}u_{22}u_{33}\sum_{k=1}^{N}u_{4k}\right)=0+h(u_{11}u _{22}u_{33}u_{42})+0+(N-3)\alpha_{7}\] \[\implies\alpha_{4}+(N-3)\alpha_{7} =\frac{1}{N(N-1)(N-2)}\] Finally \[0=h\left(u_{11}u_{22}u_{13}\sum_{k=1}^{N}u_{k4}\right)=0+\alpha_{4 }+(N-2)\alpha_{6}\] \[\implies\alpha_{4}+(N-2)\alpha_{6}=0\] This is a rank six linear system in seven variables. Let the parameter be \(\alpha_{4}\): \[\alpha_{1} =\frac{1}{N(N-1)}+(N-2)(N-3)\cdot\alpha_{4}\] \[\alpha_{2} =-(N-3)\alpha_{4}\] \[\alpha_{3} =\frac{1}{N(N-1)(N-2)}+\frac{N-3}{N-2}\cdot\alpha_{4}\] \[\alpha_{5} =\frac{N-3}{N-2}\cdot\alpha_{4}\] \[\alpha_{6} =-\frac{1}{N-2}\cdot\alpha_{4}\] \[\alpha_{7} =\frac{1}{N(N-1)(N-2)(N-3)}-\frac{1}{N-3}\cdot\alpha_{4}.\] In the case of exotic \(\mathbb{G}\) or \(S_{N}^{+}\), \(\alpha_{1},\alpha_{2}>0\), yields \[-\frac{(N-4)!}{N!}<\alpha_{4}<0,\] and the claimed bounds on \(\alpha_{1}\) and \(\alpha_{2}\). With \(\alpha_{3}>0\), and the bounds on \(\alpha_{4}\), the claimed bounds on \(\alpha_{3}\) follow. The other bounds follow in a similar manner. Note that in the classical case \(\alpha_{4}=h(u_{11}u_{22}u_{13}u_{24})=0\), yielding: \[h(u_{11}u_{22}u_{11}u_{23})=h(u_{11}u_{22}u_{13}u_{24})=h(u_{11}u_{22}u_{13}u_ {32})=h(u_{11}u_{22}u_{13}u_{34})=0,\] as expected. **Corollary 4.4**.: _For exotic \(\mathbb{G}\) and \(S_{N}^{+}\), the following large \(N\) asymptotics hold:_ \[h(u_{11}u_{22}u_{11}u_{33})\sim\frac{(N-3)!}{N!},\] \[h(u_{11}u_{22}u_{33}u_{44})\sim\frac{(N-4)!}{N!}.\] _Both asymptotics are equal to the classical integrals over the uniform measure on \(S_{N}\)._ Proof.: The relevant equalities from Theorem 4.3, (3) and (4), squeeze for \(N\gg 0\). ### Acknowledgement Some of this work goes back to discussions with Teo Banica. Thanks to David Roberson for suggesting looking to [18] for magic bases.
2307.11171
Fermionic asymptotic symmetries in massless QED
We consider soft electrons in massless QED at tree-level. The emission amplitude at leading order in the soft electron energy factorizes in a way similar to the soft photon case. We recast the soft electron factorization formula as a Ward identity of an asymptotic charge. This leads to the first example of an asymptotic fermionic symmetry in a theory with no conventional supersymmetry, suggesting that tree-level massless QED may posses an asymptotic supersymmetry algebra. Although our approach does not yet allow us to completely characterize the algebra, it suggests that subleading soft photons should feature in the anticommutator of two fermionic symmetry generators.
Adrián Agriela, Miguel Campiglia
2023-07-20T18:09:20Z
http://arxiv.org/abs/2307.11171v1
# Fermionic asymptotic symmetries in massless QED ###### Abstract We consider soft electrons in massless QED at tree-level. The emission amplitude at leading order in the soft electron energy factorizes in a way similar to the soft photon case. We recast the soft electron factorization formula as a Ward identity of an asymptotic charge. This leads to the first example of an asymptotic fermionic symmetry in a theory with no conventional supersymmetry, suggesting that tree-level massless QED may posses an asymptotic supersymmetry algebra. Although our approach does not yet allow us to completely characterize the algebra, it suggests that subleading soft photons should feature in the anticommutator of two fermionic symmetry generators. ###### Contents * I Introduction * II Preliminaries * II.1 Conventions * II.2 Asymptotic fields at null infinity * II.2.1 Outgoing null coordinates * II.2.2 Fall-offs in \(r\) * II.3 Radiative phase space * II.3.1 Fall-offs in \(u\) * II.3.2 Radiative PBs * II.4 Asymptotic Fock space * II.5 Bosonic (asymptotic) symmetries * II.5.1 Lorentz * II.5.2 Translations * II.5.3 Large \(U(1)\) gauge * II.5.4 Axial rotations * III Soft electrons and fermionic asymptotic charges * IV On fermionic asymptotic symmetries * IV.1 Symmetry action from radiative PBs * IV.2 Algebra relations with bosonic symmetries 1. Lorentz 2. Translations 3. Large \(U(1)\) gauge 4. Axial rotations * V. Outlook * A. Asymptotic vs. momentum-space Fock operators * B. Soft photons and soft electrons 1. Soft photon 2. Soft electron * C. Naive commutator of fermionic charges and subleading soft photons 1. Asymptotic symmetries for subleading soft photon 2. Naive commutator of two fermionic generators ## I Introduction Asymptotic states of massless particles have their natural home at null infinity [1]. At this boundary of spacetime it is possible to unveil symmetries that are otherwise obscure from a bulk perspective [2; 3]. As first shown by Strominger and collaborators [4; 5; 6], these asymptotic symmetries manifest themselves in scattering amplitudes via so-called soft theorems. Conversely, soft theorems in scattering amplitudes can often be interpreted as arising from asymptotic symmetries, see e.g. [7; 8; 9; 10; 11]. In this article we present what may be the simplest example of an asymptotic _fermionic_ symmetry. We consider tree-level massless Quantum Electrodynamics (QED), in which there is a notion of soft electrons, i.e. electrons with vanishingly small energy.1 The emission amplitude for such soft electrons is given to leading order by a soft electron theorem. We will recast such theorem as a Ward identity of fermionic asymptotic charges, thus suggesting the existence of asymptotic fermionic symmetries. At this stage, however, we are unable to fully characterize the underlying symmetry algebra. In particular, we lack a trustworthy evaluation of the commutator between fermionic symmetries. We will describe what the obstacles are and discuss possible strategies to overcome them. Footnote 1: Alternatively, we are studying massive electrons in a regime where the soft energy \(E_{\rm soft}\) is much larger than the electron mass yet much smaller than all other energies involved in the process: \(m_{e}\ll E_{\rm soft}\ll E_{\rm hard}\). Fermionic asymptotic symmetries have already been discussed in other contexts. They naturally arise in supergravity theories [12; 13; 14; 15; 16] where they are associated to soft gravitinos [9; 10]. 
In [8], Dumitrescu, He, Mitra and Strominger identified asymptotic fermionic charges in supersymmetric abelian gauge theories. Our work follows closely their analysis, with the electron field here playing the role of photino field there. A comprehensive discussion of (conformally) soft fermions can be found in [17]. The organization of the paper is as follows. In the next section we introduce notation and review basic concepts on radiative phase spaces and asymptotic symmetries, in the context of massless QED. In section III we present a soft electron theorem and interpret it as a Ward identify of an asymptotic fermionic charge. In section IV we discuss various aspects of the associated fermionic symmetries, including the commutation relations with bosonic symmetries. We conclude in section V, where we highlight the open questions left for future work. Additional material is given in three appendices: In appendix A we review the relationship between the momentum space and null infinity descriptions of fields. In appendix B we review the soft photon theorem and derive the analogous soft electron theorem. In appendix C we present a preliminary exploration on the non-linear structure of the fermionic symmetry, and observe how a naive evaluation of the commutator of two fermionic symmetries displays similarities with the asymptotic symmetry associated to subleading soft photons. ## II Preliminaries ### Conventions The elementary field variables for QED are the \(U(1)\) gauge field \({\cal A}_{\mu}\) and the anticommuting Dirac spinor \(\Psi\). In the massless case the lagrangian density is2 Footnote 2: We follow conventions from [18] modulo a sign in the definition of the coupling constant \(e\). \[{\cal L}=\sqrt{-\eta}\left(-\frac{1}{4}{\cal F}^{\mu\nu}{\cal F}_{\mu\nu}+i \overline{\Psi}\gamma^{\mu}{\cal D}_{\mu}\Psi\right), \tag{1}\] where \(\sqrt{-\eta}\) is the Minkowski volume element, \({\cal F}_{\mu\nu}=\partial_{\mu}{\cal A}_{\nu}-\partial_{\nu}{\cal A}_{\mu}\) is the field strength, \(\gamma^{\mu}\) Dirac matrices, \(\overline{\Psi}=\Psi^{\dagger}\gamma^{0}\) and \[{\cal D}_{\nu}=\partial_{\mu}+ie{\cal A}_{\mu} \tag{2}\] the gauge covariant derivative. The electric current is defined as \[{\cal J}^{\mu}=e\overline{\Psi}\gamma^{\mu}\Psi. \tag{3}\] Taking variations of the lagrangian density, one finds \[\delta{\cal L}=eom+\partial_{\mu}\theta^{\mu}(\delta) \tag{4}\] where \[eom=(\partial_{\mu}{\cal F}^{\mu\nu}-{\cal J}^{\nu})\delta{\cal A}_{\nu}+(i \delta\overline{\Psi}\gamma^{\mu}{\cal D}_{\mu}\Psi+c.c.) \tag{5}\] yield the field equations and \[\theta^{\mu}(\delta)=\sqrt{-\eta}(-{\cal F}^{\mu\nu}\delta{\cal A}_{\nu}+i \overline{\Psi}\gamma^{\mu}\delta\Psi) \tag{6}\] is the symplectic potential current. Besides Poincare (and in fact Conformal) symmetries, the theory is invariant under local gauge transformations \[\delta_{\Lambda}{\cal A}_{\mu}=\partial_{\mu}\Lambda,\quad\delta_{\Lambda} \Psi=-ie\Lambda\Psi, \tag{7}\] as well as global axial rotations \[\delta_{\rm A}{\cal A}_{\mu}=0,\quad\delta_{\rm A}\Psi=i\gamma^{5}\Psi. \tag{8}\] The latter famously displays a 1-loop anomaly [19], but for the purposes of our tree-level discussion, we will regard (8) as an exact symmetry. ### Asymptotic fields at null infinity #### ii.2.1 Outgoing null coordinates In order to describe the fields near (future) null infinity, it is convenient to work in outgoing null coordinates. These can be defined as follows. 
First, assign a future null direction \(q^{\mu}\) to every point \(x\equiv(z,\bar{z})\) on the celestial sphere, \[q^{\mu}(x):=\frac{1}{\sqrt{2}}\left(1+|z|^{2},z+\bar{z},-i(z-\bar{z}),1-|z|^{2} \right), \tag{9}\] The specific choice (9) leads to a flat conformal frame on the celestial sphere, see e.g. [20] for a discussion on other possible frames. Next, choose a reference null vector \(k^{\mu}\) transverse to \(q^{\mu}\) that specifies the "flow of time", \[k^{\mu}=\frac{1}{\sqrt{2}}\left(1,0,0,-1\right),\quad k^{\mu}q_{\mu}(x)=-1. \tag{10}\] Finally, parametrize cartesian coordinates \(X^{\mu}\) by \((r,u,x)\) according to \[X^{\mu}(r,u,x)=rq^{\mu}(x)+uk^{\mu}, \tag{11}\] in terms of which the spacetime metric takes the form \[dX^{\mu}dX_{\mu}=-2dudr+2r^{2}dzd\bar{z}. \tag{12}\] Future null infinity \({\cal I}\) is reached by taking \(r\to\infty\) with constant \((u,x)\). One can similarly define retarded null coordinates that are adapted to past null infinity. We will however focus our discussion on fields at future null infinity, with the understanding that a parallel construction is available at past null infinity. #### ii.2.2 Fall-offs in \(r\) Near null infinity the electromagnetic field is described by the leading transversal components [2; 3], \[{\cal A}_{z}(r,u,x)\stackrel{{ r\to\infty}}{{=}}A_{z}(u,x)+\cdots, \tag{13}\] where \(A_{z}\) is regarded as a gauge field on \({\cal I}\). The massless Dirac field fall-offs have been discussed in [8]. As for other massless fields, it decays as the inverse power of the radial coordinate.3 The Dirac equation imposes restrictions on the leading spinor components, leaving only two independent asymptotic fields:4 Footnote 3: We are assuming free-field fall-offs, which suffice for the tree-level considerations of this work. Loop corrections may imply slower fall-offs, see e.g. [21]. Footnote 4: Further details are given in the discussion following Eq. (C9). \[\Psi\stackrel{{ r\to\infty}}{{=}}\frac{1}{2^{1/4}r}\begin{pmatrix}- \bar{z}\psi_{-}\\ \psi_{-}\\ \psi_{+}\\ z\psi_{+}\end{pmatrix}+\cdots, \tag{14}\] where the overall normalization is chosen for later convenience and \(\psi_{\pm}\) are regarded as complex fermionic fields on \(\mathcal{I}\). Upon quantization, they describe electrons of positive/negative helicity. Similarly, \(A_{z}/A_{\bar{z}}\) describe photons of positive/negative helicity. We shall use the notation \[A_{+}:=A_{z},\quad A_{-}:=A_{\bar{z}}. \tag{15}\] Notice that, unlike \(\psi_{\pm}\), the gauge field satisfies the reality condition \(A_{+}^{*}=A_{-}\). ### Radiative phase space Given the fall-offs from the previous section, we can evaluate the symplectic potential current (6) at null infinity. For the relevant component \(\mu=r\) one finds \[\lim_{r\to\infty}\theta^{r}(\delta)=\dot{A}_{z}\delta A_{\bar{z}}+\dot{A}_{ \bar{z}}\delta A_{z}+i\bar{\psi}_{+}\delta\psi_{+}+i\bar{\psi}_{-}\delta\psi _{-} \tag{16}\] where \(\dot{A}_{z}\equiv\partial_{u}A_{z}\) and \(\bar{\psi}_{\pm}\) is the complex conjugate of \(\psi_{\pm}\). Taking a second variation in (16) and integrating over \((u,z,\bar{z})\) we obtain the symplectic structure at \(\mathcal{I}\)[22], \[\Omega:=\sum_{s=\pm}\int_{\mathcal{I}}dud^{2}x\left(\delta\dot{A}_{-s}\wedge \delta A_{s}+i\delta\bar{\psi}_{s}\wedge\delta\psi_{s}\right), \tag{17}\] where we used the notation (15) for the asymptotic gauge field. 
Evaluating (17) on two variations \(\delta_{1}\) and \(\delta_{2}\) one has \[\Omega(\delta_{1},\delta_{2})=\sum_{s}\int_{\mathcal{I}}\left(\delta_{1}\dot{ A}_{-s}\delta_{2}A_{s}+i\delta_{1}\bar{\psi}_{s}\delta_{2}\psi_{s}\right)-( \delta_{1}\leftrightarrow\delta_{2}). \tag{18}\] In the above expressions it is important that \(\psi_{\pm}\) are regarded as anticommuting fields. In particular, the reality of the symplectic form follows from the property \(\overline{\psi_{1}\psi_{2}}=\bar{\psi}_{2}\bar{\psi}_{1}\). Fall-offs in \(u\) In order for (17) to be well-defined, we must impose fall-offs conditions on the fields as \(|u|\to\infty\). We shall henceforth assume them to be \[A_{s}(u,x) \stackrel{{|u|\to\infty}}{{=}} O(1)+O(1/|u|^{\epsilon}) \tag{19}\] \[\psi_{s}(u,x) \stackrel{{|u|\to\infty}}{{=}} O(1/|u|^{1+\epsilon}) \tag{20}\] for some \(\epsilon>0\). Condition (19) is slightly more general than the typical scattering fall-off, which corresponds to \(\epsilon=1\)[23]. Condition (20) is stronger than a minimal requirement for convergence of (17) (for which it would be enough a fall-off faster than \(1/|u|^{1/2}\)). We require (20) to ensure finiteness of the asymptotic fermionic charge defined in section III. #### ii.1.2 Radiative PBs From the symplectic structure (17) one can obtain the elementary non-trivial Poisson brackets (PBs)5 Footnote 5: Our conventions are as follows. The PBs between two functions \(F\) and \(G\) is given by \(\{F,G\}:=X_{G}(F)=-X_{F}(G)=\Omega(X_{G},X_{F})\) where \(X_{F}\) is defined by the condition \(\Omega(\delta,X_{F})=\delta F\) (and similarly for \(X_{G}\)). If \(F\) and \(G\) are fermionic, there are additional signs that can be determined by requiring the grasmannian Leibnitz rule on PBs, namely \(\{F_{1}F_{2},G\}=F_{1}\{F_{2},G\}\pm\{F_{1},G\}F_{2}\) where the minus occurs if both \(F_{2}\) and \(G\) are fermionic. In particular the PBs between two fermionic functions is symmetric rather than antisymmetric. We refer to chapter 6 of [24] for further details on fermionic PBs. \[\{A_{s}(u,x),\dot{A}_{-s}(u^{\prime},x^{\prime})\}= \frac{1}{2}\delta(u-u^{\prime})\delta^{(2)}(x,x^{\prime}), \tag{21}\] \[\{\psi_{s}(u,x),\bar{\psi}_{s}(u^{\prime},x^{\prime})\}= -i\delta(u-u^{\prime})\delta^{(2)}(x,x^{\prime}).\] We recall however a well known subtlety with the gauge field PBs [6], which is the occurrence of a \(1/2\) discontinuity that can be expressed as \[\int_{-\infty}^{\infty}du\{\cdot,\dot{A}_{s}(u,x)\}=\frac{1}{2}\{\cdot,\int_{ -\infty}^{\infty}du\dot{A}_{s}(u,x)\}, \tag{22}\] where \(\{\cdot,F\}\equiv X_{F}\) is the Hamiltonian vector field of a functional \(F\) (see footnote 5). A fix to this problem was proposed in [6] via the isolation of the zero mode component of the gauge field and the introduction of boundary terms in the symplectic structure. For simplicity we will continue to work with the standard radiative phase space symplectic structure while keeping care when needed of the aforementioned subtlety. 
### Asymptotic Fock space To obtain the asymptotic Fock space of photons and massless electrons, we start by considering the Fourier transform of the fields with respect to the \(u\)-variable, \[\begin{split} A_{s}(u,x)=\int_{-\infty}^{\infty}\frac{d\omega}{2 \pi}\tilde{A}_{s}(\omega,x)e^{-i\omega u},\\ \psi_{s}(u,x)=\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\tilde{ \psi}_{s}(\omega,x)e^{-i\omega u},\\ \bar{\psi}_{s}(u,x)=\int_{-\infty}^{\infty}\frac{d\omega}{2\pi} \tilde{\bar{\psi}}_{s}(\omega,x)e^{i\omega u}.\end{split} \tag{23}\] The brackets (21) imply \[\begin{split}\{\tilde{A}_{s}(\omega,x),\tilde{A}_{-s}(\omega^{ \prime},x^{\prime})\}=&\qquad\frac{i\pi}{\omega^{\prime}}\delta( \omega+\omega^{\prime})\delta^{(2)}(x,x^{\prime}),\\ \{\tilde{\psi}_{s}(\omega,x),\tilde{\bar{\psi}}_{s}(\omega^{ \prime},x^{\prime})\}=&-i2\pi\delta(\omega-\omega^{\prime}) \delta^{(2)}(x,x^{\prime}).\end{split} \tag{24}\] Notice that the information of \(A_{s}(u,x)\) is contained in \(\tilde{A}_{s}(\omega,x),\omega>0\) since \(\tilde{A}_{s}(-\omega,x)=(\tilde{A}_{-s}(\omega,x))^{*}\) due to the reality of the gauge field. On the contrary, there is no relation between the positive and negative frequency components of \(\tilde{\psi}_{s}(\omega,x)\). An independent set of mode functions is then given by: \[\begin{split} a_{s}(\omega,x):=&\qquad\tilde{A}_{s} (\omega,x),\quad\omega>0\\ b_{s}(\omega,x):=&\qquad\tilde{\psi}_{s}(\omega,x), \quad\omega>0\\ c_{s}(\omega,x):=&\qquad\tilde{\bar{\psi}}_{-s}(- \omega,x),\quad\omega>0.\end{split} \tag{25}\] Upon quantization, these become the annihilation Fock operators of photons, electrons and positrons respectively.6 The (anti) commutation relations of these operators with their hermitian adjoints are dictated by \(i\) times the PBs (24), from which one finds Footnote 6: The normalization however is different from the standard momentum-space Fock operators, see appendix A. \[\begin{split}[a_{s}(\omega,x),a_{s}^{\dagger}(\omega^{\prime},x ^{\prime})]=&\frac{\pi}{\omega}\delta(\omega-\omega^{\prime}) \delta^{(2)}(x,x^{\prime}),\\ [b_{s}(\omega,x),b_{s}^{\dagger}(\omega^{\prime},x^{\prime})]=& \ [c_{s}(\omega,x),c_{s}^{\dagger}(\omega^{\prime},x^{\prime})]=2\pi \delta(\omega-\omega^{\prime})\delta^{(2)}(x,x^{\prime}).\end{split} \tag{26}\] It will be useful for later purposes to define angular-density operators for each kind of particle: \[\rho_{s}^{A}(x) := \int_{0}^{\infty}\frac{d\omega}{\pi}\omega a_{s}^{\dagger}(\omega,x )a_{s}(\omega,x) \tag{27}\] \[\rho_{s}^{\psi}(x) := \int_{0}^{\infty}\frac{d\omega}{2\pi}b_{s}^{\dagger}(\omega,x)b_{s }(\omega,x)\] (28) \[\rho_{s}^{\bar{\psi}}(x) := \int_{0}^{\infty}\frac{d\omega}{2\pi}c_{s}^{\dagger}(\omega,x)c_{ s}(\omega,x) \tag{29}\] as well as particle-number operators \[N_{s}^{X}:=\int d^{2}x\rho_{s}^{X}(x),\quad X=\{A,\psi,\bar{\psi}\}. \tag{30}\] We finally note the identity \[\int_{-\infty}^{\infty}du\bar{\psi}_{s}(u,x)\psi_{s}(u,x)=\rho_{s}^{\psi}(x)- \rho_{-s}^{\bar{\psi}}(x). \tag{31}\] ### Bosonic (asymptotic) symmetries In this section we review bosonic symmetries of tree-level massless QED at null infinity: Poincare, large \(U(1)\) gauge and axial rotations. We leave out of the discussion other bosonic symmetries that are more challenging to describe at null infinity: Those arising from sub\({}^{n}\)-leading soft photons7[25; 26] as well as 4-dimensional conformal symmetry. Footnote 7: See however appendix C for a preliminary incorporation of subleading soft photon symmetries. 
#### ii.5.1 Lorentz Infinitesimal Lorentz transformations (or more generally superrotations) near null infinity are parametrized by holomorphic vector fields \(Y(z)\partial_{z}\) (and their complex conjugate) by \[\xi_{Y}=Y(z)\partial_{z}+\frac{1}{2}Y^{\prime}(z)(u\partial_{u}-r\partial_{r}) +\cdots, \tag{32}\] They act on fields via standard Lie derivatives, which for spinors include an internal rotation (see e.g. [8]) \[\delta_{\xi}\Psi=\left(\xi^{\mu}\partial_{\mu}-\frac{1}{8}\nabla_{\mu}\xi_{ \nu}[\gamma^{\mu},\gamma^{\nu}]\right)\Psi. \tag{33}\] Evaluating (33) for \(\xi=\xi_{Y}\) on (14) one finds \[\delta_{\xi_{Y}}\Psi=\frac{1}{2^{1/4}r}\left(\begin{matrix}-\bar{z}\delta_{Y} \psi_{-}\\ \delta_{Y}\psi_{-}\\ \delta_{Y}\psi_{+}\\ z\delta_{Y}\psi_{+}\end{matrix}\right)+\cdots \tag{34}\] where \[\delta_{Y}\psi_{-} = \left(Y(z)\partial_{z}+\frac{1}{2}Y^{\prime}(z)+\frac{1}{2}Y^{ \prime}(z)u\partial_{u}\right)\psi_{-} \tag{35}\] \[\delta_{Y}\psi_{+} = \left(Y(z)\partial_{z}+Y^{\prime}(z)+\frac{1}{2}Y^{\prime}(z)u \partial_{u}\right)\psi_{+} \tag{36}\] Similarly, for infinitesimal Lorentz transformations \(\xi_{\bar{Y}}\) parametrized by \(\bar{Y}(\bar{z})\partial_{\bar{z}}\) one finds \[\delta_{\bar{Y}}\psi_{-} = \left(\bar{Y}(\bar{z})\partial_{\bar{z}}+\bar{Y}^{\prime}(\bar{z} )+\frac{1}{2}\bar{Y}^{\prime}(\bar{z})u\partial_{u}\right)\psi_{-} \tag{37}\] \[\delta_{\bar{Y}}\psi_{+} = \left(\bar{Y}(\bar{z})\partial_{\bar{z}}+\frac{1}{2}\bar{Y}^{ \prime}(\bar{z})+\frac{1}{2}\bar{Y}^{\prime}(\bar{z})u\partial_{u}\right)\psi_ {+}. \tag{38}\] From a 2d perspective, the above expressions determine the holomorphic/antiholomorphic conformal dimensions of the (\(u\)-independent part of) \(\psi_{\pm}\). In particular, one concludes the 2d spin (or equivalently the 4d helicity) of \(\psi_{\pm}\) is equal to \(\pm 1/2\). The analogous calculation on the gauge fields leads to \[\delta_{Y}A_{z} = \left(Y(z)\partial_{z}+Y^{\prime}(z)+\frac{1}{2}Y^{\prime}(z)u \partial_{u}\right)A_{z} \tag{39}\] \[\delta_{Y}A_{\bar{z}} = \left(Y(z)\partial_{z}+\frac{1}{2}Y^{\prime}(z)u\partial_{u} \right)A_{\bar{z}}\] (40) \[\delta_{\bar{Y}}A_{z} = \left(\bar{Y}(\bar{z})\partial_{\bar{z}}+\frac{1}{2}\bar{Y}^{ \prime}(\bar{z})u\partial_{u}\right)A_{z}\] (41) \[\delta_{\bar{Y}}A_{\bar{z}} = \left(\bar{Y}(\bar{z})\partial_{\bar{z}}+\bar{Y}^{\prime}(\bar{z })+\frac{1}{2}\bar{Y}^{\prime}(\bar{z})u\partial_{u}\right)A_{\bar{z}} \tag{42}\] from which one concludes the 2d spin/4d helicity of \(A_{z}\) is \(+1\) and that of \(A_{\bar{z}}\) is \(-1\). For holomorphic vector fields, the generator is given by \[J_{Y}=\sum_{s=\pm}\int_{\cal I}dud^{2}x\left(\dot{A}_{-s}\delta_{Y}A_{s}+i\bar{ \psi}_{s}\delta_{Y}\psi_{s}\right), \tag{44}\] with similar expression holding in the antiholomorphic case. These represent the total angular momentum of the system evaluated at null infinity. Translations Spacetime translations (or more generally supertranslations) take the asymptotic form near null infinity \[\xi_{f}=f(z,\bar{z})\partial_{u}+\cdots \tag{45}\] where the sphere function \(f\) for a translation \(a^{\mu}\) is \[f(z,\bar{z})=a_{\mu}q^{\mu}(z,\bar{z}) \tag{46}\] with \(q^{\mu}\) given in (9). The action on the asymptotic fields is obtained from the Lie derivative, leading to \[\delta_{f}\psi_{s}=f\dot{\psi}_{s},\quad\delta_{f}A_{s}=f\dot{A}_{s}. 
\tag{47}\] The phase space generator of the above transformation is then given by \[P_{f}=\sum_{s=\pm}\int_{\cal I}dud^{2}x\left(\dot{A}_{-s}\delta_{f}A_{s}+i\bar{ \psi}_{s}\delta_{f}\psi_{s}\right), \tag{48}\] and represents the total linear momentum of the system evaluated at null infinity. #### ii.2.3 Large \(U(1)\) gauge Gauge transformations (7) with asymptotic behavior \[\Lambda(r,u,x)\stackrel{{ r\to\infty}}{{=}}\lambda(x)+\cdots \tag{49}\] induce the following action on the fields at null infinity: \[\delta_{\lambda}A_{a}=\partial_{a}\lambda,\quad\delta_{\lambda}\psi_{s}=-ie \lambda\psi_{s}. \tag{50}\] The canonical generator for (50) is given by \[Q_{\lambda}=\int_{\cal I}dud^{2}x\lambda\big{(}-\partial^{a}\dot{A}_{a}+e( \bar{\psi}_{+}\psi_{+}+\bar{\psi}_{-}\psi_{-})\big{)}. \tag{51}\] By expressing the fermionic field in terms of Fock operators, the "hard" part of the charge can be written as \[Q_{\lambda}^{\rm hard}=e\int d^{2}x\lambda(\rho_{+}^{\psi}+\rho_{-}^{\psi}- \rho_{+}^{\bar{\psi}}-\rho_{-}^{\bar{\psi}}). \tag{52}\] For \(\lambda=1\), \(Q_{\lambda}\) reduces to the total electric charge as measured at null infinity: \[Q_{\lambda=1}=e(N_{+}^{\psi}+N_{-}^{\psi})-e(N_{+}^{\bar{\psi}}+N_{-}^{\bar{ \psi}}), \tag{53}\] where we recall that \(N_{\pm}^{X}\) is the number operator defined in (30). Axial rotations The axial symmetry (8) induces at null infinity the transformation \[\delta_{\rm A}\psi_{s}=-is\psi_{s}. \tag{54}\] The canonical generator is given by \[Q_{\rm A}=(N_{+}^{\psi}+N_{+}^{\bar{\psi}})-(N_{-}^{\psi}+N_{-}^{\bar{\psi}}) \tag{55}\] and counts the excess of positive helicity fermions over negative helicity fermions. ## III Soft electrons and fermionic asymptotic charges In this section we present a tree-level formula for soft electrons (and soft positrons) and compute the associated asymptotic charges. For simplicity we treat all particles as outgoing, and correspondingly derive the asymptotic charge at future null infinity. A detailed derivation of the soft electron theorem is given in appendix B. Consider a tree-level amplitude involving \(n\) hard particles and a soft electron of momentum \(p^{\mu}=\omega q^{\mu}(x)\), with \(\omega\to 0\) and \(q(x)\) as in (9). As in other instances of soft theorems, the dominant diagrams are those where the soft electron is attached to an external hard leg. There are two possibilities. Either the soft electron emerges from an external hard photon, leaving behind an internal electron line, or it emerges from an external hard positron, leaving behind an internal photon line: In both cases one obtains a result that is proportional to the \(n\)-point amplitude left behind the soft emission vertex. However, unlike the situation for soft photons, there is a change in the type of hard particle involved in the process.8 The proportionality factor in the two types of process coincide, and is trivial unless there is certain helicity matching at the vertex. Taking for concreteness a soft electron of positive helicity and calling \(p_{i}\) the momentum of the hard particle, the proportionality factor is given by (see appendix B) Footnote 8: A change in the type of hard particles also occurs for soft photinos [8] and for subleading soft photons in presence of non-minimally coupled matter [27; 28]. \[-e\frac{\bar{u}_{+}(p)\hbox to 0.0pt{/}{\hbox to 0.0pt{/}{\hbox to 0.0pt{/}{ \hbox to 0.0pt{/}{\hss{$+$}}}}}u_{+}(p_{i})}{2p\cdot p_{i}}=\frac{e}{ \sqrt{\omega\omega_{i}}(z-z_{i})}. 
\tag{56}\] Figure 1: The two types of diagrams contributing to a soft electron amplitude. One thus arrives at the (positive helicty) soft electron theorem \[\begin{split}\mathcal{A}_{n+1}(\{p_{i}\},\omega q_{+}^{\psi})) \stackrel{{\omega\to 0}}{{=}}&\frac{e}{\sqrt{\omega}}\sum_{i\in A _{+}}\frac{1}{\sqrt{\omega_{i}}(z-z_{i})}\mathcal{A}_{n}(\dots,p_{i}^{\ A} \to p_{i+}^{\ \psi},\dots)\\ +&\frac{e}{\sqrt{\omega}}\sum_{i\in\bar{\psi}_{-}} \frac{1}{\sqrt{\omega_{i}}(z-z_{i})}\mathcal{A}_{n}(\dots,p_{i}^{\ \bar{\psi}}\to p_{i-}^{\ A},\dots),\end{split} \tag{3.2}\] where we use the labels \(\psi,\bar{\psi}\) and \(A\) for electrons, positrons and photons respectively and particle helicities are displayed by \(\pm\) subscripts. The argument in \(\mathcal{A}_{n}\) indicates the \(n\)-point amplitude involves a change in the \(i\)-th hard particle type. Following what has been done for other soft theorems, we can interpret the above formula as a Ward identity of an asymptotic charge. Since the particle states in (3.2) are normalized according to the standard momentum-space Fock operators (see appendix A), it is simpler to first write the charge in terms of them. From (3.2) one can read-off the direction-dependent fermionic charge \[\frac{4\pi i}{\sqrt{2}}Q_{\psi_{+}}(x):=\lim_{\omega\to 0}\sqrt{ \omega}b_{+}^{standard}(\omega q(x))\\ -e\int\widetilde{dp^{\prime}}\frac{1}{\sqrt{\omega^{\prime}}(z-z^ {\prime})}\left(a_{+}^{standard\,\dagger}(p^{\prime})b_{+}^{standard}(p^{\prime })+\ c_{-}^{standard\,\dagger}(p^{\prime})a_{-}^{standard}(p^{\prime})\right). \tag{3.3}\] The overall normalization is chosen for later convenience and the label _standard_ is to distinguish the momentum-space Fock operators from the ones defined in section II.4. Next, we look to express the charge in terms of the asymptotic fields \(\psi_{s}(u,x)\) and \(A_{s}(u,x)\). To achieve this, we first rewrite (3.3) in terms of the asymptotic Fock operators of section II.4. These are related to the momentum-space operators by (see appendix A) \[a_{s}^{standard}=4\pi ia_{s},\quad b_{s}^{standard}=\frac{4\pi i}{\sqrt{2\omega }}b_{s},\quad c_{s}^{standard}=\frac{4\pi i}{\sqrt{2\omega}}c_{s}. \tag{3.4}\] Substituting (3.4) in (3.3) and using \(\widetilde{dp}^{\,\prime}=\frac{\omega^{\prime}}{2(2\pi)^{3}}d^{2}x^{\prime}d \omega^{\prime}\) we get \[Q_{\psi_{+}}(x)=\lim_{\omega\to 0}b_{+}(\omega,x)\\ +\frac{ie}{2\pi}\int d^{2}x^{\prime}\frac{1}{(z-z^{\prime})}\int _{0}^{\infty}\frac{d\omega^{\prime}}{2\pi}\left(a_{+}^{\dagger}(\omega^{\prime },x^{\prime})b_{+}(\omega^{\prime},x^{\prime})+\ c_{-}^{\dagger}(\omega^{\prime },x^{\prime})a_{-}(\omega^{\prime},x^{\prime})\right). \tag{3.5}\] We now Fourier transform from \(\omega\) to \(u\)-space using the expressions from section II.4. Let us discuss the two terms in (3.5) separately. Following the standard terminology, we refer to them as "soft" and "hard" charge respectively. The "soft" part of the charge is found to be given by \[Q_{\psi_{+}}^{soft}(x)\equiv\lim_{\omega\to 0}b_{+}(\omega,x)=\int_{-\infty}^{ \infty}du\psi_{+}(u,x). \tag{3.6}\] The hard term is a bit more involved. 
We start by rewriting it as \[Q_{\psi_{+}}^{hard}(x)=ie\partial_{\bar{z}}^{-1}\sigma_{\bar{z}+}(x), \tag{3.7}\] where \[\partial_{\bar{z}}^{-1}=\frac{1}{2\pi}\int d^{2}x^{\prime}\frac{1}{(z-z^{\prime})}, \tag{3.8}\] and \[\sigma_{\bar{z}+}(x):=\int_{0}^{\infty}\frac{d\omega^{\prime}}{2\pi}\left(a_{+} ^{\dagger}(\omega^{\prime},x^{\prime})b_{+}(\omega^{\prime},x^{\prime})+\ c_{-}^{ \dagger}(\omega^{\prime},x^{\prime})a_{-}(\omega^{\prime},x^{\prime})\right). \tag{3.9}\] From Eqs. (2.23) and (2.25) one finds (3.9) can be written as \[\sigma_{\bar{z}+}(x) = \int_{0}^{\infty}\frac{d\omega^{\prime}}{2\pi}\left(\tilde{A}_{ \bar{z}}(-\omega^{\prime},x^{\prime})\tilde{\psi}_{+}(\omega^{\prime},x^{ \prime})+\tilde{\psi}_{+}(-\omega^{\prime},x^{\prime})\tilde{A}_{\bar{z}}( \omega^{\prime},x^{\prime})\right) \tag{3.10}\] \[= \int_{-\infty}^{\infty}duA_{\bar{z}}(u,x^{\prime})\psi_{+}(u,x^{ \prime}). \tag{3.11}\] We finally combine the "soft" and "hard" charges by factoring out an inverse derivative in the former \[Q_{\psi_{+}}^{soft}(x)=\partial_{\bar{z}}^{-1}\int_{-\infty}^{\infty}du \partial_{\bar{z}}\psi_{+}(u,x). \tag{3.12}\] Comparing (3.12) with (3.7) and (3.11), we conclude the total charge can be written as \[Q_{\psi_{+}}(x)=\partial_{\bar{z}}^{-1}\int_{-\infty}^{\infty}duD_{\bar{z}} \psi_{+}(u,x), \tag{3.13}\] where \[D_{a}\psi_{s}\equiv(\partial_{a}+ieA_{a})\psi_{s} \tag{3.14}\] is the gauge covariant derivative at null infinity. Repeating the previous analysis for a negative helicity soft electron yields a charge of the form \[Q_{\psi_{-}}(x)=\partial_{z}^{-1}\int_{-\infty}^{\infty}duD_{z}\psi_{-}(u,x). \tag{3.15}\] Finally, soft positrons lead to charges that are the complex conjugates of (3.13) and (3.15). It is natural to combine all these direction-dependent charges into a single smeared asymptotic charge which we define as \[F_{\chi}:=i\int d^{2}xdu\left(\bar{\chi}_{+}^{\bar{z}}D_{\bar{z}}\psi_{+}+\bar {\chi}_{-}^{z}D_{z}\psi_{-}+\chi_{+}^{z}D_{z}\bar{\psi}_{+}+\chi_{-}^{\bar{z} }D_{\bar{z}}\bar{\psi}_{-}\right). \tag{3.16}\] where \(\chi_{+}^{z}(x)\) and \(\chi_{-}^{\bar{z}}(x)\) are the components of a spinor-vector smearing parameter \[\chi=(\chi_{+}^{z},\chi_{-}^{\bar{z}}), \tag{3.17}\] with complex conjugate \(\bar{\chi}=(\bar{\chi}_{+}^{\bar{z}},\bar{\chi}_{-}^{z})\). We take \(\chi\) to be grassmanian so that \(F_{\chi}\) is real/hermitian. On Fermionic Asymptotic Symmetries Even though conserved asymptotic charges imply the existence of asymptotic symmetries, the nature of the latter may be challenging to decipher (see for example [7; 8]). One reason for this difficulty is that asymptotic symmetries are spontaneously broken [3], while the charges obtained from soft theorems are evaluated on a single vacuum sector. To characterize the symmetry one needs to make manifest the vacuum manifold, thus going beyond standard Fock-space amplitudes [29; 30; 31; 32; 33]. At the classical level, this usually requires an extension of the radiative phase space, see e.g. [34; 35; 36; 37; 38; 39; 40; 41; 42]. In this section we take the first steps towards characterizing the asymptotic fermionic symmetry implied by the soft electron theorem. We will start by evaluating the charge action according to the radiative phase space brackets (2.21). We shall see this action does not respect the \(|u|\to\infty\) behavior of the asymptotic fields, thus indicating the need of a phase space extension. 
We leave for future work the identification of such extension as well as the related problem of fully characterizing the symmetry algebra. We will nevertheless be able to verify the commutator between fermionic and bosonic symmetries. In appendix C we shall further discuss a naive evaluation of the commutator between two fermionic symmetries. ### Symmetry action from radiative PBs Infinite dimensional phase spaces may present subtleties that are absent in finite dimensions. In particular, not all phase-space functionals are guaranteed to yield well-defined PB actions. As we shall see, this is the case for the fermionic charge \(F_{\chi}\). Let us for a moment ignore the aforementioned subtlety and consider the action obtained from the standard formula \[\delta_{\chi}=\{\cdot,F_{\chi}\}. \tag{4.1}\] From the elementary PBs (2.21) and the expression for \(F_{\chi}\) (3.16) one finds9 Footnote 9: We recall that \(D_{a}=\partial_{a}+ieA_{a}\) is the gauge covariant derivative at null infinity and \(\chi_{z-}=\chi_{-}^{\bar{z}}\,,\quad\chi_{\bar{z}+}=\chi_{+}^{z}\). The action on \(\bar{\psi}_{\pm}\) can be obtained from that of \(\psi_{\pm}\) by complex conjugation. To simplify expressions we have chosen to display \(\delta_{\chi}\dot{A}_{s}\) rather than \(\delta_{\chi}A_{s}\) (which is non-local in \(u\)). \[\begin{split}\delta_{\chi}\psi_{+}&=D_{z}\chi_{+} ^{z}\\ \delta_{\chi}\psi_{-}&=D_{\bar{z}}\bar{\chi}_{-}^{ \bar{z}}\\ \delta_{\chi}\dot{A}_{\bar{z}}&=\frac{e}{2}(\bar{ \chi}_{z+}\psi_{+}-\chi_{z-}\bar{\psi}_{-})\\ \delta_{\chi}\dot{A}_{\bar{z}}&=\frac{e}{2}(\bar{ \chi}_{\bar{z}-}\psi_{-}-\chi_{\bar{z}+}\bar{\psi}_{+}).\end{split} \tag{4.2}\] We first notice that \(\lim_{u\to\pm}\delta_{\chi}\psi_{s}\neq 0\), and thus the transformation does not preserve the condition \(\psi_{s}\stackrel{{|u|\to\infty}}{{\to}}0\) (2.20). This suggests a phase space extension that allows for non-trivial asymptotic values of \(\psi_{s}\) when \(u\to\pm\infty\). The transformation rule for \(\dot{A}_{s}\) (4.2) is compatible with the asymptotic behaviour of \(A_{s}\) given in (2.19) provided \(\psi_{s}\) decays to zero as in (2.20). The discussion from the previous paragraph however indicates that in an extended space where \(\lim_{u\to\pm\infty}\psi_{s}\neq 0\) we would need to allow for non-trivial \(|u|\to\infty\) values of \(\dot{A}_{s}\). This resembles the situation for the asymptotic charges obtained from subleading photons [7], whose action creates an \(O(u)\) term in \(A_{s}\). If one continues the previous considerations back and forth between \(\delta_{\chi}\psi_{s}\) and \(\delta_{\chi}A_{s}\), one is led to conclude that all powers of \(u\) should be allowed in the \(|u|\to\infty\) behaviour of the fields. We recall however that (4.2) is at best valid only around the trivial vacuum and cannot be trusted beyond linear order in \(\chi\). Higher order iterations of \(\delta_{\chi}\) may require the inclusion of terms that are absent in (4.2) and that could modify the previous conclusion. We leave for future work the elucidation of this non-linear structure.10 There are however checks to be made at first order in \(\chi\), namely the commutation of \(\delta_{\chi}\) with the bosonic symmetries reviewed in section II.5. We discuss them next. Footnote 10: A preliminary attempt go beyond linear order is presented in appendix C, where we evaluate the commutator between two fermionic variations (4.2). 
Although the result cannot be trusted due to the aforementioned limitations, it may still be of use in more complete treatments. ### Algebra relations with bosonic symmetries Let \(\delta_{B}\) be a bosonic symmetry action corresponding to a charge \(B\), i.e. \[\delta_{B}=\{\cdot,B\}. \tag{4.3}\] We restrict attention to the bosonic symmetries discussed in section II.5 so that \(\delta_{B}=\delta_{Y},\delta_{f},\delta_{\lambda},\delta_{\Lambda}\) for \(B=J_{Y},P_{f},Q_{\lambda},Q_{\Lambda}\) respectively. As we shall see, these symmetries have a natural action on the fermionic parameter, \[\chi\mapsto\chi+\delta_{B}\chi \tag{4.4}\] such that the commutator of variations is given by \[[\delta_{\chi},\delta_{B}]=\delta_{\delta_{B}\chi}. \tag{4.5}\] At the level of charges, this should imply the PB relations11 Footnote 11: In the absence of a central extension, as it turns out to be the case. \[\{F_{\chi},B\}=-F_{\delta_{B}\chi}. \tag{4.6}\] There are however various subtleties in the evaluation of PBs that make (4.6) challenging to verify directly. It is for this reason that we will instead focus on the relations \[\delta_{B}F_{\chi} = -F_{\delta_{B}\chi}, \tag{4.7}\] \[\delta_{\chi}B = F_{\delta_{B}\chi}. \tag{4.8}\] It turns out that (4.7) follows straightforwardly from the expression of \(F_{\chi}\) and (4.5). This should lead, via (4.3), to (4.6). Finally from (4.1) one would arrive at (4.8). There are however two obstructions to this logic chain. First, Eq. (4.3) does not hold for \(Q_{\lambda}\) if one uses the radiative PBs (2.21) (see the discussion following this equation). Since these were the brackets used in the definition of \(\delta_{\chi}\) (4.2), there will be a mismatch in (4.8) when \(B=Q_{\lambda}\). This problem should go away if \(\delta_{\chi}\) is constructed from improved PBs as the ones proposed in [6]. A second obstruction appears from the fact that the relation \[\delta F_{\chi}=\Omega(\delta,\delta_{\chi}) \tag{4.9}\] only holds for variations \(\delta\) that decay to zero faster than the assumed ones in order to compensate for the singular behavior of \(\delta_{\chi}\) at \(u=\pm\infty\). This leads to additional difficulties in verifying (4.8), for instance when \(B=J_{Y}\). A complete fix to this second obstruction would presumably require the inclusion of soft fermionic degrees of freedom in the symplectic structure. We now discuss in more detail the situation for each bosonic symmetry separately. #### iv.1.1 Lorentz The transformation properties of \(\chi\) under the Lorentz group follow directly from those of the elementary fields \(A_{s}\) and \(\psi_{\pm}\). By direct evaluation one readily obtains (4.5) and (4.7) with \[\delta_{(Y,\bar{Y})}\chi_{+}^{z} =\left(Y\partial_{z}+\bar{Y}\partial_{\bar{z}}+{\frac{1}{2}} \bar{Y}^{\prime}\right)\chi_{+}^{z} \tag{4.10}\] \[\delta_{(Y,\bar{Y})}\chi_{-}^{\bar{z}} =\left(Y\partial_{z}+\bar{Y}\partial_{\bar{z}}+{\frac{1}{2}} Y^{\prime}\right)\chi_{-}^{\bar{z}}\ \,. \tag{4.11}\] Thus, from a 2d perspective, \(\chi_{+}^{z}\) and \(\chi_{-}^{\bar{z}}\) have (anti-)holomorphic dimensions \((h,\bar{h})=(0,1/2)\) and \((1/2,0)\) respectively. Relation (4.8) is verified provided one discards boundary terms of the type \[\left[A_{\bar{z}}\delta_{\chi}\delta_{(Y,\bar{Y})}A_{z}\right]_{u=-\infty}^{u =\infty}, \tag{4.12}\] which are however non-trivial under the assumed fall-offs (2.19), (2.20). 
#### iv.1.2 Translations It is easy to verify that \(\delta_{\chi}\) commutes with translations, \[\left[\delta_{\chi},\delta_{f}\right]=0 \tag{4.13}\] and that \(\delta_{f}F_{\chi}=0\) (assuming the fall-offs (2.19), (2.20); otherwise there could be non-zero boundary terms). The relation \(\delta_{\chi}P_{f}=0\) follows with no caveats. #### iv.1.3 Large \(U(1)\) gauge As in the Lorentz case, the transformation properties of \(\chi\) follow from those of the elementary fields. One can easily verify (4.5) and (4.7) with \[\delta_{\lambda}\chi_{+}^{z} = -ie\lambda\chi_{+}^{z} \tag{4.14}\] \[\delta_{\lambda}\chi_{-}^{\bar{z}} = -ie\lambda\chi_{-}^{\bar{z}}. \tag{4.15}\] As anticipated earlier, Eq. (4.8) is not verified unless one addresses the \(1/2\) discontinuity in the radiative PBs (2.22). Let us describe how the issue arises in the present context. The RHS of (4.8) takes the form \[F_{\delta_{\lambda}\chi} = -e\int_{\cal I}\lambda\bar{\chi}_{+}^{\bar{z}}D_{\bar{z}}\psi_{+}+\cdots \tag{4.16}\] \[= -e\int_{\cal I}\partial_{\bar{z}}\lambda\bar{\chi}_{+}^{\bar{z}}\psi_{+}+e\int_{\cal I}\lambda D_{\bar{z}}\bar{\chi}_{+}^{\bar{z}}\psi_{+}+\cdots, \tag{4.17}\] where in the second line we integrated by parts and for simplicity we are only displaying what corresponds to the first term in \(F_{\chi}\) (3.16) (the others follow a similar pattern). The LHS of (4.8) can be written as \[\delta_{\chi}Q_{\lambda}=\delta_{\chi}Q_{\lambda}^{\rm hard}+\delta_{\chi}Q_{\lambda}^{\rm soft} \tag{4.18}\] with \[\delta_{\chi}Q_{\lambda}^{\rm hard} = \delta_{\chi}\int_{\cal I}e\lambda(\bar{\psi}_{+}\psi_{+}+\bar{\psi}_{-}\psi_{-}) \tag{4.19}\] \[= e\int_{\cal I}\lambda\delta_{\chi}\bar{\psi}_{+}\psi_{+}+\cdots \tag{4.20}\] \[= e\int_{\cal I}\lambda D_{\bar{z}}\bar{\chi}_{+}^{\bar{z}}\psi_{+}+\cdots \tag{4.21}\] and \[\delta_{\chi}Q_{\lambda}^{\rm soft} = \int_{\cal I}(\partial_{\bar{z}}\lambda\dot{A}_{z}+\partial_{z}\lambda\dot{A}_{\bar{z}}) \tag{4.22}\] \[= \int_{\cal I}\partial_{\bar{z}}\lambda\delta_{\chi}\dot{A}_{z}+\cdots \tag{4.23}\] \[= \frac{e}{2}\int_{\cal I}\partial_{\bar{z}}\lambda\bar{\chi}_{+}^{\bar{z}}\psi_{+}+\cdots, \tag{4.24}\] where we only displayed the terms that correspond to those shown in (4.17). Comparing the expressions one sees that whereas \(\delta_{\chi}Q_{\lambda}^{\rm hard}\) reproduces the first term in (4.17), \(\delta_{\chi}Q_{\lambda}^{\rm soft}\) fails to reproduce the second term by a factor of \(1/2\). The origin of this mismatch is the same as the one leading to the discontinuity discussed in Eq. (2.22) and so it would be fixed by the use of improved PBs [6]. #### iv.1.4 Axial rotations In this case, equations (4.7) and (4.8) are verified with no caveats, with \[\delta_{\rm A}\chi_{+}^{z} = -i\chi_{+}^{z} \tag{4.25}\] \[\delta_{\rm A}\chi_{-}^{\bar{z}} = i\chi_{-}^{\bar{z}}. \tag{4.26}\] ## V Outlook Over the past decade there has been a fruitful revival of the subject of asymptotic symmetries in (asymptotically) flat spacetimes, driven by the discovery of their connection with soft theorems and memory effects [3]. Whereas asymptotic symmetries were originally tied to gauge symmetries that are non-trivial at infinity, their relation to soft theorems led to a broader perspective.
Indeed, there is a growing list of soft theorems that admit an interpretation in terms of asymptotic symmetries with no obvious gauge origin.12 Here we have enlarged this list, by presenting an asymptotic symmetry associated to soft electrons in tree-level massless QED.13 Following what is by now a standard procedure, we identified the form of the asymptotic charge and initiated the study of its symmetry action. Footnote 12: What appears as a non-gauge asymptotic symmetry may sometimes be realized as large gauge, either by allowing divergent gauge transformations [43] or by performing a change of field variables [44]. It is however not clear whether such a reinterpretation is always possible. Footnote 13: Although we have set our discussion in the context of QED, it should admit a straightforward generalization to the case of non-abelian gauge fields as well as chiral fermions. To our knowledge, this is the first example of a fermionic asymptotic symmetry in a theory with no _standard_ supersymmetry. The existence of such a fermionic symmetry, however, suggests that tree-level massless QED possesses an _asymptotic_ supersymmetry algebra. Unfortunately the present work remains agnostic as to what such an algebra should be. Below we describe this and other shortcomings of our analysis as well as possible strategies to overcome them. We followed what may be referred to as a canonical approach to asymptotic symmetries, in which the classical phase space at null infinity provides the bridge between symmetries and charges. A difficulty with this approach is that it often requires an enlarged version of the radiative phase space, with no simple recipe to determine it.14 Whereas we showed that an enlargement is implied by soft electrons, we were not able to characterize it beyond linear level. This in turn prevented us from reliably evaluating the commutator between fermionic asymptotic symmetries. Footnote 14: Recent developments on so called corner symmetries [45; 46; 47; 48] may in fact yield such a recipe (at least in the case of large gauge symmetries). There is however a second approach in which symmetries are described in terms of 2d conserved currents of a dual CFT [49; 50; 51; 52]. There, the algebraic structure is read off from collinear factorization theorems [53; 54; 55; 56; 57; 58; 59] without the need of phase-space considerations.15 It would be very interesting to study the fermionic symmetries presented here from this perspective, as it could allow us to extract information on the fermionic symmetry algebra. Regardless of the approach, it has been found in the context of gauge and gravitational theories that completeness of the asymptotic symmetry algebra requires charges associated to subleading soft theorems of arbitrarily high order, see e.g. [60; 61]. It would be interesting to explore the existence of higher order soft electron theorems and their connection with the soft photon ones. A hint that a nontrivial interplay may occur is provided by a naive evaluation of the commutator between the fermionic variations presented here, leading to an expression that is reminiscent of the subleading soft photon charge action (see appendix C). A final pressing open problem regards the fate of the symmetries beyond tree-level. The structure of their modification or breaking due to loop corrections should be worth studying. In particular, it would be interesting to explore any possible connection with the chiral anomaly, whose consequences at null infinity were recently analyzed in [66].
## Acknowledgements We would like to thank Ivan Agullo, Alok Laddha, Guzman Hernandez, Pablo Pais and Michael Reisenberger for illuminating discussions. We especially thank Alok Laddha for his feedback and encouragement. AA acknowledges support from PEDECIBA and from CSIC grant I+D 583. MC acknowledges support from PEDECIBA and from ANII grant FCE-1-2019-1-155865. This research was supported in part by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Research, Innovation and Science. ## Appendix A Asymptotic vs. momentum-space Fock operators In this appendix we work out the relation between the Fock operators introduced in section II.4 and the standard momentum-space Fock operators. In particular we verify they yield equivalent (anti) commutation relations. We start with the momentum expansion of asymptotic free fields: \[\begin{split}\mathcal{A}_{\mu}(X)&=\sum_{s=\pm}\int\widetilde{dp}\,\left(a_{s}(p)\varepsilon_{\mu}^{s*}(p)e^{ip\cdot X}+a_{s}^{\dagger}(p)\varepsilon_{\mu}^{s}(p)e^{-ip\cdot X}\right)\\ \Psi(X)&=\sum_{s=\pm}\int\widetilde{dp}\,\left(b_{s}(p)u_{s}(p)e^{ip\cdot X}+c_{s}^{\dagger}(p)v_{s}(p)e^{-ip\cdot X}\right)\end{split} \tag{A1}\] where \(\widetilde{dp}\,=d^{3}p/((2\pi)^{3}2|p|)\) and the photon/electron polarization vectors are described below. To simplify notation we do not yet include additional labels to the Fock operators, but it should be kept in mind that they have different normalization from those introduced in section II.4. To study the null infinity limit of these expressions, we parametrize \(X^{\mu}\) in terms of \((r,u,x)\) as in section II and write the null momentum \(p^{\mu}\) as \[p^{\mu}=\omega q^{\mu}(x^{\prime}) \tag{A2}\] with \(q^{\mu}\) as in (2.9) with \(x^{\prime}=(z^{\prime},\bar{z}^{\prime})\). In this parametrization the momentum measure takes the form \[\widetilde{dp}\,=\frac{\omega}{2(2\pi)^{3}}d^{2}x^{\prime}d\omega\] (A3) and the plane-wave phase becomes \[p\cdot X=r\omega q(x)\cdot q(x^{\prime})+u\omega k\cdot q(x^{\prime})\] (A4) where \[q(x)\cdot q(x^{\prime})=-|z-z^{\prime}|^{2}.\] (A5) In the \(r\to\infty\) limit, the \(d^{2}x^{\prime}\) integral can be evaluated by a saddle point analysis. In the simplest case of a positive-frequency scalar one gets \[\int\widetilde{dp}\,\phi(p)e^{ip\cdot X}\ \stackrel{{r\to\infty}}{{=}}\ \frac{1}{4\pi ir}\int_{0}^{\infty}\frac{d\omega}{2\pi}\phi(\omega q(x))e^{-i\omega u}+O(1/r^{2}).\] (A6) Before extending this formula to the photon and fermion fields, let us recall the form of the corresponding momentum wave-functions. The polarization vector for a photon with momentum \(p=\omega q^{\mu}(x)\) can be taken to be [6] \[\varepsilon^{+\mu}=\partial_{z}q^{\mu}=\frac{1}{\sqrt{2}}(\bar{z},1,-i,-\bar{z}),\quad\varepsilon^{-\mu}=\partial_{\bar{z}}q^{\mu}=\frac{1}{\sqrt{2}}(z,1,i,-z)\] (A7) which satisfy \[\varepsilon^{\pm}\cdot q=0,\quad\varepsilon^{\pm}\cdot\varepsilon^{\pm}=0,\quad\varepsilon^{+}\cdot\varepsilon^{-}=1,\quad\varepsilon^{+}_{\mu}\varepsilon^{-}_{\nu}+\varepsilon^{-}_{\mu}\varepsilon^{+}_{\nu}+\frac{q_{\mu}k_{\nu}+k_{\mu}q_{\nu}}{q\cdot k}=\eta_{\mu\nu}\] (A8) with \(k^{\mu}\) a null vector with non-zero dot product with \(q^{\mu}\), as for instance the one given in (2.10). For the fermion we have (see e.g.
[8; 67]) \[u_{+}=v_{-}=2^{1/4}\sqrt{\omega}\begin{pmatrix}0\\ 0\\ 1\\ z\end{pmatrix},\quad u_{-}=v_{+}=2^{1/4}\sqrt{\omega}\begin{pmatrix}-\bar{z}\\ 1\\ 0\\ 0\end{pmatrix}\] (A9) which satisfy \[\not{q}u_{\pm}=0,\quad\bar{u}_{s}\gamma^{\mu}u_{s^{\prime}}=2p^{\mu}\delta_{ss^{\prime}},\quad\sum_{s}u_{s}\bar{u}_{s}=-\not{p},\] (A10) and similarly for \(v_{\pm}\). Expressions (A7), (A9) can be obtained from those used in the spinor-helicity formalism [67] with the choice of spinor \(2^{1/4}\sqrt{\omega}(1,z)\) for the momentum \(p^{\mu}=\omega q^{\mu}(z,\bar{z})\).16 Footnote 16: The photon polarization in (A7) corresponds to a choice of “reference spinor” of the form \((1,\infty)\). Using the above expressions, one finds \[\mathcal{A}_{z}=\frac{1}{4\pi i}\int_{0}^{\infty}\frac{d\omega}{2\pi}\left(a_{+}e^{-i\omega u}-a_{-}^{\dagger}e^{i\omega u}\right)+O(1/r)\] (A11) \[\Psi=\frac{1}{4\pi ir}\int_{0}^{\infty}\frac{d\omega}{2\pi}2^{1/4}\sqrt{\omega}\left(\begin{pmatrix}0\\ 0\\ 1\\ z\end{pmatrix}(b_{+}e^{-i\omega u}-c_{-}^{\dagger}e^{i\omega u})+\begin{pmatrix}-\bar{z}\\ 1\\ 0\\ 0\end{pmatrix}(b_{-}e^{-i\omega u}-c_{+}^{\dagger}e^{i\omega u})\right). \tag{A12}\] Comparing with (2.23), (2.25) we see that the momentum space Fock operators are related to the Fock operators of section II.4 by \[a_{s}=\frac{1}{4\pi i}a_{s}^{standard},\quad b_{s}=\frac{\sqrt{2\omega}}{4\pi i}b_{s}^{standard},\quad c_{s}=\frac{\sqrt{2\omega}}{4\pi i}c_{s}^{standard}, \tag{A13}\] where we have added the label "standard" to the operators of this section to distinguish them from those of section II.4. The commutation relations (2.26) then imply the standard momentum-space (anti) commutators \[[d_{s}^{standard}(p),d_{s^{\prime}}^{standard\,\dagger}(p^{\prime})]=2|p|(2\pi)^{3}\delta_{ss^{\prime}}\delta^{(3)}(p-p^{\prime}),\quad d=\{a,b,c\}, \tag{A14}\] where we used that \(\delta^{(3)}(p-p^{\prime})=\frac{1}{\omega^{2}}\delta(\omega-\omega^{\prime})\delta^{(2)}(x,x^{\prime})\) for \(p=\omega q(x)\) and \(p^{\prime}=\omega q(x^{\prime})\). ## Appendix B Soft photons and soft electrons In this appendix we derive the tree-level formulas for the emission of soft particles. We treat all particles as outgoing. Since the formulas are dictated by the QED 3-point interaction, it will be useful to start the discussion by recalling the structure of general 3-point amplitudes. Consider then a process involving an electron, positron and photon of momenta \[\begin{array}{l}p_{e}=\omega_{e}q(z_{e},\bar{z}_{e})\\ p_{\bar{e}}=\omega_{\bar{e}}q(z_{\bar{e}},\bar{z}_{\bar{e}})\\ p_{\gamma}=\omega_{\gamma}q(z_{\gamma},\bar{z}_{\gamma})\end{array} \tag{B1}\] and helicities \(s,\bar{s}\) and \(h\). The (momentum conserving delta function stripped) amplitude is given by the QED 3-point vertex, \[A_{3}((p_{e},s),(p_{\bar{e}},\bar{s}),(p_{\gamma},h))=-ie\,\varepsilon_{\mu}^{h}(p_{\gamma})\,\bar{u}_{s}(p_{e})\gamma^{\mu}v_{\bar{s}}(p_{\bar{e}}). \tag{B2}\] We have not yet imposed momentum conservation. This typically leads to trivial amplitudes (unless one allows for complex momenta [67]). An exception however is when one of the particles goes soft with the remaining two becoming coincident:17 Footnote 17: Since we are working with real momenta, \(z\to z_{0}\) implies \(\bar{z}\to\bar{z}_{0}\).
\[\begin{split}\omega_{\gamma}\to 0&\implies z_{\bar{e}}\to z_{e},\quad\omega_{\bar{e}}\to\omega_{e}\\ \omega_{e}\to 0&\implies z_{\bar{e}}\to z_{\gamma},\quad\omega_{\bar{e}}\to\omega_{\gamma}\\ \omega_{\bar{e}}\to 0&\implies z_{e}\to z_{\gamma},\quad\omega_{e}\to\omega_{\gamma}.\end{split} \tag{B3}\] Notice that in the soft electron/positron limit, two out of the four amplitudes in (B2) vanish. We next describe the behavior of general amplitudes when one particle goes soft. As is well known, the leading contribution comes from diagrams where the soft particle is attached to an external hard particle. In the following we discuss the relevant diagrams for each type of soft particle. We consider processes involving \(n\) hard particles and 1 soft particle. ### Soft photon The amplitude for a process in which a photon is attached to an external electron is given by \[\begin{split}&-ie\varepsilon_{\mu}^{h}(p_{\gamma})\bar{u}_{s}(p_{e})\gamma^{\mu}i\frac{\not{p}_{e}+\not{p}_{\gamma}}{(p_{e}+p_{\gamma})^{2}}\mathbf{A}_{n-1}(p_{e}+p_{\gamma})\\ &\stackrel{{\omega_{\gamma}\to 0}}{{=}}e\varepsilon_{\mu}^{h}(p_{\gamma})\bar{u}_{s}(p_{e})\gamma^{\mu}\frac{\not{p}_{e}}{2p_{e}\cdot p_{\gamma}}\mathbf{A}_{n-1}(p_{e})+O(\omega_{\gamma}^{0})\\ &=-e\frac{\varepsilon^{h}(p_{\gamma})\cdot p_{e}}{p_{\gamma}\cdot p_{e}}A_{n}+O(\omega_{\gamma}^{0})\end{split} \tag{B4}\] where \(\mathbf{A}_{n-1}\) denotes the spinor-valued off-shell amplitude corresponding to the remaining \(n-1\) hard particles. In going to the second line we kept the leading terms in the \(\omega_{\gamma}\to 0\) limit, whereas in going to the third line we used the identities in (A10) and the fact that the fermion helicities in the vertex must match for the amplitude to be nonzero. \(A_{n}\) represents the amplitude for the \(n\) hard particles. One can similarly compute the amplitude for the case where the photon is attached to an outgoing positron, resulting in an expression as the above modulo an overall sign. Summing over all external particles one obtains Weinberg's soft photon theorem [68].
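The manipulations above rely repeatedly on (A5) and the identities (A8). These can be checked symbolically once an explicit \(q^{\mu}\) is chosen; since (2.9) is not reproduced in this excerpt, the parametrization used below is an assumption, fixed by requiring \(\partial_{z}q^{\mu}\) to reproduce (A7) and \(q\cdot q^{\prime}\) to reproduce (A5), with metric \(\eta=\mathrm{diag}(-1,1,1,1)\).

```python
# Sketch: symbolic check of (A5), (A7) and (A8). The explicit q^mu below is
# an assumption consistent with (A5) and (A7); zbar is treated as an
# independent symbol (the real-momentum slice is z* = zbar).
import sympy as sp

z, zb, w, wb = sp.symbols('z zbar w wbar')
eta = sp.diag(-1, 1, 1, 1)

def q(z, zb):
    return sp.Matrix([1 + z*zb, z + zb, -sp.I*(z - zb), 1 - z*zb]) / sp.sqrt(2)

def dot(a, b):
    return sp.expand((a.T * eta * b)[0])

ep = q(z, zb).diff(z)    # epsilon^{+ mu} = d_z q^mu,    cf. (A7)
em = q(z, zb).diff(zb)   # epsilon^{- mu} = d_zbar q^mu, cf. (A7)

assert dot(q(z, zb), q(z, zb)) == 0                           # q is null
assert dot(q(z, zb), q(w, wb)) == -sp.expand((z - w)*(zb - wb))   # (A5)
assert dot(ep, q(z, zb)) == 0 and dot(em, q(z, zb)) == 0          # (A8)
assert dot(ep, ep) == 0 and dot(em, em) == 0                      # (A8)
assert dot(ep, em) == 1                                           # (A8)
print("(A5), (A7), (A8) identities verified")
```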
### Soft electron We now consider processes in which an electron goes soft. From the structure of the QED vertex, we see that the electron can be attached either to an external photon or to an external positron (recall we are taking all particles to be outgoing). The contribution when the electron is attached to an external photon is obtained from the same 3-point vertex, with the soft electron and the hard photon joined by an internal fermion propagator, in analogy with the soft-photon computation (B4).
## Appendix C Naive commutator of fermionic charges and subleading soft photons In this appendix we present the asymptotic symmetry associated to Low's subleading photon theorem [69], as first introduced by Lysov, Pasterski and Strominger (LPS) [7]. We next evaluate the commutator of two fermionic symmetries and compare it with the LPS symmetry. Even though the two expressions do not coincide, there are certain similarities that suggest the LPS symmetry may feature in an eventual (super)symmetry algebra involving the fermionic generators. We hope the calculation presented here will be of use in a more complete treatment. ### Asymptotic symmetries for subleading soft photon Soft photons obey a factorization theorem at subleading order [69] that can be understood in terms of LPS asymptotic symmetries [7]. The corresponding charges are parametrized by vector fields \(Y^{a}\) on the sphere and take the form, \[Q_{Y}^{\rm LPS}=\int_{\cal I}dud^{2}x\left(-2u\dot{A}_{\bar{z}}\partial_{z}^{2}Y^{z}+u\partial_{z}Y^{z}j_{u}+Y^{z}j_{z}\right)+c.c., \tag{C1}\] where \(j_{u}\) and \(j_{a}\) are the leading components of the asymptotic current, \[{\cal J}_{u} = j_{u}/r^{2}+\cdots \tag{C2}\] \[{\cal J}_{a} = j_{a}/r^{2}+\cdots. \tag{C3}\] In order to compute the symmetry action on the spinor field, we need to express \(j_{u}\) and \(j_{a}\) in terms of \(\psi_{s}\). This is achieved by substituting the asymptotic expansion of the Dirac spinor (2.14) in the current expression (3). For the \(u\) component one gets \[j_{u}=-e\sum_{s}\bar{\psi}_{s}\psi_{s}. \tag{C4}\] The evaluation of \(j_{a}\) requires one order further in the asymptotic expansion of the Dirac field, \[\Psi=\frac{1}{r}\overset{0}{\Psi}+\frac{1}{r^{2}}\overset{1}{\Psi}+\cdots \tag{C5}\] where \(\overset{0}{\Psi}\) is the leading term displayed in (2.14). Substituting (C5) in the current expression one finds \[{\cal J}_{a} = \frac{e}{r}\left(\overset{0}{\bar{\Psi}}+\frac{1}{r}\overset{1}{\bar{\Psi}}+\cdots\right)\partial_{a}\not{q}\left(\overset{0}{\Psi}+\frac{1}{r}\overset{1}{\Psi}+\cdots\right)=\frac{e}{r^{2}}\left(\overset{0}{\bar{\Psi}}\partial_{a}\not{q}\overset{1}{\Psi}+\overset{1}{\bar{\Psi}}\partial_{a}\not{q}\overset{0}{\Psi}\right)+\cdots \tag{C6}\] where we used that \[\overset{0}{\bar{\Psi}}\partial_{a}\not{q}\overset{0}{\Psi}=0. \tag{C7}\]
We finally need to express \(\overset{1}{\Psi}\) in terms of \(\psi_{s}\). Imposing the asymptotic free-field equation on \(\Psi\) one finds18 Footnote 18: Whereas this suffices for tree-level considerations, a more complete treatment should deal with the full non-linear asymptotic field equations. This in turn may modify the assumed fall-offs (C5) (and hence (C2), (C3)). See [21] for related discussions. \[0=\not{\partial}\Psi=-\frac{1}{r}\not{q}\partial_{u}\overset{0}{\Psi}+\frac{1}{r^{2}}\left(-\not{q}\partial_{u}\overset{1}{\Psi}+\not{k}\overset{0}{\Psi}+\partial_{a}\not{q}\partial^{a}\overset{0}{\Psi}\right)+\cdots \tag{C8}\] The vanishing of the \(1/r\) term leads to (2.14), which we write as19 Footnote 19: In (2.14) and (C12) we are setting to zero a possible \(u\)-independent spinor that may not be in the kernel of \(\not{q}\). It may be that such a spinor is needed in an extended version of the radiative phase space. \[\overset{0}{\Psi}=\psi_{+}{\bf u}_{+}+\psi_{-}{\bf u}_{-} \tag{C9}\] where \[{\bf u}_{+}=\frac{1}{2^{1/4}}\begin{pmatrix}0\\ 0\\ 1\\ z\end{pmatrix},\quad{\bf u}_{-}=\frac{1}{2^{1/4}}\begin{pmatrix}-\bar{z}\\ 1\\ 0\\ 0\end{pmatrix}, \tag{C10}\] span the kernel of the matrix \(\not{q}\). \(\overset{1}{\Psi}\) is to be determined from the vanishing of the \(1/r^{2}\) term in (C8). Due to the non-invertibility of \(\not{q}\), it is not immediately obvious this equation can be solved. However, using that \(k^{\mu}=\frac{1}{2}\partial^{a}\partial_{a}q^{\mu}\), the equation can be brought into the form \[0=-\not{q}\left(\partial_{u}\overset{1}{\Psi}+\frac{1}{2}\partial^{a}\partial_{a}\overset{0}{\Psi}\right)+\frac{1}{2}\partial^{a}\partial_{a}(\not{q}\overset{0}{\Psi}). \tag{C11}\] Since the last term vanishes due to (C9), the equation fixes \(\overset{1}{\Psi}\) modulo elements in the kernel of \(\not{q}\):20 Footnote 20: The first term in (C12) is what one gets from solving \(\Box\Psi=0\). The apparent indeterminacy in \(\overset{1}{\Psi}\) gets fixed upon requiring integrability on the equation for \(\overset{2}{\Psi}\). However, we do not need such terms for the present discussion. \[\overset{1}{\Psi} = -\frac{1}{2}\partial_{u}^{-1}\partial^{a}\partial_{a}\overset{0}{\Psi}+\ker(\not{q})=-\sum_{s}\partial_{u}^{-1}\partial_{a}\psi_{s}\partial^{a}{\bf u}_{s}+\ker(\not{q}). \tag{C12}\] Substituting in (C6) we get \[j_{z} = -e\left(\bar{\psi}_{-}\partial_{z}\partial_{u}^{-1}\psi_{-}+\partial_{z}\partial_{u}^{-1}\bar{\psi}_{+}\psi_{+}\right), \tag{C13}\] \[j_{\bar{z}} = -e\left(\bar{\psi}_{+}\partial_{\bar{z}}\partial_{u}^{-1}\psi_{+}+\partial_{\bar{z}}\partial_{u}^{-1}\bar{\psi}_{-}\psi_{-}\right), \tag{C14}\] where we used that \(\bar{\bf u}_{s}\partial_{a}\not{q}{\bf u}_{s^{\prime}}=0\) (which ensures \(j_{a}\) is independent of the \(\ker(\not{q})\) indeterminacy in \(\overset{1}{\Psi}\)), together with the fact that the only nonzero components of \(\bar{\bf u}_{s}\partial_{a}\not{q}\partial_{b}{\bf u}_{s^{\prime}}\) are \[\bar{\bf u}_{+}\partial_{\bar{z}}\not{q}\partial_{z}{\bf u}_{+}=\bar{\bf u}_{-}\partial_{z}\not{q}\partial_{\bar{z}}{\bf u}_{-}=1. \tag{C15}\]
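The matrix identities just used — \(\not{q}\,{\bf u}_{\pm}=0\), \(\bar{\bf u}_{s}\partial_{a}\not{q}\,{\bf u}_{s^{\prime}}=0\) and (C15) — can be verified with explicit matrices. The sketch below adopts conventions that are assumptions of this sketch (not fixed by the excerpt): Weyl-basis gamma matrices, metric \(\mathrm{diag}(-1,1,1,1)\), \(q^{\mu}=(1+z\bar{z},\,z+\bar{z},\,-i(z-\bar{z}),\,1-z\bar{z})/\sqrt{2}\), and Dirac adjoint \(\bar{\bf u}={\bf u}^{\dagger}\gamma^{0}\); with these choices all the stated identities come out as written.

```python
# Sketch (assumed conventions, see text): verify qslash u_pm = 0,
# ubar_s d_a(qslash) u_s' = 0, and the two nonzero entries in (C15).
import sympy as sp

z, zb = sp.symbols('z zbar')
I2, O2 = sp.eye(2), sp.zeros(2, 2)
sig = [sp.Matrix([[0, 1], [1, 0]]), sp.Matrix([[0, -sp.I], [sp.I, 0]]),
       sp.Matrix([[1, 0], [0, -1]])]
gam = [sp.Matrix(sp.BlockMatrix([[O2, I2], [I2, O2]]))] + \
      [sp.Matrix(sp.BlockMatrix([[O2, s], [-s, O2]])) for s in sig]
eta = sp.diag(-1, 1, 1, 1)
q = sp.Matrix([1 + z*zb, z + zb, -sp.I*(z - zb), 1 - z*zb]) / sp.sqrt(2)

qslash = sp.zeros(4, 4)
for mu in range(4):
    qslash += (eta * q)[mu] * gam[mu]          # gamma^mu q_mu

u = {+1: sp.Matrix([0, 0, 1, z]) / sp.root(2, 4),    # u_+ of (C10)
     -1: sp.Matrix([-zb, 1, 0, 0]) / sp.root(2, 4)}  # u_- of (C10)
# entries of u_pm carry no explicit i, so conjugation just swaps z <-> zbar
conj = lambda M: M.subs({z: zb, zb: z}, simultaneous=True)
ubar = {s: conj(u[s]).T * gam[0] for s in (1, -1)}   # u^dagger gamma^0

for s in (1, -1):                                    # u_pm span ker(qslash)
    assert sp.simplify(qslash * u[s]) == sp.zeros(4, 1)
for s in (1, -1):                                    # ubar_s d_a(qslash) u_s' = 0
    for s2 in (1, -1):
        for a in (z, zb):
            assert sp.simplify(ubar[s] * qslash.diff(a) * u[s2]) == sp.zeros(1, 1)
for s, a, b in [(1, zb, z), (-1, z, zb)]:            # the two entries in (C15)
    assert sp.simplify(ubar[s] * qslash.diff(a) * u[s].diff(b)) == sp.ones(1, 1)
print("(C10) and (C15) matrix identities verified")
```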
Defining \(\delta^{\rm LPS}_{Y}=\{\cdot,Q^{\rm LPS}_{Y}\}\) with the PBs (2.21) one obtains \[\begin{split}\delta^{\rm LPS}_{Y}\psi_{+}&=ie\left(Y^{a}\partial_{a}+u\partial_{a}Y^{a}\partial_{u}+\partial_{z}Y^{z}\right)\partial_{u}^{-1}\psi_{+},\\ \delta^{\rm LPS}_{Y}\psi_{-}&=ie\left(Y^{a}\partial_{a}+u\partial_{a}Y^{a}\partial_{u}+\partial_{\bar{z}}Y^{\bar{z}}\right)\partial_{u}^{-1}\psi_{-},\\ \delta^{\rm LPS}_{Y}\dot{A}_{z}&=-\partial_{z}^{2}Y^{z}.\end{split} \tag{C16}\] ### Naive commutator of two fermionic generators In this section we evaluate the (anti)commutator between two fermionic variations \(\delta_{\chi}\) defined in (4.2). As discussed there, one should not take this result too seriously, since the expressions do not incorporate "corner" fermionic degrees of freedom that are presumably needed to have a well defined phase space action. To facilitate the computation, let us introduce the notation \[\chi_{+}=\chi_{z-},\quad\chi_{-}=\chi_{\bar{z}+} \tag{C17}\] so that (4.2) can be written more compactly as \[\begin{split}\delta_{\chi}\psi_{s}&=D_{s}\chi_{-s},\\ \delta_{\chi}\dot{A}_{s}&=\frac{e}{2}\left(\bar{\chi}_{-s}\psi_{s}-\chi_{s}\bar{\psi}_{-s}\right).\end{split} \tag{C18}\] We focus on the anticommutator \([\delta_{\chi},\delta_{\chi}]=2\delta_{\chi}\delta_{\chi}\), since the general case can be obtained from this basic one. Acting twice with (C18) one finds \[\begin{split}[\delta_{\chi},\delta_{\chi}]\psi_{+}&=-ie^{2}\left(\bar{\chi}_{-}\chi_{-}\partial_{u}^{-1}\psi_{+}-\chi_{+}\chi_{-}\partial_{u}^{-1}\bar{\psi}_{-}\right)\\ [\delta_{\chi},\delta_{\chi}]\dot{A}_{z}&=-e\left(D_{z}\bar{\chi}_{+}\chi_{+}+\bar{\chi}_{-}D_{z}\chi_{-}\right)\end{split} \tag{C19}\] (the corresponding expressions for \(\psi_{-}\) and \(\dot{A}_{\bar{z}}\) can be obtained from (C19) by interchanging \(+\leftrightarrow-\) and \(z\leftrightarrow\bar{z}\)). The result bears a certain resemblance to \(\delta^{\rm LPS}_{Y}\) (C16) if \[Y^{z}\sim e\,\partial_{z}^{-1}\bar{\chi}_{s}\chi_{s}. \tag{C20}\]
2304.08170
Factorization number and subgroup commutativity degree via spectral invariants
The factorization number $F_2(G)$ of a finite group $G$ is the number of all possible factorizations of $G=HK$ as product of its subgroups $H$ and $K$, while the subgroup commutativity degree $\mathrm{sd}(G)$ of $G$ is the probability of finding two commuting subgroups in $G$ at random. It is known that $\mathrm{sd}(G)$ can be expressed in terms of $F_2(G)$. Denoting by $\mathrm{L}(G)$ the subgroups lattice of $G$, the non--permutability graph of subgroups $\Gamma_{\mathrm{L}(G)}$ of $G$ is the graph with vertices in $\mathrm{L}(G) \setminus \mathfrak{C}_{\mathrm{L}(G)}(\mathrm{L}(G))$, where $\mathfrak{C}_{\mathrm{L}(G)}(\mathrm{L}(G))$ is the smallest sublattice of $\mathrm{L}(G)$ containing all permutable subgroups of $G$, and edges obtained by joining two vertices $X,Y$ such that $XY\neq YX$. The spectral properties of $\Gamma_{\mathrm{L}(G)}$ have been recently investigated in connection with $F_2(G)$ and $\mathrm{sd}(G)$. Here we show a new combinatorial formula, which allows us to express $F_2(G)$, and so $\mathrm{sd}(G)$, in terms of adjacency and Laplacian matrices of $\Gamma_{\mathrm{L}(G)}$.
Seid Kassaw Muhie, Daniele Ettore Otera, Francesco G. Russo
2023-04-17T11:34:30Z
http://arxiv.org/abs/2304.08170v1
# Factorization number and subgroup commutativity degree via spectral invariants ###### Abstract. The factorization number \(F_{2}(G)\) of a finite group \(G\) is the number of all possible factorizations of \(G=HK\) as product of its subgroups \(H\) and \(K\), while the subgroup commutativity degree \(\operatorname{sd}(G)\) of \(G\) is the probability of finding two commuting subgroups in \(G\) at random. It is known that \(\operatorname{sd}(G)\) can be expressed in terms of \(F_{2}(G)\). Denoting by \(\operatorname{L}(G)\) the subgroups lattice of \(G\), the non-permutability graph of subgroups \(\Gamma_{\operatorname{L}(G)}\) of \(G\) is the graph with vertices in \(\operatorname{L}(G)\setminus\mathfrak{C}_{\operatorname{L}(G)}(\operatorname{L}(G))\), where \(\mathfrak{C}_{\operatorname{L}(G)}(\operatorname{L}(G))\) is the smallest sublattice of \(\operatorname{L}(G)\) containing all permutable subgroups of \(G\), and edges obtained by joining two vertices \(X,Y\) such that \(XY\neq YX\). The spectral properties of \(\Gamma_{\operatorname{L}(G)}\) have been recently investigated in connection with \(F_{2}(G)\) and \(\operatorname{sd}(G)\). Here we show a new combinatorial formula, which allows us to express \(F_{2}(G)\), and so \(\operatorname{sd}(G)\), in terms of adjacency and Laplacian matrices of \(\Gamma_{\operatorname{L}(G)}\). Key words and phrases: Subgroup commutativity degree; factorization number; Laplacian matrix; spectrum; non-permutability graph of subgroups _Mathematics Subject Classification (2020)_: Primary: 20D60, 05C25, 05C07; Secondary: 05C15, 20K27 The adjacency matrix of \(\Gamma_{\mathrm{L}(G)}\) is the square matrix \[A(\Gamma_{\mathrm{L}(G)})=\left(a_{X,Y}\right)_{X,Y\in V(\Gamma_{\mathrm{L}(G)})},\quad\text{ where }\ a_{X,Y}=\left\{\begin{array}{rl}1,&\text{if }(X,Y)\in E(\Gamma_{\mathrm{L}(G)}),\\ 0,&\text{if }(X,Y)\not\in E(\Gamma_{\mathrm{L}(G)}).\end{array}\right. \tag{1.6}\] Note that the degree of a vertex \(X\) in (1.1) is defined by \[\deg(X)=\sum_{Y\in V(\Gamma_{\mathrm{L}(G)})}a_{X,Y}. \tag{1.7}\] Since \(\Gamma_{\mathrm{L}(G)}\) is an undirected graph without loops, the Laplacian matrix of \(\Gamma_{\mathrm{L}(G)}\) is the matrix \[L(\Gamma_{\mathrm{L}(G)})=D-A(\Gamma_{\mathrm{L}(G)}), \tag{1.8}\] where \(D=\mathrm{diag}(\deg(X_{i}))\), for all \(X_{i}\in V(\Gamma_{\mathrm{L}(G)})\) and \(i=1,2,\cdots,m=|V(\Gamma_{\mathrm{L}(G)})|\). These are common notions, which are usually considered in spectral graph theory, see [4, 5]. On the other hand, we are also interested in the so-called _subgroup commutativity degree_ of \(G\), studied in [1, 22, 29]. This is the probability that two subgroups of \(G\) commute, namely \[\mathrm{sd}(G)=\frac{|\{(X,Y)\in\mathrm{L}(G)\times\mathrm{L}(G)\ |\ XY=YX\}|}{|\mathrm{L}(G)|^{2}}. \tag{1.9}\] If any two randomly chosen subgroups of \(G\) commute, then \(G\) is called _quasihamiltonian_, and these groups were classified long ago by Iwasawa (see [25]). Abelian groups are of course quasihamiltonian, but the quaternion group \(Q_{8}\) of order \(8\) is a nonabelian group with \(\mathrm{sd}(Q_{8})=1\). Evidently \(G\) is quasihamiltonian if and only if \(\mathrm{sd}(G)=1\), therefore (1.9) is a measure of how far a group is from being quasihamiltonian.
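Both quantities above are straightforward to check by brute force on small groups. The following plain-Python sketch assumes nothing beyond the definitions (its only structural assumption is that, for groups of this size, every subgroup is generated by at most two elements); it reproduces \(\mathrm{sd}(A_{4})=16/25\) and \(F_{2}(A_{4})=27\), values that will reappear in Section 3.

```python
# Sketch: brute-force sd(G) per (1.9) and the factorization count for A4,
# the even permutations of {0,1,2,3}; a permutation is a tuple p with p[i]
# the image of i. Subgroups are generated from all pairs of elements.
from itertools import combinations
from fractions import Fraction

def compose(p, q):
    return tuple(p[i] for i in q)

def closure(gens, e):
    H = {e} | set(gens)
    while True:
        new = {compose(a, b) for a in H for b in H} - H
        if not new:
            return frozenset(H)
        H |= new

e = (0, 1, 2, 3)
A4 = closure({(1, 2, 0, 3), (1, 0, 3, 2)}, e)   # <(012), (01)(23)>
assert len(A4) == 12

subgroups = {closure(c, e) for r in (0, 1, 2) for c in combinations(A4, r)}
assert len(subgroups) == 10                      # |L(A4)| = 10

def setprod(H, K):
    return {compose(h, k) for h in H for k in K}

pairs = [(H, K) for H in subgroups for K in subgroups]
sd = Fraction(sum(setprod(H, K) == setprod(K, H) for H, K in pairs),
              len(subgroups) ** 2)
F2 = sum(setprod(H, K) == set(A4) for H, K in pairs)
print(sd, F2)    # -> 16/25 and 27
```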
It will be useful to introduce the following sets \[\mathcal{H}(G)=\{H\in\mathrm{L}(G)\ |\ \mathrm{sd}(H)\neq 1\}\ \text{ and }\ \mathcal{K}(G)=\{K\in\mathrm{L}(G)\ |\ \mathrm{sd}(K)=1\} \tag{1.10}\] which clearly determine a disjoint union of the form \[\mathrm{L}(G)=\mathcal{H}(G)\cup\mathcal{K}(G). \tag{1.11}\] Note that permutable subgroups are subnormal, while normal subgroups are of course permutable, see [25]. The combinatorial formulas, which were found in [19, Theorem 1.3, Proposition 3.2, Corollary 3.3], illustrate important relations between (1.6), (1.8) and (1.9). For instance, if \[\mathrm{spec}(A(\Gamma_{\mathrm{L}(G)}))=\{\lambda_{1},\lambda_{2},\cdots,\lambda_{m}\}\ \text{ and }\ \mathrm{spec}(L(\Gamma_{\mathrm{L}(G)}))=\{\sigma_{1},\sigma_{2},\cdots,\sigma_{m}\} \tag{1.12}\] are the spectra of the adjacency and the Laplacian matrix respectively, then [19, (3.6)] shows that for groups with \(\mathrm{sd}(G)\neq 1\) \[\mathrm{sd}(G)=1-\frac{1}{|\mathrm{L}(G)|^{2}}\sum_{i=1}^{m}\lambda_{i}^{2}=1-\frac{1}{|\mathrm{L}(G)|^{2}}\sum_{i=1}^{m}\sigma_{i}. \tag{1.13}\] Another important quantity which is associated to a group \(G\) is the _factorization number_ \[F_{2}(G)=|\{(H,K)\in\mathrm{L}(G)\times\mathrm{L}(G)\ |\ G=HK\}|; \tag{1.14}\] this denotes the number of all possible factorizations of \(G\) as a product of two subgroups \(H\) and \(K\). In fact we say that a group \(G\) has _factorization_ \(HK\) if there are two subgroups \(H\) and \(K\) of \(G\) such that \(G=HK\) (see [15, 24]). We also mention from [25, §1.1] that _an interval_ of \(\mathrm{L}(G)\) is the set \[[K/H]=\{Z\in\mathrm{L}(G)\ |\ H\leq Z\leq K\}, \tag{1.15}\] where \(H\leq K\). Note that \([K/H]\) is a sublattice of \(\mathrm{L}(G)\). From [21] the Mobius function \(\mu:\mathrm{L}(G)\times\mathrm{L}(G)\to\mathbb{Z}\) is recursively defined by: \[\sum_{Z\in[K/H]}\mu(H,Z)=\left\{\begin{array}{ll}1,&\quad H=K,\\ 0,&\quad\text{otherwise}.\end{array}\right. \tag{1.16}\] In particular, the Mobius number of \(G\) is \(\mu(G)=\mu(1,G)\), considering \([G/1]=\mathrm{L}(G)\). Our main result is the following: **Theorem 1.1**.: _Let \(G\) be a group with \(\mathrm{sd}(G)\neq 1\). Then_ \[F_{2}(G)=\Big(\sum_{K\in\mathcal{K}(G)}\ |\mathrm{L}(K)|^{2}\ \mu(K,G)\Big)+\Big(\sum_{H\in\mathcal{H}(G)}\Big(\ |\mathrm{L}(H)|^{2}\ -\sum_{i=1}^{m}\sigma_{i}\ \Big)\mu(H,G)\Big), \tag{1.17}\] _where \(m=|V(\Gamma_{\mathrm{L}(H)})|\) and \(\{\sigma_{1},\sigma_{2},\cdots,\sigma_{m}\}=\mathrm{spec}(L(\Gamma_{\mathrm{L}(H)}))\). In particular,_ \[\mathrm{sd}(G)=\frac{1}{|\mathrm{L}(G)|^{2}}\Big(\sum_{S\in\mathrm{L}(G)}\sum_{W\in\mathcal{K}(S)}|\mathrm{L}(W)|^{2}\ \mu(W,S)+\sum_{S\in\mathrm{L}(G)}\sum_{U\in\mathcal{H}(S)}\Big(\ |\mathrm{L}(U)|^{2}\ -\sum_{j=1}^{k}\tau_{j}\ \Big)\mu(U,S)\Big), \tag{1.18}\] _where \(k=|V(\Gamma_{\mathrm{L}(U)})|\) and \(\{\tau_{1},\tau_{2},\cdots,\tau_{k}\}=\mathrm{spec}(L(\Gamma_{\mathrm{L}(U)}))\)._ We shall mention that the theory of the subgroup commutativity degree has been recently discussed in [16, 17, 22, 23, 24, 29], but only in [18, 19] in connection with notions of spectral graph theory along the lines of [4, 5]. Therefore Theorem 1.1 belongs to the line of research of [18, 19] and explores new connections with the theory of the factorization number in [15, 23, 24]. Section 2 collects information of general nature on the references which are pertinent to the topic, but also some classical results on the partitions of groups.
Section 3 contains the proof of Theorem 1.1 along with some applications. ## 2. Groups with partitions, factorization number and subgroup commutativity degree In order to count the number of edges of the non-permutability graph of subgroups of a group \(G\), combinatorial formulas were found in [18, Lemma 2.10, Theorem 3.1] involving the subgroup commutativity degree. We report some results from [18, 19] below: **Lemma 2.1** (See [19], Lemma 2.5).: _For a group \(G\) we have_ \[2\ |E(\Gamma_{\mathrm{L}(G)})|=|\mathrm{L}(G)|^{2}\ (1-\mathrm{sd}(G)). \tag{2.1}\] This formula shows that we can obtain the number of edges in \(\Gamma_{\mathrm{L}(G)}\) if we know \(\mathrm{sd}(G)\), and vice-versa. Moreover [19, Proposition 3.2] shows that \(\mathrm{sd}(G)\) can be rewritten in terms of spectral invariants of \(\Gamma_{\mathrm{L}(G)}\). **Lemma 2.2** (See [19], Theorem 1.2).: _Let \(G\) be a group with \(\mathrm{sd}(G)\neq 1\). Then \(\mathrm{sd}(G)\) is determined by the spectrum of \(A(\Gamma_{\mathrm{L}(G)})\). In particular,_ \[\mathrm{sd}(G)=1-\frac{1}{|\mathrm{L}(G)|^{2}}\sum_{X,Y\in V(\Gamma_{\mathrm{L}(G)})}a_{X,Y}. \tag{2.2}\] The above formula allows us to match an approach of spectral nature with another of combinatorial nature (see [1, 30, 16, 23]), since \(\operatorname{sd}(G)\) may be obtained in terms of \(F_{2}(G)\) by the formula \[\operatorname{sd}(G)=\frac{1}{|\mathrm{L}(G)|^{2}}\sum_{H\in\mathrm{L}(G)}F_{2}(H). \tag{2.3}\] In fact (2.3) shows that the subgroup commutativity degree can be reduced to the computation of the factorization number. This has led to important numerical evaluations of \(\operatorname{sd}(G)\) via \(F_{2}(H)\), because it was found that \(F_{2}(H)\) may be expressed for several families of groups via Gaussian trinomial integers. Consequently, we may connect the spectral invariants of \(\Gamma_{\mathrm{L}(G)}\) to \(F_{2}(G)\) as indicated below. **Corollary 2.3** (See [19], Lemma 2.6).: _For a group \(G\) we have_ \[2\ |E(\Gamma_{\mathrm{L}(G)})|=|\mathrm{L}(G)|^{2}\ -\ \sum_{H\in\mathrm{L}(G)}F_{2}(H). \tag{2.4}\] Now we report a few notions which are classical in the area of the theory of partitions of groups, referring mostly to [3, 9, 10, 11, 32]. **Definition 2.4** (See [10], Definition, §7.1).: Given a prime \(p\) and a group \(G\), \[H_{p}(G)=\langle g\in G\mid g^{p}\neq 1\rangle \tag{2.5}\] is the _Hughes subgroup_ of \(G\). From Definition 2.4, \(H_{p}(G)\) turns out to be the smallest subgroup of \(G\) outside of which all elements of \(G\) have order \(p\). Of course, if \(G\) has \(\exp(G)=p\), then \(H_{p}(G)=1\). Moreover \(H_{p}(G)\) is a characteristic subgroup of \(G\). The reader can refer to [10, Chapter 7] for more information on Hughes subgroups and their role in the theory of groups with nontrivial partitions. **Definition 2.5** (See [32], p.575).: A group \(G\) is said to be a group of _Hughes-Thompson type_ if it is not a \(p\)-group and \(H_{p}(G)\neq G\) for some prime \(p\). It can be shown that groups as per Definition 2.5 have \(H_{p}(G)\) nilpotent with \(|G:H_{p}(G)|=p\), see [9]. Omitting details of the definitions, we refer to [14, Definition 8.1, Kapitel V, §8] for the notion of _Frobenius group_, and to [14, Bemerkungen 10.15, 10.17, Kapitel II, §10] for the notion of _Suzuki group_ \(\mathrm{Sz}(2^{2n+1})\).
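Before turning to the classification, note that both the Mobius function (1.16) and the factorization number (1.14) are directly computable on small subgroup lattices. The following plain-Python sketch (subgroups as frozensets of permutation tuples) implements the recursion (1.16) on \(\mathrm{L}(S_{3})\) and returns \(\mu(1,S_{3})=3\), together with the brute-force value \(F_{2}(S_{3})=17\).

```python
# Sketch: the recursion (1.16) on L(S3), plus a brute-force count of (1.14).
from itertools import combinations

def compose(p, q):
    return tuple(p[i] for i in q)

def closure(gens, e):
    H = {e} | set(gens)
    while True:
        new = {compose(a, b) for a in H for b in H} - H
        if not new:
            return frozenset(H)
        H |= new

e = (0, 1, 2)
S3 = closure({(1, 0, 2), (1, 2, 0)}, e)       # <(01), (012)>
L = {closure(c, e) for r in (0, 1, 2) for c in combinations(S3, r)}
assert len(L) == 6                            # 1, three C2's, C3, S3

def mu(H, K, memo={}):
    """Moebius function of (1.16): sum of mu(H,Z) over H <= Z <= K is delta."""
    if H == K:
        return 1
    if (H, K) not in memo:
        memo[(H, K)] = -sum(mu(H, Z) for Z in L if H <= Z and Z < K)
    return memo[(H, K)]

triv = frozenset({e})
F2 = sum({compose(h, k) for h in H for k in K} == set(S3)
         for H in L for K in L)
print(mu(triv, S3), F2)                       # -> 3 and 17
```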
Originally, Baer, Kegel and Kontorovich [3, 9, 11, 32] classified groups with partitions, but the result below is due to Farrokhi: **Theorem 2.6** (See [8], Classification Theorem, pp.119-120).: _Let \(G\) be a group with a nontrivial partition. Then \(G\) is isomorphic to exactly one of the following groups_ 1. \(S_{4}\)_;_ 2. _a_ \(p\)_-group with_ \(H_{p}(G)\neq G\)_;_ 3. _a group of Hughes-Thompson type;_ 4. _a Frobenius group;_ 5. \(\mathrm{PSL}(2,p^{n})\) _for_ \(p^{n}\geq 4\)_;_ 6. \(\mathrm{PGL}(2,p^{n})\) _for_ \(p^{n}\geq 5\) _an odd prime power;_ 7. \(\mathrm{Sz}(2^{2n+1})\)_._ We recalled Theorem 2.6 here, because the subgroup commutativity degree has been computed for most of the groups with nontrivial partitions. Let's see this in more detail. For instance, Farrokhi and Saeedi [23, 24] completely determined the factorization number of groups in Theorem 2.6 (i), (v) and (vi). **Proposition 2.7** (See [24], Theorem 2.4).: _The projective special linear group \(\mathrm{PSL}(2,p^{n})\) has_ \[F_{2}(\mathrm{PSL}(2,p^{n}))=\left\{\begin{array}{ll}2|\mathrm{L}(\mathrm{PSL}(2,p^{n}))|+2p^{n}(p^{2n}-1)-1&if\ \ p=2\ and\ n>1,\\ \\ 2|\mathrm{L}(\mathrm{PSL}(2,p^{n}))|+p^{n}(p^{2n}-1)-1&if\ \ p>2,\ n>1,\ and\ (p^{n}-1)/2\\ &is\ odd,\ but\ p^{n}\neq 3,7,11,19,23,59,\\ \\ 2|\mathrm{L}(\mathrm{PSL}(2,p^{n}))|-1&if\ \ p>2,\ n>1,\ and\ (p^{n}-1)/2\\ &is\ even,\ but\ p^{n}\neq 5,9,29.\end{array}\right.\] _In the other cases,_ \[F_{2}(\mathrm{PSL}(2,p^{n}))=17,27,237,1141,2033,4935,17223,48261,68799,780695\] _if \(p^{n}=2,3,5,7,9,11,19,23,29,59\), respectively._ Of course, one would like to evaluate numerically \(|\mathrm{L}(\mathrm{PSL}(2,p^{n}))|\) in Proposition 2.7 and this can be done in different ways. For instance, Shareshian [27] computed the Mobius function (1.16) for \(\mathrm{PSL}(2,p^{n})\) and this helps to find \(|\mathrm{L}(\mathrm{PSL}(2,p^{n}))|\). Another method is due to Dickson: we may list all the subgroups of \(\mathrm{PSL}(2,p^{n})\) and count them. Historically this was the first method to investigate \(|\mathrm{L}(\mathrm{PSL}(2,p^{n}))|\). **Proposition 2.8** (Dickson's Theorem, see [14], Hauptsatz 8.27, Kapitel II, §8).: _The subgroups of \(\mathrm{PSL}(2,p^{n})\) are the following:_ (i). _\(p^{n}(p^{n}\pm 1)/2\) cyclic subgroups \(C_{d}\) of order \(d\), where \(d\) is a divisor of \((p^{n}\pm 1)/2\);_ (ii). _\(p^{n}(p^{2n}-1)/(4d)\) dihedral subgroups \(D_{2d}\) of order \(2d\), where \(d\) is a divisor of \((p^{n}\pm 1)/2\) and \(d>2\), and \(p^{n}(p^{2n}-1)/24\) dihedral subgroups \(D_{4}\);_ (iii). _\(p^{n}(p^{2n}-1)/24\) alternating subgroups \(A_{4}\);_ (iv). _\(p^{n}(p^{2n}-1)/24\) symmetric subgroups \(S_{4}\) when \(p^{n}\equiv 7\mod 8\);_ (v). _\(p^{n}(p^{2n}-1)/60\) alternating subgroups \(A_{5}\) when \(p^{n}\equiv\pm 1\mod 10\);_ (vi). _\(p^{n}(p^{2n}-1)/(p^{m}(p^{2m}-1))\) subgroups \(\mathrm{PSL}(2,p^{m})\), where \(m\) is a divisor of \(n\);_ (vii). _The elementary abelian group \(C_{p}^{m}\) for \(m\leq n\);_ (viii). _\(C_{p}^{m}\rtimes C_{d}\), where \(d\) divides both \((p^{n}-1)/2\) and \(p^{m}-1\)._ A result, which is similar to Proposition 2.7, is available for projective general linear groups. **Proposition 2.9** (See [24], Theorem 2.5).: _For any \(p>2\) let \(M\) be the unique subgroup of \(G=\mathrm{PGL}(2,p^{n})\) isomorphic to \(\mathrm{PSL}(2,p^{n})\)._
_If \(p^{n}>29\), then_ \[F_{2}(G)=\left\{\begin{array}{ll}3p^{n}(p^{2n}-1)+4|L(G)|-2|L(M)|-3&if\ n\ even\ or\ p\equiv 1\pmod{4},\\ \\ 4p^{n}(p^{2n}-1)+4|L(G)|-2|L(M)|-3,&if\ n\ odd\ and\ p\equiv 3\pmod{4}.\end{array}\right.\] _In the other cases,_ \[F_{2}(G)=177,1103,3083,4919,15549,14529,31093,58429,111567,99527,144297,192349\] _if \(p^{n}=3,5,7,9,11,13,17,19,23,25,27,29\), respectively._ Essentially, we may compute the factorization number for all the groups which are mentioned in Theorem 2.6, referring to methods of combinatorics and number theory in [1, 2, 23, 24], but let's focus only on \(\mathrm{PSL}(2,p^{n})\) and \(\mathrm{PGL}(2,p^{n})\), in order to show significant applications of the spectral invariants which we associated to \(\Gamma_{\mathrm{L}(G)}\). From Propositions 2.7 and 2.9, a precise computation of the factorization number should involve a numerical evaluation of the cardinalities of the subgroups lattices. There are details again in [23, 24] in this sense and the main idea is to introduce the Mobius function (1.16), as originally made by Hall [13]. The case of \(p\)-groups has long been known: **Lemma 2.10** (See [12]).: _In a \(p\)-group \(G\) of order \(p^{n}\) we have \(\mu(G)=0\), unless \(G\) is elementary abelian, in which case we have \(\mu(G)=(-1)^{n}p^{\binom{n}{2}}\)._ In the case of a symmetric group, \(\mu(1,S_{n})\) was computed by Shareshian [26] and Pahlings [20]. **Proposition 2.11** (See [26], Theorems 1.6, 1.8, 1.10).: (i). _Let \(p\) be a prime. Then \(\mu(1,S_{p})=(-1)^{p-1}\frac{p!}{2}\)._ (ii). \(\mu(1,S_{n})=\left\{\begin{array}{ll}-n!,&\mbox{ if $n-1$ is a prime $p$ with $p\equiv 3\pmod 4$},\\ \\ \frac{n!}{2},&\mbox{ if $n=22$},\\ \\ \frac{-n!}{2},&otherwise,\end{array}\right.\)__ (iii). _Let \(n=2^{\alpha}\) for an integer \(\alpha\geq 1\). Then \(\mu(1,S_{n})=-n!\)._ In addition to symmetric groups, Shareshian [27] computed \(\mu(1,G)\) also for projective general linear groups, projective special linear groups and for Suzuki groups, see [26, 27]. ## 3. Proof of the main theorem and some applications Our main result connects the factorization number of a group with the spectrum of the Laplacian matrix via the Mobius function. Proof of Theorem 1.1.: In a group \(G\) we have always that \[F_{2}(G)=\sum_{T\in\operatorname{L}(G)}\operatorname{sd}(T)\ |\operatorname{L}(T)|^{2}\ \mu(T,G). \tag{3.1}\] This is just an application of the Mobius Inversion Formula to (2.3). Note from [18] that \(\Gamma_{\operatorname{L}(G)}\) is a null graph whenever \(G\) is quasihamiltonian. Then, in what follows, we shall assume that \(G\) is not quasihamiltonian and \(K\) is an arbitrary subgroup of \(G\) with \(\operatorname{sd}(K)=1\). Consequently, \(\Gamma_{\operatorname{L}(K)}\) is the null graph. Similarly, we assume \(H\) to be an arbitrary subgroup of \(G\) with \(\operatorname{sd}(H)\neq 1\). Consequently, \(\Gamma_{\operatorname{L}(H)}\) exists and is different from the null graph. From Lemma 2.2, we have for \(m_{T}=|V(\Gamma_{\operatorname{L}(T)})|\) \[\operatorname{sd}(T)=1-\frac{1}{\left|\operatorname{L}(T)\right|^{2}}\sum_{i=1}^{m_{T}}\sigma_{i} \tag{3.2}\] and so we can use (3.1), obtaining \[F_{2}(G)=\sum_{T\in\operatorname{L}(G)}\Big(\ |\operatorname{L}(T)|^{2}\ -\sum_{i=1}^{m_{T}}\sigma_{i}\ \Big)\mu(T,G). \tag{3.3}\] But if \(T\in\mathcal{K}(G)\) in (1.11), then \(\Gamma_{\operatorname{L}(T)}\) is the null graph and so we may assume each \(\sigma_{i}=0\) with respect to \(L(\Gamma_{\operatorname{L}(T)})\).
Hence we get \[F_{2}(G)=\sum_{K\in\mathcal{K}(G)}\Big(\ |\operatorname{L}(K)|^{2}\ -\sum_{i=1}^{m_{K}}\sigma_{i}\ \Big)\mu(K,G)+\sum_{H\in\mathcal{H}(G)}\Big(\ |\operatorname{L}(H)|^{2}\ -\sum_{i=1}^{m_{H}}\sigma_{i}\ \Big)\mu(H,G) \tag{3.4}\] \[=\sum_{K\in\mathcal{K}(G)}\Big(\ |{\rm L}(K)|^{2}\mu(K,G)\Big)+\sum_{H\in\mathcal{H}(G)}\Big(\ |{\rm L}(H)|^{2}\ -\sum_{i=1}^{m_{H}}\sigma_{i}\ \Big)\mu(H,G),\] where \(m_{H}=m=|V(\Gamma_{{\rm L}(H)})|\) as claimed. From (2.3) and (3.4), now we consider an arbitrary \(S\in{\rm L}(G)\) and a corresponding partition \({\rm L}(S)=\mathcal{H}(S)\cup\mathcal{K}(S)\), as made for \(G\) in (1.11). We get \[|{\rm L}(G)|^{2}\ {\rm sd}(G)=\sum_{S\in{\rm L}(G)}F_{2}(S) \tag{3.5}\] \[=\sum_{S\in{\rm L}(G)}\Big(\sum_{W\in\mathcal{K}(S)}\ |{\rm L}(W)|^{2}\ \mu(W,S)+\sum_{U\in\mathcal{H}(S)}\Big(\ |{\rm L}(U)|^{2}\ -\sum_{j=1}^{k}\tau_{j}\ \Big)\mu(U,S)\Big)\] \[=\sum_{S\in{\rm L}(G)}\sum_{W\in\mathcal{K}(S)}\ |{\rm L}(W)|^{2}\ \mu(W,S)+\sum_{S\in{\rm L}(G)}\sum_{U\in\mathcal{H}(S)}\Big(\ |{\rm L}(U)|^{2}\ -\sum_{j=1}^{k}\tau_{j}\ \Big)\mu(U,S)\] in correspondence with \(\{\tau_{1},\tau_{2},\cdots,\tau_{k}\}={\rm spec}(L(\Gamma_{{\rm L}(U)}))\). The result follows. Of course, we may repeat the proof of Theorem 1.1, replacing (3.2) with the first equation in (1.13) and involving \({\rm spec}(A(\Gamma_{{\rm L}(G)}))\) instead of \({\rm spec}(L(\Gamma_{{\rm L}(G)}))\). **Corollary 3.1**.: _Let \(G\) be a group with \(\mathrm{sd}(G)\neq 1\). Then_ \[F_{2}(G)=\Big(\sum_{K\in\mathcal{K}(G)}\ |{\rm L}(K)|^{2}\ \mu(K,G)\Big)+\Big(\sum_{H\in\mathcal{H}(G)}\Big(\ |{\rm L}(H)|^{2}\ -\sum_{i=1}^{m}\lambda_{i}^{2}\ \Big)\mu(H,G)\Big), \tag{3.6}\] _where \(m=|V(\Gamma_{{\rm L}(H)})|\) and \(\{\lambda_{1},\lambda_{2},\cdots,\lambda_{m}\}={\rm spec}(A(\Gamma_{{\rm L}(H)}))\). In particular,_ \[{\rm sd}(G)=\frac{1}{|{\rm L}(G)|^{2}}\Big(\sum_{S\in{\rm L}(G)}\sum_{W\in\mathcal{K}(S)}\ |{\rm L}(W)|^{2}\ \mu(W,S)+\sum_{S\in{\rm L}(G)}\sum_{U\in\mathcal{H}(S)}\ \Big(\ |{\rm L}(U)|^{2}\ -\sum_{j=1}^{k}\rho_{j}^{2}\ \Big)\mu(U,S)\Big), \tag{3.7}\] _where \(k=|V(\Gamma_{{\rm L}(U)})|\) and \(\{\rho_{1},\rho_{2},\cdots,\rho_{k}\}={\rm spec}(A(\Gamma_{{\rm L}(U)}))\)._ We present a few applications of Theorem 1.1, but some relevant comments should be made first. **Remark 3.2**.: Suppose we want to compute \(F_{2}(G)\) for \(G={\rm PSL}(2,p^{n})\). We may proceed as below: (1). Use Proposition 2.7 and compute \(|{\rm L}(G)|\) applying Proposition 2.8. (2). Apply (1.17) of Theorem 1.1, but in order to do this we should previously: (a). Determine \(\Gamma_{{\rm L}(H)}\) and \({\rm spec}(L(\Gamma_{{\rm L}(H)}))\) in (1.17); (b). Find the Mobius numbers \(\mu(H,G)\) and \(\mu(K,G)\) in (1.17); (c). Find \(|{\rm L}(H)|\) and \(|{\rm L}(K)|\) in (1.17). The method (1) has been introduced in [24, Lemma 3.2, Corollary 3.3]. The method (2) is presented here for the first time and is apparently harder than (1), but software packages such as GAP [31] and NewGraph [28] are available which can assist with the steps (2a), (2b) and (2c). Therefore it is very efficient. We sketch similar techniques for the corresponding subgroup commutativity degrees. **Remark 3.3**.: Suppose we want to compute \(\mathrm{sd}(G)\) for \(G={\rm PSL}(2,p^{n})\). We may proceed as below: (I). Combine Propositions 2.7 and 2.8 for the computation of \(F_{2}(H)\), where \(H\in{\rm L}(G)\), with the formula (2.3). (II). Apply (1.18) of Theorem 1.1, but in order to do this we should previously: (a).
Determine \(\Gamma_{\mathrm{L}(U)}\), \(L(\Gamma_{\mathrm{L}(U)})\) and \(\mathrm{spec}(L(\Gamma_{\mathrm{L}(U)}))\) in (1.18); (b). Find the Mobius numbers \(\mu(W,S)\) and \(\mu(U,S)\) in (1.18); (c). Find \(|\mathrm{L}(U)|\) and \(|\mathrm{L}(W)|\) in (1.18). (III). Apply (1.13), after computing \(|\mathrm{L}(G)|\) and \(\mathrm{spec}(L(\Gamma_{\mathrm{L}(G)}))\). The method (I) has been followed in [24, Theorem 3.4]. The method (II) is presented here for the first time. The method (III) has been introduced in [19]. The difference between (II) and (III) is subtle: for small groups we prefer of course (III), but for large groups with big \(\mathcal{K}(S)\) in (1.18) and small \(\mathcal{H}(S)\) (or vice versa) (II) quickly gives a qualitative evaluation of \(\mathrm{sd}(G)\). For instance, a _minimal nonabelian group_ \(M\) is a group which is nonabelian but all of whose proper subgroups are abelian. In this situation, one has \(\mathcal{K}(M)=\mathrm{L}(M)\setminus\{M\}\) and \(\mathcal{H}(M)=\{M\}\) from the definitions. Then (II) is more convenient than (III) here. Note that minimal nonabelian groups were classified by Redei [14, Aufgabe 14, Kapitel III, §5]. The following examples illustrate Theorem 1.1 in the spirit of Remarks 3.2 and 3.3. **Example 3.4**.: The symmetric group \(S_{4}\) is presented by \(S_{4}=\langle a,b,c\mid a^{2}=b^{3}=c^{4}=abc=1\rangle\), where \(a=(12)\), \(b=(123)\) and \(c=(1234)\). It is well known that the set of all normal subgroups forms a sublattice of the subgroups lattice of a given group (see [25]). In other words, the set \(\mathrm{N}(S_{4})\) of all normal subgroups of \(S_{4}\) is a sublattice of \(\mathrm{L}(S_{4})\) and we have \[\mathrm{N}(S_{4})=\{\{1\},\langle(12)(34),(13)(24)\rangle,A_{4},S_{4}\}. \tag{3.8}\] Moreover, one can check that \[\mathfrak{C}_{\mathrm{L}(S_{4})}(\mathrm{L}(S_{4}))=\mathrm{N}(S_{4}), \tag{3.9}\] since we have \[\mathrm{L}(S_{4})=\{\{1\},\langle(12)\rangle,\langle(13)\rangle,\langle(23)\rangle,\langle(14)\rangle,\langle(24)\rangle,\langle(34)\rangle,\langle(13)(24)\rangle,\langle(14)(23)\rangle,\langle(12)(34)\rangle,\] \[\langle(123)\rangle,\langle(124)\rangle,\langle(134)\rangle,\langle(234)\rangle,\langle(1234)\rangle,\langle(1324)\rangle,\langle(1423)\rangle,\langle(12)(34),(13)(24)\rangle,\langle(13),(24)\rangle,\] \[\langle(14),(23)\rangle,\langle(12),(34)\rangle,\langle(123),(12)\rangle,\langle(124),(12)\rangle,\langle(134),(13)\rangle,\langle(234),(23)\rangle,\] \[\langle(1234),(13)\rangle,\langle(1243),(14)\rangle,\langle(1324),(12)\rangle,A_{4},S_{4}\}. \tag{3.10}\] There are 30 elements in \(\mathrm{L}(S_{4})\) and these are divided into 11 conjugacy classes and 9 isomorphism types. It is easy to check that there are in \(\mathrm{L}(S_{4})\setminus\mathrm{N}(S_{4})\) * 9 subgroups isomorphic to \(C_{2}\); * 4 subgroups isomorphic to \(C_{3}\); * 3 subgroups isomorphic to \(C_{4}\); * 3 subgroups isomorphic to \(C_{2}\times C_{2}\); * 4 subgroups isomorphic to \(S_{3}\); * 3 subgroups isomorphic to \(D_{4}\). In particular, we find that \[|V(\Gamma_{\mathrm{L}(S_{4})})|=|\mathrm{L}(S_{4})\setminus\mathrm{N}(S_{4})|=26. \tag{3.11}\] Now we are going to focus on special subgroups of \(S_{4}\). First of all, consider \(A_{4}\) and its non-permutability graph of subgroups \(\Gamma_{\mathrm{L}(A_{4})}\).
We have 7 vertices, namely \[V(\Gamma_{\mathrm{L}(A_{4})})=\{\langle(123)\rangle,\langle(124)\rangle,\langle(134)\rangle,\langle(234)\rangle,\langle(12)(34)\rangle,\langle(14)(23)\rangle,\langle(13)(24)\rangle\}, \tag{3.12}\] since \[\mathfrak{C}_{\mathrm{L}(A_{4})}(\mathrm{L}(A_{4}))=\mathrm{N}(A_{4})=\{\{1\},\langle(12)(34),(13)(24)\rangle,A_{4}\} \tag{3.13}\] and a corresponding computation of edges can be done via [28], obtaining the graph in Figure 1. Now we describe \(B=\langle(123),(12)\rangle\simeq S_{3}\) and \(\Gamma_{\mathrm{L}(B)}\). Here we get a triangle, because \[V(\Gamma_{\mathrm{L}(B)})=\mathrm{L}(B)\setminus\mathfrak{C}_{\mathrm{L}(B)}(\mathrm{L}(B))=\mathrm{L}(B)\setminus\mathrm{N}(B)=\{\langle(12)\rangle,\langle(13)\rangle,\langle(23)\rangle\} \tag{3.14}\] and again [28] can help with the computation of the edges; see Figure 2. Finally, we consider \(C=\langle(1234),(13)\rangle\simeq D_{4}\), which has \(\Gamma_{\mathrm{L}(C)}\) with four vertices and four edges, namely \[V(\Gamma_{\mathrm{L}(C)})=\mathrm{L}(C)\setminus\mathfrak{C}_{\mathrm{L}(C)}(\mathrm{L}(C))=\{\langle(13)\rangle,\langle(24)\rangle,\langle(14)(23)\rangle,\langle(12)(34)\rangle\}. \tag{3.15}\] Again this is another very simple situation: the graph is a rectangle (Figure 3). **Figure 1**: The non-permutability graph of subgroups \(\Gamma_{\mathrm{L}(A_{4})}\). **Figure 2**: The non-permutability graph of subgroups \(\Gamma_{\mathrm{L}(B)}\) for \(B\simeq S_{3}\). **Figure 3**: The non-permutability graph of subgroups \(\Gamma_{\mathrm{L}(C)}\) for \(C\simeq D_{4}\). From Theorem 1.1, we may compute \(F_{2}(S_{4})\) in the following way: \[F_{2}(S_{4})=\Big(\sum_{K\in\mathcal{K}(S_{4})}\ |{\rm L}(K)|^{2}\ \mu(K,S_{4})\Big)+\Big(\sum_{H\in\mathcal{H}(S_{4})}\Big(\ |{\rm L}(H)|^{2}\ -\sum_{i=1}^{m}\sigma_{i}\ \Big)\mu(H,S_{4})\Big), \tag{3.16}\] where \(K\) is a subgroup of \(S_{4}\) belonging to \[\mathcal{K}(S_{4})=\{\{1\},\langle(12)\rangle,\langle(13)\rangle,\langle(23)\rangle,\langle(14)\rangle,\langle(24)\rangle,\langle(34)\rangle,\langle(13)(24)\rangle,\langle(14)(23)\rangle,\langle(12)(34)\rangle,\langle(123)\rangle,\langle(124)\rangle,\] \[\langle(134)\rangle,\langle(234)\rangle,\langle(1234)\rangle,\langle(1324)\rangle,\langle(1423)\rangle,\langle(12)(34),(13)(24)\rangle,\langle(13),(24)\rangle,\langle(14),(23)\rangle,\langle(12),(34)\rangle\}, \tag{3.17}\] and \(H\) a subgroup of \(S_{4}\) belonging to \[\mathcal{H}(S_{4})=\{\langle(123),(12)\rangle,\langle(124),(12)\rangle,\langle(134),(13)\rangle,\langle(234),(23)\rangle,\] \[\langle(1234),(13)\rangle,\langle(1243),(14)\rangle,\langle(1324),(12)\rangle,A_{4},S_{4}\}. \tag{3.18}\] Now we need to find \(\mu(K,S_{4})\) and \(\mu(H,S_{4})\) for all \(K\) and \(H\), but it is enough to find these values for each conjugacy class only. Using Lemma 2.10 and Proposition 2.11 (iii), we find \[\mu(\{1\},S_{4})=-n!=-24,\ \ \mu(\langle(12)\rangle,S_{4})=2,\ \ \mu(\langle(13)(24)\rangle,S_{4})=0,\ \ \mu(\langle(123)\rangle,S_{4})=1,\] \[\mu(\langle(12)(34),(13)(24)\rangle,S_{4})=3,\ \ \mu(\langle(13),(24)\rangle,S_{4})=0,\ \ \mu(\langle(1234)\rangle,S_{4})=0,\] \[\mu(\langle(123),(12)\rangle,S_{4})=-1,\ \ \mu(\langle(1234),(13)\rangle,S_{4})=-1,\ \ \mu(A_{4},S_{4})=-1,\ \ \mu(S_{4},S_{4})=1.
\tag{3.19}\] On the other hand, we may use [28], in order to find the spectra of the Laplacian matrices \(L(\Gamma_{{\rm L}(B)})\), \(L(\Gamma_{{\rm L}(C)})\) and \(L(\Gamma_{{\rm L}(A_{4})})\), obtaining \[{\rm spec}(L(\Gamma_{{\rm L}(B)}))=\{0,3,3\},\ {\rm spec}(L(\Gamma_{{\rm L}(C)}))=\{0,2,2,4\},\ {\rm spec}(L(\Gamma_{{\rm L}(A_{4})}))=\{0,4,4,7,7,7,7\}, \tag{3.20}\] but we haven't reported all the details of the non-permutability graph \(\Gamma_{{\rm L}(S_{4})}\), since it is very technical. Just to give an idea, \[{\rm spec}(L(\Gamma_{{\rm L}(S_{4})}))=\{0,7.22863,7.60860,7.60860,11.39978,11.39978,11.72495,12.01650,\] \[12.01650,14,14.56069,14.56069,14.56069,15.61486,16.33888,16.33888,16.33888,\] \[17.29890,17.29890,18,20.10043,20.10043,20.10043,20.43156,20.67622,20.67622\} \tag{3.21}\] is the spectrum of the Laplacian matrix \(L(\Gamma_{{\rm L}(S_{4})})\). Replacing the values which we found in (3.16), we get \[F_{2}(S_{4})=-24+6(2^{2})(2)+3(2^{2})(0)+4(2^{2})(1)+(5^{2})(3)+3(4^{2})(0)+3(3^{2})(0)+4(6^{2}-6)(-1)\] \[+3(10^{2}-8)(-1)+(10^{2}-36)(-1)+(30^{2}-378)(1)=177. \tag{3.22}\] Note also that \[\mu(\{1\},A_{4})=4,\ \ \mu(\langle(13)(24)\rangle,A_{4})=0,\ \ \mu(\langle(12)(34),(13)(24)\rangle,A_{4})=-1,\] \[\mu(\langle(123)\rangle,A_{4})=-1,\ \ \mu(A_{4},A_{4})=1, \tag{3.23}\] imply, with a similar argument, that \[F_{2}(A_{4})=4+3(2^{2})(0)+4(2^{2})(-1)+(5^{2})(-1)+(10^{2}-36)(1)=27. \tag{3.24}\] We have just seen that Theorem 1.1 provides an alternative, computational method for \(F_{2}({\rm PGL}(2,3))\) and \(F_{2}({\rm PSL}(2,3))\). In fact \({\rm PSL}(2,3)\simeq A_{4}\) and \({\rm PGL}(2,3)\simeq S_{4}\), so \(F_{2}({\rm PSL}(2,3))=F_{2}(A_{4})=27\) and \(F_{2}({\rm PGL}(2,3))=F_{2}(S_{4})=177\), which are the same values found in Propositions 2.7 and 2.9. Note that some open problems were posed by Tarnauceanu [29] on the subgroup commutativity degree, and the logic which we applied in Example 3.4, along with Theorem 1.1 and [28], could bring solutions. In fact Remarks 3.2 and 3.3 suggest a methodology of general interest which can be applied to large families of groups, so not necessarily to linear groups. We show another application of our main results.
Of course, we may repeat arguments similar to those in Example 3.5 in order to find \(\mathrm{sd}(S_{3})\), \(\mathrm{sd}(S_{4})\) and \(\mathrm{sd}(D_{4})\) on the basis of the values obtained in Example 3.4, but we have presented here just the case of \(A_{4}\), supporting Remark 3.3 (II) and (III). We end with the following problem, which we encountered in our investigations: **Problem 3.6**.: Study systematically the non-permutability graph of subgroups for the groups in Theorem 2.6, developing a corresponding spectral graph theory for non-permutability graphs of subgroups of groups with nontrivial partitions. Determine the subgroup commutativity degree of all the groups in Theorem 2.6 via the spectra of the Laplacian matrices of the corresponding non-permutability graphs of subgroups.
2305.11708
Some remarks on Hayward black hole with a cloud of strings
We obtain the metric corresponding to the Hayward black hole spacetime surrounded by a cloud of strings and investigate the role played by this cloud on the horizons, geodesics, effective potential and thermodynamics. We compare the obtained results with those in the literature corresponding to the Hayward black hole, when the cloud of strings is absent. We also examine the question of the regularity of the solution in this scenario.
F. F. Nascimento, V. B. Bezerra, J. M. Toledo
2023-05-19T14:42:14Z
http://arxiv.org/abs/2305.11708v1
# Some remarks on Hayward black hole with a cloud of strings ###### Abstract We obtain the metric corresponding to the Hayward black hole space-time surrounded by a cloud of strings and investigate the role played by this cloud on the horizons, geodesics, effective potential and thermodynamics. We compare the obtained results with those in the literature corresponding to the Hayward black hole, when the cloud of strings is absent. We also examine the question of the regularity of the solution in this scenario. Keywords:Hayward black hole Cloud of strings Thermodynamics + Footnote †: journal: Eur. Phys. J. C ## 1 Introduction Black hole solutions of Einstein's equations have been known since the mid-1910s, shortly after the formulation of the General Theory of Relativity. The simplest vacuum solution, which describes the gravitational field of a static, uncharged, and spherically symmetric body, was obtained by Schwarzschild [1]. Its generalization, which includes the presence of electric charge as a source, was obtained by Reissner and Nordstrom [2; 3]. Nearly fifty years later, Kerr [4] obtained a generalization of the Schwarzschild solution by considering rotation, and it took only two more years before the metric of a charged and rotating gravitational body was obtained, the well-known Kerr-Newman solution [5]. It is worth emphasizing that the metrics corresponding to these black hole solutions have a curvature singularity at \(r=0\), whose existence creates some difficulties in the General Theory of Relativity because, at the singularity, the physical quantities diverge and, therefore, the physical laws are not valid. To overcome these difficulties related to the curvature singularity and its consequences, some black hole solutions have been considered whose metrics and curvature invariants have no singularity, that is, they are regular everywhere, particularly at the origin. These solutions correspond to what are called Regular Black Holes. The first solution of a static spherically symmetric Regular Black Hole dates back to the late 1960s and was obtained by Bardeen [6]. Nowadays, there are several different static Regular Black Hole solutions, among them, we can mention [7; 8; 9; 10; 11; 12; 13; 14; 15], in special, the one obtained by Hayward [16]. The solution of the field equations presented by Hayward is free of a charge term and its physical aspects are quite similar to those of Bardeen's solution [6]. The Hayward black hole solution [16] is static, uncharged and spherically symmetric. It is worth calling attention to the fact that this solution behaves as a de Sitter space-time at the center of the black hole, and therefore, there is no singularity at \(r=0\); additionally, it is asymptotically flat as \(r\rightarrow\infty\). Originally, the Hayward black hole solution was obtained from the modified Einstein equations, with the parameter appearing in the solution being related to the energy level in the near-horizon region of the black hole [16], which can be viewed as a constant acting in this space-time. On the other hand, the Hayward black hole solution can also be obtained in the context of a gravity theory coupled with nonlinear electrodynamics, in which case the parameter mentioned is no longer a universal constant, but a magnetic charge.
The Regular Black Hole solutions and the interesting consequences arising from these solutions have inspired further investigations related to such black holes, as, for example, those regarding particle geodesics [17; 18; 19; 20], structure and lensing effects [21; 22], thermodynamics [23; 24; 25; 26] and quasi-normal modes [27; 28; 29]. In the late 1970s, Letelier [30] obtained general solutions of the Einstein equations corresponding to spherically, plane-symmetric and cylindrically symmetric space-times, by considering a cloud of strings as the source of the gravitational field. In the first case, namely, when a spherically symmetric cloud of strings, radially directed, surrounds the gravitating body, the obtained solution is basically the Schwarzschild black hole solution slightly modified, in such a way that the metric is similar to the Schwarzschild one, but with a solid deficit angle which depends on the parameter associated with the presence of the cloud of strings. Therefore, the gravitational effects are of global origin, with respect to the cloud. As an example of these effects, we mention the fact that the radius of the event horizon is enlarged as compared with the Schwarzschild radius. Given the possible astrophysical consequences, it is important to investigate the gravitational consequences when a black hole is immersed in a cloud of strings. With this aim, several studies concerning different aspects associated with the physics of black holes surrounded by a cloud of strings were performed during the last decades, in the context of the General Theory of Relativity [31; 32; 33; 34; 35; 36], as well as in different modified versions of this Theory [37; 38; 39; 40; 41; 42]. Also, concerning Regular Black Holes, some studies have been carried out by considering the presence of a cloud of strings [33; 43], and some aspects of the thermodynamics were investigated, with emphasis on the role played by the cloud of strings. In this paper, we investigate the role played by a cloud of strings surrounding a Hayward black hole on the horizons, singular behavior, geodesics, effective potential, and thermodynamics, as compared to the case where the cloud is absent. Some discussion about the regularity is also presented. The paper is organized as follows. In Sect. 2, we review the Hayward black hole solution and obtain the Hayward black hole solution in the case in which this gravitational body is surrounded by a cloud of strings. In Sect. 3, we focus the discussion on the horizons, singularity, geodesics, and effective potential. Section 4 is devoted to different aspects of thermodynamics, with emphasis on the role played by the parameter that codifies the presence of the cloud of strings. In Section 5, we briefly summarize our results. ## 2 The metric In this section, we obtain the metric corresponding to the Hayward black hole with a cloud of strings and analyze its properties. ### The Hayward solution The metric of the non-singular (regular) black hole obtained by Hayward [16] is given by \[ds^{2}=f(r)dt^{2}-f(r)^{-1}dr^{2}-r^{2}d\Omega^{2}, \tag{1}\] where \(d\Omega^{2}=d\theta^{2}+\sin^{2}\theta d\phi^{2}\) and the function \(f(r)\) is given by \[f(r)=1-\frac{2mr^{2}}{r^{3}+2l^{2}m}, \tag{2}\] with \(l\) and \(m\) positive constants. Note that \[\lim_{r\to 0}\left(1-\frac{2mr^{2}}{r^{3}+2l^{2}m}\right)=1, \tag{3}\] which indicates that the Hayward metric is a regular (non-singular) solution of the Einstein equations.
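As a quick cross-check of this statement, the limits and the small-\(r\) expansion of \(f(r)\) can be verified symbolically. The following SymPy sketch is purely illustrative; it also anticipates the de Sitter-like core discussed below.

```python
import sympy as sp

r, m, l = sp.symbols('r m l', positive=True)
f = 1 - 2*m*r**2 / (r**3 + 2*l**2*m)   # Hayward metric function, Eq. (2)

print(sp.limit(f, r, 0))       # -> 1: f is finite and nonzero at the origin
print(sp.limit(f, r, sp.oo))   # -> 1: asymptotic flatness
print(sp.series(f, r, 0, 3))   # -> 1 - r**2/l**2 + O(r**3): de Sitter-like core
```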
Furthermore, we can observe this property by calculating the Kretschmann scalar, which is given by \[\begin{split} K&=R_{\alpha\beta\mu\nu}R^{\alpha\beta\mu\nu}\\ &=\frac{48m^{2}(r^{12}-4r^{9}g^{3}+18r^{6}g^{6}-2r^{3}g^{9}+2g^{12})}{(r^{3}+g^{3})^{6}},\end{split} \tag{4}\] where \(g^{3}\equiv 2l^{2}m\), as defined in Eq. (8) below. In the limit \(r\to 0\), we get \[\begin{split}\lim_{r\to 0}K&=\lim_{r\to 0}\frac{48m^{2}(r^{12}-4r^{9}g^{3}+18r^{6}g^{6}-2r^{3}g^{9}+2g^{12})}{(r^{3}+g^{3})^{6}}\\ &=\frac{96m^{2}}{g^{6}}.\end{split} \tag{5}\] Using the metric given by Eq. (1), we can obtain the following components of the Einstein tensor [16]: \[G_{t}^{\;\;t}=G_{r}^{\;\;r}=\frac{12l^{2}m^{2}}{(r^{3}+g^{3})^{2}}, \tag{6}\] \[G_{\theta}^{\;\;\theta}=G_{\phi}^{\;\;\phi}=-\frac{24(r^{3}-l^{2}m)l^{2}m^{2}}{(r^{3}+g^{3})^{3}}, \tag{7}\] where \[g^{3}\equiv 2l^{2}m. \tag{8}\] These components, through the Einstein equations, are proportional to the stress-energy tensor of the source. In Hayward's solution, the function \(f(r)\) can also be written as \[f(r)=1-\frac{2m(r)}{r}, \tag{9}\] where \[m(r)=\frac{mr^{3}}{(r^{3}+g^{3})}, \tag{10}\] is the black hole mass function, which depends on the radial coordinate, and \(g^{3}\) is given by Eq. (8). We can observe that, if \(r\rightarrow\infty\), \(m(r)\to m\). Thus, very far from the black hole, the Hayward solution is similar to the Schwarzschild one. For \(r\to 0\) (small values of \(r\)), we can write \[m(r)\approx\frac{mr^{3}}{g^{3}}, \tag{11}\] and then \[f(r)\approx 1-Cr^{2}, \tag{12}\] with \(C=\frac{2m}{g^{3}}\) being a positive constant. Observe that the space-time metric with \(f(r)\) given by Eq. (12) is similar to the de Sitter space-time. Thus, the Hayward black hole has an internal core with behavior similar to the de Sitter metric [16]. ### Hayward black hole with a cloud of strings Now, let us consider the Hayward black hole with a cloud of strings. A spherically symmetric cloud of strings is described by the stress-energy tensor [30] \[T^{\mu\nu}=\rho\frac{\Sigma^{\mu\beta}\Sigma^{\;\nu}_{\beta}}{(-\gamma)^{1/2}}, \tag{13}\] where \(\rho\) is the proper density of the cloud, \(\gamma\) is the determinant of the induced metric on the world sheet, and \(\Sigma^{\mu\nu}\) is a bivector that represents the world sheet of the strings, given by \[\Sigma^{\mu\nu}=\epsilon^{ab}\frac{\partial x^{\mu}}{\partial\lambda^{a}}\frac{\partial x^{\nu}}{\partial\lambda^{b}}, \tag{14}\] where \(\epsilon^{ab}\) is the two-dimensional Levi-Civita symbol and \(\epsilon^{01}=-\epsilon^{10}=1\). The non-null components of the stress-energy tensor of the cloud of strings are given by [30]: \[T_{0}^{\;0}=T_{1}^{\;1}=\frac{a}{r^{2}}, \tag{15}\] \[T_{2}^{\;2}=T_{3}^{\;3}=0. \tag{16}\] Now, let us consider the line element corresponding to a Hayward black hole with a cloud of strings given by [44]: \[ds^{2}=e^{\nu}dt^{2}-e^{\lambda}dr^{2}-r^{2}d\theta^{2}-r^{2}\sin^{2}\theta d\phi^{2}, \tag{17}\] where \(\nu\) and \(\lambda\) are functions of the radial coordinate \(r\) only, since the metric is assumed to be static. The non-null components of the Einstein tensor obtained from the metric of Eq. (17) are given by \[G_{t}^{\;t}=e^{-\lambda}\left(\frac{\lambda^{\prime}}{r}-\frac{1}{r^{2}}\right)+\frac{1}{r^{2}}, \tag{18}\] \[G_{r}^{\;r}=-e^{-\lambda}\left(\frac{\nu^{\prime}}{r}+\frac{1}{r^{2}}\right)+\frac{1}{r^{2}}, \tag{19}\] \[G_{\theta}^{\;\theta}=G_{\phi}^{\;\phi}=\frac{1}{2}e^{-\lambda}\left(\frac{\nu^{\prime}\lambda^{\prime}}{2}+\frac{\lambda^{\prime}}{r}-\frac{\nu^{\prime}}{r}-\frac{\nu^{\prime 2}}{2}-\nu^{\prime\prime}\right).
\tag{20}\] Considering that there is no interaction between the cloud of strings and the black hole, the total stress-energy tensor can be obtained by the linear superposition of the individual ones. Then, using Eqs. (18)-(20), (6)-(7) and (15)-(16), the Einstein equations are given by: \[e^{-\lambda}\left(\frac{\lambda^{\prime}}{r}-\frac{1}{r^{2}}\right)+\frac{1}{r^{2}}=\frac{12l^{2}m^{2}}{(r^{3}+2l^{2}m)^{2}}+\frac{a}{r^{2}}, \tag{21}\] \[-e^{-\lambda}\left(\frac{\nu^{\prime}}{r}+\frac{1}{r^{2}}\right)+\frac{1}{r^{2}}=\frac{12l^{2}m^{2}}{(r^{3}+2l^{2}m)^{2}}+\frac{a}{r^{2}}, \tag{22}\] \[\begin{split}&\frac{1}{2}e^{-\lambda}\left(\frac{\nu^{\prime}\lambda^{\prime}}{2}+\frac{\lambda^{\prime}}{r}-\frac{\nu^{\prime}}{r}-\frac{\nu^{\prime 2}}{2}-\nu^{\prime\prime}\right)=\\ &-\frac{24(r^{3}-l^{2}m)l^{2}m^{2}}{(r^{3}+2l^{2}m)^{3}}.\end{split} \tag{23}\] Subtracting Eqs. (21) and (22), we obtain \[\lambda=-\nu\Rightarrow\lambda^{\prime}=-\nu^{\prime}. \tag{24}\] Summing Eqs. (21) and (22) and taking into account Eq. (24), we obtain \[e^{-\lambda}\frac{\lambda^{\prime}}{r}-e^{-\lambda}\frac{1}{r^{2}}+\frac{1}{r^{2}}=\frac{12l^{2}m^{2}}{(r^{3}+2l^{2}m)^{2}}+\frac{a}{r^{2}}. \tag{25}\] Now, let us write the following relation \[\nu=-\lambda=\ln(1+f(r)). \tag{26}\] Taking into account Eqs. (24) and (26), we can write Eqs. (25) and (23), respectively, as follows: \[-\frac{1}{r^{2}}(rf^{\prime}+f)=\frac{12l^{2}m^{2}}{(r^{3}+2l^{2}m)^{2}}+\frac{a}{r^{2}}, \tag{27}\] \[2\frac{f^{\prime}}{r}+f^{\prime\prime}=48\frac{(r^{3}-l^{2}m)l^{2}m^{2}}{(r^{3}+2l^{2}m)^{3}}. \tag{28}\] Summing Eqs. (27) and (28) and multiplying by \(r^{2}\), we get: \[r^{2}f^{\prime\prime}+rf^{\prime}-f-a-\frac{12l^{2}m^{2}r^{2}}{(r^{3}+2l^{2}m)^{2}}-48\frac{(r^{3}-l^{2}m)l^{2}m^{2}r^{2}}{(r^{3}+2l^{2}m)^{3}}=0, \tag{29}\] whose solution is given by \[f(r)=\frac{4l^{2}m^{2}-2al^{2}mr-ar^{4}}{r(r^{3}+2l^{2}m)}+\frac{C_{1}}{r}+rC_{2}, \tag{30}\] where we adopt \(C_{1}=-2m\), in order to recover the Hayward solution when \(a=0\), and \(C_{2}=0\), in order to ensure asymptotic flatness. Substituting Eq. (30) into Eq. (26), we get \[\nu=-\lambda=\ln\left(1-a-\frac{2mr^{2}}{r^{3}+2l^{2}m}\right). \tag{31}\] Thus, the metric of the Hayward black hole with a cloud of strings is written as \[\begin{split} ds^{2}&=\left(1-a-\frac{2mr^{2}}{r^{3}+2l^{2}m}\right)dt^{2}\\ &-\left(1-a-\frac{2mr^{2}}{r^{3}+2l^{2}m}\right)^{-1}dr^{2}\\ &-r^{2}d\theta^{2}-r^{2}\sin^{2}\theta d\phi^{2}.\end{split} \tag{32}\] Note that, if \(a=0\), we obtain the Hayward metric [16] given by Eq. (1). If \(l=0\), the metric of Eq. (32) reduces to the Letelier space-time [30]. Thus, for the metric given by Eq. (32), the Kretschmann scalar is \[\begin{split} K&=R_{\alpha\beta\mu\nu}R^{\alpha\beta\mu\nu}=\frac{4a^{2}}{r^{4}}+\frac{16am}{g^{3}r^{2}+r^{5}}\\ &+\frac{48m^{2}\left(r^{12}-4r^{9}g^{3}+18r^{6}g^{6}-2r^{3}g^{9}+2g^{12}\right)}{\left(r^{3}+g^{3}\right)^{6}}.\end{split} \tag{33}\] If we neglect the cloud of strings, \(a=0\), we recover the Kretschmann scalar given in Eq. (4). Now, let us evaluate the Kretschmann scalar in the limits \(r\to 0\) and \(r\rightarrow\infty\):
\[\begin{split}\lim_{r\to 0}K&=\lim_{r\to 0}\left(\frac{4a^{2}}{r^{4}}+\frac{16am}{g^{3}r^{2}+r^{5}}\right.\\ &\left.+\frac{48m^{2}\left(r^{12}-4r^{9}g^{3}+18r^{6}g^{6}-2r^{3}g^{9}+2g^{12}\right)}{\left(r^{3}+g^{3}\right)^{6}}\right)=\infty,\end{split} \tag{34}\] \[\begin{split}\lim_{r\rightarrow\infty}K&=\lim_{r\rightarrow\infty}\left(\frac{4a^{2}}{r^{4}}+\frac{16am}{g^{3}r^{2}+r^{5}}\right.\\ &+\left.\frac{48m^{2}\left(r^{12}-4r^{9}g^{3}+18r^{6}g^{6}-2r^{3}g^{9}+2g^{12}\right)}{\left(r^{3}+g^{3}\right)^{6}}\right)=0.\end{split} \tag{35}\] Therefore, from the analysis of the Kretschmann scalar in the limit \(r\to 0\), we conclude that the inclusion of the cloud of strings destroys the regularity and, as a consequence, introduces a curvature singularity. In other words, the metric of the Hayward black hole with a cloud of strings is singular at the origin (\(r=0\)). ## 3 Black hole horizons, geodesics and effective potential ### Black hole horizons In what follows, we study the horizons of the space-time of the Hayward black hole with a cloud of strings, given by the line element in Eq. (32). From now on, let us denote the function \(g(r)\) as \[g(r)=1-a-\frac{2mr^{2}}{r^{3}+2l^{2}m}. \tag{36}\] Analyzing the roots of the function \(g(r)\), we can obtain a critical black hole mass, which is given by \[m_{*}=\frac{3}{4}\sqrt{3}(1-a)^{3/2}l, \tag{37}\] as well as a critical value for the radial coordinate \(r\), given by \[r_{*}=\sqrt{3}\,(1-a)^{1/2}\,l, \tag{38}\] at which the degenerate root of \(g(r)\) occurs when \(m=m_{*}\). Considering only positive values of \(r\), if the black hole mass is higher than the critical mass, \(m>m_{*}\), \(g(r)\) has two real roots. If \(m=m_{*}\), \(g(r)\) has a unique real root, which is equal to \(r_{*}\). Finally, \(g(r)\) has no real roots for \(m<m_{*}\). If we neglect the cloud of strings, \(a=0\), the critical mass is reduced to \(m_{*}=(3\sqrt{3}/4)l\) and the critical radius is \(r_{*}=\sqrt{3}l\), quantities already obtained by Hayward [16]. The described behavior of \(g(r)\) can be observed in Fig. 1. ### Black hole geodesics Given the space-time metric, the trajectories of particles and light are described by geodesic motion. The geodesic equations can be obtained from the Lagrangian \[\mathcal{L}=\frac{1}{2}g_{\mu\nu}\frac{dx^{\mu}}{d\tau}\frac{dx^{\nu}}{d\tau}=\frac{1}{2}g_{\mu\nu}\dot{x}^{\mu}\dot{x}^{\nu},\] which, in the space-time of the Hayward black hole with a cloud of strings, can be written as \[\mathcal{L}=\frac{1}{2}\left[\left(1-a-\frac{2mr^{2}}{r^{3}+2l^{2}m}\right)\dot{t}^{2}-\frac{1}{\left(1-a-\frac{2mr^{2}}{r^{3}+2l^{2}m}\right)}\dot{r}^{2}-r^{2}\dot{\theta}^{2}-r^{2}\sin^{2}\theta\dot{\phi}^{2}\right], \tag{39}\] where the dot represents the derivative with respect to the proper time \(\tau\). Rescaling the parameter \(\tau\), we can define \(L=2\mathcal{L}\), which, for time-like geodesics, is equal to \(+1\), for space-like geodesics is equal to \(-1\) and is equal to \(0\) for null geodesics [45]. The Euler-Lagrange equations are given by \[\frac{d}{d\tau}\left(\frac{\partial\mathcal{L}}{\partial\dot{x}^{\mu}}\right)-\frac{\partial\mathcal{L}}{\partial x^{\mu}}=0.
\tag{40}\] For \(\mu=0\) and \(\mu=3\) in Eq. (40), with \(\mathcal{L}\) given by Eq. (39), we get, respectively: \[\dot{t}=\frac{E}{\left(1-a-\frac{2mr^{2}}{r^{3}+2l^{2}m}\right)}, \tag{41}\] \[\dot{\phi}=-\frac{J}{r^{2}\sin^{2}\theta}, \tag{42}\] where \(E\) and \(J\) are constants of motion which correspond to the Killing vectors \(\partial_{t}\) and \(\partial_{\phi}\), respectively. We can interpret these constants as the energy \(E\) and the angular momentum \(J\) of the particle moving near the black hole. Let us restrict the analysis of the geodesics to the equatorial plane of the black hole, \(\theta=\frac{\pi}{2}\). Doing that, Eqs. (41)-(42) reduce to \[\dot{t}=\frac{E}{\left(1-a-\frac{2mr^{2}}{r^{3}+2l^{2}m}\right)}, \tag{43}\] \[\dot{\phi}=-\frac{J}{r^{2}}, \tag{44}\] where \(\dot{t}\) and \(\dot{\phi}\) are the derivatives of \(t\) and \(\phi\) with respect to the proper time \(\tau\). Substituting Eqs. (43) and (44) into Eq. (39), we get \[E^{2}=\dot{r}^{2}+V_{eff}, \tag{45}\] where \[V_{eff}=\left(1-a-\frac{2mr^{2}}{r^{3}+2l^{2}m}\right)\left(\frac{J^{2}}{r^{2}}+L\right) \tag{46}\] represents the effective potential for the geodesic motion in the space-time of a Hayward black hole with a cloud of strings. Substituting the relation \[\frac{dr}{dt}\frac{dt}{d\tau}=\frac{dr}{d\tau}\Rightarrow\left(\frac{dr}{dt}\right)^{2}\left(\frac{dt}{d\tau}\right)^{2}=\left(\frac{dr}{d\tau}\right)^{2}\Rightarrow\left(\frac{dr}{dt}\right)^{2}\dot{t}^{2}=\dot{r}^{2} \tag{47}\] into Eq. (45) and using Eqs. (46) and (43), we get \[\left(\frac{dr}{dt}\right)^{2}=g(r)^{2}\left[1-\frac{g(r)}{E^{2}}\left(\frac{J^{2}}{r^{2}}+L\right)\right]. \tag{48}\] #### 3.2.1 Radial movement of a photon For the radial movement (\(J=0\)) of a photon (\(L=0\)), Eq. (48) can be written as \[\left(\frac{dr}{dt}\right)^{2}=g(r)^{2}. \tag{49}\] Substituting Eq. (36) into Eq. (49), we get the relation between the coordinates \(t\) and \(r\), which is given by \[\pm t=\int\frac{1}{1-a-\frac{2mr^{2}}{r^{3}+2l^{2}m}}dr. \tag{50}\] Using Eq. (45), we can obtain the relation between the coordinate \(r\) and the proper time \(\tau\) for the radial movement of a photon, which is given by \[\left(\frac{dr}{d\tau}\right)^{2}=E^{2},\] \[\pm\tau=\frac{r}{E}. \tag{51}\] #### 3.2.2 Radial movement of a massive particle Now, let us consider the movement of massive particles (\(L=1\)) in radial trajectories (\(J=0\)) near the black hole. From Eq. (48), we obtain \[\left(\frac{dr}{dt}\right)^{2}=g(r)^{2}-\frac{g(r)^{3}}{E^{2}}. \tag{52}\] Substituting Eq. (36) into Eq. (52), we can find the relationship between the coordinates \(t\) and \(r\) for the radial movement of the particle: \[\pm t=\int\frac{dr}{\sqrt{\left(1-a-\frac{2mr^{2}}{r^{3}+2l^{2}m}\right)^{2}-\frac{\left(1-a-\frac{2mr^{2}}{r^{3}+2l^{2}m}\right)^{3}}{E^{2}}}}. \tag{53}\] From Eq. (45), we get the relationship between the proper time \(\tau\) and the radial coordinate \(r\): \[\left(\frac{dr}{d\tau}\right)^{2}=E^{2}-g(r),\] \[\pm\tau=\int\frac{dr}{\sqrt{E^{2}-\left(1-a-\frac{2mr^{2}}{r^{3}+2l^{2}m}\right)}}. \tag{54}\] ### Effective potential The behavior of the effective potential (\(V_{eff}\)) of the geodesic motion, given by Eq. (46), can tell us about the behavior of a massive particle or a photon near the black hole. So, in Figs. 2 to 5, we plot the effective potentials for different values of \(a\) and \(l\) for time-like and null-like geodesics. In some figures, we represent, in detail, \(V_{eff}\) near the black hole (\(r\) near zero). In Fig.
2, we represent the effective potential for non-radial time-like geodesics (\(L=1\) and \(J^{2}=20\)). We can observe that, for \(l=0\), there are no stable circular geodesics, since the curves do not show a local minimum for any value of \(a\). On the other hand, for \(l>0\), we can observe the possibility of the existence of stable circular geodesics, depending on the value of the cloud of strings parameter \(a\). For non-radial null-like geodesics (Fig. 3), we can observe that, in all cases, \(V_{eff}\to 0\) in regions far from the black hole, \(r\rightarrow\infty\). For \(l=0\), there are no stable circular geodesics, since the curves do not show local minima. The existence of stable circular orbits of photons around the black hole also depends on the cloud of strings parameter, as can be seen in Fig. 3. Concerning the behavior of radial time-like geodesics, Fig. 4 shows that the stability of radial movement does not occur for \(l=0\). On the other hand, for \(l>0\), there will always be stable geodesics. Finally, we can observe in Fig. 5 that, for radial null-like geodesics (\(J^{2}=0\) and \(L=0\)), the effective potential is constant and equal to zero. ## 4 Black Hole thermodynamics In this section, we study the thermodynamics of the Hayward black hole with a cloud of strings by examining the behavior of the mass, Hawking temperature, and heat capacity as functions of the entropy. ### Black hole mass Let \(r_{h}\) be the radius of the horizon, so that \(g(r_{h})=0\), where \(g(r)\) is given by Eq. (36). Thus, we can write the mass of the black hole in terms of \(r_{h}\) through the following equation: \[m=\frac{(1-a)r_{h}^{3}}{2(r_{h}^{2}+(a-1)l^{2})}, \tag{55}\] which is written in terms of the parameter that codifies the presence of the cloud of strings, namely, \(a\). Note that if \(a=0\), we recover the mass of the regular Hayward black hole, without the cloud of strings, in terms of the horizon radius. By setting \(l=0\) and \(a=0\), we recover the mass of the Schwarzschild black hole. The area of the horizon can be calculated by \[A=\int\sqrt{-g}d\theta d\phi=4\pi r_{h}^{2}. \tag{56}\] On the other hand, the entropy of the black hole can be calculated through the area law [46], using the relation \[S=\frac{A}{4}=\pi r_{h}^{2}. \tag{57}\] Thus, we can write the mass parameter as a function of the entropy as \[m=\frac{(1-a)S^{3/2}}{2\pi^{3/2}(\frac{S}{\pi}+(a-1)l^{2})}. \tag{58}\] In Figure 6, we represent the behavior of the mass parameter, \(m\), as a function of the entropy of the black hole, \(S\), in different situations. Note that, for the Schwarzschild black hole (\(a=0\) and \(l=0\)), the mass parameter presents only positive values for positive values of the entropy \(S\). Similar behavior is obtained when \(l=0\) and \(0<a<1\), in which case we return to the Schwarzschild black hole scenario, but now with a cloud of strings. If we consider the Hayward black hole, it is possible to notice that the mass parameter has positive and negative values depending on the parameters of the black hole. This is also repeated when we consider the cloud of strings in Hayward's black hole space-time. It is important to note that the cloud of strings parameter modifies the phase transition point for the Hayward black hole surrounded by this cloud.
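To make the horizon structure and the mass relation (55) concrete, here is a small NumPy sketch; the parameter values are arbitrary illustrative choices, and the root tolerance is only meant to absorb floating-point error near the degenerate case.

```python
import numpy as np

a, l = 0.1, 1.0
m_star = (3 * np.sqrt(3) / 4) * (1 - a)**1.5 * l   # critical mass, Eq. (37)

def horizons(m):
    """Positive real roots of g(r)=0, i.e. of (1-a) r^3 - 2 m r^2 + 2 l^2 m (1-a)."""
    roots = np.roots([1 - a, -2 * m, 0.0, 2 * l**2 * m * (1 - a)])
    return sorted(r.real for r in roots if abs(r.imag) < 1e-6 and r.real > 0)

print(horizons(1.2 * m_star))  # m > m*: inner and outer horizons
print(horizons(m_star))        # m = m*: degenerate horizon near r* = sqrt(3(1-a)) l
print(horizons(0.8 * m_star))  # m < m*: no horizon

# Consistency check of Eq. (55): m(r_h) recovers the input mass.
r_h = max(horizons(1.2 * m_star))
print((1 - a) * r_h**3 / (2 * (r_h**2 + (a - 1) * l**2)), 1.2 * m_star)
```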
### Hawking temperature The surface gravity (\(\kappa\)) for the Hayward black hole with a cloud of strings can be calculated using the following expression: \[\kappa=\frac{g^{\prime}(r)}{2}\bigg{|}_{r_{h}}, \tag{59}\] with \({}^{\prime}\) denoting the derivative with respect to the radial coordinate. Hawking showed that a black hole emits radiation and that the corresponding temperature, the Hawking temperature, for a stationary space-time, is given by [47]: \[T_{\kappa}=\frac{\kappa}{2\pi}. \tag{60}\] **Fig. 2**: Effective potential for non-radial time-like geodesics (\(L=1\) and \(J^{2}=20\)), for different values of \(a\) and \(l\). **Fig. 3**: Effective potential for non-radial null-like geodesics (\(L=0\) and \(J^{2}=20\)), for different values of \(a\) and \(l\). It is worth remembering that, for a vacuum space-time with spherical symmetry, the first law of thermodynamics gives \[dm=T_{f}dS\to T_{f}=\frac{dm}{dS}, \tag{61}\] where \(m\) and \(S\) are the total energy and entropy of the system, respectively, and \(T_{f}\) is the temperature of the black hole predicted by the first law. Taking into account the area law given by Eq. (57), the first law of thermodynamics does not provide a correct way to calculate the temperature of regular black holes [48; 49]. Thus, the temperatures \(T_{\kappa}\) and \(T_{f}\) for the Hayward black hole, obtained through Eqs. (60) and (61), respectively, are not equivalent; that is, the first law, in the form of Eq. (61), is not appropriate to calculate the correct temperature for regular black holes, since such black holes are not vacuum solutions. Using Eq. (60), with \(\kappa\) given by Eq. (59), it is possible to calculate the Hawking temperature, \(T_{\kappa}=T\), for the Hayward black hole with the cloud of strings: \[T=\frac{mr_{h}(r_{h}^{3}-4l^{2}m)}{2\pi(r_{h}^{3}+2l^{2}m)^{2}}. \tag{62}\] Substituting Eq. (55) into Eq. (62), we obtain the following result: \[T=\frac{(1-a)[r_{h}^{2}+3(a-1)l^{2}]}{4\pi r_{h}^{3}}. \tag{63}\] Substituting \(r_{h}=(\frac{S}{\pi})^{1/2}\) in Eq. (63), we finally obtain the Hawking temperature expression as a function of entropy for the Hayward black hole with the cloud of strings: \[T=\frac{(1-a)\sqrt{\pi}[3(a-1)l^{2}+\frac{S}{\pi}]}{4S^{3/2}}. \tag{64}\] In Fig. 7, we represent the behavior of the temperature parameter, \(T\), as a function of the entropy of the black hole, \(S\), in different situations. Figure 4: Effective potential for radial time-like geodesics (\(J^{2}=0\) and \(L=1\)), for different values of \(a\) and \(l\). Figure 5: Effective potential for radial null-like geodesics (\(J^{2}=0\) and \(L=0\)), for different values of \(a\) and \(l\). **Fig. 6** Black hole mass as a function of the entropy \(m(S)\) for different values of \(a\) and \(l\). **Fig. 7** Black hole temperature as a function of entropy \(T(S)\) for different values of \(a\) and \(l\). Note that for the Schwarzschild space-time (\(a=0\) and \(l=0\)), the temperature parameter presents only positive values for \(S>0\). The same holds for the Letelier space-time (\(l=0\) and \(0<a<1\)). When we consider the Hayward space-time (\(l=1\) and \(a=0\)), it is already possible to notice that the temperature parameter presents positive and negative values depending on the black hole parameters. This is also repeated when we consider the cloud of strings in the Hayward space-time (\(l=1\) and \(0<a<1\)). ### Heat capacity Heat capacity provides information about the thermodynamic stability of a system.
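As a preview of the computation carried out below, the heat capacity \(C=T(\partial T/\partial S)^{-1}\) can be obtained directly from the temperature (64) by symbolic differentiation. The following SymPy sketch is an illustrative cross-check, not part of the original derivation; it reproduces Eq. (66) below.

```python
import sympy as sp

S, a, l = sp.symbols('S a l', positive=True)
# Hawking temperature as a function of entropy, Eq. (64)
T = (1 - a) * sp.sqrt(sp.pi) * (3 * (a - 1) * l**2 + S / sp.pi) / (4 * S**sp.Rational(3, 2))

C = sp.simplify(T / sp.diff(T, S))   # C = T (dT/dS)^(-1), cf. Eq. (65)
print(sp.factor(C))
# -2*S*(3*pi*l**2*(a - 1) + S)/(9*pi*l**2*(a - 1) + S), in agreement with Eq. (66)
```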
We can calculate the heat capacity of the Hayward black hole with the cloud of strings from the following expression: \[C=T\frac{\partial S}{\partial T}=T\left(\frac{\partial T}{\partial S}\right)^{-1}. \tag{65}\] Substituting Eq. (64) into Eq. (65), we find the following expression for the heat capacity as a function of the black hole entropy: \[C=-\frac{2S[3(a-1)l^{2}\pi+S]}{9(a-1)l^{2}\pi+S}. \tag{66}\] The behavior of the heat capacity as a function of entropy, for different values of \(l\) and \(a\), is given in Figure 8. For \(l=0\), Eq. (66) reduces to \(C=-2S\), as expected for the case of the Schwarzschild space-time. In this case, the heat capacity is negative for \(S>0\), which indicates an unstable thermodynamic system. When we take into account the Hayward space-time (\(l=1\), \(a=0\)) and the Hayward space-time with the cloud of strings (\(l=1\), \(0<a<1\)), there are values of the entropy for which the heat capacity acquires positive values. This means that the Hayward black hole can be unstable or stable depending on the values of the entropy considered. It is important to point out that the cloud of strings parameter plays an important role in the behavior of the heat capacity, modifying the phase transition point. Thus, unlike the Schwarzschild black hole, which is thermodynamically unstable, the Hayward black hole and the Hayward black hole with a cloud of strings present regions of stability, depending on the chosen parameters. ## 5 Concluding remarks In conclusion, we can say that the cloud of strings plays an interesting role in different aspects related to the Hayward black hole surrounded by this cloud. Firstly, the Kretschmann scalar diverges in the limit \(r\to 0\). This means that the inclusion of the cloud of strings turns the black hole singular. In other words, the metric of the Hayward black hole with a cloud of strings is singular at the origin (\(r=0\)). With respect to the possible roots of \(g(r)\), and considering the critical mass \(m_{*}\) given by Eq. (37), three different scenarios are possible, namely: (i) the black hole mass is higher than the critical mass, \(m>m_{*}\), in which case \(g(r)\) has two real roots; (ii) the black hole mass is equal to the critical mass, \(m=m_{*}\), in which case \(g(r)\) has a unique real root, which is equal to \(r_{*}\); and (iii) \(m<m_{*}\), in which case \(g(r)\) has no real roots. The effective potential (\(V_{eff}\)) of the geodesic motion, given by Eq. (46), is represented in Figs. 2 to 5, for different values of \(a\) and \(l\), for time-like and null-like geodesics. As shown, in the cases in which \(l=0\), there are no stable circular geodesics, irrespective of the values of \(a\). On the other hand, for \(l>0\), we can observe the possibility of the existence of stable circular geodesics, depending on the value of the cloud of strings parameter \(a\). For non-radial null-like geodesics (Fig. 3), we can observe that, in all cases, \(V_{eff}\to 0\) in regions far from the black hole, \(r\rightarrow\infty\). For \(l=0\), there are no stable circular geodesics. Otherwise, there are stable circular orbits of photons around the black hole, depending on the presence of the cloud of strings, as can be seen in Fig. 3. Concerning the behavior of radial time-like geodesics, shown in Fig. 4, for \(l>0\) there exist stable geodesics in all situations. The behavior of the mass parameter, \(m\), as a function of the entropy of the black hole, \(S\), in different situations, is very similar, as we can see in Fig. 6.
It is worth calling attention to the fact that the mass curve preserves its shape, irrespective of the values of the parameter that codifies the presence of the cloud of strings. The only difference is a shift to the left, as the values of this parameter increase. When the Schwarzschild black hole space-time (\(a=0\) and \(l=0\)) is taken into account, the temperature parameter presents only positive values for \(S>0\). The same behavior occurs when a cloud of strings is added to the Schwarzschild black hole space-time, namely, for \(l=0\) and \(0<a<1\). Now, if the Hayward black hole is considered, \(l=1\) and \(a=0\), the temperature will assume positive and negative values depending on the black hole parameters. The same situation appears when the presence of a cloud of strings is considered in the Hayward space-time. The heat capacity assumes positive or negative values, depending on the values of the entropy, when the cloud of strings is present. This means that the Hayward black hole with a cloud of strings can be unstable or stable depending on the values of the entropy considered. This behavior is analogous to the one obtained when the cloud is absent. Apart from this, the role of the cloud of strings is to shift the graphs to the left. It is important to point out that the cloud of strings parameter plays an important role in the behavior of the heat capacity, modifying the points of the phase transitions, as well as the Hawking temperature, black hole mass, effective potential, geodesics and horizons, and it destroys the regular character of the Hayward regular black hole solution, turning the solution singular. ###### Acknowledgements. V.B. Bezerra is partially supported by CNPq-Brazil (Conselho Nacional de Desenvolvimento Cientifico e Tecnologico) through Research Project No. 307211/2020-7. F. F. Nascimento and J. M. Toledo acknowledge Departamento de Fisica, Universidade Federal da Paraiba, for hospitality.
2306.10146
Multi-task 3D building understanding with multi-modal pretraining
This paper explores various learning strategies for 3D building type classification and part segmentation on the BuildingNet dataset. ULIP with PointNeXt and PointNeXt segmentation are extended for the classification and segmentation tasks on the BuildingNet dataset. The best multi-task PointNeXt-s model with multi-modal pretraining achieves 59.36 overall accuracy for 3D building type classification, and 31.68 PartIoU for 3D building part segmentation on the validation split. The final PointNeXt-XL model achieves 31.33 PartIoU and 22.78 ShapeIoU on the test split for BuildingNet-Points segmentation, which significantly improves over the PointNet++ model reported in the BuildingNet paper, and it won 1st place in the BuildingNet challenge at the CVPR23 StruCo3D workshop.
Shicheng Xu
2023-06-16T19:27:00Z
http://arxiv.org/abs/2306.10146v1
# Multi-task 3D building understanding with multi-modal pretraining ###### Abstract This paper explores various learning strategies for 3D building type classification and part segmentation on the BuildingNet dataset [17]. ULIP with PointNeXt [20] and PointNeXt segmentation [15] are extended for the classification and segmentation tasks on the BuildingNet dataset. The best multi-task PointNeXt-s model with multi-modal pretraining achieves 59.36 overall accuracy for 3D building type classification, and 31.68 PartIoU for 3D building part segmentation on the validation split. The final PointNeXt-XL model achieves 31.33 PartIoU and 22.78 ShapeIoU on the test split for BuildingNet-Points segmentation, which significantly improves over the PointNet++ model reported in the BuildingNet paper, and it **won 1st place** in the BuildingNet challenge at the CVPR23 StruCo3D workshop1. Footnote 1: [https://eval.ai/web/challenges/challenge-page/1938/leaderboard/4590](https://eval.ai/web/challenges/challenge-page/1938/leaderboard/4590) ## 1 Introduction Understanding building types and building parts in 3D has wide applications in mapping, autonomous driving, architecture, construction, and gaming. Buildings are dominant objects in urban areas and are what most humans interact with daily. While architecture has a long history and buildings follow established ontologies and styles, digitizing buildings and understanding building semantics in 3D have been extremely challenging. The BuildingNet dataset [17] is a public research dataset for building exteriors and surroundings with segmentation of building parts. It includes about 2k 3D buildings (office, school, castle, house, hotel, etc.) with 31 common semantic parts, such as wall, window, roof, floor, and stairs. There are several indoor 3D scene datasets, such as SceneNet [8], S3DIS [1], and the CVPR23 CV4AEC workshop 2; as well as outdoor 3D scene datasets, such as the Waymo open dataset [18] and KITTI-360 [10], that also annotate buildings, but none of them annotate building exterior parts. In parallel, researchers have also developed several methods to create synthetic buildings [5], [19]. Footnote 2: [https://cv4eec.github.io/](https://cv4eec.github.io/) In this paper, we will look at both 3D building type classification and part segmentation on the BuildingNet dataset. For 3D building type classification, the model needs to produce a class label for the entire building object, e.g. house, hotel, etc. For 3D building part segmentation, the model needs to assign a part label to every point in the 3D point cloud, e.g. roof, window, etc. Comprehensive studies on various transfer learning strategies will show that multi-task training and multi-modal pretraining are effective approaches to reduce overfitting and improve performance on both tasks. The best multi-task PointNeXt-s model with multi-modal pretraining achieves 59.36 overall accuracy for 3D building type classification, and 31.68 PartIoU for 3D building part segmentation on the validation split. The CVPR23 StruCo3D workshop hosts the BuildingNet challenge on the building part segmentation task, with an updated BuildingNet v1 dataset and two phases. This project focuses on the BuildingNet-Points phase, which is designed for large-scale point-based processing algorithms that must deal with unstructured point clouds; the other phase, BuildingNet-Mesh, can also access the mesh data with subgroups. Figure 1: Proposed model architecture for 3D building classification and segmentation.
The final PointNeXt-XL model achieves 31.33 PartIoU for BuildingNet-Points segmentation, which marginally beats the MinkNet baseline but significantly improves over the PointNet++ model by over 100%. ## 2 Related Work PointNet [14] and 3D convolution are two popular building blocks for constructing 3D deep learning models for unstructured point clouds. On one hand, the basic idea of PointNet [14] is applying MLP layers on each point feature, and then using max pooling to aggregate them into a global feature. PointNet++ follows by introducing set abstraction to form hierarchical point set features. More recently, PointMLP [11] proposes the Geometric Affine module and Residual Point block as alternatives for extracting local point set features. On the other hand, given the sparse nature of point clouds in 3D space, many researchers have explored 3D sparse convolution to build their 3D models. Submanifold Sparse Convolutional Networks [6] and Minkowski Convolutional Neural Networks [4] are two notable models in this direction. Submanifold SparseConv preserves the input/output space structure, while Minkowski SparseConv generalizes it to allow arbitrary output coordinates and sparse kernels. The BuildingNet paper reported results of PointNet++ and MinkNet on the BuildingNet V0 dataset. For the updated V1 dataset, they established a new baseline using the MinkRes16UNet34C model. While there is a small difference between the V0 and V1 results, it is clear that there is a huge gap between PointNet++ and MinkNet, which affects the corresponding performance on BuildingNet-Mesh when combined with BuildingGNN. With the recent advancement of foundation language models [2], effectively pretraining vision models with self-supervision and leveraging multi-modal information has become an important technique for improving supervised vision tasks. CLIP [16] and SLIP [13] started to learn aligned representations from text-image pairs and have significantly improved zero-shot classification and linear classification. ULIP [20] extends this by generating text, image, and point cloud triplets, and trains the point cloud encoder to align its output with the text and image encoder outputs. The ULIP authors provided a pretrained ULIP with PointNeXt-s model on ShapeNet55, and reported promising zero-shot classification results on ModelNet40. ULIP-2 [21] further extends this by generating text prompts using a large multi-modal model, and pretrains on the larger Objaverse dataset. PointNeXt [15] revisited PointNet++ and aims to improve it in three areas: data augmentation, optimization, and model scaling. It also introduces Inverted Residual MLP blocks and receptive field scaling to allow model scaling and reduce the receptive field sensitivity issue in PointNet++. PointNeXt, together with ULIP pretraining, has shown strong performance on various 3D benchmarks, such as ScanObjectNN, S3DIS, ShapeNet-Part, and ModelNet40. This encourages more research on PointNet-based 3D models. The main differences between this paper and the original BuildingNet paper are that: 1. I use the PointNeXt model instead of the PointNet++ model, with improved data augmentation, optimization, and model scaling; 2. I try various learning strategies, including ULIP pretraining and multi-task learning, on both the segmentation and classification tasks. A more detailed comparison will be discussed in the following sections. ## 3 Methods Now, we formally define 3D classification and segmentation on point clouds.
Given an unordered point set \(P=\{p_{1},p_{2},...,p_{n}\}\), where each point \(p_{i}\) is placed in a 3-dimensional space with coordinates \((x_{i},y_{i},z_{i})\) and has a feature \(f_{i}\in\mathbb{R}^{d}\), a 3D classification model on the point cloud predicts a single class label \(c\) for \(P\). A 3D semantic segmentation model is expected to predict a class label \(s_{i}\) for each point \(p_{i}\in P\). For ULIP pretraining, the model also receives a list of text prompts and a list of multi-view images as additional training data. ### Review of PointNet++ and PointNeXt There are two main types of layer blocks in PointNet++: Set Abstraction and Feature Propagation. PointNeXt extends these with the Inverted Residual MLP (InvResMLP). Set Abstraction and InvResMLP are used in the encoder stage. **Set Abstraction** aims to reduce the number of points by aggregating features from groups of neighboring points around each centroid. **InvResMLP** has three contributions: 1. a residual connection, similar to ResNet, to reduce the vanishing gradient problem; 2. separable MLPs, where MLP layers are added after the reduction stage, similar to MobileNet, to reduce model size; 3. the output channels of the second MLP layer can be expanded for model scaling. **Feature Propagation** is used in the segmentation decoder stage to scale the encoder features back to the original point set size. \begin{table} \begin{tabular}{c|c|c|c} \hline Method & Dataset & Test PIoU & Test SIoU \\ \hline PointNet++ & V0 & 14.1 & 16.7 \\ \hline MinkNet & V0 & 29.9 & 24.3 \\ \hline MinkNet & V1 & 31.2 & 24.1 \\ \hline My PointNeXt-XL & V1 & 31.33 & 22.78 \\ \hline \end{tabular} \end{table} Table 1: Summary of BuildingNet baselines and my result. For classification, fully connected layers are used to propagate the encoder features to class logits. These three layer blocks can be further broken down into the following layers: **Sampling Layer.** Given input points \(\{p_{1},p_{2},...,p_{n}\}\), the sampling layer uses farthest point sampling (FPS) to choose a subset of points \(\{p_{i_{1}},p_{i_{2}},...,p_{i_{m}}\}\), such that \(p_{i_{j}}\) is the most distant point from \(\{p_{i_{1}},p_{i_{2}},...,p_{i_{j-1}}\}\). The sampled point set size is controlled by the stride parameter \(s\), i.e., \(m=n//s\). FPS has better coverage than uniform sampling. The subset of points becomes the centroid points for the next grouping layer. **Grouping Layer.** Taking the original input points and the centroid points from the sampling layer, the grouping layer groups the neighbors of each centroid. Ball query is typically used to find neighbors within a radius of the centroid point. The radius \(r\) determines the receptive field and is sensitive to the density of the point cloud, especially when the point cloud is subsampled. In PointNet++, the initial value of \(r\) doubles after each sampling layer. As mentioned in the PointNeXt paper, the radius value is very dataset dependent, especially if the dataset is voxelized with different voxel sizes. I maintain the same ratio between voxel size and radius value as PointNeXt to achieve reasonable performance. **MLP Layer.** The MLP layer operates locally on each centroid, taking the relative coordinates \(p_{j}-p_{i}\) as input. **Reduction Layer.** The reduction layer aggregates features from neighbors (e.g. max-pooling). Finally, the **Feature Propagation Layer** uses an inverse-distance weighted average based on the \(k\) nearest neighbors to interpolate features of sampled points back to the original points. The baseline models train PointNeXt from scratch.
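To make the sampling and grouping layers concrete, here is a minimal NumPy sketch of farthest point sampling and ball query as described above. The function names, the fixed-size padding strategy, and the parameter values are illustrative choices of mine, not the actual PointNeXt implementation, where these operations run as batched CUDA kernels.

```python
import numpy as np

def farthest_point_sampling(xyz, m):
    """Pick m centroid indices; each new pick is the point farthest from all picks so far."""
    n = xyz.shape[0]
    centroids = np.zeros(m, dtype=np.int64)
    dist = np.full(n, np.inf)           # distance to nearest chosen centroid
    farthest = np.random.randint(n)     # arbitrary first pick
    for j in range(m):
        centroids[j] = farthest
        d = np.sum((xyz - xyz[farthest]) ** 2, axis=1)
        dist = np.minimum(dist, d)      # update nearest-centroid distances
        farthest = int(np.argmax(dist))
    return centroids

def ball_query(xyz, centroids, radius, k):
    """For each centroid, gather up to k neighbor indices within `radius`."""
    groups = np.zeros((len(centroids), k), dtype=np.int64)
    for i, c in enumerate(centroids):
        d = np.sqrt(np.sum((xyz - xyz[c]) ** 2, axis=1))
        idx = np.flatnonzero(d < radius)[:k]
        groups[i, :len(idx)] = idx
        groups[i, len(idx):] = c        # pad with the centroid itself
    return groups

# Toy usage: n = 10000 points, stride s = 4 -> m = 2500 centroids.
pts = np.random.rand(10000, 3)
cents = farthest_point_sampling(pts, 10000 // 4)
neigh = ball_query(pts, cents, radius=0.05, k=32)
```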
I use cross-entropy loss for classification, and point-wise weighted cross-entropy loss for segmentation. The weights are calculated as the inverse logarithm of the label frequencies for each task. From there, I explore how different learning strategies can help on various tasks in three categories: 1. transfer learning; 2. ULIP pretraining; 3. multi-task training. ### Transfer learning from a different dataset In 2D image modeling, transfer learning from a different dataset is commonly used to boost training performance and reduce the dataset size needed for finetuning [9]. Given that ULIP has outstanding performance on the ModelNet40 benchmark3, I first try to load a pretrained ULIP+PointNeXt checkpoint from ShapeNet for the 3D classification task. Note that there are many shared classes between ShapeNet55 and ModelNet40, but there is no class related to buildings. I load pretrained ULIP with a PointNeXt-s backbone from ShapeNet55 [3] and finetune it on the 3D classification task to see whether transfer learning helps in such a setting. Footnote 3: [https://paperswithcode.com/sota/3d-point-cloud-classification-on-modelnet40](https://paperswithcode.com/sota/3d-point-cloud-classification-on-modelnet40) For the 3D segmentation task, the PointNeXt model has achieved top performance on S3DIS, a 3D indoor scene dataset. Again, there is a huge difference between S3DIS (indoor scenes) and BuildingNet (building exteriors). I load the same backbone for the PointNeXt-s segmentation model and finetune it on the BuildingNet dataset. ### Using ULIP as a pretraining framework Recall that the ULIP framework freezes the text and image encoders, and aligns the point cloud encoder feature with the text and image encoder features via a cross-modal contrastive loss. At test time, it computes the point cloud encoder feature and finds the closest text encoder feature among all categories to predict the class label. For finetuning, I load the best ULIP checkpoint and finetune it in the corresponding model. The PointNeXt-s encoders for classification and segmentation use different block sizes and stride sizes. For the baseline, I train the classification model from scratch with both the classification backbone and the segmentation backbone for comparison, but here I only pretrain ULIP with the segmentation backbone, for consistency. ### Multi-task learning Lastly, I train a dual-task model that does both classification and segmentation with a shared backbone, repeating the experiment of training from scratch and of loading the pretrained ULIP checkpoint. I use a weighted sum to balance the two tasks: \[L_{\text{total}}=\beta L_{\text{classification}}+(1-\beta)L_{\text{segmentation}}. \tag{1}\] Figure 2: Model Layer Blocks in PointNet++/PointNeXt. Figure 3: ULIP pretraining process on BuildingNet. ## 4 Dataset and Features Each 3D building model comes with the following data: * Name: The name is prefixed with the building class and subclass, e.g., in "COMMERCIALcastle_mesh0365", "COMMERCIAL" is the building class and "castle" is the subclass. I use the subclass ("castle") as the building type classification label. * Feature: Coordinates \((x,y,z)\), Normal \((nx,ny,nz)\), Color \((r,g,b)\). All coordinates are prenormalized to \([-0.5,0.5]\), all point clouds are downsampled to 100,000 points, and all features are prenormalized to \([0,1]\). Following PointNeXt, I use \(y-min(y)\) to generate the heights feature to help the model differentiate between roof and ground. * Per-point class index for segmentation. For the challenge submission, labels are only provided for the training and validation splits.
* 3D Warehouse IDs and links to original models. ### Text-image-point triplet generation ULIP pretraining requires text prompts and multi-view images to be generated with the point clouds. I generated 64 text prompts for each 3D building: 63 prompts were generated using the prompt templates from [7], and then I added a dedicated prompt "a point cloud model of category". Each text prompt is sent to a pretrained SLIP text encoder, and then I use average pooling to get the final text encoder feature. For multi-view images, I use the same renderer script as ULIP 4, but only generate 8 RGB images and 8 depth maps, one every 45 degrees, to save storage space. This generates 16 images per building in total. During pretraining, I randomly select one image and send it to a pretrained SLIP image ViT encoder to generate the image encoder feature. Footnote 4: [https://github.com/panmari/stanford-shapenet-renderer](https://github.com/panmari/stanford-shapenet-renderer) ### Point cloud preprocessing I perform all 3D point cloud preprocessing on the fly in the dataset loader to maximize the data diversity for model training. It can be divided into the following stages: pre-voxelize data augmentation, voxelization, and post-voxelize data augmentation. **Pre-voxelize data augmentation.** I follow the BuildingNet baseline and apply random rotation before voxelization for data augmentation. Random rotation is mentioned as a strong data augmentation in PointNeXt as well; however, the authors did not apply it to PointNeXt-s as they saw a performance drop. BuildingNet only has 2k 3D models, so adding this is critical to avoid overfitting. The entire dataset is also looped 12 times for each epoch. **Voxelization.** It is a common technique to group 3D point clouds into voxel grids. The voxel grids can be further downsampled to fit into the model. At training time, a random point from each voxel grid is selected for the model inputs. At test time, an exhaustive sampling generates a list of sub-clouds such that each sub-cloud contains one point from each voxel, so that the model can run inference on the entire cloud. The voxelized point cloud is further downsampled to a fixed size. **Post-voxelize data augmentation.** I adopt all data augmentations used by the PointNeXt model in post-voxelize data augmentation, except for random rotation. This includes color auto contrast, random scaling, jittering, and color drop. Color auto contrast automatically adjusts the color contrast [22], random scaling randomly scales the entire point cloud by a small factor, jittering randomly adds independent noise to each point, and color drop randomly drops the RGB color feature to force the model to learn more from the other features and the geometric relationships between points. ### Data statistical analysis BuildingNet is a challenging dataset. The reported metrics from the MinkNet baseline are significantly lower than those on other 3D semantic segmentation datasets, such as S3DIS 5 and the Waymo Open Dataset 6. I perform a data statistical analysis to understand why this is so challenging before diving into experiments. Footnote 5: [https://paperswithcode.com/sota/semantic-segmentation-on-s3dis](https://paperswithcode.com/sota/semantic-segmentation-on-s3dis) Footnote 6: [https://waymo.com/open/challenges/2022/3d-semantic-segmentation](https://waymo.com/open/challenges/2022/3d-semantic-segmentation) Figure 5 shows the distribution of the number of voxels for different voxel sizes and data splits.
We can see that as the voxel size becomes smaller, the number of voxels grows dramatically. More importantly, I notice a distribution difference between the train/val splits and the test split: both the train and val splits have peaks around 2,000 voxels, but the test split peaks around 5,000 voxels. I empirically choose the voxel size and sample size to be 0.02 and 12,500 for the training experiments, and set the ball query radius to 0.05. Figure 4: Voxelize Visualization. BuildingNet is a small dataset. There are 1,480 buildings in the train split, 187 buildings in the val split, and 181 buildings in the test split, while it has 15 building type classes and 31 building part classes. In Figure 6, we can see that the class distributions are extremely unbalanced for both classification and segmentation. In the segmentation labels, there is a massive number of points that come with segmentation class index 0, i.e., "unspecified", which means those points are not labeled. I have excluded class 0 from the data, loss function, and evaluation metrics to avoid any confusion to the model. ## 5 Experiments ### Experiment Plan Table 2 shows all combinations of learning strategies we have discussed. I use the PointNeXt-s model for all experiments, except for the final model scale-up. The number of training epochs is fixed at 100, the learning rate is 0.01, and cosine decay is used as the learning rate schedule for all experiments unless otherwise mentioned. For multi-task training, I experimented with \(\beta=0.01\) and \(\beta=0.03\). ### Evaluation Metrics For 3D building type classification, we simply calculate the overall accuracy for evaluation. \[\text{accuracy}=\frac{\text{\# true positive buildings}}{\text{\# all buildings}} \tag{2}\] For 3D building part segmentation, we mainly use the PartIoU metric [12], which averages the intersection over union (IoU) of each semantic part across all buildings. The BuildingNet challenge also calculates ShapeIoU, the average IoU per building. \[\text{PartIoU}=\frac{1}{N_{p}}\sum_{p}\text{IoU}_{p}, \tag{3}\] where \(p\) is a building part, e.g. roof or wall, \(N_{p}\) is the number of part classes, and \(\text{IoU}_{p}\) is the IoU for part \(p\) accumulated over all buildings \(B_{i}\in B\). Last but not least, for dual-task training, we use the harmonic mean of the classification accuracy and segmentation PartIoU to select the best checkpoint: \[\text{Harmonic mean}=\frac{2}{\text{accuracy}^{-1}+\text{PartIoU}^{-1}} \tag{4}\] Next, we will discuss the experiment results by task. ### 3D Building Type Classification Table 3 shows the experiment results for classification. The training accuracy is recorded at the best validation checkpoint. Training from scratch using the classification backbone (Exp 1) and multi-task training with ULIP pretraining (Exp 9') achieve the top two results. Figure 7 compares the classification loss curves of the two models. The classification loss for training from scratch exploded to NaN at epoch 66, and the best checkpoint was found early, at epoch 16. The classification loss for multi-task training with ULIP pretraining went down more efficiently.
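Returning to the evaluation metrics above, a small NumPy sketch of the PartIoU computation may help fix the definition. Accumulating per-part intersections and unions over all buildings before averaging is my reading of Eq. (3), and skipping label 0 follows the "unspecified" convention of the dataset; the function itself is an illustration, not the challenge's official scorer.

```python
import numpy as np

def part_iou(preds, targets, num_parts=32, ignore_label=0):
    """PartIoU: accumulate I and U per part over all buildings, then average IoU over parts."""
    inter = np.zeros(num_parts)
    union = np.zeros(num_parts)
    for pred, tgt in zip(preds, targets):   # one (pred, tgt) label array per building
        mask = tgt != ignore_label          # skip "unspecified" points
        pred, tgt = pred[mask], tgt[mask]
        for p in range(1, num_parts):
            inter[p] += np.sum((pred == p) & (tgt == p))
            union[p] += np.sum((pred == p) | (tgt == p))
    valid = union[1:] > 0                   # only parts that appear somewhere
    return float(np.mean(inter[1:][valid] / union[1:][valid]))
```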
\begin{table} \begin{tabular}{c|c|c} \hline Id & Pretrain & Task \\ \hline 1 & Scratch & Classification \\ \hline 1’ & Scratch (Seg backbone) & Classification \\ \hline 2 & Scratch & Segmentation \\ \hline 3 & Scratch & ULIP \\ \hline 4 & ULIP(ShapeNet) & Classification \\ \hline 5 & PointNeXt(S3DIS) & Segmentation \\ \hline 6 & ULIP(BuildingNet) & Classification \\ \hline 7 & ULIP(BuildingNet) & Segmentation \\ \hline 8 & Scratch & Class+Seg \\ \hline 9 & ULIP(BuildingNet) & Class+Seg \\ \hline \end{tabular} \end{table} Table 2: Summary of learning strategies. Figure 5: Histogram of number of voxels per point cloud model. Figure 6: Histogram of number of labels for building type classification and building part segmentation. For ULIP pretraining, the training accuracy was not reported. Training ULIP from scratch (Exp 3) is lower than the other experiments, because it only uses a projection matrix to project encoder features into a hidden space for feature alignment. However, further finetuning it (Exp 6) does not bring the performance above training from scratch (Exp 1'). The multi-task training reacts differently to the task weight \(\beta\) when training from scratch and when loading the ULIP pretrained checkpoint; this suggests that when loading the ULIP pretrained checkpoint, the model has already learned some information about classification and requires less finetuning. I further look into the per-class accuracy for the multi-task model (Exp 9'). The validation set is so small that most classes only have 1-2 buildings. The model is only able to produce metrics on the 6 dominant classes of the BuildingNet dataset, each of which has more than 40 buildings. The model achieves 79.49 accuracy on house but 50 on villa, which indicates that the model confuses villa with house, as many examples look similar. Surprisingly, the model achieves a high accuracy score on mosque, probably because of its unique tower shape (Figure 8). It is clear that the model needs more training and validation data to achieve reasonable building type classification performance. ### 3D Building Part Segmentation Table 5 shows the experiment results for 3D building part segmentation. Both the segmentation model and the multi-task model with ULIP pretraining achieve the best performance. Using ULIP as a pretraining framework is helpful for the segmentation task. Transfer learning from S3DIS does not really help because of the domain difference between indoor scenes and building exteriors. \begin{table} \begin{tabular}{c|c|c} \hline Building Type & Accuracy & \# buildings \\ \hline house & 79.49 & 766 \\ \hline villa & 50 & 384 \\ \hline church & 63.16 & 265 \\ \hline office building & 50 & 109 \\ \hline mosque & 72.73 & 70 \\ \hline temple & 28.57 & 45 \\ \hline \end{tabular} \end{table} Table 4: Summary of accuracy per class. \begin{table} \begin{tabular}{c|c|c|c} \hline Id & Model & Train Acc & Val Acc \\ \hline 1 & Scratch+Class & 61.97 & **60.43** \\ \hline 1’ & Scratch+Seg & 58.83 & 58.82 \\ \hline 3 & Scratch+ULIP & N/A & 54.55 \\ \hline 4 & ULIP(ShapeNet)+Class & 68.23 & 56.68 \\ \hline 6 & ULIP(BN)+Seg & 54.92 & 57.22 \\ \hline 8 & Scratch+Multitask & & \\ & \(\beta=0.03\) & 99.84 & 58.29 \\ \hline 8’ & Scratch+Multitask & & \\ & \(\beta=0.01\) & 99.4 & 56.68 \\ \hline 9 & ULIP(BN)+Multitask & & \\ & \(\beta=0.03\) & 99.79 & 57.75 \\ \hline 9’ & ULIP(BN)+Multitask & & \\ & \(\beta=0.01\) & 97.93 & **59.36** \\ \hline \end{tabular} \end{table} Table 3: Summary of classification evaluation results.
Figure 8: A rendered example of a mosque.

\begin{table} \begin{tabular}{c|c|c|c} \hline Id & Model & Train PIoU & Val PIoU \\ \hline 2 & Scratch+Seg & 55.44 & 31.66 \\ \hline 5 & PN(S3DIS)+Seg & 34.65 & 26.36 \\ \hline 7 & ULIP(BN)+Seg & 56.1 & **32.09** \\ \hline 8 & Scratch+Multitask \(\beta=0.03\) & 51.52 & 29.77 \\ \hline 8’ & Scratch+Multitask \(\beta=0.01\) & 52.14 & 29.78 \\ \hline 9 & ULIP(BN)+Multitask \(\beta=0.03\) & 45.81 & 29.72 \\ \hline 9’ & ULIP(BN)+Multitask \(\beta=0.01\) & 50 & **31.68** \\ \hline \end{tabular} \end{table} Table 5: Summary of segmentation evaluation results.

Figure 7: Classification loss comparison between Exp 1 and Exp 9’.

The ULIP framework is agnostic to the point cloud backbone architecture. I also train ULIP+MinkNet by finetuning from the segmentation baseline checkpoint for only 50 epochs. The result in Table 6 shows that transfer learning from the segmentation task to the classification task may also be useful.

### Final model scale-up

The final step is to scale the model up to PointNeXt-XL and train it for the BuildingNet challenge submission. I updated the voxel size and sample size to 0.01 and 40,000, and set the ball query radius to 0.025. Due to time and resource constraints, I only train one PointNeXt-XL model from scratch on an 8xA100 multi-GPU setup in order to submit the test result in time. Table 7 compares MinkNet and PointNeXt-XL. Note that both optimizers use momentum and weight decay.

Looking at the segmentation visualization of the PointNeXt-XL test result (Figure 9), while the prediction result is still noisy, the model can already learn most building parts, such as window, wall, roof, etc. Table 9 compares the per-class segmentation IoU between PointNeXt-s (ULIP(BN)+Seg), PointNeXt-XL (Scratch+Seg), and MinkNet7: PointNeXt-s wins 4 classes, PointNeXt-XL wins 11 classes, and MinkNet wins 15 classes.

Footnote 7: This came from finetuning from the MinkNet baseline checkpoint.

## 6 Conclusion

PointNet and SparseConv are two building blocks in the 3D deep learning toolkit. The comparison between PointNeXt and MinkNet shows that both can achieve reasonable performance on a challenging dataset like BuildingNet. Multi-task training with multi-modal pretraining consistently achieves top performance on building type classification and building part segmentation. Scaling up the PointNeXt model achieves the best performance on the segmentation task. What's more, we can reduce ULIP training time by loading a good segmentation model. Not only is labeling 3D data expensive, but training on 3D also requires significant accelerator resources. There is a lot of headroom to continue research in 3D deep learning and to find more efficient ways to boost data quality and model performance.

\begin{table} \begin{tabular}{l|c|c} \hline & MinkNet & PointNeXt-s \\ \hline Val Top-1 accuracy & 55.61 & 54.54 \\ \hline Val Top-5 accuracy & 85.56 & 85.56 \\ \hline \end{tabular} \end{table} Table 6: ULIP+MinkNet vs. ULIP+PointNeXt-s.
\begin{table} \begin{tabular}{c|c|c} \hline Model Type & MinkNet & PointNeXt-XL \\ \hline Pre-voxelize Data Augmentation & \multicolumn{2}{c}{Random rotation} \\ \hline Voxel Size & \multicolumn{2}{c}{0.01} \\ \hline Sample Size & 100,000 & 40,000 \\ \hline Post-voxelize Data Augmentation & None & color auto contrast, random scaling, jittering, color drop \\ \hline Loop Size & \multicolumn{2}{c}{12} \\ \hline Training Epochs & 200 & 100 \\ \hline Loss & \multicolumn{2}{c}{weighted cross entropy} \\ \hline LR & \multicolumn{2}{c}{0.01} \\ \hline Scheduler & \multicolumn{2}{c}{cosine} \\ \hline Optimizer & SGD & Adam \\ \hline \end{tabular} \end{table} Table 7: Comparison between MinkNet and PointNeXt-XL.

\begin{table} \begin{tabular}{c|c|c|c} \hline & PointNeXt-s & PointNeXt-XL & MinkNet \\ \hline PartIoU (train) & 56.1 & **74.25** & 65.72 \\ \hline PartIoU (val) & 32.09 & **34.68** & 34 \\ \hline PartIoU (test) & 29.56 & **31.33** & 31.2 \\ \hline ShapeIoU (test) & 21.76 & 22.78 & **24.1** \\ \hline \end{tabular} \end{table} Table 8: PointNeXt-s vs. PointNeXt-XL vs. MinkNet.

Figure 9: Visualization of PointNeXt-XL segmentation results on the test split.

## 7 Acknowledgements

This Stanford CS231n project is mentored by Alberto Tono, who helped review the proposal, milestone, and final report, and provided suggestions during project development. I want to thank Chen Xia and Juan Miguel Navarro Carranza for discussions and contributions before the project milestone. All the code, experiments, and the final report are authored and completed by the author. The codebase consists of 4k+ lines of custom code while reusing the PointNeXt8, ULIP9, and buildingnet_dataset10 repos.

Footnote 8: [https://github.com/guochengqian/PointNeXt](https://github.com/guochengqian/PointNeXt)

Footnote 9: [https://github.com/salesforce/ULIP](https://github.com/salesforce/ULIP)

Footnote 10: [https://github.com/buildingnet/buildingnet_dataset](https://github.com/buildingnet/buildingnet_dataset)

## 8 Appendix

### Complete list of building subtypes

* castle
* cathedral
* church
* city hall
* factory
* hotel building
* house
* monastery
* mosque
* museum
* office building
* palace
* school building
* temple
* villa
2304.11421
Monotone energy stability of magnetohydrodynamics Couette and Hartmann flows
We study the monotone nonlinear energy stability of \textit{magnetohydrodynamics plane shear flows, Couette and Hartmann flows}. We prove that the least stabilizing perturbations, in the energy norm, are the two-dimensional spanwise perturbations and give some critical Reynolds numbers Re$_E$ for some selected Prandtl and Hartmann numbers. This result solves a conjecture given in a recent paper by Falsaperla et al. \cite{FMP.2022} and implies a Squire theorem for nonlinear energy: the less stabilizing perturbations in the \textit{energy norm} are the two-dimensional spanwise perturbations. Moreover, for Reynolds numbers less than Re$_E$ there can be no transient energy growth.
Giuseppe Mulone
2023-04-22T14:37:43Z
http://arxiv.org/abs/2304.11421v1
# Monotone energy stability of magnetohydrodynamics Couette and Hartmann flows

###### Abstract

We study the monotone nonlinear energy stability of _magnetohydrodynamics plane shear flows, Couette and Hartmann flows_. We prove that the least stabilizing perturbations, in the energy norm, are the two-dimensional spanwise perturbations, and give some critical Reynolds numbers \(\mathrm{Re}_{\mathbf{E}}\) for some selected Prandtl and Hartmann numbers. This result solves a conjecture given in a recent paper by Falsaperla et al. [1] and implies a Squire theorem for nonlinear energy: the less stabilizing perturbations in the _energy norm_ are the two-dimensional spanwise perturbations. Moreover, for Reynolds numbers less than \(\mathrm{Re}_{\mathbf{E}}\) there can be no transient energy growth.

Magnetic Couette flow, Hartmann flow, Nonlinear monotone energy stability.

**MSC Classification:** 76E05, 76E25

_Dedication._ _This work is dedicated to Prof. Salvatore Rionero, my dearest teacher and mentor. His memory will remain in my heart forever._

## 1 Introduction

It is well known that the study of the stability of laminar flows in magnetohydrodynamics is important because of its numerous applications in different fields: geophysics, astrophysics, industry, biology, metallurgy, biofilms, and medicine; see [2] - [20] and the references therein. Many stability problems in magnetohydrodynamics, even in the presence of temperature, have been studied, and some notable results have been obtained by Rionero [21] - [27], also in porous media. In particular, in the work [22], Rionero proves, in the magnetohydrodynamics case, the fundamental existence theorem for the maximum of a functional ratio connected to the Reynolds-Orr energy equation.

In a recent paper, Falsaperla et al. [1] studied the monotone nonlinear energy stability of Couette and Hartmann motions with respect to three-dimensional perturbations in magnetohydrodynamics. They found that the streamwise perturbations are stabilizing for any Reynolds number. This is in contradiction with the results of Alexakis et al. [2]. In order to resolve this contradiction, Falsaperla et al. [1] made a conjecture: the maximum of the functional ratio that comes from the Reynolds-Orr energy equation is obtained in a subspace of the space of kinematically admissible perturbations (the space of physically admissible perturbations competing for the maximum), namely the two-dimensional spanwise perturbations.

The main purpose of this paper is to prove that this conjecture is true: the maximum of the functional ratio that comes from the Reynolds-Orr energy equation, and consequently the critical nonlinear Reynolds number for monotone energy stability, is obtained on two-dimensional perturbations, the spanwise perturbations. To obtain this result, we write the Reynolds-Orr energy equation and, as Lorentz [30] (see also [31]) observed in the fluid-dynamics case, we remark that a _scale invariance property_ holds for the terms of the energy equation. Then we compare two functional ratios and study the maximum obtained with the Euler-Lagrange equations.

The plan of the paper is the following. In Section 2 we introduce the basic motions and the perturbation equations. In Section 3 we study the nonlinear energy stability with respect to three-dimensional perturbations and find that the critical Reynolds numbers for monotone energy stability are obtained on the spanwise two-dimensional perturbations.
In Section 4 we report some graphs of the critical Reynolds numbers obtained with the Chebyshev collocation method for fixed Prandtl and Hartmann numbers. Finally, in Section 5, we draw conclusions.

## 2 Basic motions and perturbation equations

Consider a layer \(\mathcal{D}=\mathbb{R}^{2}\times[-1,1]\) filled with an electrically conducting fluid, [20]. We can write the magnetohydrodynamics system for stationary flows in the non-dimensional form [7; 20], and [16, formula (14)]: \[\left\{\begin{aligned} &\mathbf{v}\!\cdot\!\nabla\mathbf{v}=\mathrm{Ha}^{2} \mathrm{Re}\,^{-1}\mathrm{Rm}\ \mathbf{\hat{B}}\!\cdot\!\nabla\mathbf{\hat{B}}-\nabla\Pi+\mathrm{Re}\,^{-1} \Delta\mathbf{v}\\ &\nabla\!\cdot\!\mathbf{v}=0\\ &\mathbf{v}\!\cdot\!\nabla\mathbf{\hat{B}}-\mathbf{\hat{B}}\! \cdot\!\nabla\mathbf{v}=\mathrm{Rm}\,^{-1}\Delta\mathbf{\hat{B}}\\ &\nabla\!\cdot\!\mathbf{\hat{B}}=0,\end{aligned}\right. \tag{1}\] where \((x,y,z)\in\mathcal{D}\) and \(\mathbf{v}\), \(\mathbf{\hat{B}}\) are the unknown fields, respectively the velocity of the fluid and the magnetic induction field, and \(\Pi\) is the effective pressure (including the magnetic pressure). They are regular fields (at least \(C^{2}(\mathcal{D})\)). The other symbols in (1) are the positive non-dimensional parameters

* \(\mathrm{Re}\,=V_{0}d/\nu\), the Reynolds number,
* \(\mathrm{Rm}\,=V_{0}d/\eta\), the magnetic Reynolds number,
* \(\mathrm{Pm}=\dfrac{\nu}{\eta}=\dfrac{\mathrm{Rm}}{\mathrm{Re}}\), the magnetic Prandtl number,
* \(\mathrm{Ha}=\dfrac{B_{0}d}{\sqrt{\rho\nu\mu\eta}}\), the Hartmann number.

\(V_{0}\) and \(B_{0}\) are a reference velocity (generally the maximum velocity is considered) and a reference magnetic field. \(d\), \(\nu\), \(\eta\), \(\rho\), \(\mu\) are the half width of the layer, the viscosity, the electric resistivity, the density and the magnetic permeability, respectively; they are positive numbers. \(\nabla\) is the gradient operator and \(\Delta\) is the three-dimensional Laplacian. Following [15] we restrict our analysis to \(z\)-dependent laminar solutions of the form (we call them _mean or basic_ solutions) \[\mathbf{v}(z)=(U(z),0,0),\quad\mathbf{\hat{B}}(z)=(\bar{B}(z),0,\mathrm{Rm}\,^{-1})\] and we choose boundary conditions for plane _Couette and Hartmann flows_ which correspond to rigid conditions for the kinetic field and non-conducting boundaries (cf. [2]). We also assume that there is no forcing pressure in the channel. We recall the following theorems (see [2], [7], [8], [15]):

**Theorem 2.1**: The basic solution of system (1) satisfying the boundary conditions \[U(-1)=-1,\qquad U(1)=1,\qquad\bar{B}(-1)=\bar{B}(1)=0\] is the magnetic _Couette_ flow \[U(z)=\dfrac{\mathrm{sinh}(\mathrm{Ha}\,z)}{\mathrm{sinh}\,(\mathrm{Ha})}, \qquad\bar{B}(z)=\dfrac{\mathrm{cosh}\,(\mathrm{Ha})-\mathrm{cosh}(\mathrm{Ha}\,z)}{\mathrm{Ha}\,\mathrm{sinh}\,(\mathrm{Ha})}\]

**Theorem 2.2**: The basic solution of system (1) satisfying the boundary conditions \[U(-1)=U(1)=0,\qquad\bar{B}(-1)=\bar{B}(1)=0\] is the _Hartmann_ flow \[U(z)=\dfrac{\mathrm{cosh}(\mathrm{Ha})-\mathrm{cosh}(\mathrm{Ha}\,z)}{\mathrm{cosh}(\mathrm{Ha})-1},\quad\bar{B}(z)=\dfrac{\mathrm{sinh}(\mathrm{Ha}\,z)-z\,\mathrm{sinh}(\mathrm{Ha})}{\mathrm{Ha}(\mathrm{cosh}(\mathrm{Ha})-1)}.\]

We note that, with the given values of \(\mathbf{v}(z)\) and \(\mathbf{\hat{B}}(z)\), the pressure \(\Pi\) can be obtained by solving (1)\({}_{1}\) with respect to \(\Pi\). We want to investigate the nonlinear stability of these basic solutions.
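As a quick sanity check of the base states in Theorems 2.1 and 2.2, the profiles can be evaluated directly. The following Python sketch (our own illustration, not part of [1]) reproduces \(U(z)\) and \(\bar{B}(z)\) and verifies the boundary conditions numerically.

```python
import numpy as np

def couette_profiles(z, Ha):
    """Magnetic Couette base flow of Theorem 2.1: U(+-1) = +-1, Bbar(+-1) = 0."""
    U = np.sinh(Ha * z) / np.sinh(Ha)
    B = (np.cosh(Ha) - np.cosh(Ha * z)) / (Ha * np.sinh(Ha))
    return U, B

def hartmann_profiles(z, Ha):
    """Hartmann base flow of Theorem 2.2: U(+-1) = 0, Bbar(+-1) = 0."""
    denom = np.cosh(Ha) - 1.0
    U = (np.cosh(Ha) - np.cosh(Ha * z)) / denom
    B = (np.sinh(Ha * z) - z * np.sinh(Ha)) / (Ha * denom)
    return U, B

z = np.linspace(-1.0, 1.0, 201)
U, B = hartmann_profiles(z, Ha=10.0)              # the core flattens as Ha grows
assert abs(U[0]) < 1e-12 and abs(B[-1]) < 1e-12   # boundary conditions hold
```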
To this end, we consider a regular (\(C^{2}(\mathcal{D}\times[0,+\infty))\)) disturbance of the stationary solution \[\mathbf{v}+\mathbf{u}=(U(z),0,0)+(u,v,w),\quad\mathbf{\hat{B}}+\mathbf{h}=(\bar{B}(z),0,\mathrm{Rm}\,^{-1})+(h,k,\ell),\quad\Pi+\bar{\pi},\] with \((u,v,w)\), \((h,k,\ell)\) and \(\bar{\pi}\) depending on the variables \(x,y,z\), and \(t\). Denoting by \[A=\mathrm{Ha}^{2}\mathrm{Re}\,^{-1}\mathrm{Rm}\,=\mathrm{Ha}^{2}\mathrm{Pm}, \tag{2}\] the equations which govern the evolution of the _difference fields_ \(\mathbf{u},\mathbf{h},\bar{\pi}\) (often such difference fields are improperly called perturbations or disturbances) are: \[\begin{cases}\mathbf{u}_{t}+U(z)\mathbf{u}_{x}+wU^{\prime}(z)\mathbf{i}+\mathbf{u}\cdot\nabla\mathbf{u}=A[\bar{B}(z)\mathbf{h}_{x}+\frac{\mathbf{h}_{z}}{\mathrm{Rm}}+\\ +\ell\bar{B}^{\prime}(z)\mathbf{i}+\mathbf{h}\cdot\nabla\mathbf{h}]-\nabla\bar{\pi}+\frac{\Delta\mathbf{u}}{\mathrm{Re}}\\ \mathbf{h}_{t}+w\bar{B}^{\prime}(z)\mathbf{i}+U(z)\mathbf{h}_{x}+\mathbf{u}\cdot\nabla\mathbf{h}-\bar{B}(z)\mathbf{u}_{x}-\frac{\mathbf{u}_{z}}{\mathrm{Rm}}-\ell U^{\prime}(z)\mathbf{i}+\\ -\mathbf{h}\cdot\nabla\mathbf{u}=\frac{\Delta\mathbf{h}}{\mathrm{Rm}}\\ \nabla\mathbf{\cdot}\mathbf{u}=0,\quad\nabla\mathbf{\cdot}\mathbf{h}=0\,,\end{cases} \tag{3}\] where the suffixes \(t\), \(x\) and \(z\) denote derivatives with respect to the corresponding variables, and the prime denotes the first derivative with respect to \(z\). We assume that the perturbations are periodic in the variables \(x\) and \(y\), denote by \(\Omega=[0,\frac{2\pi}{a}]\times[0,\frac{2\pi}{b}]\times[-1,1]\) a periodicity cell [15], and denote by \(L_{2}(\Omega)\) the space of real square-integrable functions in \(\Omega\). We indicate with the symbols \((\cdot,\cdot)\) and \(\|\cdot\|\) the usual scalar product and norm in \(L_{2}(\Omega)\). The most common boundary conditions for \(\mathbf{u},\mathbf{h}\) on the planes \(z=\pm 1\) are (see Chandrasekhar [19])

1. rigid (_r_), \(u=v=w=0\)
2. stress-free (_sf_), \(u_{z}=v_{z}=w=0\)
3. non-conducting (_n_), \(h=k=\ell=0\)
4. conducting (_c_), \(h_{z}=k_{z}=\ell=0\).

Here we consider only the rigid and non-conducting case. Other boundary conditions will be considered in future papers. We recall the usual definitions of streamwise and spanwise perturbations:

**Definition 2.1**: _Streamwise_ (or longitudinal) perturbations are perturbations \(\mathbf{u},\mathbf{h},\bar{\pi}\) which do not depend on \(x\).

**Definition 2.2**: _Spanwise_ (or transverse) perturbations are perturbations \(\mathbf{u},\mathbf{h},\bar{\pi}\) which do not depend on \(y\). The two-dimensional spanwise perturbations are the spanwise perturbations with \(v=k=0\).

## 3 Nonlinear energy stability

First we recall the main nonlinear energy stability definitions.

**Definition 3.1**: We define the _energy_ (see [15]) of a disturbance \(\mathbf{u},\mathbf{h}\), \[E(t)=\frac{1}{2}(\|\mathbf{u}\|^{2}+A\|\mathbf{h}\|^{2}),\] with the coupling parameter \(A\) given by (2).

**Definition 3.2**: A basic motion \(\mathbf{v}(z)=(U(z),0,0),\quad\mathbf{\hat{B}}(z)=(\bar{B}(z),0,\mathrm{Rm}^{\,-1})\) is _monotonically stable in the energy norm_ \(E\), and \(\mathrm{Re}_{E}\) is the critical Reynolds number, if the time orbital derivative of the energy, \(\dot{E}\), is always less than zero, \[\dot{E}<0, \tag{4}\] when \(\mathrm{Re}<\mathrm{Re}_{E}\).
In particular, the stability is monotone and exponentially decreasing if there is a positive number \(\alpha\) such that \(E(t)\leq E(0)\exp\{-\alpha t\}\) for any \(t\geq 0\) and \(\mathrm{Re}<\mathrm{Re}_{E}\).

**Definition 3.3**: A basic motion \(\mathbf{v}(z)=(U(z),0,0),\quad\mathbf{\hat{B}}(z)=(\bar{B}(z),0,\mathrm{Rm}^{\,-1})\) of the Navier-Stokes magnetohydrodynamics equations is _globally stable_ to perturbations if the perturbation energy \(E\) satisfies \[\lim_{t\to+\infty}\frac{E(t)}{E(0)}=0,\quad\forall\,E(0)>0. \tag{5}\]

Now we study (and recall some results in [15]) the nonlinear stability of the shear flows by using the Lyapunov second method with the classical energy (see [15]) \[E(t)=\frac{1}{2}(\|\mathbf{u}\|^{2}+A\|\mathbf{h}\|^{2}).\] Taking the orbital derivative of \(E(t)\) and considering equations (3), the periodicity, the boundary conditions and the solenoidality of \(\mathbf{u}\) and \(\mathbf{h}\), we obtain the Reynolds-Orr [28], [29] equation (see [15]) \[\dot{E}=-(wU^{\prime},u)+A\left[(\ell\bar{B}^{\prime},u)-(w\bar{B}^{\prime},h)+(\ell U^{\prime},h)\right]+\] \[-\mathrm{Re}^{\,-1}\|\nabla\mathbf{u}\|^{2}-A\,\mathrm{Rm}^{\,-1}\|\nabla\mathbf{h}\|^{2}. \tag{6}\] As in the fluid-dynamics case (see Lorentz [30], Lamb [31], p. 640), we note that "the relative magnitude of the two terms on the right-hand side is unaffected if we reverse the signs of \(u,v,w\), and of \(h\), \(k\) and \(\ell\), or if we multiply them by any constant factor. The stability of a given state of mean motion should not therefore depend on the _scale_ of the disturbance" (the constant factor must be the same for \(\mathbf{u}\) and \(\mathbf{h}\)). Therefore, in the study of the following maximum problems we will always assume that this _scale invariance_ property holds.

### Nonlinear stability with respect to three-dimensional perturbations

Applying classical methods, see [22; 32; 33], we define \[I=-(U^{\prime}w,u)+A\left[(\ell\bar{B}^{\prime},u)-(w\bar{B}^{\prime},h)+(\ell U^{\prime},h)\right], \tag{7}\] and assume that the perturbations satisfy the conditions \(\mathbf{u}=0\) and \(\mathbf{h}=0\) on the boundaries, are divergence-free, periodic in \(x\) and \(y\), and satisfy the condition \(\|\nabla\mathbf{u}\|+\|\nabla\mathbf{h}\|>0\) as well as the scale invariance property. We can write the energy equation in this way \[\dot{E}=I-\mathrm{Re}\,^{-1}\|\nabla\mathbf{u}\|^{2}-A\mathrm{Rm}\,^{-1}\|\nabla\mathbf{h}\|^{2}. \tag{8}\] Introducing the space \(\mathcal{S}\) of the kinematically admissible perturbations \(\mathbf{u}\) and \(\mathbf{h}\) periodic in \(x\) and \(y\), \[\begin{array}{l}\mathcal{S}=\{\mathbf{u},\mathbf{h}\in W^{2,1}(\Omega),\;\mathbf{u}=\mathbf{h}=0\text{ when }z=\pm 1,\nabla\cdot\mathbf{u}=\nabla\cdot\mathbf{h}=0,\\ \|\nabla\mathbf{u}\|+\|\nabla\mathbf{h}\|>0\},\end{array} \tag{9}\] where \(W^{2,1}(\Omega)\) is the Sobolev space defined as the subspace of the space of vector fields with components \(f_{i}\) (\(i=1,2,3\)) in \(L_{2}(\Omega)\) such that \(f_{i}\) and its weak derivatives up to the first order have a finite \(L_{2}\)-norm. In order to solve the conjecture made in [1], we use the method given in [34]. Firstly, we observe that in the case \(I\leq 0\) we have \(\dot{E}<0\), and the perturbations are monotonically stable.
If instead \(I\) _is greater than zero_, then for any perturbation in \(\mathcal{S}\) that satisfies the scale invariance property, we may write (8) in the following way \[\dot{E}=\left[\frac{I}{\|\nabla\mathbf{u}\|^{2}+\mathrm{Ha}^{2}\|\nabla\mathbf{h}\|^{2}}-\mathrm{Re}\,^{-1}\right](\|\nabla\mathbf{u}\|^{2}+\mathrm{Ha}^{2}\|\nabla\mathbf{h}\|^{2}). \tag{10}\] In the case \(I>0\), for any perturbation \(\mathbf{u},\mathbf{h}\) in \(\mathcal{S}\), we have \[\frac{I}{\|\nabla\mathbf{u}\|^{2}+\mathrm{Ha}^{2}\|\nabla\mathbf{h}\|^{2}}\leq\frac{I}{\|\nabla u\|^{2}+\|\nabla w\|^{2}+\mathrm{Ha}^{2}[\|\nabla h\|^{2}+\|\nabla\ell\|^{2}]}. \tag{11}\] Defining \[\mathcal{D}_{1}=\|\nabla u\|^{2}+\|\nabla w\|^{2}+\mathrm{Ha}^{2}[\|\nabla h\|^{2}+\|\nabla\ell\|^{2}],\] and \[\mathcal{D}=\|\nabla\mathbf{u}\|^{2}+\mathrm{Ha}^{2}\|\nabla\mathbf{h}\|^{2},\] we now prove that \[\max_{\mathcal{S}}\frac{I}{\mathcal{D}}=\max_{\mathcal{S}_{0}}\frac{I}{\mathcal{D}_{1}}, \tag{12}\] where \(\mathcal{S}_{0}\) is the subspace of \(\mathcal{S}\) of the two-dimensional spanwise perturbations. To see this, choosing any element \((\mathbf{u},\mathbf{h})\) in \(\mathcal{S}\), we have \[\frac{I}{\mathcal{D}}\leq\frac{I}{\mathcal{D}_{1}}\leq\max_{\mathcal{S}}\frac{I}{\mathcal{D}_{1}}. \tag{13}\] From this inequality it follows that \(\max_{\mathcal{S}}\frac{I}{\mathcal{D}_{1}}\) is an upper bound of the set of elements \(\frac{I}{\mathcal{D}}\) when \((\mathbf{u},\mathbf{h})\) varies in \(\mathcal{S}\). Since \(\max_{\mathcal{S}}\frac{I}{\mathcal{D}}\) is the _least_ upper bound of this set, \[\max_{\mathcal{S}}\frac{I}{\mathcal{D}}\leq\max_{\mathcal{S}}\frac{I}{\mathcal{D}_{1}}.\] Finally, in the next subsection we shall prove that \[\max_{\mathcal{S}}\frac{I}{\mathcal{D}_{1}}=\max_{\mathcal{S}_{0}}\frac{I}{\mathcal{D}_{1}}. \tag{14}\] This implies (12), because \(\mathcal{S}_{0}\) is a subspace of \(\mathcal{S}\). Assuming \(\mathcal{D}_{1}>0\) and observing that the Poincaré inequality holds, we have that the ratio \(\frac{I}{\mathcal{D}_{1}}\) is bounded from above in \(\mathcal{S}\). A theorem due to Rionero [22] (see also Galdi and Rionero [35]) proves that the functional ratio \[\mathcal{F}=\frac{I}{\mathcal{D}_{1}}=\frac{I}{\|\nabla u\|^{2}+\|\nabla w\|^{2}+\mathrm{Ha}^{2}[\|\nabla h\|^{2}+\|\nabla\ell\|^{2}]}\] admits a maximum in \(\mathcal{S}\). Denoting this maximum by \[\mathrm{Re}_{E}^{-1}=m=\max_{\mathcal{S}}\frac{-(U^{\prime}w,u)+A\left[(\ell\bar{B}^{\prime},u)-(w\bar{B}^{\prime},h)+(\ell U^{\prime},h)\right]}{\|\nabla u\|^{2}+\|\nabla w\|^{2}+\mathrm{Ha}^{2}[\|\nabla h\|^{2}+\|\nabla\ell\|^{2}]}, \tag{15}\] from (10), (11) and (15) we have the inequality \[\dot{E}\leq(\mathrm{Re}_{E}^{-1}-\mathrm{Re}\,^{-1})[\|\nabla\mathbf{u}\|^{2}+\mathrm{Ha}^{2}\|\nabla\mathbf{h}\|^{2}].
\tag{16}\] From this inequality and the Poincaré inequalities \[\frac{\pi^{2}}{4}\|u\|^{2}\leq\|\nabla u\|^{2},\quad\frac{\pi^{2}}{4}\|v\|^{2}\leq\|\nabla v\|^{2},\quad\frac{\pi^{2}}{4}\|w\|^{2}\leq\|\nabla w\|^{2},\] \[\frac{\pi^{2}}{4}\|h\|^{2}\leq\|\nabla h\|^{2},\quad\frac{\pi^{2}}{4}\|k\|^{2}\leq\|\nabla k\|^{2},\quad\frac{\pi^{2}}{4}\|\ell\|^{2}\leq\|\nabla\ell\|^{2},\] it follows that the condition \[\mathrm{Re}\,<\mathrm{Re}_{E}\] implies nonlinear monotone energy stability of the magnetic Couette and Hartmann motions:

**Theorem 3.1**: Assuming that the Reynolds number satisfies the condition \[\mathrm{Re}\,<\mathrm{Re}_{E},\] the basic magnetic Couette and Hartmann motions are monotonically and asymptotically stable in the energy norm \(E\) according to the inequality \[E(t)\leq E(0)e^{\frac{\pi^{2}}{2}c_{0}(\mathrm{Re}-\mathrm{Re}_{E})t},\] with a positive constant \(c_{0}\) depending on Ha and Pm.

### Nonlinear critical Reynolds number

We prove here that the nonlinear critical Reynolds number is obtained on two-dimensional spanwise disturbances (the _Orr perturbations_ in fluid dynamics). In order to compute the critical Reynolds number for monotone nonlinear energy stability, we have to compute \(\mathrm{Re}_{E}\), or \(m=1/\mathrm{Re}_{E}\). For this purpose we must write the Euler-Lagrange equations of the functional \(\mathcal{F}\). The Euler-Lagrange equations of the maximum problem (15) are (see [15], [32]) \[\left\{\begin{aligned} &[-U^{\prime}w\mathbf{i}-U^{\prime}u\mathbf{k}+A\bar{B}^{\prime}\ell\mathbf{i}-A\bar{B}^{\prime}h\mathbf{k}]+2m(\Delta u\mathbf{i}+\Delta w\mathbf{k})=\nabla\lambda_{1}\\ & A[\bar{B}^{\prime}u\mathbf{k}-w\bar{B}^{\prime}\mathbf{i}+U^{\prime}\ell\mathbf{i}+U^{\prime}h\mathbf{k}]+2m\mathrm{Ha}^{2}(\Delta h\mathbf{i}+\Delta\ell\mathbf{k})=\nabla\lambda_{2},\end{aligned}\right. \tag{17}\] where \(\lambda_{1}\) and \(\lambda_{2}\) are Lagrange multipliers which depend on \(x,y,z\). We can write the Euler-Lagrange equations in components \[\left\{\begin{aligned} &-U^{\prime}w+A\bar{B}^{\prime}\ell+2m\Delta u=\frac{\partial\lambda_{1}}{\partial x}\\ & 0=\frac{\partial\lambda_{1}}{\partial y}\\ &-U^{\prime}u-A\bar{B}^{\prime}h+2m\Delta w=\frac{\partial\lambda_{1}}{\partial z}\\ & A[-w\bar{B}^{\prime}+U^{\prime}\ell]+2m\mathrm{Ha}^{2}\Delta h=\frac{\partial\lambda_{2}}{\partial x}\\ & 0=\frac{\partial\lambda_{2}}{\partial y}\\ & A[\bar{B}^{\prime}u+U^{\prime}h]+2m\mathrm{Ha}^{2}\Delta\ell=\frac{\partial\lambda_{2}}{\partial z}\\ & u_{x}+v_{y}+w_{z}=0,\;h_{x}+k_{y}+\ell_{z}=0,\end{aligned}\right. \tag{18}\] therefore the two multipliers \(\lambda_{1}\) and \(\lambda_{2}\) do not depend on \(y\). If \(\lambda_{1}=0\) and \(\lambda_{2}=0\), we take into account the solenoidality conditions \(u_{x}+v_{y}+w_{z}=0\), \(h_{x}+k_{y}+\ell_{z}=0\) and the boundary conditions \(w=w_{z}=\ell=\ell_{z}=v=k=0\) (the boundary conditions for \(w_{z}\) and \(\ell_{z}\) are obtained from the solenoidality of \(\mathbf{u}\) and \(\mathbf{h}\) and from the boundary conditions for \(u\), \(v\), \(h\) and \(k\)). Then, we take the successive derivatives with respect to \(z\) of each equation of (18). It is not difficult to prove that \(u,w,h,\ell\) and all their derivatives with respect to \(z\) are zero on the boundary; therefore \(u,w\) and \(h,\ell\) are identically zero. This implies \(\mathbf{u}=\mathbf{h}=0\) in \(\Omega\), which has been excluded.
If \(\lambda_{1}\) and \(\lambda_{2}\) are non-zero functions depending on \(x\) and \(z\), taking into account that \(m\) is the maximum in (15), it is not difficult to prove (see [34]) that \(v=0\) and \(k=0\), and the solenoidality conditions become \(u_{x}+w_{z}=h_{x}+\ell_{z}=0\). We differentiate (18)\({}_{1}\) with respect to \(z\) and (18)\({}_{3}\) with respect to \(x\), subtract, and take into account the solenoidality conditions (18)\({}_{7}\). Likewise, we differentiate (18)\({}_{4}\) with respect to \(z\) and (18)\({}_{6}\) with respect to \(x\), subtract, and take into account the solenoidality conditions (18)\({}_{7}\). We obtain \[\begin{cases}-U^{\prime}w_{z}-U^{\prime\prime}w+A\bar{B}^{\prime\prime}\ell+A\bar{B}^{\prime}\ell_{z}+2m\Delta u_{z}+U^{\prime}u_{x}+A\bar{B}^{\prime}h_{x}-2m\Delta w_{x}=0\\ AU^{\prime}\ell_{z}+AU^{\prime\prime}\ell-A\bar{B}^{\prime\prime}w-A\bar{B}^{\prime}w_{z}-A\bar{B}^{\prime}u_{x}-AU^{\prime}h_{x}+2m\text{Ha}^{2}\Delta h_{z}+\\ -2m\text{Ha}^{2}\Delta\ell_{x}=0.\end{cases} \tag{19}\] By differentiating each of these equations with respect to \(x\) and applying the solenoidality conditions, we finally have \[\begin{cases}2U^{\prime}w_{xz}+U^{\prime\prime}w_{x}-A\bar{B}^{\prime\prime}\ell_{x}+2m\Delta(w_{xx}+w_{zz})=0\\ -2AU^{\prime}\ell_{xz}-AU^{\prime\prime}\ell_{x}+A\bar{B}^{\prime\prime}w_{x}+2m\text{Ha}^{2}\Delta(\ell_{xx}+\ell_{zz})=0,\end{cases} \tag{20}\] with the boundary conditions \[w=w_{z}=\ell=\ell_{z}=0, \tag{21}\] on \(z=\pm 1\). We observe that, as is easy to check, the maximum of (15) is obtained when \(u_{y}=0\), \(w_{y}=0\), \(h_{y}=0\) and \(\ell_{y}=0\). Therefore \(u,w,h,\ell\) depend only on \(x\) and \(z\), and in (20) we have \(\Delta=\dfrac{\partial^{2}}{\partial x^{2}}+\dfrac{\partial^{2}}{\partial z^{2}}\). Since system (20) is linear, we seek solutions of the form (see [19; 32; 33; 36]) \[F(x,y,z)=f(z)e^{iax}\,, \tag{22}\] with \(F=w,\ell\) in the domain \(\Omega\). We obtain the system \[\begin{cases}2iaU^{\prime}Dw+iaU^{\prime\prime}w-iaA\bar{B}^{\prime\prime}\ell+2m(D^{2}-a^{2})^{2}w=0\\ -2iaAU^{\prime}D\ell-iaAU^{\prime\prime}\ell+iaA\bar{B}^{\prime\prime}w+2m\text{Ha}^{2}(D^{2}-a^{2})^{2}\ell=0,\end{cases} \tag{23}\] with \(D=\frac{d}{dz}\). System (23) is the _Orr system_ for shear flows _in magnetohydrodynamics_. This linear ordinary differential system with coefficients that depend on \(z\) is an eigenvalue problem for \(m\) (or \(\mathrm{Re}_{E}\)). The critical Reynolds numbers we obtain from this system correspond exactly to the critical Reynolds numbers obtained by Orr [29] (see also the recent results of Falsaperla et al. [1]) in fluid dynamics (i.e. the critical Reynolds numbers are reached for the _two-dimensional spanwise_ perturbations). The Orr system in fluid dynamics is formally obtained from (23) by setting \(\mathrm{Ha}=0\) therein. This result proves that _relation (14) holds_, and (12) is shown. A consequence of this result is that a Squire theorem, [37], holds for nonlinear energy stability in magnetohydrodynamics: the less stabilizing perturbations in the energy norm are the two-dimensional spanwise perturbations. We observe that in _the linear case_ Takashima, [7], and [8], p. 109, writes "It is evident from Eqs. (2.30)-(2.32) and the boundary conditions (2.33)-(2.35) _that Squire's theorem is valid,_ and therefore we shall hereafter consider only two-dimensional disturbances in the \(z-x\) plane (i.e., \(\beta=0\))."
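Before turning to the numerical results, the following Python sketch indicates how the eigenvalue problem (23) with boundary conditions (21) might be discretized by Chebyshev collocation. It is our own schematic illustration, not the code used in [1]: the boundary conditions are imposed by crude row replacement (which produces spurious infinite eigenvalues that are simply filtered out), whereas production codes typically use basis recombination, and the callables `Up`, `Upp`, `Bpp` supplying \(U'\), \(U''\) and \(\bar{B}''\) are assumed to be provided by the user, e.g. from the profiles of Theorems 2.1-2.2.

```python
import numpy as np
from scipy.linalg import eig

def cheb(n):
    """Chebyshev differentiation matrix and grid on [-1, 1] (Trefethen)."""
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    D = np.outer(c, 1.0 / c) / (x[:, None] - x[None, :] + np.eye(n + 1))
    return D - np.diag(D.sum(axis=1)), x

def max_m(a, Ha, Pm, Up, Upp, Bpp, n=60):
    """Largest real eigenvalue m of system (23); minimising 1/m over the
    wave number a then gives the critical Reynolds number Re_E."""
    D, z = cheb(n)
    I = np.eye(n + 1)
    L2 = (D @ D - a**2 * I) @ (D @ D - a**2 * I)    # (D^2 - a^2)^2
    ia, A = 1j * a, Ha**2 * Pm                      # A = Ha^2 Pm, Eq. (2)
    S = ia * (2.0 * np.diag(Up(z)) @ D + np.diag(Upp(z)))
    C = ia * A * np.diag(Bpp(z))
    Z = np.zeros_like(I)
    lhs = np.block([[S, -C], [C, -A * S]]).astype(complex)    # terms without m
    rhs = np.block([[-2.0 * L2, Z], [Z, -2.0 * Ha**2 * L2]]).astype(complex)
    # rigid, non-conducting BCs (21): w = Dw = ell = Dell = 0 at z = +-1
    for off in (0, n + 1):
        for r, row in ((off, I[0]), (off + n, I[n]),
                       (off + 1, D[0]), (off + n - 1, D[n])):
            lhs[r, :] = 0.0
            rhs[r, :] = 0.0
            lhs[r, off:off + n + 1] = row
    m = eig(lhs, rhs, right=False)
    return m[np.isfinite(m)].real.max()
```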
## 4 Some numerical results

We show here some numerical results, obtained by solving system (23) with the boundary conditions (21). The eigenvalue problem (23)-(21) was solved in [1] with a Chebyshev collocation method, using between 50 and 70 basis polynomials. For completeness, we report here their result for spanwise perturbations. In Fig. 1 we fix \(\mathrm{Pm}=0.1\) and \(\mathrm{Ha}=0.1,1,10,50\), and we obtain the Reynolds number \(\mathrm{Re}\,\) as a function of the wave number \(a\). For each fixed value of \(\mathrm{Ha}\), the critical Reynolds number \(\mathrm{Re}\,_{E}\) is found by taking the minimum of \(\mathrm{Re}\,\) with respect to the parameter \(a\).

Figure 1: Orr-Reynolds critical number \(\mathrm{Re}\) for the magnetic Couette flow (left panel) and the Hartmann (magnetic Poiseuille) flow (right panel) as a function of the wave number \(a\), for magnetic Prandtl number \(\mathrm{Pm}=0.1\).

## 5 Conclusion

In this paper we study the monotone nonlinear energy stability of _magnetohydrodynamics plane Couette and Hartmann_ shear flows with rigid and non-conducting boundaries. We solve the conjecture given in [1] by proving that the nonlinear critical Reynolds number is obtained on spanwise perturbations. To prove this, we compare two functional ratios and take into account that the right-hand side of the energy equation has a scale invariance property with respect to the fields \(\mathbf{u}\) and \(\mathbf{h}\). We then solve the Euler-Lagrange equations and prove that the maximum is obtained on the functions which have \(v=0\), \(k=0\) and do not depend on \(y\). This result implies a Squire theorem for nonlinear stability. Moreover, for Reynolds numbers less than \(\mathrm{Re}_{E}\) there can be no transient energy growth.

Acknowledgments. I thank Dr. Carla Perrone for making the graphs in Section 4. The research that led to the present paper was partially supported by the following grants: grant 2017YBKNCE of the national project PRIN of the Italian Ministry for University and Research, and grant PTR-DMI-53722122113 "Analisi qualitativa per sistemi dinamici finito e infinito dimensionali con applicazioni a biomatematica, meccanica dei continui e termodinamica estesa classica e quantistica" of the University of Catania. I also thank the group GNFM of INdAM for financial support.

**Declarations** Conflicts of interest: The author states that there is no conflict of interest.
2308.07196
Nonequilibrium phase transition of a one dimensional system reaches the absorbing state by two different ways
We study the nonequilibrium phase transitions from the absorbing phase to the active phase for the model of disease spreading (Susceptible-Infected-Refractory-Susceptible (SIRS)) on a regular one dimensional lattice. In this model, particles of three species (S, I and R) on a lattice react as follows: $S+I\rightarrow 2I$ with probability $\lambda$, $I\rightarrow R$ after an infection time $\tau_I$ and $R\rightarrow S$ after a recovery time $\tau_R$. In the case of $\tau_R>\tau_I$, this model has been found to have two critical thresholds separating the active phase from the absorbing phases \cite{ali1}. The first critical threshold $\lambda_{c1}$ corresponds to a low infection probability and the second critical threshold $\lambda_{c2}$ corresponds to a high infection probability. At the first critical threshold $\lambda_{c1}$, our Monte Carlo simulations of this model suggest the phase transition to be of the directed percolation (DP) class. However, at the second critical threshold $\lambda_{c2}$ we observe that the system becomes very sensitive to the initial conditions, which suggests the phase transition to be a discontinuous one. We confirm this result using the order parameter quasistationary probability distribution and a finite-size analysis of this model at $\lambda_{c2}$. Additionally, the typical space-time evolution of this model at $\lambda_{c2}$ shows that the spreading of active particles is compact, in a behavior reminiscent of the spreading in compact directed percolation.
M. Ali Saif
2023-08-14T15:05:10Z
http://arxiv.org/abs/2308.07196v1
# Nonequilibrium phase transition of a one dimensional system reaches the absorbing state by two different ways

###### Abstract

We study the nonequilibrium phase transitions from the absorbing phase to the active phase for the model of disease spreading (Susceptible-Infected-Refractory-Susceptible (SIRS)) on a regular one dimensional lattice. In this model, particles of three species (S, I and R) on a lattice react as follows: \(S+I\to 2I\) with probability \(\lambda\), \(I\to R\) after an infection time \(\tau_{I}\) and \(R\to S\) after a recovery time \(\tau_{R}\). In the case of \(\tau_{R}>\tau_{I}\), this model has been found to have two critical thresholds separating the active phase from the absorbing phases [51]. The first critical threshold \(\lambda_{c1}\) corresponds to a low infection probability and the second critical threshold \(\lambda_{c2}\) corresponds to a high infection probability. At the first critical threshold \(\lambda_{c1}\), our Monte Carlo simulations of this model suggest the phase transition to be of the directed percolation (DP) class. However, at the second critical threshold \(\lambda_{c2}\) we observe that the system becomes very sensitive to the initial conditions, which suggests the phase transition to be a discontinuous one. We confirm this result using the order parameter quasistationary probability distribution and a finite-size analysis of this model at \(\lambda_{c2}\). Additionally, the typical space-time evolution of this model at \(\lambda_{c2}\) shows that the spreading of active particles is compact, in a behavior reminiscent of the spreading in compact directed percolation.

## 1 Introduction

Nonequilibrium phase transitions from active states to absorbing states have attracted a lot of attention from the scientific community recently [1, 2, 3]. One of the most important efforts in this field is concerned with classifying nonequilibrium systems into universality classes. In this sense, the directed percolation (DP) class is the most important class in nonequilibrium phase transitions to absorbing states. Many models have been found to belong to this class, for example the contact process (CP), the Domany-Kinzel cellular automaton (DK), the Ziff-Gulari-Barshad (ZGB) model, the pair-contact process (PCP), the threshold transfer process (TTP) and branching annihilating walks with an odd number of offspring [4, 5, 6, 7, 8, 9, 10]. According to the Janssen-Grassberger criterion [2, 3, 11], a model should belong to the DP universality class if it satisfies the following conditions: it displays a continuous transition into a unique absorbing state with a positive one-component order parameter, with short-range interactions and without quenched disorder or additional symmetries. In fact, the DP class seems to be even more general, and it has been found that some systems belong to this universality class even if they violate some of the previous criteria, for example systems with long-range interactions [12], certain models with many absorbing states [13, 14, 15, 16] or fluctuating passive states [17]. Another important class of nonequilibrium phase transitions to an absorbing state is the voter universality class. This class has been observed in the special case of models with two symmetric (\(Z_{2}\) symmetry) absorbing states [3, 18, 19, 20].
Models such as compact directed percolation (CDP), the \(2A\to\phi\) and \(2A\to A\) models, the cellular automaton version of the nonequilibrium kinetic Ising model and Lévy-flight anomalous diffusion in annihilating random walks belong to this class. The parity-conserving (PC) universality class [3, 18, 19, 20] is another universality class of transitions to an absorbing state. This class characterizes those models which conserve the number of particles modulo 2. Examples are the one-dimensional kinetic Ising models which combine finite temperature spin exchange dynamics and zero temperature spin-flip dynamics [21], branching and annihilating random walks with an even number of offspring [22] and the parity-conserving class of surface-catalytic models [23].

Nonequilibrium discontinuous phase transitions from an active state to an absorbing state have also been observed in dimensions higher than one in many cases, for example in the two dimensional ZGB model and its modifications [4, 6, 7, 24, 25, 26, 27, 28], a two dimensional reaction-diffusion contact-process-like model [29, 30, 31], two lattice versions of the second Schlögl model (SSM) [32, 33], a two dimensional naming game model [34, 35, 36], the deterministic conserved threshold transfer process in two and four dimensions [37, 38] and the prisoner's dilemma with semi-synchronous updates in two dimensions [39]. However, discontinuous phase transitions to absorbing states have rarely been seen in one dimension. This is due to the fact that fluctuations in low dimensions are strong, which makes continuous phase transitions more likely to occur. Hinrichsen [1, 2] argued that first order phase transitions cannot occur in one dimensional nonequilibrium systems unless there are extra symmetries, conservation laws, long-range interactions or special boundary conditions. Nevertheless, in one dimension first order phase transitions have been observed in systems with conserved density [38, 40], in models with long-range interactions [41] and in multi-component systems [42, 43]. For a two-species reaction-diffusion process in one dimension, renormalization group methods predict a first order phase transition [44, 45]; however, numerical simulations of that model have yielded results in disagreement with the renormalization group prediction [46, 47]. The model most likely to violate Hinrichsen's argument is the triplet creation model (TCM). This model has a single component and does not possess a conservation law or long-range interactions. An earlier study by Dickman and Tomé [31] had suggested a first order phase transition for this model at a high value of the diffusion rate (\(D\geq 0.9\)). Subsequently, Cardoso and Fontanari modified that value to \(D\geq 0.95\) [48]. Recently, the simulation results for the TCM model by Park [49] have shown that the phase transition of this model is continuous for any value of \(D\leq 0.98\). A more recent study of this model by Ódor and Dickman [50] suggests a continuous phase transition for any value of \(D<1\).

In this work we study the phase transition from the absorbing state to the active state of the epidemic spreading model SIRS (Susceptible-Infected-Refractory-Susceptible) on a one dimensional regular network. This model has been shown to have two critical thresholds [51]. We are interested in studying the phase transition close to those critical thresholds. This work is organized as follows.
In section 2, the model and simulation methods are described. Simulation results close to the first critical point of this model are presented and discussed in section 3. Simulation results close to the second critical point are given and discussed in section 4. Conclusions are given in section 5.

## 2 Model and Methods

The model of epidemic spreading SIRS on networks can be described as follows [52]: consider a population of \(N\) particles residing on the sites of a lattice in which each particle is connected to \(k\) of its neighbors. Each particle can be in one of three states: susceptible (\(S\)), infected (\(I\)) and refractory (\(R\)). The interaction between the particles on the lattice is as follows: a particle in state \(I\) can infect any one of its neighbors which is in state \(S\) with probability \(\lambda\) at each time step (\(S+I\to 2I\)). A particle in state \(I\) passes to the \(R\) state after an infection time \(\tau_{I}\) (\(I\to R\)). A particle in state \(R\) returns to the \(S\) state after a recovery time \(\tau_{R}\) (\(R\to S\)). During the \(R\) phase, the particles are immune and do not infect. For this model on networks, as has been proven in Ref. [51], we have to distinguish between the following two cases. The first case occurs when \(\tau_{I}\geq\tau_{R}\); here the SIRS model has only one critical threshold \(\lambda_{c}\), which separates the active phase from the absorbing phase. The second case occurs when \(\tau_{I}\leq\tau_{R}\); here the SIRS model has two critical thresholds \(\lambda_{c1}\) and \(\lambda_{c2}\), such that the system is active between them and dies outside of them. In the second case, the first critical threshold \(\lambda_{c1}\) corresponds to the situation where the infection probability \(\lambda\) is low, so the spreading of infection is limited and local; therefore, when \(\lambda<\lambda_{c1}\) the system evolves to the absorbing state where all particles become susceptible (state \(S\)). In contrast, the second critical threshold \(\lambda_{c2}\) corresponds to the situation where the infection probability \(\lambda\) is high, so the infection spreads globally and quickly through the entire network. Now let us ask the following question: what happens to the system when \(\lambda\) is high enough that all the particles in the network become infected during a time which is less than or equal to \(\tau_{I}\)? In the case \(\tau_{R}>\tau_{I}\), all the particles will then approach the state \(R\), followed by the \(S\) state, during a time which is not longer than \(\tau_{R}\). This case is therefore also an absorbing state for this model. However, this absorbing state is un-stationary: the particles stay in it only for a time which is not longer than \(\tau_{R}\), after which the system approaches the stable absorbing state \(S\) (see Fig. 1). Hence we can say that, when \(\tau_{I}\leq\tau_{R}\), the SIRS model has two absorbing states; even though the second absorbing state is not stable, if the system reaches it, it will surely evolve to the stable absorbing state (the first absorbing state \(S\)). As mentioned above, the absorbing state of this model is the state where the lattice becomes free of infected particles, i.e., the state \(S\). The SIRS model approaches this absorbing state in two different ways.
At \(\lambda_{c1}\) the model reaches the absorbing state because the strength of infection is very low; hence, the average number of susceptible particles infected by an already infected one during the time \(\tau_{I}\) is less than one. At \(\lambda_{c2}\), by contrast, the strength of infection is so high that each infected particle infects all of its neighbors during the time \(\tau_{I}\). The second critical threshold is equivalent to the state where all particles reach the state \(I\) during a time which is less than or equal to \(\tau_{I}\). In this case, since \(\tau_{R}>\tau_{I}\), all the particles will then approach the state \(R\), followed by the absorbing state \(S\), during a time not longer than \(\tau_{R}\). We can therefore consider the state where all particles on the lattice are infected (state \(I\)) as an absorbing state of this model; however, this state is un-stationary and will end up in the absorbing state \(S\).

In Fig. 1 we show the space-time evolution of a one dimensional lattice of 11 sites with periodic boundary conditions, in which each particle is connected to its two first neighbors. We set the infection probability \(\lambda=1\), the infection time \(\tau_{I}=2\) and the recovery time \(\tau_{R}=3\), and the infection starts with one particle at the center of the lattice. It is clear from the figure that all particles on the lattice approach the absorbing state \(S\) after 11 time steps.

Figure 1: Space-time evolution of a lattice of 11 sites with periodic boundary conditions when \(\lambda=1\), \(\tau_{I}=2\), \(\tau_{R}=3\) and \(k=1\) (blue: \(S\), red: \(I\), green: \(R\)).

We simulate this model on a regular one dimensional lattice with periodic boundary conditions in which each particle is connected to \(k=3\) of its nearest neighbors on each side. The system updates synchronously. In this work, we fix the infection time and recovery time to \(\tau_{I}=7\) and \(\tau_{R}=9\) unless stated otherwise. The order parameter \(\rho(t)\) is the density of infective particles \(I\) (active particles) \[\rho(t)=\frac{\left\langle\sum_{j}I_{j}(t)\right\rangle}{N} \tag{1}\] where \(N\) is the total number of lattice sites and \(\left\langle...\right\rangle\) stands for the average over ensembles. The steady state value of the order parameter is \(\rho_{s}\equiv\lim_{t\to\infty}\rho(t)\). In Fig. 2, we recreate the steady state of the density of active particles as a function of \(\lambda\) given in Ref. [51]. For each point in Fig. 2, we start the simulation from the initial density of active particles \(\rho(0)=0.1\) and average over 100 configurations after discarding the first \(10^{4}\) time steps. The figure clearly shows the two critical thresholds at which we are interested in studying the phase transition.
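To make the update rule concrete, the following is a minimal Python sketch of one synchronous Monte Carlo step of this dynamics. The encoding of states through an "age" counter, and the assumption that each infected neighbor of a susceptible site attempts infection independently with probability \(\lambda\), are our own illustrative choices; the exact treatment of simultaneous infection attempts may differ in the actual simulation code.

```python
import numpy as np

def sirs_step(age, lam, k, tau_I, tau_R, rng):
    """One synchronous update of the SIRS model on a ring where every site
    is coupled to its k nearest neighbors on each side.

    `age` encodes the state of each site: 0 for S, 1..tau_I for I
    (infection clock) and tau_I+1..tau_I+tau_R for R (recovery clock)."""
    N = age.size
    infected = (age >= 1) & (age <= tau_I)
    # number of infected neighbors of every site
    pressure = np.zeros(N, dtype=int)
    for d in range(1, k + 1):
        pressure += np.roll(infected, d).astype(int)
        pressure += np.roll(infected, -d).astype(int)
    # each infected neighbor of a susceptible site tries independently
    p_inf = 1.0 - (1.0 - lam) ** pressure
    newly = (age == 0) & (rng.random(N) < p_inf)
    age = np.where(age > 0, age + 1, 0)   # advance the I and R clocks
    age[age > tau_I + tau_R] = 0          # R -> S after tau_R steps
    age[newly] = 1                        # S -> I
    return age

# usage: density of active (infected) sites over time
rng = np.random.default_rng(0)
N, lam, k, tau_I, tau_R = 10_000, 0.12, 3, 7, 9
age = np.where(rng.random(N) < 0.1, 1, 0)   # rho(0) = 0.1
rho = []
for t in range(2_000):
    age = sirs_step(age, lam, k, tau_I, tau_R, rng)
    rho.append(((age >= 1) & (age <= tau_I)).mean())
```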
## 3 Phase Transition at the first critical point \(\lambda_{c1}\)

For a general view of the kind of phase transition at the first critical point, we start with the typical space-time evolution of the model near \(\lambda_{c1}\). In Fig. 3 we show the typical space-time evolution of this model when the simulation starts from a single active seed located at the center of the lattice, for the parameter values \(\lambda=0.090\) and \(\lambda=0.094\). Fig. 3 looks similar to the typical space-time evolution of systems which undergo a DP phase transition [2].

Figure 3: Typical space-time evolutions for \(\lambda=0.090\) (right) and \(\lambda=0.094\) (left). The simulation starts from a single active particle (black) and time increases downward.

To confirm whether the phase transition in this case is in the DP universality class, we will calculate some of the critical exponents of this model; before that, we determine the value of the critical point \(\lambda_{c1}\). In Fig. 4 we plot the average steady state density of active particles \(\rho_{s}\) at various values of the infection probability \(\lambda\). In the simulation, we use a lattice of size \(N=10^{4}\) averaged over 200 realizations after discarding the first \(10^{4}\) time steps. Fig. 4 shows clearly that the system crosses from the absorbing phase to the active phase at a specific value of the parameter \(\lambda\).

Figure 4: The steady state density of active particles at various values of \(\lambda\), for the same parameters as in Fig. 2. The inset is a log-log plot of \(\rho_{s}\) as a function of the distance to the critical point.

For the best estimation, the value of the critical point seems to converge to \(\lambda_{c1}=0.0906\pm 0.004\). Using this result we can determine one of the critical exponents related to this model: it is known that, for continuous phase transitions, as the control parameter \(\lambda\) approaches the critical point \(\lambda_{c}\), the stationary value of the order parameter \(\rho_{s}\) vanishes asymptotically according to a power law [1, 2, 3]: \[\rho_{s}\sim(\lambda-\lambda_{c})^{\beta} \tag{2}\] The inset of Fig. 4 shows the logarithmic plot of \(\rho_{s}\) as a function of the distance from the critical point \((\lambda-\lambda_{c1})\), which clearly shows the power law behavior. The value of the critical exponent estimated from the inset of Fig. 4 is \(\beta=0.281\pm 0.005\), which is in very good agreement with the value \(\beta=0.276\) of the \((1+1)\) DP universality class [1, 2, 3]. To extract a further critical exponent, we perform time dependent Monte Carlo simulations of this model starting from a fully occupied lattice. As we know, for continuous phase transitions, at the critical point \(\lambda_{c}\) the time evolution of the order parameter \(\rho(t)\) decays asymptotically according to the power law [1, 2, 3] \[\rho(t)\sim t^{-\delta} \tag{3}\] where \(\delta\) is the critical exponent, which equals \(0.159464(6)\) for the DP universality class in \(1+1\) dimensions [2]. Fig. 5 shows the density of active particles \(\rho(t)\) as a function of time on a logarithmic scale. At the critical point the system clearly shows a power law decay of the density of active particles. For the best fit, we find the critical exponent to be \(\delta=0.159\pm 0.005\), which is again consistent with the value of the critical exponent for the DP universality class in \(1+1\) dimensions. Hence we can confirm that, in the case \(\tau_{I}<\tau_{R}\), the phase transition from the absorbing state to the active state of the SIRS model at the first critical point \(\lambda_{c1}\) belongs to the DP universality class in \((1+1)\) dimensions. We should mention that, close to \(\lambda_{c1}\), the values of \(\lambda\) are low enough that the stable absorbing state \(S\) is the dominant state of the system, i.e., it is impossible for the system to approach the un-stationary absorbing state \(I\) in this case. Therefore the only absorbing state accessible to the system close to the first critical point is the state \(S\); consequently, in this case the system satisfies the Janssen-Grassberger criterion, except that this model is a multi-component system.
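As an illustration of how Eq. (2) is used in practice, the exponent \(\beta\) can be extracted with a simple least-squares fit in log-log coordinates. In the sketch below, the arrays `lam` and `rho_s` stand for hypothetical steady-state measurements like those in the inset of Fig. 4.

```python
import numpy as np

def estimate_beta(lam, rho_s, lam_c):
    """Fit rho_s ~ (lam - lam_c)^beta: the slope of log(rho_s) versus
    log(lam - lam_c), as in the inset of Fig. 4."""
    x = np.log(np.asarray(lam) - lam_c)
    y = np.log(np.asarray(rho_s))
    beta, _intercept = np.polyfit(x, y, 1)
    return beta
```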
## 4 Phase Transition at the second critical point \(\lambda_{c2}\)

As we increase the value of \(\lambda\) toward the second critical point \(\lambda_{c2}\), we observe that, at a specific value of \(\lambda\) (\(\lambda>0.15\) for a system of size \(N=10^{4}\), with the other parameters the same as in the previous section), the steady state of the average density of infected particles \(\rho(t)\) becomes strongly dependent on its initial value \(\rho(0)\), as Fig. 6 shows. In Fig. 6 we plot the average value of \(\rho(t)\) as a function of time for various initial conditions \(\rho(0)\). Fig. 6b shows three plots of \(\rho(t)\) as a function of time for \(\lambda=0.16\), in which we start the simulations from different initial values \(\rho(0)\). The figure clearly shows that for the initial values \(\rho(0)=0.1\) and \(\rho(0)=0.3\) the system attains the same steady state, whereas for \(\rho(0)=0.6\) the system saturates at a different steady state. Fig. 6c shows the situation for \(\lambda=0.20\) with the same initial conditions as in Fig. 6b. In this case the dependence of the system on its initial values becomes more apparent: the three plots approach different steady states for the three initial conditions. For \(\lambda=0.12\), Fig. 6a is given for comparison; in this case the curves reach the same steady state and the system does not depend on its initial conditions.

Figure 6: The time evolution of the average density of particles as a function of time at different initial conditions for \(N=10^{4}\), \(k=3\), \(\tau_{I}=7\) and \(\tau_{R}=9\): a) \(\lambda=0.12\), b) \(\lambda=0.16\), c) \(\lambda=0.20\). Each curve is averaged over 200 realizations.

We should mention that, in the Monte Carlo simulations of Fig. 6, we take the average of \(\rho(t)\) over all configurations, whether they survive or not. Another point worth mentioning is that sensitivity of the steady state of the order parameter to the initial conditions has also been observed in a minimal vaccination-epidemic model of disease spreading [53]. By careful inspection of the time evolution of \(\rho(t)\) we observe that some configurations quickly become trapped in the absorbing state during a time not longer than \(\tau_{I}+\tau_{R}\) (in our case, 17 time steps); this is, on average, exactly the time needed for the particles to go through one infection cycle. Other configurations survive for a longer time. We also observe that, for a fixed value of \(\rho(0)\), the density of trapped configurations increases as \(\lambda\) becomes higher; conversely, increasing \(\rho(0)\) increases the density of trapped configurations at fixed \(\lambda\). Fig. 7 shows the density of trapped configurations (DTCO) as a function of the initial density of active particles \(\rho(0)\) for \(\lambda=0.16\). In this figure, the trapped configurations are those that reach the absorbing state within 17 time steps or fewer. The figure clearly shows that whenever the initial density of active particles satisfies \(\rho(0)>0.4\), all configurations reach the absorbing state. However, for \(\rho(0)<0.4\), we see an increasing number of surviving configurations as \(\rho(0)\) decreases.

Figure 7: Density of trapped configurations (DTCO) as a function of the initial density of active particles \(\rho(0)\) when \(N=10^{4}\), \(k=3\), \(\tau_{I}=7\), \(\tau_{R}=9\), and \(\lambda=0.16\); each point is averaged over 2000 configurations.
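Reusing `sirs_step` from the sketch in section 2, the fraction of trapped configurations in Fig. 7 could be estimated along the following lines; the cutoff of 17 steps follows the text, and all other details are illustrative.

```python
import numpy as np

def fraction_trapped(lam, rho0, n_runs=2000, N=10_000, k=3,
                     tau_I=7, tau_R=9, cutoff=17):
    """Fraction of runs that reach the absorbing state (no I particles)
    within `cutoff` time steps, i.e. within one infection cycle."""
    rng = np.random.default_rng(1)
    trapped = 0
    for _ in range(n_runs):
        age = np.where(rng.random(N) < rho0, 1, 0)
        for _t in range(cutoff):
            age = sirs_step(age, lam, k, tau_I, tau_R, rng)
            if not ((age >= 1) & (age <= tau_I)).any():
                trapped += 1
                break
    return trapped / n_runs
```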
We can deduce from the previous results that the chance for the system to approach the state where all the particles in the network are infected (the absorbing state \(I\)) grows as \(\lambda\) moves toward higher values. This possibility is enhanced as the initial density of active particles increases. We can understand the relation between the infection probability \(\lambda\) and the initial density \(\rho(0)\) in this model from the mean field approximation (see Ref. [56]): since the increase in the number of \(I\) particles at the next time step is proportional to the numbers of \(I\) and \(S\) particles at the current step, i.e. \(I(t+1)\propto I(t)+\lambda S(t)I(t)\), if \(\lambda S(t)I(t)\) becomes high enough that \(I(t+1)=N\), the system will reach the state \(I\). Therefore, we can say that for high values of \(\lambda\) the un-stationary absorbing state becomes the dominating state of the system.

Because of this dependence of the system on its initial conditions near \(\lambda_{c2}\), we have faced difficulty in determining the kind of phase transition, or even in accurately determining the critical point close to \(\lambda_{c2}\). We should also mention that we could not obtain any kind of power law behavior near the expected value of the critical point using time dependent dynamics starting either from a fully occupied lattice or from a single active seed. However, according to Refs. [46, 57], dependence on the initial values is an indicator of a discontinuous phase transition. Therefore, for a better understanding of the mechanism of the phase transition in this case, we compute the order parameter quasistationary probability distribution for this model. As claimed in Refs. [27, 29, 47, 58, 59, 60], the order parameter quasistationary probability distribution is bimodal in the neighborhood of a discontinuous phase transition, in contrast to a continuous phase transition where there is only a single peak. Fig. 8 shows the results of our Monte Carlo simulations for the cell-occupancy histogram distribution (P) of our model for cells of 100 sites at the center of a lattice of \(N=10^{3}\) particles, at various values of \(\lambda\). The variable \(n\) in Fig. 8 is the number of active particles. The quasistationary distribution is clearly bimodal. This result supports the assumption of a discontinuous phase transition of this model at \(\lambda_{c2}\). For comparison, the inset of Fig. 8 shows the cell-occupancy histogram distribution in the vicinity of \(\lambda_{c1}\) (\(\lambda=0.1\)), where the system undergoes a continuous phase transition; in this case there is clearly only a single peak.

The results found above strongly suggest that the phase transition at the second critical point \(\lambda_{c2}\) is discontinuous. However, for further clarification we study the quasistationary behavior of this system near the critical point \(\lambda_{c2}\). For that we perform a finite-size analysis, a more reliable procedure recently proposed in Refs. [33, 36]. According to that procedure, the difference between the pseudotransition point \(\lambda_{N}\) (where \(N\) denotes the system volume) and the transition point \(\lambda_{c2}\) scales with \(N^{-1}\) according to the relation \(\lambda_{N}=\lambda_{c2}+aN^{-1}\). Therefore, in order to determine the value of \(\lambda_{N}\) accurately, we use the order parameter variance \(\chi=N\big{(}\big{\langle}\rho^{2}\big{\rangle}-\langle\rho\rangle^{2}\big{)}\). This quantity has been proven to have a peak at the pseudotransition point \(\lambda_{N}\) [33, 36]. We restrict our simulation here to surviving configurations only. Fig. 9a shows the order parameter variance \(\chi\) as a function of \(\lambda\), whereas Fig. 9b shows how the values of the pseudotransition point \(\lambda_{N}\) scale with the system size \(N^{-1}\). Extrapolation to \(N\to\infty\) yields the critical point of this model, \(\lambda_{c2}=0.24\pm 0.01\).
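A sketch of the extrapolation step: fit \(\lambda_{N}\) against \(1/N\) and read off the intercept. The numbers below are purely illustrative placeholders, not values from our simulations.

```python
import numpy as np

# hypothetical pseudotransition points: the location of the peak of the
# order-parameter variance chi for each system size N (illustrative only)
N_vals = np.array([1_000, 2_000, 4_000, 8_000])
lam_N  = np.array([0.265, 0.255, 0.250, 0.246])

a, lam_c2 = np.polyfit(1.0 / N_vals, lam_N, 1)   # lam_N = lam_c2 + a / N
print(f"extrapolated critical point lam_c2 = {lam_c2:.3f}")
```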
According to that procedure, the difference between the pseudotransition point \(\lambda_{N}\) (where \(N\) denotes the system volume) and the transition point \(\lambda_{c2}\) scales with \(N^{-1}\) according to the relation \(\lambda_{N}=\lambda_{c2}+aN^{-1}\). Therefore, in order to determine the value of \(\lambda_{N}\) accurately, we use the order parameter variance \(\chi=N\big{(}\big{\langle}\rho^{2}\big{\rangle}-\langle\rho\rangle^{2}\big{)}\). This quantity has been proven to have a peak at the pseudotransition point \(\lambda_{N}\) [33, 36]. We restrict our simulations here to surviving configurations only. Fig. 9a shows the order parameter variance \(\chi\) as a function of \(\lambda\), whereas Fig. 9b shows how the values of the pseudotransition point \(\lambda_{N}\) scale with the system size \(N^{-1}\). Extrapolation to \(N\to\infty\) yields the critical point of this model, \(\lambda_{c2}=0.24\pm 0.01\). The final point we have studied for this model is the space-time evolution of the active particles close to \(\lambda_{c2}\). Fig. 10 shows the infected particles in red during \(10^{3}\) time steps for a system of \(N=10^{3}\) particles at \(\lambda=0.231\) and \(\lambda=0.241\). In both figures the simulation starts at \(t=0\) with all particles in the state \(S\) except for one active particle \(I\) at the center of the lattice. The figures clearly show that the spreading of active particles is compact, in a behavior reminiscent of the spreading behavior of CDP models. However, the difference here is that the particles have a finite time to stay in the active phase. The coexistence of small compact isolated islands of active particles with large regions of inactive ones is again consistent with a discontinuous phase transition. Finally, we mention that models of disease spreading such as the minimal vaccination-epidemic model and the Susceptible-Infected-Susceptible (SIS) model have also been found to show either a continuous or a discontinuous active-to-absorbing phase transition [53, 54, 55]. Additionally, we can point out some similarities between the phase transition in this model and the phase transition of the ZGB model on a two dimensional lattice, where both models have two critical thresholds. In both models the first critical threshold corresponds to the continuous DP class and the second critical threshold corresponds to a discontinuous phase transition. The ZGB model has two absorbing states, the first one at small values of the adsorption rate (near the first critical point) and the second one (near the second critical point) at high values of the adsorption rate [4]. SIRS also has one absorbing state at low infection rate (near the first critical point) and an unstable absorbing state at high infection rate (near the second critical point).

Figure 6: The time evolution of the average density of particles as a function of time at different initial conditions for \(N=10^{4}\), \(k=3\), \(\tau_{I}=7\) and \(\tau_{R}=9\): a) \(\lambda=0.12\); b) \(\lambda=0.16\); c) \(\lambda=0.20\). Each curve is averaged over 200 realizations.

## 5 Conclusions

In summary, we have studied the phase transition from the absorbing phase to the active phase for the SIRS model of infection spreading on a one dimensional network. This model has been found to have two critical points; the infection survives between those critical points and dies out outside of them.
The two critical points correspond to low infection rate and high infection rate, respectively. Using Monte Carlo simulations we have found that, whereas the phase transition at the first critical point belongs to the DP universality class, the phase transition at the second critical point is a discontinuous (first order) phase transition. In this manner, the presence of continuous and discontinuous phase transitions has also been confirmed in models of disease spreading such as the minimal vaccination-epidemic model and the SIS model [53, 54, 55]. We can also compare the phase transition in this model with the phase transition in the ZGB model. Both models have two critical points, where the phase transition at the first critical point is of DP kind and the phase transition at the second critical point is discontinuous. However, we should mention here that the system we have studied is one dimensional, whereas ZGB is a two dimensional system.

Figure 7: Density of trapped configurations (DTCO) as a function of time at different initial conditions \(\rho(0)\), for \(N=10^{4}\), \(k=3\), \(\tau_{I}=7\), \(\tau_{R}=9\), and \(\lambda=0.16\); each point in the figure is averaged over 2000 configurations.
2304.01632
Almost sure upper bound for a model problem for multiplicative chaos in number theory
The goal of this work is to prove a new almost sure upper bound in a setting that can be thought of as a simplified function field analogue. This result is comparable to a recent result of the author concerning almost sure upper bounds for random multiplicative functions. Having a simpler quantity allows us to make the proof more accessible.
Rachid Caich
2023-04-04T08:45:33Z
http://arxiv.org/abs/2304.01632v2
# Almost sure upper bound for a model problem for multiplicative chaos in number theory

###### Abstract

We give a proof of a result comparable to a recent result of the author concerning almost sure upper bounds of random multiplicative functions, in a more simplified setting. Having a simpler quantity allows us to make the proof more accessible.

**Keywords:** Random multiplicative functions, large fluctuations, law of iterated logarithm, mean values of multiplicative functions, Doob's inequality, Hoeffding's inequality, martingales.

**2000 Mathematics Subject Classification:** 11N37, (11K99, 60F15).

## 1 Introduction

Let \((X(k))_{k\geqslant 1}\) be a sequence of independent standard complex Gaussian random variables, where the real and imaginary parts of \(X(k)\) are independently distributed like real Gaussian random variables with mean \(0\) and variance \(\frac{1}{2}\). Consider a sequence of random variables \((A(n))_{n\geqslant 0}\) defined by the formal power series identity

\[\exp\bigg{(}\sum_{k=1}^{+\infty}\frac{X(k)}{\sqrt{k}}z^{k}\bigg{)}=\sum_{n=0}^{+\infty}A(n)z^{n}. \tag{1}\]

Let \(\mathcal{P}\) be the set of the prime numbers. _A Steinhaus random multiplicative function_ is obtained by letting \((f(p))_{p\in\mathcal{P}}\) be a sequence of independent Steinhaus random variables (i.e. distributed uniformly on the unit circle \(\{|z|=1\}\)), and then setting

\[f(n):=\prod_{p^{a}||n}f(p)^{a}\text{ for all }n\in\mathbb{N},\]

where \(p^{a}||n\) means that \(p^{a}\) is the highest power of \(p\) dividing \(n\). In two recent papers ([8] and [2]), the sequence of random variables \((A(n))_{n\geqslant 0}\) has been interpreted as an analogue of the Steinhaus random multiplicative function. Recently, there has been much focus regarding the moments and almost sure bounds for the mean values \(M_{f}(x):=\sum_{n\leqslant x}f(n)\) of random multiplicative functions. For the lower bound, Harper [5] proved, using _multiplicative chaos_ techniques, that for any function \(V(x)\) tending to infinity with \(x\), there almost surely exist arbitrarily large values of \(x\) for which

\[\big{|}M_{f}(x)\big{|}\gg\frac{\sqrt{x}(\log_{2}x)^{1/4}}{V(x)}. \tag{2}\]

Here and in the sequel \(\log_{k}\) denotes the \(k\)-fold iterated logarithm. In [4], Harper proved that, when \(x\to+\infty\),

\[\mathbb{E}\bigg{[}\big{|}M_{f}(x)\big{|}\bigg{]}\asymp\frac{\sqrt{x}}{(\log_{2}x)^{1/4}}. \tag{3}\]

This discrepancy of a factor \(\sqrt{\log_{2}x}\) between the first moment and the almost sure behaviour is similar to the Law of the Iterated Logarithm for independent random variables. For this reason Harper conjectured that for any fixed \(\varepsilon>0\), we might have almost surely, as \(x\to+\infty\),

\[M_{f}(x)\ll\sqrt{x}(\log_{2}x)^{1/4+\varepsilon} \tag{4}\]

(see the introduction in [5] for more details). This conjecture has been proven by the author in [1]. In [8], Soundararajan and Zaman, motivated by these results, examined the moments of \(A(n)\), revealing that they resemble those of random multiplicative functions. They proved the analogue of (3), namely that

\[\mathbb{E}[|A(n)|]\asymp\frac{1}{(\log n)^{1/4}}.\]

More recently, in [2], Gerspach proved the analogue of (2): for any function \(V(n)\) tending to infinity with \(n\), there almost surely exist arbitrarily large values of \(n\) for which

\[|A(n)|\geqslant\frac{(\log n)^{1/4}}{V(n)}.\]

The main goal of this paper is to prove the analogue of the almost sure inequality (4).

**Theorem 1.1**.: _Let \(\varepsilon>0\), and let \((A(n))_{n\geqslant 0}\) be as defined in (1). Then, almost surely, as \(n\) tends to infinity,_

\[A(n)\ll(\log n)^{\frac{1}{4}+\varepsilon}. \tag{5}\]

We aim to enhance and simplify the proof of Theorem 1.1 given in [1] for the Steinhaus and Rademacher multiplicative functions. Our objective is to create a gentle proof for those seeking a comprehensible grasp of the theorem in the context of this model.

## 2 Preliminaries

### Notation

Let us start with some definitions. Let \((\Omega,\mathcal{F},\mathbb{P})\) be a probability space. A _filtration_ is any increasing sequence \((\mathcal{F}_{n})_{n\geqslant 1}\) of sub-\(\sigma\)-algebras of \(\mathcal{F}\). We say that a sequence of real random variables \((Z_{n})_{n\geqslant 1}\) is a _submartingale_ (resp. a _supermartingale_) with respect to the filtration \((\mathcal{F}_{n})_{n\geqslant 1}\) if the following properties are satisfied:

- \(Z_{n}\) is \(\mathcal{F}_{n}\)-measurable,
- \(\mathbb{E}[|Z_{n}|]<+\infty\),
- \(\mathbb{E}[Z_{n+1}\,|\,\mathcal{F}_{n}]\geqslant Z_{n}\) (resp. \(\mathbb{E}[Z_{n+1}\,|\,\mathcal{F}_{n}]\leqslant Z_{n}\)) almost surely.

We say that \((Z_{n})_{n\geqslant 1}\) is a martingale difference sequence with respect to the same filtration \((\mathcal{F}_{n})_{n\geqslant 1}\) if

- \(Z_{n}\) is \(\mathcal{F}_{n}\)-measurable,
- \(\mathbb{E}[|Z_{n}|]<+\infty\),
- \(\mathbb{E}[Z_{n+1}\,|\,\mathcal{F}_{n}]=0\) almost surely.

An event \(E\in\mathcal{F}\) happens _almost surely_ if \(\mathbb{P}[E]=1\). Let \(Z\) be a random variable and let \(\mathcal{H}_{1}\subset\mathcal{H}_{2}\subset\mathcal{F}\) be sub-\(\sigma\)-algebras; we then have the tower property

\[\mathbb{E}\big{[}\mathbb{E}\big{[}Z\,\big{|}\,\mathcal{H}_{2}\big{]}\,\big{|}\,\mathcal{H}_{1}\big{]}=\mathbb{E}\big{[}Z\,\big{|}\,\mathcal{H}_{1}\big{]}.\]

### Some properties

We follow the notations of Soundararajan and Zaman in [8]. Let \((X(k))_{k\geqslant 1}\) be a sequence of independent standard complex Gaussian random variables. By a partition \(\lambda\) we mean a non-increasing sequence of non-negative integers \(\lambda_{1}\geqslant\lambda_{2}\geqslant\cdots\), with \(\lambda_{n}=0\) from a certain point onwards. We write \(|\lambda|=\lambda_{1}+\lambda_{2}+\lambda_{3}+\cdots\), and for each integer \(k\geqslant 1\) we denote by \(m_{k}=m_{k}(\lambda)\) the number of parts of \(\lambda\) equal to \(k\).
With these notations, let

\[a(\lambda):=\prod_{k\geqslant 1}\bigg{(}\frac{X(k)}{\sqrt{k}}\bigg{)}^{m_{k}}\frac{1}{m_{k}!}, \tag{6}\]

so that

\[\exp\bigg{(}\sum_{k=1}^{+\infty}\frac{X(k)}{\sqrt{k}}z^{k}\bigg{)}=\sum_{\lambda}a(\lambda)z^{|\lambda|},\]

and thus, for every \(n\geqslant 0\),

\[A(n)=\sum_{|\lambda|=n}a(\lambda).\]

Note that for a standard complex Gaussian \(Z\), we have

\[\mathbb{E}\big{[}Z^{n}\overline{Z}^{m}\big{]}=\begin{cases}n!&\text{if }n=m,\\ 0&\text{otherwise}.\end{cases}\]

It follows that if \(\lambda\) and \(\lambda^{\prime}\) are two different partitions, then

\[\mathbb{E}\big{[}a(\lambda)\overline{a(\lambda^{\prime})}\big{]}=0.\]

If \(\lambda=\lambda^{\prime}\), then

\[\mathbb{E}\big{[}|a(\lambda)|^{2}\big{]}=\prod_{k\geqslant 1}\frac{1}{k^{m_{k}}(m_{k}!)^{2}}\mathbb{E}\big{[}|X(k)|^{2m_{k}}\big{]}=\prod_{k\geqslant 1}\frac{1}{k^{m_{k}}m_{k}!}.\]

Thus

\[\mathbb{E}\big{[}|A(n)|^{2}\big{]}=\sum_{|\lambda|=n}\mathbb{E}\big{[}|a(\lambda)|^{2}\big{]}=\sum_{|\lambda|=n}\prod_{k\geqslant 1}\frac{1}{k^{m_{k}}m_{k}!}=1.\]

The final equality follows from the well-known formula for the number of permutations in the symmetric group \(S_{n}\) whose cycle decomposition corresponds to the partition \(\lambda\). One can study \(A(n)\) through the generating function. Note that by Cauchy's Theorem, for \(n\leqslant R\), we have

\[A(n)=\frac{1}{2\pi i}\int_{|z|=1}F_{R}(z)\frac{\mathrm{d}z}{z^{n+1}}\]

where

\[F_{R}(z):=\exp\bigg{(}\sum_{k\leqslant R}\frac{X(k)}{\sqrt{k}}z^{k}\bigg{)}. \tag{7}\]

We start our proof by stating some tools.

**Lemma 2.1**.: _(Borel-Cantelli's First Lemma). Let \((A_{n})_{n\geqslant 1}\) be a sequence of events. If \(\sum_{n=1}^{+\infty}\mathbb{P}[A_{n}]<+\infty\), then \(\mathbb{P}[\limsup_{n\rightarrow+\infty}A_{n}]=0\)._

Proof.: See theorem 18.1 in [3].

**Lemma 2.2**.: _(2-dimensional Doob's inequality). Let \((X_{n,k})_{\begin{subarray}{c}n\geqslant 0\\ 0\leqslant k\leqslant K\end{subarray}}\) be a nonnegative sequence of random variables. Let \((\mathcal{F}_{n})_{n\geqslant 0}\) be a filtration. Let \(\mathcal{S}_{0}\) be a \(\mathcal{F}_{0}\)-measurable event. Suppose that for each \(0\leqslant k\leqslant K\) the sequence \((X_{n,k})_{n\geqslant 0}\) is a supermartingale with respect to \((\mathcal{F}_{n})_{n\geqslant 0}\), and assume that for all \(0\leqslant k\leqslant K\), \(X_{0,k}=X_{0}\), where \(X_{0}\) is a random variable which doesn't depend on \(k\). Then for any \(\lambda>0\) and \(N\geqslant 0\), we have_

\[\lambda\mathbb{P}\bigg{[}\sup_{\begin{subarray}{c}0\leqslant k\leqslant K\\ 0\leqslant n\leqslant N\end{subarray}}X_{n,k}>\lambda\,\big{|}\,\mathcal{S}_{0}\bigg{]}\leqslant 2\mathbb{E}[X_{0}\,|\,\mathcal{S}_{0}].\]

Proof.: See lemma 3.11 in [1].

**Lemma 2.3**.: _Let \(Z=(Z_{n})_{1\leqslant n\leqslant N}\) be a complex martingale difference sequence with respect to a filtration \(\mathcal{F}=(\mathcal{F}_{n})_{1\leqslant n\leqslant N}\). We assume that for each \(n\), \(Z_{n}\) is bounded almost surely (say \(|Z_{n}|\leqslant b_{n}\) almost surely, where \(b_{n}\) is some real number). Furthermore, assume that \(|Z_{n}|\leqslant S_{n}\) almost surely, where \((S_{n})_{1\leqslant n\leqslant N}\) is a real predictable process with respect to the same filtration. We set the event \(\Sigma:=\big{\{}\sum_{1\leqslant n\leqslant N}S_{n}^{2}\leqslant T\big{\}}\), where \(T\) is a deterministic constant. Then, for any \(\varepsilon>0\),_

\[\mathbb{P}\bigg{[}\bigg{\{}\bigg{|}\sum_{1\leqslant n\leqslant N}Z_{n}\bigg{|}\geqslant\varepsilon\bigg{\}}\,\bigcap\,\Sigma\bigg{]}\leqslant 2\exp\bigg{(}\frac{-\varepsilon^{2}}{10T}\bigg{)}.\]

Proof.: See lemma 3.13 in [1].

**Lemma 2.4**.: _Let \(R\geqslant 1\) be a real number and \(F_{R}(z)\) as in (7). Uniformly for \(1/2\leqslant q\leqslant 1\) and \(1\leqslant r\leqslant\mathrm{e}^{1/R}\), we have_

\[\mathbb{E}\bigg{[}\bigg{(}\frac{1}{2\pi}\int_{0}^{2\pi}|F_{R}(r\mathrm{e}^{i\theta})|^{2}\mathrm{d}\theta\bigg{)}^{q}\bigg{]}\ll\bigg{(}\frac{R}{1+(1-q)\sqrt{\log R}}\bigg{)}^{q}.\]

Proof.: See proposition 3.2 in [8].

## 3 Reduction of the problem

The goal of this section is to reduce the problem to something simpler to deal with. We want to prove that the event

\[\mathcal{A}:=\big{\{}|A(n)|>4(\log n)^{1/4+\varepsilon},\,\text{for infinitely many }n\big{\}}\]

holds with null probability. We adopt the reasoning from [1] and set \(X_{\ell}:=2^{\ell^{K}}\), where \(K=\frac{25}{\varepsilon}\). It suffices to prove that the event

\[\mathcal{B}:=\bigg{\{}\sup_{X_{\ell-1}<n\leqslant X_{\ell}}\frac{|A(n)|}{(\log n)^{1/4+\varepsilon}}>4,\,\text{for infinitely many }\ell\bigg{\}}\]

holds with null probability. We set

\[\mathcal{B}_{\ell}:=\bigg{\{}\sup_{X_{\ell-1}<n\leqslant X_{\ell}}\frac{|A(n)|}{(\log n)^{1/4+\varepsilon}}>4\bigg{\}}.\]

In order to prove Theorem 1.1 using Borel-Cantelli's First Lemma 2.1, it is enough to establish the convergence of the series \(\sum_{\ell\geqslant 1}\mathbb{P}\big{[}\mathcal{B}_{\ell}\big{]}\). Arguing as Lau-Tenenbaum-Wu in [6] in the proof of lemma 3.1, and as recently in [1] at the beginning of Section 5.1, we consider for every \(j\geq 0\)

\[y_{j}=\left\lfloor\frac{2^{\ell^{K}}\mathrm{e}^{j/\ell}}{2^{K\ell^{K-1}}}\right\rfloor\text{ and }\widetilde{y}_{j}:=\frac{2^{\ell^{K}}\mathrm{e}^{j/\ell}}{2^{K\ell^{K-1}}}.\]

Let \(J\) be minimal under the constraint \(y_{j}\geq X_{\ell}\), which means

\[J_{\ell}=J:=\lceil K\ell^{K}\log 2\rceil\ll\ell^{K}. \tag{8}\]

Note that \(\ell^{K}=\frac{1}{\log 2}\log X_{\ell}\asymp\log n\asymp\log y_{j}\) for any \(n\in]X_{\ell-1},X_{\ell}]\) and \(1\leqslant j\leqslant J\). Let \(n\in]X_{\ell-1},X_{\ell}]\). We start by splitting \(A(n)\) according to the size of \(\lambda_{1}\) and \(m_{\lambda_{1}}(\lambda)\): we divide \(A(n)\) into

\[A(n)=A_{0}(n)+A_{1}(n)+A_{2}(n)+A_{3}(n)\]

where

\[A_{0}(n):=\sum_{\begin{subarray}{c}|\lambda|=n\\ \lambda_{1}\leqslant y_{0}\end{subarray}}a(\lambda),\qquad\qquad A_{1}(n):=\sum_{\begin{subarray}{c}|\lambda|=n\\ \lambda_{1}>y_{0}\\ m_{\lambda_{1}}(\lambda)=1\end{subarray}}a(\lambda),\]

\[A_{2}(n):=\sum_{\begin{subarray}{c}|\lambda|=n\\ \lambda_{1}>y_{0}\\ m_{\lambda_{1}}(\lambda)=2\end{subarray}}a(\lambda)\qquad\text{and}\qquad A_{3}(n):=\sum_{\begin{subarray}{c}|\lambda|=n\\ \lambda_{1}>y_{0}\\ m_{\lambda_{1}}(\lambda)\geqslant 3\end{subarray}}a(\lambda).\]

Now we set, for each \(r\in\{0,1,2,3\}\),

\[\mathcal{B}_{\ell}^{(r)}:=\bigg{\{}\sup_{X_{\ell-1}<n\leqslant X_{\ell}}\frac{|A_{r}(n)|}{(\log n)^{1/4+\varepsilon}}>1\bigg{\}}.\]

Note that

\[\mathcal{B}_{\ell}\subset\bigcup_{r=0}^{3}\mathcal{B}_{\ell}^{(r)};\]

thus, to prove that \(\sum_{\ell\geqslant 1}\mathbb{P}\big{[}\mathcal{B}_{\ell}\big{]}\) converges, it suffices to prove that \(\sum_{\ell\geqslant 1}\mathbb{P}\big{[}\mathcal{B}_{\ell}^{(r)}\big{]}\) converges for all \(r\in\{0,1,2,3\}\).
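Although it is not needed in the sequel, let us record a simple consequence of (1) that is convenient for numerical experiments with \(A(n)\) (this is a direct computation, not a statement from [1] or [8]). Writing \(F(z)\) for the left-hand side of (1) and differentiating, we get

\[F^{\prime}(z)=\bigg{(}\sum_{k=1}^{+\infty}\sqrt{k}\,X(k)z^{k-1}\bigg{)}F(z),\]

and comparing the coefficients of \(z^{n-1}\) on both sides yields the recursion

\[nA(n)=\sum_{k=1}^{n}\sqrt{k}\,X(k)\,A(n-k),\qquad A(0)=1.\]

Hence \(A(0),\dots,A(n)\) can be computed in \(O(n^{2})\) operations from samples of \(X(1),\dots,X(n)\), which gives a quick way to probe bounds such as (5) empirically.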
## 4 Convergence of \(\sum_{\ell\geqslant 1}\mathbb{P}[\mathcal{B}_{\ell}^{(0)}]\) and \(\sum_{\ell\geqslant 1}\mathbb{P}[\mathcal{B}_{\ell}^{(3)}]\)

Let us start by dealing with \(\mathcal{B}_{\ell}^{(0)}\).

**Lemma 4.1**.: _The sum \(\sum_{\ell\geqslant 1}\mathbb{P}[\mathcal{B}_{\ell}^{(0)}]\) converges._

Proof.: We have

\[\mathbb{E}\big{[}|A_{0}(n)|^{2}\big{]}=\sum_{\begin{subarray}{c}|\lambda|=n\\ \lambda_{1}\leqslant y_{0}\end{subarray}}\mathbb{E}\big{[}|a(\lambda)|^{2}\big{]}=\sum_{\begin{subarray}{c}|\lambda|=n\\ \lambda_{1}\leqslant y_{0}\end{subarray}}\prod_{k}\frac{1}{k^{m_{k}}m_{k}!}. \tag{9}\]

Arguing as Soundararajan and Zaman in [8], the right side of (9) is the coefficient of \(z^{n}\) in the generating function \(\exp\big{(}\sum_{k\leqslant y_{0}}z^{k}/k\big{)}\). Since the coefficients of this generating function are all non-negative, for any \(r>0\) we conclude that

\[\mathbb{E}\big{[}|A_{0}(n)|^{2}\big{]}=\sum_{\begin{subarray}{c}|\lambda|=n\\ \lambda_{1}\leqslant y_{0}\end{subarray}}\prod_{k}\frac{1}{k^{m_{k}}m_{k}!}\leqslant\frac{1}{r^{n}}\exp\bigg{(}\sum_{k\leqslant y_{0}}\frac{r^{k}}{k}\bigg{)}. \tag{10}\]

By choosing \(r=\exp(1/y_{0})\), and since \(X_{\ell-1}<n\leqslant X_{\ell}\), we get

\[\mathbb{E}\big{[}|A_{0}(n)|^{2}\big{]}\leqslant\frac{\exp\big{(}\sum_{k\leqslant y_{0}}\frac{\mathrm{e}^{k/y_{0}}}{k}\big{)}}{\mathrm{e}^{n/y_{0}}}\leqslant\frac{\exp\big{(}\sum_{k\leqslant y_{0}}\frac{\mathrm{e}}{k}\big{)}}{\exp(X_{\ell-1}/y_{0})}\leqslant\frac{\exp\big{(}2\mathrm{e}\log y_{0}\big{)}}{\exp(X_{\ell-1}/y_{0})}\leqslant\frac{y_{0}^{2\mathrm{e}}}{\exp(X_{\ell-1}/y_{0})}\leqslant\frac{2^{2\mathrm{e}\ell^{K}}}{\exp\big{(}2^{c\,\ell^{K-2}}\big{)}}\]

where \(c\) is an absolute constant. Thus, by Markov's inequality,

\[\mathbb{P}[\mathcal{B}_{\ell}^{(0)}]\leqslant\sum_{X_{\ell-1}<n\leqslant X_{\ell}}\frac{1}{(\log n)^{1/2+2\varepsilon}}\mathbb{E}\big{[}|A_{0}(n)|^{2}\big{]}\leqslant\frac{2^{(2\mathrm{e}+1)\ell^{K}}}{\exp\big{(}2^{c\,\ell^{K-2}}\big{)}}.\]

It follows that the sum \(\sum_{\ell\geqslant 1}\mathbb{P}[\mathcal{B}_{\ell}^{(0)}]\) converges.

**Lemma 4.2**.: _The sum \(\sum_{\ell\geqslant 1}\mathbb{P}[\mathcal{B}_{\ell}^{(3)}]\) converges._

Proof.: We have

\[\mathbb{E}\big{[}|A_{3}(n)|^{2}\big{]}=\sum_{\begin{subarray}{c}|\lambda|=n\\ m_{\lambda_{1}}(\lambda)\geqslant 3\\ \lambda_{1}>y_{0}\end{subarray}}\mathbb{E}\big{[}|a(\lambda)|^{2}\big{]}\leqslant\sum_{y_{0}<k\leqslant n/3}\frac{1}{k^{3}}\sum_{\begin{subarray}{c}|\lambda|=n-3k\\ \lambda_{1}\leqslant k\end{subarray}}\mathbb{E}\big{[}|a(\lambda)|^{2}\big{]}.\]

Since, for every \(k,n\geqslant 1\),

\[\sum_{\begin{subarray}{c}|\lambda|=n-3k\\ \lambda_{1}\leqslant k\end{subarray}}\mathbb{E}\big{[}|a(\lambda)|^{2}\big{]}\leqslant\sum_{|\lambda|=n-3k}\mathbb{E}\big{[}|a(\lambda)|^{2}\big{]}\leqslant 1,\]

we get

\[\mathbb{E}\big{[}|A_{3}(n)|^{2}\big{]}\leqslant\sum_{y_{0}<k\leqslant n/3}\frac{1}{k^{3}}\ll\frac{1}{y_{0}^{2}}.\]

Thus,

\[\mathbb{P}[\mathcal{B}_{\ell}^{(3)}]\leqslant\sum_{X_{\ell-1}<n\leqslant X_{\ell}}\frac{1}{(\log n)^{1/2+2\varepsilon}}\mathbb{E}\big{[}|A_{3}(n)|^{2}\big{]}\ll 2^{\ell^{K}}\frac{2^{2K\ell^{K-1}}}{2^{2\ell^{K}}}=\frac{2^{2K\ell^{K-1}}}{2^{\ell^{K}}}.\]

It follows that the sum \(\sum_{\ell\geqslant 1}\mathbb{P}[\mathcal{B}_{\ell}^{(3)}]\) converges.

## 5 Upper bound of \(\mathbb{P}[\mathcal{B}_{\ell}^{(1)}]\)

In this section, we give a bound for \(\mathbb{P}[\mathcal{B}_{\ell}^{(1)}]\).
We consider the filtration \(\big{\{}\mathcal{F}_{k}\big{\}}_{k\geqslant 1}\), where \(\mathcal{F}_{k}\) is the \(\sigma\)-algebra generated by \(\{X(1),X(2),...,X(k-1)\}\). We set the convention \(a(\lambda)=0\) for every \(|\lambda|<0\). We have

\[A_{1}(n)=\sum_{\begin{subarray}{c}|\lambda|=n\\ \lambda_{1}>y_{0}\\ m_{\lambda_{1}}(\lambda)=1\end{subarray}}a(\lambda)=\sum_{y_{0}<k\leqslant n}\frac{X(k)}{\sqrt{k}}\sum_{\begin{subarray}{c}|\lambda|=n-k\\ \lambda_{1}<k\end{subarray}}a(\lambda).\]

Note that \(\sum_{\begin{subarray}{c}|\lambda|=n-k\\ \lambda_{1}<k\end{subarray}}a(\lambda)\) is independent of \(X(k)\) and depends only on the \(X(i)\) with \(i<k\). Note as well that

\[\mathbb{E}\bigg{[}\frac{X(k)}{\sqrt{k}}\sum_{\begin{subarray}{c}|\lambda|=n-k\\ \lambda_{1}<k\end{subarray}}a(\lambda)\,\bigg{|}\,\mathcal{F}_{k}\bigg{]}=\mathbb{E}\bigg{[}\frac{X(k)}{\sqrt{k}}\,\bigg{|}\,\mathcal{F}_{k}\bigg{]}\sum_{\begin{subarray}{c}|\lambda|=n-k\\ \lambda_{1}<k\end{subarray}}a(\lambda)=0.\]

Thus \(A_{1}(n)\), as defined in Section 3, is a sum of martingale differences. We set

\[V(n):=\sum_{y_{0}<k\leqslant n}\frac{1}{k}\bigg{|}\sum_{\begin{subarray}{c}|\lambda|=n-k\\ \lambda_{1}<k\end{subarray}}a(\lambda)\bigg{|}^{2}. \tag{11}\]

We define

\[\widetilde{V}(n):=\sum_{\begin{subarray}{c}1\leqslant j\leqslant J\\ \frac{n}{y_{j}}>\ell^{100K}\end{subarray}}\sum_{y_{j-1}<k\leqslant y_{j}}\frac{1}{k}\bigg{|}\sum_{\begin{subarray}{c}|\lambda|=n-k\\ \lambda_{1}<k\end{subarray}}a(\lambda)\bigg{|}^{2} \tag{12}\]

and we set

\[V(n,y_{j}):=\frac{1}{y_{j}}\sum_{y_{j-1}<k\leqslant y_{j}}\bigg{|}\sum_{\begin{subarray}{c}|\lambda|=n-k\\ \lambda_{1}<k\end{subarray}}a(\lambda)\bigg{|}^{2}. \tag{13}\]

Note that the number of \(j\) such that \(n\geqslant y_{j}\) and \(\frac{n}{y_{j}}\leqslant\ell^{100K}\) is less than \(100K\ell\log\ell+1\). We then have

\[V(n)\leqslant\widetilde{V}(n)+(100K\ell\log\ell+1)\sup_{\begin{subarray}{c}\frac{n}{y_{j}}\leqslant\ell^{100K}\\ n\geqslant y_{j}\end{subarray}}V(n,y_{j})\leqslant C_{0}\bigg{(}\widetilde{V}(n)+\ell\log\ell\sup_{1\leqslant j\leqslant J}V(n,y_{j})\bigg{)}\]

where \(C_{0}\) is a constant depending only on \(K\). Let \(T(\ell)\geqslant\ell^{2}\) be a parameter depending on \(\ell\), and denote \(T_{1}(\ell):=\frac{T(\ell)}{\ell\log\ell}\). We set the events

\[\mathcal{T}:=\bigg{\{}\sup_{X_{\ell-1}<n\leqslant X_{\ell}}V(n)\leqslant\frac{2C_{0}T(\ell)}{\ell^{K/2}}\bigg{\}} \tag{14}\]

and

\[\mathcal{T}_{n}:=\bigg{\{}V(n)\leqslant\frac{2C_{0}T(\ell)}{\ell^{K/2}}\bigg{\}}. \tag{15}\]

We finally define the following probabilities:

\[\mathbb{P}_{\ell}^{(1)}:=\mathbb{P}\bigg{[}\sup_{\begin{subarray}{c}X_{\ell-1}<n\leqslant X_{\ell}\\ 1\leqslant j\leqslant J\end{subarray}}V(n,y_{j})>\frac{T_{1}(\ell)}{\ell^{K/2}}\bigg{]} \tag{16}\]

and

\[\widetilde{\mathbb{P}}^{(1)}_{\ell}:=\mathbb{P}\bigg{[}\sup_{X_{\ell-1}<n\leqslant X_{\ell}}\widetilde{V}(n)>\frac{T(\ell)}{\ell^{K/2}}\bigg{]}. \tag{17}\]

It is clear that \(\mathbb{P}[\overline{\mathcal{T}}]\leqslant\mathbb{P}^{(1)}_{\ell}+\widetilde{\mathbb{P}}^{(1)}_{\ell}\), where \(\overline{\mathcal{T}}\) is the complement of \(\mathcal{T}\) in the sample space.

### Bounding \(\widetilde{\mathbb{P}}^{(1)}_{\ell}\)

The objective of this subsection is to establish the convergence of the sum \(\sum_{\ell\geqslant 1}\widetilde{\mathbb{P}}^{(1)}_{\ell}\).

**Lemma 5.1**.: _The sum \(\sum_{\ell\geqslant 1}\widetilde{\mathbb{P}}^{(1)}_{\ell}\) converges._

Proof.: Using the same argument as for the inequality (10), we have, for any \(r>0\),

\[\mathbb{E}\bigg{[}\bigg{|}\sum_{\begin{subarray}{c}|\lambda|=n-k\\ \lambda_{1}<k\end{subarray}}a(\lambda)\bigg{|}^{2}\bigg{]}\leqslant\frac{1}{r^{n-k}}\exp\bigg{(}\sum_{m<k}\frac{r^{m}}{m}\bigg{)}.\]

In particular, for \(r=\mathrm{e}^{1/k}\), we have

\[\mathbb{E}\bigg{[}\bigg{|}\sum_{\begin{subarray}{c}|\lambda|=n-k\\ \lambda_{1}<k\end{subarray}}a(\lambda)\bigg{|}^{2}\bigg{]}\leqslant\frac{k^{6}}{\exp\big{(}\frac{n-k}{k}\big{)}}=\frac{\mathrm{e}\,k^{6}}{\exp\big{(}\frac{n}{k}\big{)}}.\]

By Markov's inequality, the observation that \(T(\ell)\geqslant 1\), and the inequality \(y_{j}\leqslant X_{\ell}^{2}\), we get

\[\widetilde{\mathbb{P}}^{(1)}_{\ell}\leqslant\ell^{K/2}\sum_{X_{\ell-1}<n\leqslant X_{\ell}}\sum_{\begin{subarray}{c}1\leqslant j\leqslant J\\ \frac{n}{y_{j}}>\ell^{100K}\end{subarray}}\sum_{y_{j-1}<k\leqslant y_{j}}\frac{1}{k}\mathbb{E}\bigg{[}\bigg{|}\sum_{\begin{subarray}{c}|\lambda|=n-k\\ \lambda_{1}<k\end{subarray}}a(\lambda)\bigg{|}^{2}\bigg{]}\leqslant\ell^{K/2}\sum_{X_{\ell-1}<n\leqslant X_{\ell}}\sum_{\begin{subarray}{c}1\leqslant j\leqslant J\\ \frac{n}{y_{j}}>\ell^{100K}\end{subarray}}\sum_{y_{j-1}<k\leqslant y_{j}}\frac{\mathrm{e}\,2^{10\ell^{K}}}{\mathrm{e}^{\ell^{100K}}}\ll\frac{\ell^{3K/2}2^{13\ell^{K}}}{\mathrm{e}^{\ell^{100K}}}.\]

We deduce that the sum \(\sum_{\ell\geqslant 1}\widetilde{\mathbb{P}}^{(1)}_{\ell}\) converges.

### Bounding \(\mathbb{P}^{(1)}_{\ell}\)

The goal of this subsection is to give an optimal bound of \(\mathbb{P}^{(1)}_{\ell}\). Our initial step is to modify \(V(n,y_{j})\) so that it becomes a supermartingale for every value of \(n\). We have

\[V(n,y_{j})\ll U(j,n):=\frac{1}{\widetilde{y}_{j}}\bigg{(}\frac{\widetilde{y}_{j}}{\widetilde{y}_{0}}\bigg{)}^{-1/\ell^{K}}\sum_{r=0}^{+\infty}\bigg{|}\sum_{\begin{subarray}{c}|\lambda|=r\\ \lambda_{1}<g_{j,n}(r)\end{subarray}}a(\lambda)\bigg{|}^{2} \tag{18}\]

where

\[\begin{cases}g_{j,n}(r)=y_{j}\text{ for }r\leqslant n-y_{j},\\ g_{j,n}(r)=n-r\text{ for }n-y_{j}<r\leqslant n-y_{j-1},\\ g_{j,n}(r)=y_{j-1}\text{ for }r>n-y_{j-1}\,.\end{cases} \tag{19}\]

As in [7] and [1], the factor \(\left(\frac{\widetilde{y}_{j}}{\widetilde{y}_{0}}\right)^{-1/\ell^{K}}\) is added for technical reasons. One can see that \(\left(\frac{\widetilde{y}_{j}}{\widetilde{y}_{0}}\right)^{-1/\ell^{K}}\asymp 1\). Note that for some \(n,j\) and \(r\), it might happen that \(g_{j,n}(r)\leqslant 0\); in this case we have \(\sum_{\begin{subarray}{c}|\lambda|=r\\ \lambda_{1}<g_{j,n}(r)\end{subarray}}a(\lambda)=0\). For each \(j\), let \(\mathcal{F}_{y_{j}}\) denote the \(\sigma\)-algebra generated by \((X(k))_{k<y_{j}}\). We will soon see that, for fixed \(n\), \((U(j,n))_{j\geqslant 0}\) is a non-negative supermartingale sequence in \(j\). However, we cannot apply Doob's inequality at this point unless we bound the probability by the sum over \(y_{j}\) of the probability of the supremum over \(n\) of \(U(j,n)\), which would result in a significant loss (a factor \(\ell^{K}\)). Nevertheless, by observing that \(U(0,n)\) is independent of \(n\) and by utilizing Lemma 2.2, we can provide a robust upper bound for the probability of the supremum over \(n\) and \(y_{j}\) of \(U(j,n)\).
Unfortunately, the direct application of this result leads to a weak bound for \(\mathbb{P}_{\ell}^{(1)}\), due to the fact that the 2-dimensional Doob inequality of Lemma 2.2 only relates the probability of the supremum of a supermartingale sequence to the expectations of its members and not to their low moments (which we need here, because of the presence of the factors \(\ell^{K/2}\), which are related to the size of the low moments of the random variables). To overcome this, we will first condition on an event ensuring that the contribution from the values of \(X(n)\) for small \(n\) is dominated by the size of its low moments.

We start by showing that for every \(n\), \((U(j,n))_{j\geqslant 0}\) is a supermartingale.

**Lemma 5.2**.: _For \(\ell\) large enough, for any \(X_{\ell-1}<n\leqslant X_{\ell}\), the sequence \((U(j,n))_{j\geqslant 0}\) is a supermartingale with respect to the filtration \((\mathcal{F}_{y_{j}})_{j\geqslant 0}\)._

Proof.: Let \(r\geqslant 0\). Note that if \(g_{j,n}(r)\leqslant y_{j-1}\), the sum \(\sum_{\begin{subarray}{c}|\lambda|=r\\ \lambda_{1}<g_{j,n}(r)\end{subarray}}a(\lambda)\) is \(\mathcal{F}_{y_{j-1}}\)-measurable. Assume now that \(g_{j,n}(r)>y_{j-1}\). We start by computing

\[\mathbb{E}\bigg{[}\bigg{|}\sum_{\begin{subarray}{c}|\lambda|=r\\ \lambda_{1}<g_{j,n}(r)\end{subarray}}a(\lambda)\bigg{|}^{2}\,\bigg{|}\,\mathcal{F}_{y_{j-1}}\bigg{]}=\mathbb{E}\bigg{[}\bigg{|}\sum_{\begin{subarray}{c}|\lambda|=r\\ \lambda_{1}<y_{j-1}\end{subarray}}a(\lambda)+\sum_{\begin{subarray}{c}|\lambda|=r\\ y_{j-1}\leqslant\lambda_{1}<g_{j,n}(r)\end{subarray}}a(\lambda)\bigg{|}^{2}\,\bigg{|}\,\mathcal{F}_{y_{j-1}}\bigg{]}.\]

Decompose the partition \(\lambda\) into \(\rho\) and \(\sigma\), where \(\rho\) consists of those non-zero parts that lie between \(y_{j-1}\) and \(g_{j,n}(r)\), and \(\sigma\) consists of those non-zero parts of \(\lambda\) that are \(<y_{j-1}\). It follows from (6) that \(a(\lambda)=a(\rho)a(\sigma)\). Thus, with the above understanding,

\[\sum_{\begin{subarray}{c}|\lambda|=r\\ \lambda_{1}<g_{j,n}(r)\end{subarray}}a(\lambda)=\sum_{|\rho|+|\sigma|=r}a(\rho)a(\sigma)=\sum_{\begin{subarray}{c}|\rho|\leqslant r\\ \forall i,\,y_{j-1}\leqslant\rho_{i}<g_{j,n}(r)\end{subarray}}a(\rho)\sum_{\begin{subarray}{c}|\sigma|=r-|\rho|\\ \sigma_{1}<y_{j-1}\end{subarray}}a(\sigma).\]

Thus

\[\mathbb{E}\bigg{[}\bigg{|}\sum_{\begin{subarray}{c}|\lambda|=r\\ \lambda_{1}<g_{j,n}(r)\end{subarray}}a(\lambda)\bigg{|}^{2}\,\bigg{|}\,\mathcal{F}_{y_{j-1}}\bigg{]}=\bigg{|}\sum_{\begin{subarray}{c}|\lambda|=r\\ \lambda_{1}<y_{j-1}\end{subarray}}a(\lambda)\bigg{|}^{2}+\sum_{\begin{subarray}{c}1\leqslant|\rho|\leqslant r\\ \forall i,\,y_{j-1}\leqslant\rho_{i}<g_{j,n}(r)\end{subarray}}\mathbb{E}\big{[}|a(\rho)|^{2}\big{]}\bigg{|}\sum_{\begin{subarray}{c}|\sigma|=r-|\rho|\\ \sigma_{1}<y_{j-1}\end{subarray}}a(\sigma)\bigg{|}^{2}.\]

Let \(r_{j-1}:=n-y_{j-1}\). We divide the sum on the right-hand side of (18) into two sums:

\[\sum_{r=0}^{+\infty}\bigg{|}\sum_{\begin{subarray}{c}|\lambda|=r\\ \lambda_{1}<g_{j,n}(r)\end{subarray}}a(\lambda)\bigg{|}^{2}=\sum_{r=0}^{r_{j-1}}\bigg{|}\sum_{\begin{subarray}{c}|\lambda|=r\\ \lambda_{1}<g_{j,n}(r)\end{subarray}}a(\lambda)\bigg{|}^{2}+\sum_{r>r_{j-1}}\bigg{|}\sum_{\begin{subarray}{c}|\lambda|=r\\ \lambda_{1}<g_{j,n}(r)\end{subarray}}a(\lambda)\bigg{|}^{2}.\]

Since \(g_{j,n}(r)=y_{j-1}\) for \(r>r_{j-1}\), the second sum is \(\mathcal{F}_{y_{j-1}}\)-measurable.
For the other sum, we have

\[\begin{split}\mathbb{E}\bigg{[}\sum_{r=0}^{r_{j-1}}\bigg{|}\sum_{\begin{subarray}{c}|\lambda|=r\\ \lambda_{1}<g_{j,n}(r)\end{subarray}}a(\lambda)\bigg{|}^{2}\,\bigg{|}\,\mathcal{F}_{y_{j-1}}\bigg{]}&=\sum_{r=0}^{r_{j-1}}\bigg{|}\sum_{\begin{subarray}{c}|\lambda|=r\\ \lambda_{1}<y_{j-1}\end{subarray}}a(\lambda)\bigg{|}^{2}\\ &\quad+\sum_{r=0}^{r_{j-1}}\sum_{\begin{subarray}{c}1\leqslant|\rho|\leqslant r\\ \forall i,\,y_{j-1}\leqslant\rho_{i}<g_{j,n}(r)\end{subarray}}\mathbb{E}\big{[}|a(\rho)|^{2}\big{]}\bigg{|}\sum_{\begin{subarray}{c}|\sigma|=r-|\rho|\\ \sigma_{1}<y_{j-1}\end{subarray}}a(\sigma)\bigg{|}^{2}.\end{split} \tag{20}\]

Let us give an upper bound for the last term of (20). We have

\[\begin{split}\sum_{r=0}^{r_{j-1}}\sum_{\begin{subarray}{c}1\leqslant|\rho|\leqslant r\\ \forall i,\,y_{j-1}\leqslant\rho_{i}<g_{j,n}(r)\end{subarray}}\mathbb{E}\big{[}|a(\rho)|^{2}\big{]}\bigg{|}\sum_{\begin{subarray}{c}|\sigma|=r-|\rho|\\ \sigma_{1}<y_{j-1}\end{subarray}}a(\sigma)\bigg{|}^{2}&\leqslant\sum_{r=0}^{r_{j-1}}\sum_{\begin{subarray}{c}|\rho|\geqslant y_{j-1}\\ \forall i,\,y_{j-1}\leqslant\rho_{i}<y_{j}\end{subarray}}\mathbb{E}\big{[}|a(\rho)|^{2}\big{]}\bigg{|}\sum_{\begin{subarray}{c}|\sigma|=r-|\rho|\\ \sigma_{1}<y_{j-1}\end{subarray}}a(\sigma)\bigg{|}^{2}\\ &=\sum_{\begin{subarray}{c}|\rho|\geqslant y_{j-1}\\ \forall i,\,y_{j-1}\leqslant\rho_{i}<y_{j}\end{subarray}}\mathbb{E}\big{[}|a(\rho)|^{2}\big{]}\sum_{r=0}^{r_{j-1}-|\rho|}\bigg{|}\sum_{\begin{subarray}{c}|\sigma|=r\\ \sigma_{1}<y_{j-1}\end{subarray}}a(\sigma)\bigg{|}^{2}\\ &\leqslant\sum_{\begin{subarray}{c}|\rho|\geqslant y_{j-1}\\ \forall i,\,y_{j-1}\leqslant\rho_{i}<y_{j}\end{subarray}}\mathbb{E}\big{[}|a(\rho)|^{2}\big{]}\sum_{r=0}^{r_{j-1}}\bigg{|}\sum_{\begin{subarray}{c}|\sigma|=r\\ \sigma_{1}<y_{j-1}\end{subarray}}a(\sigma)\bigg{|}^{2}.\end{split}\]

Hence, we deduce that the expectation in (20) is bounded by

\[\mathbb{E}\bigg{[}\sum_{r=0}^{r_{j-1}}\bigg{|}\sum_{\begin{subarray}{c}|\lambda|=r\\ \lambda_{1}<g_{j,n}(r)\end{subarray}}a(\lambda)\bigg{|}^{2}\,\bigg{|}\,\mathcal{F}_{y_{j-1}}\bigg{]}\leqslant\bigg{(}1+\sum_{\begin{subarray}{c}|\rho|\geqslant y_{j-1}\\ \forall i,\,y_{j-1}\leqslant\rho_{i}<y_{j}\end{subarray}}\mathbb{E}\big{[}|a(\rho)|^{2}\big{]}\bigg{)}\sum_{r=0}^{r_{j-1}}\bigg{|}\sum_{\begin{subarray}{c}|\sigma|=r\\ \sigma_{1}<y_{j-1}\end{subarray}}a(\sigma)\bigg{|}^{2}.\]

We then deduce

\[\mathbb{E}\bigg{[}\sum_{r=0}^{+\infty}\bigg{|}\sum_{\begin{subarray}{c}|\lambda|=r\\ \lambda_{1}<g_{j,n}(r)\end{subarray}}a(\lambda)\bigg{|}^{2}\,\bigg{|}\,\mathcal{F}_{y_{j-1}}\bigg{]}\leqslant\bigg{(}1+\sum_{\begin{subarray}{c}|\rho|\geqslant y_{j-1}\\ \forall i,\,y_{j-1}\leqslant\rho_{i}<y_{j}\end{subarray}}\mathbb{E}\big{[}|a(\rho)|^{2}\big{]}\bigg{)}\sum_{r=0}^{+\infty}\bigg{|}\sum_{\begin{subarray}{c}|\sigma|=r\\ \sigma_{1}<g_{j-1,n}(r)\end{subarray}}a(\sigma)\bigg{|}^{2}.\]

For the sake of readability, we set

\[b_{j}:=1+\sum_{\begin{subarray}{c}|\rho|\geqslant y_{j-1}\\ \forall i,\,y_{j-1}\leqslant\rho_{i}<y_{j}\end{subarray}}\mathbb{E}\big{[}|a(\rho)|^{2}\big{]};\]

the problem is then reduced to proving that, for \(\ell\) large enough,

\[\mathrm{e}^{-1/\ell}\bigg{(}\frac{\widetilde{y}_{j}}{\widetilde{y}_{j-1}}\bigg{)}^{-1/\ell^{K}}b_{j}=\mathrm{e}^{-1/\ell-1/\ell^{K+1}}b_{j}\leqslant 1.\]

In fact, one can see that

\[b_{j}\leqslant\prod_{y_{j-1}\leqslant k<y_{j}}\bigg{(}1-\frac{1}{k}\bigg{)}^{-1}=\exp\bigg{(}\sum_{y_{j-1}\leqslant k<y_{j}}\frac{1}{k}+O\bigg{(}\frac{1}{y_{j}}\bigg{)}\bigg{)}=\exp\bigg{(}\frac{1}{\ell}+O\bigg{(}\frac{1}{y_{j}}\bigg{)}\bigg{)}.\]

We deduce that, for \(\ell\) large enough,

\[\mathrm{e}^{-1/\ell-1/\ell^{K+1}}b_{j}\leqslant\exp\bigg{(}-\frac{1}{\ell^{K+1}}+O\bigg{(}\frac{1}{y_{0}}\bigg{)}\bigg{)}\leqslant 1.\]

This ends the proof.

**Remark 1**.: _Note that for all \(r\geqslant 0\), \(g_{0,n}(r)=y_{0}\), and by applying Parseval's identity, we have_

\[I_{0}:=U(0,n)=\frac{1}{\widetilde{y}_{0}}\sum_{r=0}^{+\infty}\bigg{|}\sum_{\begin{subarray}{c}|\lambda|=r\\ \lambda_{1}<y_{0}\end{subarray}}a(\lambda)\bigg{|}^{2}=\frac{1}{2\pi\widetilde{y}_{0}}\int_{0}^{2\pi}\big{|}F_{y_{0}}(\mathrm{e}^{i\vartheta})\big{|}^{2}\mathrm{d}\vartheta,\]

_which doesn't depend on \(n\)._

**Lemma 5.3**.: _For sufficiently large \(\ell\), we have_

\[\mathbb{P}_{\ell}^{(1)}\ll\frac{1}{T_{1}(\ell)^{1/3}}. \tag{21}\]

_Furthermore, for \(T_{1}(\ell)\geqslant\ell^{4}\), the sum \(\sum_{\ell\geqslant 1}\mathbb{P}_{\ell}^{(1)}\) converges._

Proof.: We first set the event

\[\mathcal{S}:=\bigg{\{}I_{0}\leqslant\frac{T_{1}(\ell)^{1/2}}{\ell^{K/2}}\bigg{\}}.\]

Note that \(\mathcal{S}\) is \(\mathcal{F}_{y_{0}}\)-measurable. By Markov's inequality followed by Lemma 2.4 with \(R=y_{0}\) and \(q=2/3\), we have

\[\mathbb{P}\big{[}\overline{\mathcal{S}}\big{]}=\mathbb{P}\bigg{[}I_{0}>\frac{T_{1}(\ell)^{1/2}}{\ell^{K/2}}\bigg{]}\leqslant\frac{\ell^{\frac{K}{3}}\mathbb{E}[I_{0}^{\frac{2}{3}}]}{T_{1}(\ell)^{\frac{1}{3}}}\ll\frac{1}{T_{1}(\ell)^{1/3}}.\]

On the other hand, by the inequality (18), there exists an absolute constant \(C_{1}\) such that

\[\mathbb{P}_{\ell}^{(1)}\leqslant\mathbb{P}\bigg{[}\sup_{\begin{subarray}{c}X_{\ell-1}<n\leqslant X_{\ell}\\ 1\leqslant j\leqslant J\end{subarray}}U(j,n)>\frac{C_{1}T_{1}(\ell)}{\ell^{K/2}}\bigg{]}\leqslant\mathbb{P}\bigg{[}\sup_{\begin{subarray}{c}X_{\ell-1}<n\leqslant X_{\ell}\\ 1\leqslant j\leqslant J\end{subarray}}U(j,n)>\frac{C_{1}T_{1}(\ell)}{\ell^{K/2}}\,\bigg{|}\,\mathcal{S}\bigg{]}+\mathbb{P}\big{[}\overline{\mathcal{S}}\big{]}.\]

Note that for each \(X_{\ell-1}<n\leqslant X_{\ell}\), the sequence \((U(j,n))_{j\geqslant 0}\) is a nonnegative supermartingale (see Lemma 5.2) and \(U(0,n)=I_{0}\) for all \(n\). Note as well that \(\mathcal{S}\) is measurable with respect to \(\mathcal{F}_{y_{0}}\), the first \(\sigma\)-algebra of the filtration \((\mathcal{F}_{y_{j}})_{j\geqslant 0}\). Then all assumptions of Lemma 2.2 are satisfied, and we have

\[\mathbb{P}\bigg{[}\sup_{\begin{subarray}{c}X_{\ell-1}<n\leqslant X_{\ell}\\ 1\leqslant j\leqslant J\end{subarray}}U(j,n)>\frac{C_{1}T_{1}(\ell)}{\ell^{K/2}}\,\bigg{|}\,\mathcal{S}\bigg{]}\ll\frac{\ell^{K/2}}{T_{1}(\ell)}\mathbb{E}\big{[}I_{0}\,\big{|}\,\mathcal{S}\big{]}.\]

Since \(\mathbb{P}[\overline{\mathcal{S}}]\ll\frac{1}{T_{1}(\ell)^{1/3}}\), we get

\[\mathbb{P}_{\ell}^{(1)}\ll\frac{\ell^{K/2}}{T_{1}(\ell)}\mathbb{E}\big{[}I_{0}\,\big{|}\,\mathcal{S}\big{]}+\frac{1}{T_{1}(\ell)^{1/3}}.\]

Since \(\mathbb{E}\big{[}I_{0}\,\big{|}\,\mathcal{S}\big{]}\leqslant\frac{T_{1}(\ell)^{1/2}}{\ell^{K/2}}\) (by the definition of the event \(\mathcal{S}\)), we deduce

\[\mathbb{P}^{(1)}_{\ell}\ll\frac{1}{T_{1}(\ell)^{1/2}}+\frac{1}{T_{1}(\ell)^{1/3}}\ll\frac{1}{T_{1}(\ell)^{1/3}}.\]

For \(T_{1}(\ell)\geqslant\ell^{4}\), it is clear that \(\sum_{\ell\geqslant 1}\mathbb{P}^{(1)}_{\ell}\) converges. This ends the proof.
**Lemma 5.4**.: _For \(T(\ell)\geqslant\ell^{6}\), the sum \(\sum_{\ell\geqslant 1}\mathbb{P}\big{[}\,\overline{\mathcal{T}}\,\big{]}\) converges._

Proof.: For \(T(\ell)\geqslant\ell^{6}\) we have \(T_{1}(\ell)=T(\ell)/(\ell\log\ell)\geqslant\ell^{4}\), so by Lemma 5.3 the sum \(\sum_{\ell\geqslant 1}\mathbb{P}^{(1)}_{\ell}\) converges. From Lemma 5.1, the sum \(\sum_{\ell\geqslant 1}\widetilde{\mathbb{P}}^{(1)}_{\ell}\) converges. Thus the sum of \(\mathbb{P}\big{[}\,\overline{\mathcal{T}}\,\big{]}\leqslant\mathbb{P}^{(1)}_{\ell}+\widetilde{\mathbb{P}}^{(1)}_{\ell}\) converges.

### Convergence of \(\sum_{\ell\geqslant 1}\mathbb{P}\big{[}\mathcal{B}^{(1)}_{\ell}\big{]}\)

In this subsection, we follow the same steps as in Section 6.8 of [1].

**Proposition 5.5**.: _The sum \(\sum_{\ell\geqslant 1}\mathbb{P}\big{[}\mathcal{B}^{(1)}_{\ell}\big{]}\) converges._

Proof.: We have

\[\mathbb{P}\big{[}\mathcal{B}^{(1)}_{\ell}\big{]}\leqslant\mathbb{P}\bigg{[}\bigcup_{X_{\ell-1}<n\leqslant X_{\ell}}\bigg{\{}\frac{|A_{1}(n)|}{(\log n)^{1/4+\varepsilon}}>1\bigg{\}}\bigcap\mathcal{T}_{n}\bigg{]}+\mathbb{P}\big{[}\,\overline{\mathcal{T}}\,\big{]}\leqslant\sum_{X_{\ell-1}<n\leqslant X_{\ell}}\mathbb{P}\bigg{[}\bigg{\{}\frac{|A_{1}(n)|}{(\log n)^{1/4+\varepsilon}}>1\bigg{\}}\bigcap\mathcal{T}_{n}\bigg{]}+\mathbb{P}\big{[}\,\overline{\mathcal{T}}\,\big{]}.\]

We fix \(T(\ell)=\ell^{6}\), which gives the convergence of the sum \(\sum_{\ell\geqslant 1}\mathbb{P}\big{[}\,\overline{\mathcal{T}}\,\big{]}\) by Lemma 5.4. By applying Lemma 2.3, and since by assumption \(K\varepsilon=25\), we have

\[\mathbb{P}\bigg{[}\bigg{\{}\frac{|A_{1}(n)|}{(\log n)^{1/4+\varepsilon}}>1\bigg{\}}\bigcap\mathcal{T}_{n}\bigg{]}\leqslant 2\exp\bigg{(}\frac{-C_{2}\ell^{K/2+2\varepsilon K}\ell^{K/2}}{T(\ell)}\bigg{)}\leqslant 2\exp\big{(}-C_{2}\ell^{K+44}\big{)}\]

where \(C_{2}>0\) is an absolute constant. Finally, summing over \(X_{\ell-1}<n\leqslant X_{\ell}\), we get

\[\mathbb{P}\big{[}\mathcal{B}^{(1)}_{\ell}\big{]}\ll\exp\big{(}(\log 2)\,\ell^{K}-C_{2}\,\ell^{K+44}\big{)}+\mathbb{P}\big{[}\,\overline{\mathcal{T}}\,\big{]}.\]

Thus the sum \(\sum_{\ell\geqslant 1}\mathbb{P}\big{[}\mathcal{B}^{(1)}_{\ell}\big{]}\) converges.

## 6 Upper bound of \(\mathbb{P}[\mathcal{B}^{(2)}_{\ell}]\)

In this section, we give an upper bound of \(\mathbb{P}[\mathcal{B}^{(2)}_{\ell}]\).

### Preliminaries

We start with some results. We have

\[A_{2}(n)=\sum_{\begin{subarray}{c}|\lambda|=n\\ \lambda_{1}>y_{0}\\ m_{\lambda_{1}}(\lambda)=2\end{subarray}}a(\lambda)=\sum_{y_{0}<k\leqslant n/2}\frac{X(k)^{2}}{2k}\sum_{\begin{subarray}{c}|\lambda|=n-2k\\ \lambda_{1}<k\end{subarray}}a(\lambda).\]

Note that, as in Section 5, \(A_{2}(n)\) is a sum of martingale differences with respect to the same filtration \((\mathcal{F}_{k})_{k\geqslant 1}\). By following the same steps as in Section 5, the problem is reduced to studying

\[W(n):=\sum_{y_{0}<k\leqslant n/2}\frac{1}{2k^{2}}\bigg{|}\sum_{\begin{subarray}{c}|\lambda|=n-2k\\ \lambda_{1}<k\end{subarray}}a(\lambda)\bigg{|}^{2}\leqslant\frac{1}{y_{0}}\sum_{y_{0}<k\leqslant n}\frac{1}{k}\bigg{|}\sum_{\begin{subarray}{c}|\lambda|=n-k\\ \lambda_{1}<k/2\end{subarray}}a(\lambda)\bigg{|}^{2}.\]

We set

\[V^{(2)}(n):=\sum_{y_{0}<k\leqslant n}\frac{1}{k}\bigg{|}\sum_{\begin{subarray}{c}|\lambda|=n-k\\ \lambda_{1}<k/2\end{subarray}}a(\lambda)\bigg{|}^{2}.\]

One can see that \(V^{(2)}(n)\) is similar to \(V(n)\) introduced in (11), with a slight difference in the range of the inner summation.
We define the analogues of \(\widetilde{V}(n)\) and \(V(n,y_{j})\):

\[\widetilde{V}^{(2)}(n):=\sum_{\begin{subarray}{c}1\leqslant j\leqslant J\\ \frac{n}{y_{j}}>\ell^{100K}\end{subarray}}\sum_{y_{j-1}<k\leqslant y_{j}}\frac{1}{k}\bigg{|}\sum_{\begin{subarray}{c}|\lambda|=n-k\\ \lambda_{1}<k/2\end{subarray}}a(\lambda)\bigg{|}^{2}\]

and

\[V^{(2)}(n,y_{j}):=\frac{1}{y_{j}}\sum_{y_{j-1}<k\leqslant y_{j}}\bigg{|}\sum_{\begin{subarray}{c}|\lambda|=n-k\\ \lambda_{1}<k/2\end{subarray}}a(\lambda)\bigg{|}^{2}.\]

We then have, as in Section 5,

\[V^{(2)}(n)\leqslant C_{0}\bigg{(}\widetilde{V}^{(2)}(n)+\ell\log\ell\sup_{1\leqslant j\leqslant J}V^{(2)}(n,y_{j})\bigg{)}\]

where \(C_{0}\) is an absolute constant. We set the events

\[\mathcal{T}^{(2)}:=\bigg{\{}\sup_{X_{\ell-1}<n\leqslant X_{\ell}}V^{(2)}(n)\leqslant\frac{2C_{0}T(\ell)}{\ell^{K/2}}\bigg{\}} \tag{22}\]

and

\[\mathcal{T}^{(2)}_{n}:=\bigg{\{}V^{(2)}(n)\leqslant\frac{2C_{0}T(\ell)}{\ell^{K/2}}\bigg{\}}. \tag{23}\]

We finally define the analogous probabilities, as in (16) and (17):

\[\mathbb{P}^{(2)}_{\ell}:=\mathbb{P}\bigg{[}\sup_{\begin{subarray}{c}X_{\ell-1}<n\leqslant X_{\ell}\\ 1\leqslant j\leqslant J\end{subarray}}V^{(2)}(n,y_{j})>\frac{T_{1}(\ell)}{\ell^{K/2}}\bigg{]} \tag{24}\]

and

\[\widetilde{\mathbb{P}}_{\ell}^{(2)}:=\mathbb{P}\bigg{[}\sup_{X_{\ell-1}<n\leqslant X_{\ell}}\widetilde{V}^{(2)}(n)>\frac{T(\ell)}{\ell^{K/2}}\bigg{]}. \tag{25}\]

It is clear that \(\mathbb{P}[\,\overline{\mathcal{T}^{(2)}}\,]\leqslant\mathbb{P}_{\ell}^{(2)}+\widetilde{\mathbb{P}}_{\ell}^{(2)}\).

**Lemma 6.1**.: _The sum \(\sum_{\ell\geqslant 1}\widetilde{\mathbb{P}}_{\ell}^{(2)}\) converges._

Proof.: The proof is the same as that of Lemma 5.1.

On the other hand, we have

\[V^{(2)}(n,y_{j})\ll U^{(2)}(j,n):=\frac{1}{\widetilde{y}_{j}}\bigg{(}\frac{\widetilde{y}_{j}}{\widetilde{y}_{0}}\bigg{)}^{-1/\ell^{K}}\sum_{r=0}^{+\infty}\bigg{|}\sum_{\begin{subarray}{c}|\lambda|=r\\ \lambda_{1}<g_{j,n}(r)/2\end{subarray}}a(\lambda)\bigg{|}^{2}\]

where \(g_{j,n}(r)\) is as defined in (19). By following the same argument as in Section 5.2, one can easily prove that for every \(n\), \(\big{(}U^{(2)}(j,n)\big{)}_{j\geqslant 1}\) is a supermartingale with respect to the filtration \((\mathcal{F}_{y_{j}/2})_{j\geqslant 1}\), where \(\mathcal{F}_{y_{j}/2}\) is the \(\sigma\)-algebra generated by \(\{X(k),k<y_{j}/2\}\). Note that \(U^{(2)}(0,n)\) doesn't depend on \(n\). Thus, by following exactly the same steps as in Section 5.2, we get the analogue of Lemma 5.3.

**Lemma 6.2**.: _For sufficiently large \(\ell\), we have_

\[\mathbb{P}_{\ell}^{(2)}\ll\frac{1}{T_{1}(\ell)^{1/3}}. \tag{26}\]

_Furthermore, for \(T_{1}(\ell)\geqslant\ell^{4}\), the sum \(\sum_{\ell\geqslant 1}\mathbb{P}_{\ell}^{(2)}\) converges._

We have, as well, the analogue of Lemma 5.4.

**Lemma 6.3**.: _For \(T(\ell)\geqslant\ell^{6}\), the sum \(\sum_{\ell\geqslant 1}\mathbb{P}\big{[}\,\overline{\mathcal{T}^{(2)}}\,\big{]}\) converges._

### Convergence of \(\sum_{\ell\geqslant 1}\mathbb{P}\big{[}\mathcal{B}_{\ell}^{(2)}\big{]}\)

This subsection is similar to Section 5.3.
**Proposition 6.4**.: _The sum \(\sum_{\ell\geqslant 1}\mathbb{P}\big{[}\mathcal{B}_{\ell}^{(2)}\big{]}\) converges._

Proof.: We have

\[\mathbb{P}\big{[}\mathcal{B}_{\ell}^{(2)}\big{]}\leqslant\mathbb{P}\bigg{[}\bigcup_{X_{\ell-1}<n\leqslant X_{\ell}}\bigg{\{}\frac{|A_{2}(n)|}{(\log n)^{1/4+\varepsilon}}>1\bigg{\}}\bigcap\mathcal{T}_{n}^{(2)}\bigg{]}+\mathbb{P}\big{[}\,\overline{\mathcal{T}^{(2)}}\,\big{]}\leqslant\sum_{X_{\ell-1}<n\leqslant X_{\ell}}\mathbb{P}\bigg{[}\bigg{\{}\frac{|A_{2}(n)|}{(\log n)^{1/4+\varepsilon}}>1\bigg{\}}\bigcap\mathcal{T}_{n}^{(2)}\bigg{]}+\mathbb{P}\big{[}\,\overline{\mathcal{T}^{(2)}}\,\big{]}.\]

We fix \(T(\ell)=\ell^{6}\), which gives the convergence of the sum \(\sum_{\ell\geqslant 1}\mathbb{P}\big{[}\,\overline{\mathcal{T}^{(2)}}\,\big{]}\) by Lemma 6.3. By applying Lemma 2.3, and since \(W(n)\leqslant V^{(2)}(n)/y_{0}\), we have

\[\mathbb{P}\bigg{[}\bigg{\{}\frac{|A_{2}(n)|}{(\log n)^{1/4+\varepsilon}}>1\bigg{\}}\bigcap\mathcal{T}_{n}^{(2)}\bigg{]}\leqslant 2\exp\bigg{(}\frac{-C_{2}y_{0}\ell^{K/2+2\varepsilon K}\ell^{K/2}}{T(\ell)}\bigg{)}\leqslant 2\exp\big{(}-C_{2}y_{0}\ell^{K+44}\big{)}\]

where \(C_{2}>0\) is an absolute constant. Finally, summing over \(X_{\ell-1}<n\leqslant X_{\ell}\), we get

\[\mathbb{P}\big{[}\mathcal{B}_{\ell}^{(2)}\big{]}\ll\exp\big{(}(\log 2)\,\ell^{K}-C_{2}\,y_{0}\,\ell^{K+44}\big{)}+\mathbb{P}\big{[}\,\overline{\mathcal{T}^{(2)}}\,\big{]}.\]

Thus the sum \(\sum_{\ell\geqslant 1}\mathbb{P}\big{[}\mathcal{B}_{\ell}^{(2)}\big{]}\) converges.

## Acknowledgement

The author would like to thank his supervisor Regis de la Breteche for his patient guidance, encouragement and the judicious advice he has provided throughout the work that led to this paper.
2307.03575
Multimodal Deep Learning for Personalized Renal Cell Carcinoma Prognosis: Integrating CT Imaging and Clinical Data
Renal cell carcinoma represents a significant global health challenge with a low survival rate. This research aimed to devise a comprehensive deep-learning model capable of predicting survival probabilities in patients with renal cell carcinoma by integrating CT imaging and clinical data and addressing the limitations observed in prior studies. The aim is to facilitate the identification of patients requiring urgent treatment. The proposed framework comprises three modules: a 3D image feature extractor, clinical variable selection, and survival prediction. The feature extractor module, based on the 3D CNN architecture, predicts the ISUP grade of renal cell carcinoma tumors linked to mortality rates from CT images. A selection of clinical variables is systematically chosen using the Spearman score and random forest importance score as criteria. A deep learning-based network, trained with discrete LogisticHazard-based loss, performs the survival prediction. Nine distinct experiments are performed, with varying numbers of clinical variables determined by different thresholds of the Spearman and importance scores. Our findings demonstrate that the proposed strategy surpasses the current literature on renal cancer prognosis based on CT scans and clinical factors. The best-performing experiment yielded a concordance index of 0.84 and an area under the curve value of 0.8 on the test cohort, which suggests strong predictive power. The multimodal deep-learning approach developed in this study shows promising results in estimating survival probabilities for renal cell carcinoma patients using CT imaging and clinical data. This may have potential implications in identifying patients who require urgent treatment, potentially improving patient outcomes. The code created for this project is available for the public on: \href{https://github.com/Balasingham-AI-Group/Survival_CTplusClinical}{GitHub}
Maryamalsadat Mahootiha, Hemin Ali Qadir, Jacob Bergsland, Ilangko Balasingham
2023-07-07T13:09:07Z
http://arxiv.org/abs/2307.03575v1
# Multimodal Deep Learning for Personalized Renal Cell Carcinoma Prognosis: Integrating CT Imaging and Clinical Data

###### Abstract

**Background and Objective**: Renal cell carcinoma represents a significant global health challenge characterized by a low survival rate. The aim of this research was to devise a comprehensive deep-learning model capable of predicting survival probabilities in patients with renal cell carcinoma by integrating CT imaging and clinical data and addressing the limitations observed in prior studies. The aim is to facilitate the identification of patients requiring urgent treatment.

**Methods**: The proposed framework comprises three modules: a 3D image feature extractor, clinical variable selection, and survival prediction. The feature extractor module, based on the 3D CNN architecture, predicts the ISUP grade of renal cell carcinoma tumors linked to mortality rates from CT images. A selection of clinical variables is systematically chosen using the Spearman score and random forest importance score as criteria. A deep learning based network, trained with a discrete LogisticHazard-based loss, performs the survival prediction.

**Results**: Our findings demonstrate that the proposed strategy surpasses the current literature on renal cancer prognosis based on CT scans and clinical factors. The best-performing experiment yielded a concordance index of 0.84 and an area under the curve value of 0.8 on the test cohort, which suggests strong predictive power.

**Conclusions**: The multimodal deep-learning approach developed in this study shows promising results in estimating survival probabilities for renal cell carcinoma patients using CT imaging and clinical data. This may have potential implications in identifying patients who require urgent treatment, potentially improving patient outcomes. The code created for this project is available for the public on: GitHub

## 1 Introduction

### Overview

Renal cell carcinoma (RCC) is a prevalent malignancy in adults and constitutes around 90% of all kidney tumors (Saad et al., 2019). RCC develops in the tubules that filter blood and produce urine in the kidney (Saad et al., 2019). If not detected and treated early, RCC can metastasize to other organs, such as lungs and bones, and become life-threatening (Sung et al., 2021). The global incidence of RCC has been rising, which may be attributable to the wider availability of improved diagnostic modalities, greater use of medical imaging, and changes in lifestyle factors (Siegel et al., 2020; Znaor et al., 2015). Treating RCC early is crucial for improving patient outcomes and enhancing both survival rates and quality of life (Znaor et al., 2015).

Survival analysis is a statistical technique used to investigate the time duration until a critical event occurs, such as death or disease recurrence, and is widely used in oncology. The analysis involves examining time-to-event data to estimate the probability of an event occurring over a specified period while accounting for censoring. This statistical technique allows for the inclusion of individuals who did not experience the event of interest by the end of the study period (Lee and Wang, 2003).
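To make the role of censoring concrete, the following is a minimal, self-contained sketch of the Kaplan-Meier product-limit estimator on toy data; the numbers are illustrative only and unrelated to the cohort studied here.

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier survival curve.  `times` are follow-up durations and
    `events` is 1 if the event (e.g., death) was observed, 0 if censored."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    surv, s = [], 1.0
    for t in np.sort(np.unique(times[events == 1])):   # distinct event times
        at_risk = np.sum(times >= t)                   # still under observation at t
        d = np.sum((times == t) & (events == 1))       # events exactly at t
        s *= 1.0 - d / at_risk                         # product-limit update
        surv.append((t, s))
    return surv

# Censored patients (event=0) leave the risk set without triggering an event.
print(kaplan_meier([5, 8, 8, 12, 16, 20], [1, 1, 0, 1, 0, 1]))
```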
Survival analysis is vital for RCC patients as it informs treatment decisions and enables clinicians to determine the optimal course of action, including therapy type, the intensity of treatment, and the need for palliative care or supportive measures (Hui et al., 2019). Radiological data is essential for cancer survival analysis and prognosis, revealing tumor features, heterogeneity, therapy planning, and response evaluation. Clinicians can use this data to improve patient outcomes and survival prospects (Lambin et al., 2012).

Clinical experts may make erroneous predictions or misinterpret medical images, which can result in incorrect prognosis and treatment decisions. In fact, approximately 20 million radiology reports are estimated to contain clinically significant errors annually (Brady, 2017). Furthermore, there may be a shortage of expert radiologists in certain regions or healthcare settings. Therefore, the implementation of artificial intelligence (AI) technologies can potentially aid in addressing these issues (Liu et al., 2019). AI has the potential to improve the accuracy and efficiency of medical image analysis, particularly through the utilization of convolutional neural networks (CNN), which can capture patterns and features that may not be easily detectable by human observers (Coppola et al., 2021). These algorithms can analyze large amounts of data quickly and accurately, reducing the potential for human error and improving diagnostic accuracy (Montero et al., 2021). The use of AI in survival analysis has also shown promise since it has the potential to enhance the precision of prognostic models and facilitate personalized treatment (Wang et al., 2019).

This study seeks to devise a multimodal AI-driven algorithm capable of predicting personalized survival probabilities utilizing CT images and clinical data, addressing challenges such as potential inaccuracies by clinicians and the scarcity of experts in radiological image interpretation. Our objective is to utilize a multimodal survival analysis strategy to achieve enhanced precision in forecasting survival probabilities. To investigate this, we classify RCC tumors in CT images according to the International Society of Urological Pathology (ISUP) grading system (Srigley et al., 2013). This system serves as a means to evaluate cancer severity by examining the morphological characteristics of tumor cells under microscopic observation, and it is closely associated with mortality rates (Samaratunga et al., 2014). After the classification process, radiomic features are extracted and subsequently incorporated as input factors within our proposed survival model. Additional inputs encompass pertinent clinical variables pertaining to individual patients. By integrating radiomic features and clinical variables, we endeavor to estimate survival probabilities employing a methodology that is non-linear and non-proportional, offering a more robust, realistic, and accurate survival estimation.

### Related Work

In statistics, the Cox proportional hazards (CPH) model (Cox, 1972) is the gold standard for modeling survival analysis using censored observations. CPH is limited by its linear nature, which fails to capture non-linear relationships between input data and the risk of an event occurring, e.g., death. However, the advent of AI and deep learning (DL) has opened new avenues for modeling survival analysis, allowing for the exploration of complex, non-linear relationships.
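For reference, the CPH model mentioned above writes the hazard of a patient with covariate vector \(x\) as

\[h(t\mid x)=h_{0}(t)\exp(\beta^{\top}x),\qquad\frac{h(t\mid x_{1})}{h(t\mid x_{2})}=\exp\big{(}\beta^{\top}(x_{1}-x_{2})\big{)},\]

so the hazard ratio between any two patients is constant in time and depends on the covariates only through the linear form \(\beta^{\top}x\). These are exactly the proportionality and linearity constraints that the models discussed next attempt to relax.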
DL-based models, such as Cox-nnet (Ching et al., 2018) and DeepSurv (Katzman et al., 2018), have been developed to address the limitations of the CPH model and enable the identification of novel prognostic factors. But they still face a fundamental constraint imposed by the proportional hazards assumption of CPH. CPH assumes that the effect of a patient's covariates on the risk of death remains constant over time, resulting in proportional predictions for all patients. This assumption forces the predicted survival curves of different patients never to intersect, which may not be reflective of the true clinical situation.

Recent developments in statistical modeling have led to innovative solutions to address the limitations of the CPH model in survival analysis. Two important methods that have been proposed to address the linearity and proportionality constraints of CPH are multivariate time-to-event logistic regression (MTLR) (Fotso, 2018) and Nnet-survival (Gensheimer and Narasimhan, 2019). MTLR is a method that extends logistic regression to time-to-event data by modeling the joint probability of multiple events. This approach allows for the incorporation of time-dependent covariates and can handle non-proportional hazards, making it a valuable tool for survival analysis. Nnet-survival, on the other hand, involves calculating the discrete conditional hazard rate at each time period. This concept has been established for several decades (Brown et al., 1997) and was recently applied to contemporary DL approaches, leading to the development of Nnet-survival. This approach makes it possible to have non-proportional hazard probability curves for different patients.

Multimodal DL (Ngiam et al., 2011), a framework that leverages DL techniques to learn from multiple data modalities, including tabular, images, and audio, can be particularly useful in medical applications. With the availability of diverse data types such as clinical information, radiological images, and medication records, the application of multimodal DL can help capture complex relationships between the model inputs and outputs.

Previous studies have employed various approaches to conduct survival analysis, focusing on using radiological images or integrating radiological images with clinical variables to enhance survival estimation. Mukherjee et al. (2020) developed a shallow CNN in conjunction with Cox loss to predict the prognosis of lung cancer patients using computed tomography (CT) image data alone. Wang et al. (2019) presented a CNN autoencoder-based survival model incorporating Cox loss for predicting recurrence in patients with high-grade serous ovarian cancer, relying solely on CT scans. Wu et al. (2021) developed a regression-based survival model for non-small cell lung cancer patients, effectively integrating imaging and clinical data to enhance the accuracy of survival predictions by employing the mean squared error (MSE) loss function. Zhang et al. (2020) introduced a risk prediction model for assessing overall survival in gastric cancer patients, incorporating both CT images and clinical variables as inputs and utilizing a specialized loss function. Zhong et al. (2020) presented a CNN-based model using Cox survival loss to predict survival outcomes in patients diagnosed with stage T3N1M0 nasopharyngeal carcinoma using magnetic resonance (MR) imaging and clinical variables. Lastly, Chaddad et al. (2017) explored the potential of radiomic features and clinical variables in predicting the survival group of lung cancer patients.
The authors employed image analysis techniques, rather than DL methods, to extract radiomic features, and utilized a random forest classifier.

### Our Contributions

This study differs from previous studies by presenting a novel multimodal approach to predicting non-linear and non-proportional survival curves for patients afflicted with RCC, employing both CT images and clinical data. Moreover, our study is distinguished as the first to systematically explore the impact of varying combinations of clinical variables and CT images on survival prediction performance, thereby shedding light on the importance of selecting appropriate data sources for accurate survival estimations. Our proposed survival model offers several notable advantages over previous studies:

1) By incorporating 3D inputs and 3D convolutional layers, our model retains comprehensive information from the data, mitigating any potential loss of critical details pertaining to the interface between tumor and healthy tissue.
2) Our methodology enables the forecasting of non-proportional survival analyses, producing outcomes that are more relevant to clinical situations.
3) In comparison to previously reported literature, our survival model demonstrates superior performance indices, highlighting its efficacy.
4) A key feature of the proposed model is its ability to generate individualized survival curves for each patient, allowing for a more personalized assessment.
5) To elucidate the nuances of survival model performance, we conduct an analysis of varying combinations of clinical variables and CT images, providing valuable insights into the optimization of survival estimation.
6) In addition to conventional metrics for evaluating survival models, we also employ violin diagrams to visualize the distribution of survival probabilities in our survival model's outputs.

## 2 Methods

Fig. 1 illustrates our entire approach for modeling survival analysis. It takes as inputs two data modalities: 1) CT volumes and 2) clinical variables. Motivated by the success of CNNs in image analysis and cancer prognosis, we present a CNN-based architecture for extracting prognosis-relevant features from CT images. We utilize 3D CNNs to extract features along all three dimensions within the tumor volume, motivated by Zhu et al. (2018). Subsequently, we integrate clinical information with the CT image features for survival analysis. Our method comprises three modules: (1) CT image feature extraction, (2) clinical variable selection, and (3) survival prediction. In our study, the feature extractor network and the survival network are trained independently rather than concurrently.

### Radiomic Feature Extraction from CT Volumes

We classify RCC tumors in CT images into ISUP grades (1, 2, 3, and 4) to obtain radiomic features relevant to prognosis. The CT volumes pass through a 3D CNN feature extractor network to extract these features. After that, we can integrate the clinical variables with the extracted radiomic features. We choose the ISUP grade for classification as it has been shown to correlate strongly with tumor recurrence, metastasis, and mortality (Warren and Harrison, 2018). Higher ISUP grades are indicative of a worse prognosis and a higher mortality rate, whereas lower grades are associated with a better prognosis and a lower mortality rate (Costantini et al., 2021).
For the feature extractor network (the classifier), we select EfficientNet (Tan and Le, 2019), a state-of-the-art CNN architecture developed by Google researchers for image classification. This architecture employs the compound-coefficient method to scale up models efficiently. The largest model, EfficientNet B7, achieved the best performance compared to the other variants. The EfficientNet layers utilize MBConv (Sandler et al., 2018), a type of convolutional block that can capture complex features in images while using fewer parameters and less computation compared to traditional convolutional blocks. To accommodate three-dimensional (3D) image data such as CT volumes, we adapt the exact architecture of EfficientNet B7 and transform it into a 3D CNN model. By doing so, features are extracted along all three dimensions within the tumor volume, taking third-dimension spatial information into account. Hence, the employed feature extraction network operates in a 3D domain, where the input comprises preprocessed image volumes concatenated with tumor segmentation annotations. This network classifies the RCC tumors into four ISUP grades. Our group has undertaken a separate, comprehensive study focused on the classification of RCC according to the ISUP grading system (Mahootiha et al., 2022).

The architecture encompasses a combination of convolutional layers, MBConv layers, an Adaptive Average Pooling layer, and a series of fully connected (FC) layers, respectively. The Adaptive Average Pooling layer, which acts as a bridge between the CNN and FC layers, can be used for feature extraction. This layer reduces the number of parameters and the computational complexity required for classification while preserving crucial information about image features (Russakovsky et al., 2015). We extract the outputs of the Adaptive Average Pooling layer to create feature vectors for every patient. The output of this layer is flattened, and the resulting image features are converted to feature vectors. The initial feature vector dimension is 2560, and we reduce it to 1000 to streamline integration with clinical variables. We achieve this reduction by employing an FC layer with 2560 input features and 1000 output features. These vectors are then saved as a CSV file for feeding to the survival network. Following the feature extraction and storage in a CSV file, normalization is performed to standardize the data based on the mean and standard deviation.
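The extraction head can be sketched as follows in PyTorch. The backbone below is a deliberately tiny stand-in for our trained 3D EfficientNet B7 (only the pooled 2560-dimensional output and the 2560-to-1000 reduction match the description above); the case identifiers and file name are illustrative assumptions.

```python
# Sketch of the feature-vector extraction step: pooled 2560-d features are
# flattened, reduced to 1000-d by an FC layer, and written to CSV.
# `backbone` is a toy stand-in, NOT the real 3D EfficientNet B7.
import torch
import torch.nn as nn
import pandas as pd

backbone = nn.Sequential(
    nn.Conv3d(2, 2560, kernel_size=8, stride=8),  # toy trunk; real model is much deeper
    nn.AdaptiveAvgPool3d(1),                      # the layer whose output we tap
)
reduce_fc = nn.Linear(2560, 1000)                 # dimensionality-reduction FC layer

rows = []
with torch.no_grad():
    for patient_id in ["case_00000", "case_00001"]:        # illustrative case IDs
        volume = torch.randn(1, 2, 128, 128, 128)          # CT + segmentation channels
        pooled = torch.flatten(backbone(volume), start_dim=1)   # (1, 2560)
        vec = reduce_fc(pooled).squeeze(0)                  # (1000,)
        rows.append([patient_id] + vec.tolist())

# One 1000-d feature vector per patient, for the survival network.
pd.DataFrame(rows).to_csv("radiomic_features.csv", index=False)
```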
### Clinical Variables Selection

Our objective is not to incorporate all clinical variables alongside the CT image features for survival prediction. Rather, we intend to explore the feasibility of using a smaller subset of variables (those that are more relevant to prognosis) in conjunction with CT image features to achieve improved results in survival prediction. To this end, we evaluate various combinations of clinical variables. In order to identify the most relevant clinical variables for predicting survival times, we employ two well-established methods: (1) the Spearman correlation score (Pirie, 2006) and (2) the importance score of a random forest regressor (Wehenkel et al., 2018). These approaches help us identify the most informative clinical variables to include in our survival model and achieve more accurate predictions of survival outcomes for each patient.

Spearman's rank correlation coefficient is a non-parametric statistical method that quantifies the strength and direction of the relationship between two variables. It is primarily used to assess the existence of a monotonic association between two variables and is less sensitive to non-linear relationships and non-normal distributions than the parametric Pearson correlation coefficient. We calculate the Spearman correlation coefficients between the clinical variables and survival times, forming a correlation matrix. This matrix represents the pairwise correlation coefficients between each clinical variable and the survival times. The correlation coefficient values vary from -1 to 1, with -1 representing a strong negative association, 1 representing a strong positive association, and 0 representing no correlation.

On the other hand, random forest regression can generate an importance score for each predictor variable. To acquire importance scores, we develop a random forest model consisting of 100 decision trees to estimate survival times based on the clinical variables. The importance scores derive from the average decrease in the model's prediction error attributable to each feature, considered across all of the random forest's decision trees. Subsequently, the clinical variables are ranked according to their importance scores to discern the most influential variables for survival time prediction. Higher importance scores signify a more substantial impact of a variable on the model's predictive performance.

Figure 1: The comprehensive framework presented herein is composed of three primary modules. Module 1 encompasses feature extraction from CT volumes, wherein features are derived through the classification of CT images based on ISUP grades. Subsequently, a fully connected layer consisting of 1000 neurons is employed to reduce the radiomic feature size from 2560. Module 2 focuses on the judicious selection of clinical variables, which are merged with CT image features utilizing the Spearman correlation score and random forest importance score. Module 3 pertains to the survival network, which accepts input from three sources: CT image features, clinical variables, and a combination of both. The survival network's output consists of survival probabilities for 15 discrete time intervals, which are subsequently converted into 1500 time points through interpolation. This process facilitates the visualization of continuous survival curves for individual patients.
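Both selection scores can be computed as sketched below with SciPy and scikit-learn; the CSV layout and column names are assumptions, while the 100-tree forest and the threshold style follow the description above and the experiments in Table 1.

```python
# Sketch of ranking clinical variables by Spearman correlation with survival
# time and by random forest importance; `clinical_df` layout is an assumption.
import pandas as pd
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestRegressor

clinical_df = pd.read_csv("clinical_variables.csv")   # one row per patient
y = clinical_df.pop("survival_days")                  # observed follow-up times

# (1) Spearman correlation of each variable with survival time.
s_scores = clinical_df.apply(lambda col: spearmanr(col, y).correlation)

# (2) Importance scores from a 100-tree random forest regressor.
rf = RandomForestRegressor(n_estimators=100, random_state=0)
rf.fit(clinical_df, y)
i_scores = pd.Series(rf.feature_importances_, index=clinical_df.columns)

# Keep variables exceeding a threshold, e.g. |S_score| >= 0.01 (cf. Exp6 in Table 1).
selected = clinical_df.columns[s_scores.abs() >= 0.01]
print(sorted(selected))
```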
### Modeling Survival Estimation

In this subsection, we focus on the critical aspects of modeling survival estimation, an essential component of our proposed method. We have organized this subsection into three parts: 1) survival network, where we describe the architecture and design choices for the survival network, which is responsible for estimating survival probabilities; 2) input to the survival network, which details the features and data used as input to the network, such as clinical variables and radiomic features extracted from the 3D CNN feature extractor; 3) loss function for modeling survival estimation, in which we discuss the choice of loss function employed to optimize the survival network.

#### Survival Network

The survival network, shown in Fig. 1, consists of three FC layers comprising 500, 100, and 15 neurons, respectively. As our model is a discrete-time survival model, the final layer contains 15 neurons representing survival probabilities for 15 distinct time intervals. The network utilizes a rectified linear unit (ReLU) activation function in the intermediate layers and a sigmoid activation function in the last layer. To enhance the generalization capabilities of the model, a dropout rate of 0.3 is incorporated, accompanied by batch normalization after the first two FC layers. Subsequently, linear interpolation with 100 points per interval is employed to transform the outputs into a set of 1500 values, enabling the generation of continuous survival curves for patients. We arrived at this architecture through a grid search over hyperparameters, selecting the configuration with the best evaluation metrics for survival analysis.

#### Input to the Survival Network

The inputs to the survival network are derived from one of three sources: CT image features, clinical variables, or a combination of CT image features and clinical variables. In this study, we conduct a series of nine experiments, each using one of these three sources for survival prediction. A full description of these experiments is given in Section 3.4.

#### Loss Function for Modeling Survival Estimation

We adapt our survival model's loss function from the discrete logistic hazards formulation, similar to the loss used in Nnet-survival (Gensheimer and Narasimhan, 2019), to predict survival probabilities over M days (weeks, months, or years), where M is the maximum follow-up period. In order to employ the discretized hazard function, it is essential to convert continuous survival times into discrete intervals. To achieve this, appropriate time intervals are chosen to discretize the continuous survival times, with the preferred choice being equidistant intervals. Subsequently, each observed survival time is allocated to its respective time interval, effectively transforming the continuous data into a discrete format. We develop a loss function that uses a vectorized form of the likelihoods for censored and uncensored patients. The loss function is given by:

\[\text{L}=-\sum_{x=1}^{p}\sum_{i=1}^{n}\left(\begin{array}{c}\ln\left(1+ \text{surv}_{s}(x)(i)\cdot\left(\text{surv}_{\text{pred}}\left(x\right)(i)-1 \right)\right)\\ +\ln\left(1-\text{surv}_{f}(x)(i)\cdot\text{surv}_{\text{pred}}\left(x\right)( i)\right)\end{array}\right),\]

where \(p\) denotes the number of patients in a batch, and \(n\) represents the number of discrete time intervals (15). \(\text{surv}_{\text{pred}}(x)(i)\) signifies the predicted survival probability for patient \(x\) at time interval \(i\), taking values between 0 and 1. Each patient's death or censoring time, \(t\), is determined from the ground-truth survival time given in the dataset. The ground-truth vectors \(\text{surv}_{\text{s}}\) and \(\text{surv}_{\text{f}}\) for the survival model are of length \(n\) for every patient. Vector \(\text{surv}_{\text{s}}\) marks the time intervals during which the patient survived, while vector \(\text{surv}_{\text{f}}\) marks the specific time interval in which death occurred.
For uncensored patients in time interval \(i\):

\[\text{surv}_{s}(x)(i) =\begin{cases}1,&\text{if }t_{x}\geq t_{i}\\ 0,&\text{otherwise}\end{cases}\]

\[\text{surv}_{f}(x)(i) =\begin{cases}1,&\text{if }t_{i-1}\leq t_{x}<t_{i}\\ 0,&\text{otherwise}\end{cases}\]

For censored patients in time interval \(i\):

\[\text{surv}_{s}(x)(i)=\begin{cases}1,&\text{if }t_{x}\geq\frac{1}{2}\left(t_{i-1}+t_{i }\right)\\ 0,&\text{otherwise}\end{cases}\]

and

\[\text{surv}_{f}(x)(i)=0.\]

The dot products within the loss function assess the agreement between the predicted vector and the ground-truth vectors. We trained the survival networks with the help of the pycox v0.2.0.3 library ([https://github.com/havakv/pycox](https://github.com/havakv/pycox)).
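A compact PyTorch sketch of the survival network and this loss is given below. It follows the architecture and likelihood described above, but it is an illustrative re-implementation under stated assumptions (e.g., an input width of 1017, i.e., 1000 image features plus 17 clinical variables as in Exp8), not the exact pycox-based training code.

```python
# Sketch of the 3-layer survival network and the discrete logistic-hazard
# style loss described above; illustrative, not the exact pycox training code.
import torch
import torch.nn as nn

N_INTERVALS = 15

survival_net = nn.Sequential(
    nn.Linear(1017, 500),            # assumption: 1000 image + 17 clinical inputs
    nn.BatchNorm1d(500), nn.ReLU(), nn.Dropout(0.3),
    nn.Linear(500, 100),
    nn.BatchNorm1d(100), nn.ReLU(), nn.Dropout(0.3),
    nn.Linear(100, N_INTERVALS),
    nn.Sigmoid(),                    # survival probability per time interval
)

def survival_loss(surv_pred, surv_s, surv_f, eps=1e-7):
    """Negative log-likelihood over a batch.

    surv_pred: (p, n) predicted survival probabilities in (0, 1)
    surv_s:    (p, n) 1 where the patient is known to have survived the interval
    surv_f:    (p, n) 1 in the interval where death occurred (all 0 if censored)
    """
    lived = torch.log(torch.clamp(1.0 + surv_s * (surv_pred - 1.0), min=eps))
    died = torch.log(torch.clamp(1.0 - surv_f * surv_pred, min=eps))
    return -(lived + died).sum()

# Interpolating the 15 interval probabilities to 1500 points for smooth curves:
probs = survival_net(torch.randn(4, 1017))           # dummy batch of 4 patients
curves = nn.functional.interpolate(
    probs.unsqueeze(1), size=1500, mode="linear", align_corners=True
).squeeze(1)
```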
## 3 Experimental Setup

In this section, we describe the experimental setup employed in our study, which is divided into four main parts: the experimental dataset, training the 3D CNN feature extractor network, training the survival network, and the experiments conducted. First, we present the dataset used in our study and discuss its characteristics, source, and the preprocessing steps undertaken. Next, we outline the process of training the 3D CNN feature extractor network, followed by the training of the survival network. Finally, we describe the experiments conducted. A comprehensive experimental setup ensures the reproducibility of our results and allows for a fair comparison with other studies in the field.

### Experimental Dataset

The selection of appropriate datasets and their preparation plays a crucial role in the evaluation of our proposed method. In this subsection, we provide an overview of the dataset used in our experiments and the steps taken to prepare the data for our study. We have divided this subsection into three parts: the KiTS21 dataset, dataset splitting, and clinical data preparation. First, we discuss the KiTS21 dataset, its characteristics, and its source. Next, we describe the dataset-splitting process, explaining the rationale behind the chosen method and the proportions used for training, validation, and testing. Finally, we detail the clinical data preparation, including the necessary preprocessing and data normalization procedures.

#### 3.1.1 KiTS21 Dataset

We used the KiTS21 (Heller et al., 2021) dataset to train and test our proposed framework. The dataset comprises 300 patients who underwent either partial or complete nephrectomy for suspected kidney cancer between 2010 and 2020 at the M Health Fairview or Cleveland Clinic medical facilities, and it includes both clinical data and CT scans with manually annotated kidneys and tumors (ground-truth labels). The primary objective of collecting this dataset was the application of segmentation algorithms. We selected this dataset for its comprehensive clinical information, precise annotations, and ample number of subjects. The dataset contains three files: CT scan volumes (NIfTI format), annotation volumes (NIfTI format), and clinical data (JSON format). The annotation volumes consist of manual segmentations of the kidneys, tumor(s), and cyst(s). In this study, we used 41 clinical variables from this JSON file. All critical clinical information, such as pathology results, is included in this file (Heller et al., 2019). Notably, the imaging data was originally obtained from the Cancer Imaging Archive in DICOM format, while the clinical data was provided in a single CSV file.

#### 3.1.2 Dataset Splitting

To train the classification network that serves as the radiomic feature extractor for survival prediction, we excluded 56 patients with empty ISUP grade values from the original dataset. The remaining dataset contained 244 patients, of whom 32 had death events and 212 had censored times. The maximum observation time was 3000 days (which corresponds to the M variable in Section 2.3.3), and the median observation time was 644 days. We performed three-fold cross-validation for the ISUP grading classification to create three different subsets for training, validation, and testing. The division of the dataset into three folds was based on the number of deceased and censored patients to ensure that each subset contained the same proportion of deceased individuals. Each fold included 57% of the total dataset for training, 10% for validation, and 33% for testing. The training subset had 10% of patients who died, the validation subset had 33%, and the test subset had 13%. After dividing the dataset into three folds, we increased the number of samples in each training and validation subset through multiple augmentations (discussed in Section 3.2.1). The optimal fold for the classification model was determined based on the F1-score, as delineated in Section 3.2.3. This selected fold was subsequently employed for training, validation, and testing within the survival network, excluding the augmented samples. Two distinct networks were employed for ISUP grade classification and survival analysis; however, they were trained using identical subjects within the training, validation, and test datasets. This approach was adopted to preclude the introduction of the classification network's training data as the validation or test dataset for the survival analysis network, thereby avoiding overestimation of the survival analysis network's performance due to heightened accuracy in detecting ISUP grades within the training dataset.

#### 3.1.3 Clinical Data Preparation

The clinical data used in training the survival network consisted of 38 variables classified into two categories: continuous numerical and categorical. To facilitate their usage in the survival model, the categorical variables, such as gender, were transformed into discrete numerical values. The continuous numerical variables, such as pathologic size, were normalized based on the mean and standard deviation to facilitate effective interpretation by the survival model.
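A minimal pandas sketch of this preparation step is shown below; the file name and variable names are illustrative picks inspired by the KiTS21 clinical file, and the exact encoding scheme is an assumption.

```python
# Minimal sketch of clinical data preparation: categorical encoding plus
# z-score normalization. File and variable names are illustrative assumptions.
import pandas as pd

clinical = pd.read_json("kits21_clinical.json")

categorical = ["gender", "surgery_type", "surgical_approach"]
continuous = ["age_at_nephrectomy", "body_mass_index", "pathologic_size"]

# Map categories to discrete numerical codes.
for col in categorical:
    clinical[col] = clinical[col].astype("category").cat.codes

# Standardize continuous variables to zero mean and unit variance.
clinical[continuous] = (
    clinical[continuous] - clinical[continuous].mean()
) / clinical[continuous].std()
```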
### Training the 3D CNN Feature Extractor

In this subsection, we elaborate on the process of training the 3D CNN feature extractor, a critical component of our proposed method. This subsection is divided into three parts: 1) preprocessing of CT image volumes, a necessary step before training the 3D CNN feature extractor to guarantee consistent input data and enhance the network's performance; 2) training details of the classifier, encompassing aspects such as the chosen loss function, number of epochs, optimizer, and learning rate; 3) best fold selection for radiomic feature extraction, a crucial step following the training of the 3D CNN feature extractor, which involves selecting the optimal fold to ensure the highest quality features for the subsequent survival network.

#### 3.2.1 Preprocessing of CT Image Volumes

Before commencing the preprocessing phase for the CT volumes, image augmentations were implemented as a strategy to address the inherent imbalance in the dataset, as well as the paucity of training samples. A combination of positional augmentations, such as flipping, rotation, and affine transformations, along with noise augmentations, including Gaussian noise, Gibbs noise, and space spike noise, was employed to enhance the diversity and generalizability of the dataset.

Before the ISUP grade classification, image preprocessing is applied to improve the quality of the input images and their radiomic features for better interpretation of the input (Perez-Garcia et al., 2021; Akar et al., 2017). As recommended in the MIT challenge, all volumes were resized to \(128\times 128\times 128\). We also resampled the volumes to a one-millimeter isotropic voxel size, which has been recommended as a standard voxel size by previous studies in medical imaging (Alom et al., 2019; Vankdothu and Hameed, 2022). Additionally, all volumes were reoriented to the RAS (Right, Anterior, Superior) orientation, which is the most commonly used orientation in medical images (Alom et al., 2019; Vankdothu and Hameed, 2022; Litjens et al., 2017). We utilized intensity normalization based on the Z-score, a common practice in medical imaging (Perez-Garcia et al., 2021; Tustison et al., 2010). Identical preprocessing steps were employed for the kidney images and the tumor segmentations, with the exception that intensity normalization was not applied to the tumor segmentations.

As part of our image preprocessing pipeline, we employed a concatenation step to enhance the performance of our 3D EfficientNet-B7 model in identifying kidney tumors. Specifically, we combined the extracted kidney images with their corresponding manual tumor segmentations to enable the model to focus on the surface patterns of the tumors (Akar et al., 2017). This image concatenation approach enriches the input volume with additional information pertaining to the location and size of the tumors. If the model were trained solely on the kidney images without the inclusion of tumor location data, it could potentially pick up on irrelevant features and perform poorly on previously unseen data. Thus, the concatenation step helps to improve the model's generalizability and overall accuracy.
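Since the preprocessing references TorchIO (Perez-Garcia et al., 2021), a plausible sketch of this pipeline with that library is shown below; the transform order, parameter choices, and file paths are assumptions based on the description above, not the exact study code.

```python
# Plausible TorchIO sketch of the preprocessing pipeline described above;
# transform order, parameters, and paths are assumptions.
import torch
import torchio as tio

preprocess = tio.Compose([
    tio.ToCanonical(),               # reorient volumes to RAS
    tio.Resample(1.0),               # 1 mm isotropic voxel size
    tio.Resize((128, 128, 128)),     # uniform spatial shape
    tio.ZNormalization(),            # z-score intensity normalization (images only)
])

subject = tio.Subject(
    ct=tio.ScalarImage("case_00000/imaging.nii.gz"),
    seg=tio.LabelMap("case_00000/segmentation.nii.gz"),  # label maps skip normalization
)
subject = preprocess(subject)

# Concatenate the CT volume with its tumor segmentation along the channel
# axis so the classifier sees tumor location and extent explicitly.
x = torch.cat([subject.ct.data, subject.seg.data], dim=0)  # (2, 128, 128, 128)
```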
#### 3.2.2 Training Details

To validate the robustness of the radiomic feature extractor network, we conducted three-fold cross-validation with three distinct training, validation, and test subsets, while maintaining the same hyperparameters for each training iteration. For training the 3D CNN feature extractor, we used the ADAM optimizer (Kingma and Ba, 2014) with a fixed learning rate of \(1\times 10^{-4}\), and 50 epochs were run to optimize the network parameters. In addition, we employed the cross-entropy loss, given by:

\[L=-\sum_{i=1}^{n}t_{i}\times\log(p_{i}), \tag{1}\]

where \(t_{i}\) is the true ISUP class indicator, \(p_{i}\) is the softmax probability for the \(i\)th class, and \(n\) is the number of ISUP classes (4 in this study). The 3D feature extractor was trained using PyTorch v1.11.0 on a workstation equipped with an Nvidia GeForce RTX 3090 GPU, an AMD Ryzen 7 5800X 8-Core Processor, and 32 GB of RAM.

#### 3.2.3 Best Fold Selection for Radiomic Feature Extraction

We used precision, recall, and F-score to evaluate our feature extractor network, as these fundamental metrics are indispensable for assessing classification model performance. Precision, also known as the positive predictive value, quantifies the fraction of true positives out of the total instances predicted as positive by the model. Mathematically, precision is defined as:

\[\text{Precision}=\frac{\text{TP}}{\text{TP}+\text{FP}}, \tag{2}\]

where TP denotes true positives and FP denotes false positives. Recall, alternatively referred to as sensitivity or the true positive rate, measures the fraction of true positive instances among the total number of actual positive instances in the dataset. Recall is given by:

\[\text{Recall}=\frac{\text{TP}}{\text{TP}+\text{FN}}, \tag{3}\]

where FN denotes false negatives. The F-score, specifically the F1-score, is the harmonic mean of precision and recall, delivering a single metric that balances both measures. The F1-score is particularly advantageous in situations with uneven class distributions, as it accounts for the trade-off between precision and recall. The F1-score is calculated as:

\[\text{F1-score}=2\times\frac{\text{Precision}\times\text{Recall}}{\text{Precision}+\text{Recall}}. \tag{4}\]

We averaged the precision, recall, and F-scores obtained for each of the four ISUP classes. We repeated this process for each of our three folds, giving us three averaged precision, recall, and F-scores. The second fold, with an average F-score of 0.84, was the best and was selected as our final radiomic feature extractor to be used as the input for the survival network.

### Training the Survival Network

In the present study, we used a total of 500 epochs for training the survival network. To prevent overfitting and enhance generalization, early stopping was implemented with a patience of 10. The model was optimized using the Adam optimizer with a learning rate of 0.01. The learning rate was selected by applying the method put forth by Smith (2017).

### Experiments

To demonstrate the performance improvement of our proposed survival analysis framework, we conduct nine distinct experiments with different combinations of inputs. The first experiment involves solely CT image features, the second only clinical variables, and the third combines CT image features and clinical variables. The remaining six experiments are created by applying three distinct thresholds for each of the Spearman correlation and the random forest regression importance score. In the last six experiments, the clinical variables are selected based on these thresholds and then fed to the survival network. These experiments are then compared to each other to evaluate their effectiveness in predicting survival outcomes. Further details on the results of these experiments are presented in Section 4.2.

## 4 Results

In this section, we present the evaluation of our survival model's performance, the experimental results, and a comparison with related previous studies.
We have organized this section into four parts: 1) metrics for survival model performance evaluation, where we describe the evaluation metrics used to assess the performance of our proposed survival model; 2) experimental results from nine different experiments, in which we report and analyze the results obtained from the nine distinct experiments conducted to evaluate our method; 3) plotting the violin diagram for the survival distribution, which visualizes the survival distribution data using violin diagrams to provide a comprehensive understanding of the results; 4) discussion, where we compare our findings with those from related previous studies, highlighting the improvements and contributions made by our proposed method.

### Metrics for Performance Evaluation

To assess the performance of our survival model, we used two key metrics: the time-dependent concordance index (\(C^{td}\)) and the cumulative dynamic area under the curve (AUC). \(C^{td}\) extends Harrell's concordance index (Harrell et al., 1982), a widely utilized measure for evaluating the discriminative power of survival models. The time-dependent C-index is specifically designed to address situations in which a model's predictive accuracy may vary over time. It gauges the model's capacity to correctly rank the predicted survival probabilities of pairs of subjects at a specific time point, taking censoring into account. The computation of \(C^{td}\) involves dividing the count of correctly ordered pairs by the total count of comparable pairs. \(C^{td}\) ranges between 0 and 1, where values approaching 1 signify superior predictive accuracy, while those nearing 0.5 indicate that the model possesses no greater discriminative power than random chance. It has been established that the concordance index is excessively optimistic, particularly as the number of censored patients in the dataset increases (Uno et al.).

The cumulative dynamic AUC (Lambert and Chevret, 2016) extends the conventional AUC metric, a prominent measure for assessing binary classification models. This extension is tailored specifically to handle censored data and time-varying predictions in survival analysis. Within this context, the cumulative dynamic AUC is computed for a designated time point t, quantifying the model's capacity to distinguish subjects experiencing the event of interest by time t from those who do not. The cumulative dynamic AUC represents the area under the time-dependent Receiver Operating Characteristic (ROC) curve, which plots the sensitivity (true positive rate) against 1-specificity (false positive rate) at different time points. Ranging from 0 to 1, the cumulative dynamic AUC reveals greater predictive accuracy as it approaches 1, while values nearing 0.5 indicate that the model's discriminatory power is no better than random chance.
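Both metrics are available in the scikit-survival package; the sketch below shows one plausible way to compute them on synthetic placeholder data. Summarizing the 15 interval probabilities as one minus the mean predicted survival to obtain a risk score is our assumption, not a prescription from the package.

```python
# Sketch of computing C-td and cumulative dynamic AUC with scikit-survival.
# All arrays below are synthetic placeholders; the risk-score summary
# (1 - mean predicted survival) is an assumption.
import numpy as np
from sksurv.util import Surv
from sksurv.metrics import concordance_index_ipcw, cumulative_dynamic_auc

rng = np.random.default_rng(0)
train_times = rng.uniform(30, 3000, size=150)
train_events = rng.random(150) < 0.15            # roughly the cohort's death rate
test_times = rng.uniform(30, 2500, size=80)
test_events = rng.random(80) < 0.15

y_train = Surv.from_arrays(event=train_events, time=train_times)
y_test = Surv.from_arrays(event=test_events, time=test_times)

# surv_pred: (n_test, 15) predicted survival probabilities per interval.
surv_pred = rng.uniform(0, 1, size=(80, 15))
risk_score = 1.0 - surv_pred.mean(axis=1)        # higher risk = worse prognosis

ctd, *_ = concordance_index_ipcw(y_train, y_test, risk_score)
eval_times = np.linspace(100, 2400, 10)          # days at which to evaluate AUC
auc_t, mean_auc = cumulative_dynamic_auc(y_train, y_test, risk_score, eval_times)
print(f"C-td: {ctd:.2f}, mean cumulative dynamic AUC: {mean_auc:.2f}")
```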
In addition to standard metrics, we use violin plots, a novel approach, to inspect the distributions of the survival model's outputs. This is the first study proposing the application of violin plots to the evaluation of survival models. High evaluation metrics may be misleading, as predicted survival probabilities may not match the ground-truth times of death. Violin plots serve as a valuable tool for visualizing model performance by exhibiting the distribution of predicted probabilities at the time of mortality for deceased individuals, as well as the distribution of predicted probabilities at the final time point for censored subjects. For example, a distribution concentrated near zero for deceased patients signifies satisfactory model training, which consequently yields probability predictions in close proximity to zero.

### Experimental Results

One aim of our study is to investigate the impact of various combinations of clinical variables on the prediction of survival outcomes in patients with RCC. Specifically, we seek to identify the clinical features that contribute most significantly to the accurate prediction of patients' survival times. Initially, we conducted two independent analyses to evaluate the effectiveness of CT image features and clinical data individually with respect to their impact on the performance of our survival model. Subsequently, we explored the impact of merging CT image features with various combinations of selected clinical variables on the performance of the survival model. To this end, we developed nine distinct experiments (Exp). Table 1 shows the differences between these nine experiments in terms of their inputs and the thresholds used for choosing the combination of clinical variables. Table 1 also reports the C-index and AUC obtained on the test subset from each experiment. We used the same survival network architecture in all nine experiments for a fair comparison.

From experiment 4 to experiment 9, we applied different thresholds to the Spearman correlation score (S_score) and the random forest regression importance score (I_score). We adjusted three different thresholds for Spearman's correlation coefficient. As the threshold values decreased, we incorporated more clinical variables with weaker correlations to the patient survival time into the survival model. In parallel, we utilized three different thresholds for the importance score of the random forest regressor. By lowering these threshold values, we gradually incorporated clinical variables that are less important for predicting survival times into the survival model.

According to Table 1, the best evaluation metrics were obtained in experiment 8, in which the C-index and AUC are 0.84 and 0.8, respectively. The inputs to experiment 8 are the following: CT image features, Localized Solid Tumor, Age at Nephrectomy, Congestive Heart Failure, Body Mass Index, Uncomplicated Diabetes Mellitus, Pathologic Size, Myocardial Infarction, Radiographic Size, Metastatic Solid Tumor, Hospitalization, Mild Liver Disease, Smoking History, Surgery Type, Gender, Tumor Histologic Subtype, Pathology T Stage, and Surgical Approach.

In order to evaluate the effectiveness of the survival model, ten individuals from the test cohort were selected, of whom five had died from RCC and five had censored time-to-event. Subsequently, the survival curves for these patients were plotted, utilizing the survival probabilities derived from experiment 8. Fig. 2 illustrates five distinct survival curves generated by our survival model, corresponding to five different patients from the test cohort with events equal to one (deceased). Based on the ground-truth survival times, patient 1 died after 645 days, patient 2 after 688 days, patient 3 after 102 days, patient 4 after 2,000 days, and patient 5 after 39 days. At the time of their respective deaths, the model predicted survival probabilities of 0.42, 0.15, 0.3, 0.05, and 0.5 for patients 1 through 5. These values indicate varying degrees of accuracy in predicting the survival probabilities at the actual time of death, with patient 4 exhibiting the lowest probability and patient 5 the highest.
At 500 days, the model's survival probability predictions for patients 1 to 5 were 0.57, 0.2, 0.06, 0.3, and 0.05, respectively. At 1000 days, these probabilities decreased to 0.1, 0.07, 0, 0.18, and 0 for the same patients. At 1500 days, all survival probability predictions reached 0, except for patient 4, whose probability reached 0 at 2000 days. These findings suggest that the model demonstrates varying performance in predicting survival probabilities for the five patients at different time points. Some predictions align closely with the ground-truth survival times, while others exhibit some discrepancy.

Fig. 3 illustrates five distinct survival curves generated by our survival model for five different patients from the test cohort with events equal to zero (censored) and censoring times greater than 2000 days. Based on the ground-truth survival times, their censoring times are 2473 days for patient 6, 2045 days for patient 7, 2900 days for patient 8, 2600 days for patient 9, and 2298 days for patient 10. For patient 6, the model indicates a high probability of survival (0.95) at the censoring time of 2473 days, while patient 7 has a slightly lower survival probability of 0.9 at the censoring time of 2045 days. Patients 8, 9, and 10 exhibit survival probabilities of 0.75, 0.68, and 0.87 at their censoring times of 2900, 2600, and 2298 days, respectively. These predictions suggest that patient 6 has the highest likelihood of survival at their censoring time, followed by patients 7 and 10. Conversely, patients 8 and 9 possess relatively lower survival probabilities, with patient 9 exhibiting the lowest probability of survival among the five patients at their respective censoring times.

### Violin Diagram for Survival Distribution

Fig. 4 presents the violin plot for censored and uncensored subjects in the test subset, showing the survival probability on the vertical axis for Exp8, which produced the optimal experimental outcome. As mentioned in Section 4.1, violin plots allow us to comprehend the distribution of the survival probabilities predicted by our survival model.
Table 1: Differences of the experiments used for RCC survival analysis.

| Exp | Inputs | Thresholds | C-index | AUC |
|---|---|---|---|---|
| Exp1 | CT image features | N/A | 0.72 | 0.73 |
| Exp2 | 38 clinical variables | N/A | 0.72 | 0.74 |
| Exp3 | CT image features + 38 clinical variables | N/A | 0.82 | 0.74 |
| Exp4 | CT image features + 4 clinical variables | \|S_score\| ≥ 0.1 | 0.79 | 0.76 |
| Exp5 | CT image features + 13 clinical variables | \|S_score\| ≥ 0.05 | 0.83 | 0.75 |
| Exp6 | CT image features + 30 clinical variables | \|S_score\| ≥ 0.01 | 0.81 | 0.77 |
| Exp7 | CT image features + 4 clinical variables | I_score ≥ 0.1 | 0.77 | 0.74 |
| Exp8 | CT image features + 17 clinical variables | I_score ≥ 0.01 | 0.84 | 0.8 |
| Exp9 | CT image features + 29 clinical variables | I_score ≥ 0.001 | 0.84 | 0.76 |

Figure 2: Survival probabilities for five patients in the test cohort who died.

Figure 3: Survival probabilities for five patients in the test cohort who had censored events.

Censored_Test refers to the patients who did not experience the event in the test subset. For the Censored_Test group, we are uncertain about the outcomes at the final time point (whether death occurred or not). Based on the median, it can be inferred that for half of the subjects, a survival probability lower than 0.45 is predicted, with a higher concentration around 0.1. Conversely, for the remaining half, a survival probability greater than 0.45 is predicted, with a greater concentration around 0.8. Given the symmetrical distribution around the median for Censored_Test, the model predicts that half of the censored patients exhibit a high survival probability at the last observation time, while the other half demonstrate a low survival probability.

Dead_Test refers to the patients who died within the test subset. For this group, the ideal distribution of output survival probabilities is concentrated at zero. The median survival probability predicted by our survival model is around 0.3. Our survival model accurately predicted near-zero survival probabilities for the half of the patients whose predicted probabilities were below the median. The other half of the patients, with predicted probabilities higher than the median, had distributions mostly near the median. Those nearer to the median had accurate survival predictions but with a small time shift. Those close to 1 are the patients whose survival probabilities were not accurately estimated.
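The violin plots themselves can be produced as sketched below with matplotlib; the two probability arrays are synthetic placeholders for the quantities described above (predictions at the time of death for deceased patients, and at the last observed time point for censored ones).

```python
# Sketch of the violin plot used to inspect predicted survival probabilities.
# `dead_probs` and `censored_probs` are synthetic placeholder arrays.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
dead_probs = rng.beta(2, 5, size=30)        # predictions at each patient's death time
censored_probs = rng.beta(5, 2, size=70)    # predictions at the last observed time

fig, ax = plt.subplots(figsize=(5, 4))
ax.violinplot([censored_probs, dead_probs], showmedians=True)
ax.set_xticks([1, 2])
ax.set_xticklabels(["Censored_Test", "Dead_Test"])
ax.set_ylabel("Predicted survival probability")
fig.tight_layout()
plt.show()
```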
Upon analyzing the violin plots of the test subset for both censored and deceased patients, it can be concluded that our proposed multimodal survival model yields satisfactory outcomes that mostly align closely with the actual follow-up times of patients.

## 5 Discussion

The hypotheses underlying our study were twofold. Firstly, we aimed to investigate whether selectively providing the most relevant clinical variables to the model would enhance the performance of survival analysis, as opposed to indiscriminately supplying all clinical variables. As evidenced by Table 1 in Section 4.2, our findings revealed that the most favorable results were obtained in Exp 8, wherein clinical variables were judiciously chosen. In contrast, Exp 3, which involved the inclusion of all clinical variables, yielded a lower C-index (by 0.02) and a reduced AUC (by 0.06).

Our second hypothesis posited that multimodal survival analysis would yield superior results compared to single-modality approaches. In support of this hypothesis, Table 1 in Section 4.2 demonstrates that using single-modality data, such as clinical data or CT image features alone, led to lower performance metrics. In contrast, Exp 3 through Exp 9, which incorporated a combination of clinical data and CT image features, resulted in significantly improved performance.

To demonstrate that the integration of clinical data and CT image features results in superior performance compared to using CT image features or clinical data alone, we selected a single patient from the test cohort whose survival curve was incorrectly plotted in Exp 1 and Exp 2, both of which used a single data modality. This patient had an ISUP grade of 4 and a survival duration of 2,000 days. Subsequently, we generated survival curves for this patient from our nine defined experiments, as illustrated in Fig. 5. The estimated survival probabilities for the selected patient at the time of death (2,000 days) were approximately 0.77 and 0.82 for Exp 1 and Exp 2, respectively. In contrast, the survival probabilities at the time of death for Exp 3 through Exp 9 were as follows: 0.18 for Exp 3, 0.6 for Exp 4, 0.61 for Exp 5, 0.19 for Exp 6, 0.55 for Exp 7, 0.05 for Exp 8, and 0 for Exp 9. This result demonstrates that multimodal data can yield superior results compared to single-modality experiments.

In the context of our study, we sought to draw comparisons with other studies that employed radiological images and clinical variables as inputs for their survival models. A summary of these methodologies can be found in Section 1.2. Table 2 presents a comparison between our approach and previous studies, focusing on the C-index and AUC metrics. Our method outperforms the others in terms of both C-index and AUC, as demonstrated in Table 2. Our methodology, utilizing 17 clinical variables, yielded the highest C-index and AUC values, demonstrating its superior performance. As indicated in the second row of Table 2, for Exp4, our approach's effectiveness remains evident even when only four clinical variables are employed. The C-index and AUC values in the Exp4 scenario continue to surpass those of alternative methods, despite the constrained number of clinical variables utilized.

Figure 4: Violin plots for censored and deceased events in the train and test sets.

Figure 5: Survival probabilities from 9 different experiments for one patient.
Additionally, it is worth noting that none of the aforementioned studies provided a methodology capable of generating non-proportional individualized survival curves for distinct patients. Furthermore, these studies relied on traditional methodologies that were susceptible to proportionality issues. In contrast, our approach not only yields superior performance in terms of C-index and AUC but also addresses the limitations inherent in previous studies.

In addition to its benefits, our study has a number of limitations. Firstly, Experiment 8, which achieved the highest C-index and AUC, employed 17 clinical variables during training. In order to generate survival predictions for a new patient, it is essential to obtain all 17 clinical variables to ensure the accuracy of the survival estimation. Secondly, precise feature extraction necessitates not only whole-abdomen images but also segmentation annotations of the target organ and associated tumors. Thirdly, to generalize this study's findings to other types of cancer, it is essential to pinpoint a clinical variable comparable to the ISUP grade, enabling tumor classification in relation to survival estimation.

In future research, we aim to explore the feasibility of integrating RCC ISUP grade classification and survival prediction within a unified training framework, eliminating the need for separate tumor grading. Furthermore, we intend to investigate innovative approaches for feature extraction that circumvent the necessity for organ and tumor annotations, thereby enhancing the applicability and efficiency of the proposed methodology.

## 6 Conclusion

This study presents a novel multimodal AI-based framework for predicting individualized survival probabilities of patients with renal cell carcinoma. The proposed framework utilizes CT imaging and clinical data as inputs. We demonstrated that features relevant to survival estimation can be extracted from CT scans and combined with clinical data to improve performance. Our proposed framework can generate personalized, non-linear, and non-proportional survival probability curves for different patients, achieving higher accuracy and outperforming previously published methods. We showed that using a multimodal strategy for survival analysis leads to higher accuracy than a single-modality approach. Moreover, we showed that carefully selecting significant clinical factors as inputs to the survival model can further enhance the performance of survival prediction. This study paves the way for enhanced clinical decision-making for renal cell carcinoma patients, allowing for more precise and individualized therapy options based on the combination of radiological imaging and clinical data. Future research in this field may build upon these findings, resulting in even more sophisticated and reliable survival prediction models.

## 7 Acknowledgement

The authors acknowledge the CIRCLE grant no. 287112 and the Health South-East Trust grant no. 2023069 for funding this study. We thank Håvard Kvamme, a previous Ph.D. student at the University of Oslo, for his invaluable guidance in effectively utilizing the pycox library he created.